| column | dtype | details |
|---|---|---|
| url | string | lengths 62-66 |
| repository_url | string | 1 class |
| labels_url | string | lengths 76-80 |
| comments_url | string | lengths 71-75 |
| events_url | string | lengths 69-73 |
| html_url | string | lengths 50-56 |
| id | int64 | 377M to 2.15B |
| node_id | string | lengths 18-32 |
| number | int64 | 1 to 29.2k |
| title | string | lengths 1-487 |
| user | dict | |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k to 1.71k |
| updated_at | int64 | 1.54k to 1.71k |
| closed_at | int64 | 1.54k to 1.71k, nullable (⌀) |
| author_association | string | 4 classes |
| active_lock_reason | string | 2 classes |
| body | string | lengths 0-234k, nullable (⌀) |
| reactions | dict | |
| timeline_url | string | lengths 71-75 |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | |
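For reference, here is a minimal sketch of loading rows with this schema via the `datasets` library. The dataset identifier is a placeholder (an assumption, since the actual Hub id is not given in this dump):

```python
from datasets import load_dataset

# Hypothetical dataset id -- replace with the actual Hub identifier of this dump.
ds = load_dataset("some-namespace/transformers-github-issues", split="train")

print(ds.features)       # should mirror the column schema above
print(ds[0]["title"])    # title of the first issue/PR row
```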
https://api.github.com/repos/huggingface/transformers/issues/24214
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24214/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24214/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24214/events
|
https://github.com/huggingface/transformers/pull/24214
| 1,753,625,722 |
PR_kwDOCUB6oc5S0ed2
| 24,214 |
Bump transformers from 3.5.1 to 4.30.0 in /examples/research_projects/bert-loses-patience
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@dependabot ignore this major version",
"OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢"
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
Bumps [transformers](https://github.com/huggingface/transformers) from 3.5.1 to 4.30.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p>
<blockquote>
<h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2>
<h2>100k</h2>
<p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p>
<p>We accept PRs to add projects to the list!</p>
<ul>
<li>Top 100 by <a href="https://github.com/LysandreJik"><code>@LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li>
<li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li>
<li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li>
</ul>
<h2>4-bit quantization and QLoRA</h2>
<p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p>
<ul>
<li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li>
</ul>
<h2>Agents</h2>
<p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p>
<ul>
<li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li>
<li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li>
<li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li>
</ul>
<ul>
<li>Add local agent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li>
<li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li>
<li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li>
</ul>
<h2>Safetensors</h2>
<p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p>
<p>It has now become a core dependency of <code>transformers</code>.</p>
<ul>
<li>Making <code>safetensors</code> a core dependency. by <a href="https://github.com/Narsil"><code>@Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li>
</ul>
<h2>New models</h2>
<h3>Swiftformer</h3>
<p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p>
<ul>
<li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li>
</ul>
<h3>Autoformer</h3>
<p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p>
<ul>
<li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li>
<li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li>
<li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v3.5.1...v4.30.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24214/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24214",
"html_url": "https://github.com/huggingface/transformers/pull/24214",
"diff_url": "https://github.com/huggingface/transformers/pull/24214.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24214.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24213
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24213/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24213/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24213/events
|
https://github.com/huggingface/transformers/pull/24213
| 1,753,625,714 |
PR_kwDOCUB6oc5S0edu
| 24,213 |
Bump transformers from 3.5.1 to 4.30.0 in /examples/research_projects/bertabs
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24213). All of your documentation changes will be reflected on that endpoint.",
"@dependabot ignore this major version",
"OK, I won't notify you about version 4.x.x again, unless you re-open this PR. 😢"
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
Bumps [transformers](https://github.com/huggingface/transformers) from 3.5.1 to 4.30.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p>
<blockquote>
<h2>v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone</h2>
<h2>100k</h2>
<p>Transformers has just reached 100k stars on GitHub, and to celebrate we wanted to highlight 100 projects in the vicinity of <code>transformers</code> and we have decided to create an <a href="https://github.com/huggingface/transformers/blob/main/awesome-transformers.md">awesome-transformers</a> page to do just that.</p>
<p>We accept PRs to add projects to the list!</p>
<ul>
<li>Top 100 by <a href="https://github.com/LysandreJik"><code>@LysandreJik</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22912">#22912</a></li>
<li>Add LlamaIndex to awesome-transformers.md by <a href="https://github.com/ravi03071991"><code>@ravi03071991</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23484">#23484</a></li>
<li>add cleanlab to awesome-transformers tools list by <a href="https://github.com/jwmueller"><code>@jwmueller</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23440">#23440</a></li>
</ul>
<h2>4-bit quantization and QLoRA</h2>
<p>By leveraging the <code>bitsandbytes</code> library by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a>, we add 4-bit support to <code>transformers</code> models!</p>
<ul>
<li>4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by <a href="https://github.com/TimDettmers"><code>@TimDettmers</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23479">#23479</a></li>
</ul>
<h2>Agents</h2>
<p>The Agents framework has been improved and continues to be stabilized. Among bug fixes, here are the important new features that were added:</p>
<ul>
<li>Local agent capabilities, to load a generative model directly from <code>transformers</code> instead of relying on APIs.</li>
<li>Prompts are now hosted on the Hub, which means that anyone can fork the prompts and update them with theirs, to let other community contributors re-use them</li>
<li>We add an <code>AzureOpenAiAgent</code> class to support Azure OpenAI agents.</li>
</ul>
<ul>
<li>Add local agent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23438">#23438</a></li>
<li>Enable prompts on the Hub by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23662">#23662</a></li>
<li>Add AzureOpenAiAgent by <a href="https://github.com/sgugger"><code>@sgugger</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a></li>
</ul>
<h2>Safetensors</h2>
<p>The <code>safetensors</code> library is a safe serialization framework for machine learning tensors. It has been audited and will become the default serialization framework for several organizations (Hugging Face, EleutherAI, Stability AI).</p>
<p>It has now become a core dependency of <code>transformers</code>.</p>
<ul>
<li>Making <code>safetensors</code> a core dependency. by <a href="https://github.com/Narsil"><code>@Narsil</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/23254">#23254</a></li>
</ul>
<h2>New models</h2>
<h3>Swiftformer</h3>
<p>The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.</p>
<ul>
<li>Add swiftformer by <a href="https://github.com/shehanmunasinghe"><code>@shehanmunasinghe</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/22686">#22686</a></li>
</ul>
<h3>Autoformer</h3>
<p>This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.</p>
<ul>
<li>[Time-Series] Autoformer model by <a href="https://github.com/elisim"><code>@elisim</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/21891">#21891</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/huggingface/transformers/commit/fe861e578f50dc9c06de33cd361d2f625017e624"><code>fe861e5</code></a> [<code>GPT2</code>] Add correct keys on <code>_keys_to_ignore_on_load_unexpected</code> on all chil...</li>
<li><a href="https://github.com/huggingface/transformers/commit/b3e27a80578d022301611363b890107244e12354"><code>b3e27a8</code></a> Update the pin on Accelerate (<a href="https://redirect.github.com/huggingface/transformers/issues/24110">#24110</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/53e1f5cf66d320b9c809f3940c707b6fef435d2d"><code>53e1f5c</code></a> [<code>Trainer</code>] Correct behavior of <code>_load_best_model</code> for PEFT models (<a href="https://redirect.github.com/huggingface/transformers/issues/24103">#24103</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/17db177714b03103bb94cd71b7dd414bc63bffd5"><code>17db177</code></a> reset accelerate env variables after each test (<a href="https://redirect.github.com/huggingface/transformers/issues/24107">#24107</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/905892f09027cab690918c7766fea1bb51bcdd26"><code>905892f</code></a> Release: v4.30.0</li>
<li><a href="https://github.com/huggingface/transformers/commit/c3572e6bfba13ce6dc3fedb05cd1a946ea109576"><code>c3572e6</code></a> Add AzureOpenAiAgent (<a href="https://redirect.github.com/huggingface/transformers/issues/24058">#24058</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/5eb3d3c7023ed0522d3c743ee2e13d896a3aa788"><code>5eb3d3c</code></a> Up pinned accelerate version (<a href="https://redirect.github.com/huggingface/transformers/issues/24089">#24089</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/d1c039e39864a41f6eb8b770a65f123c40164ea5"><code>d1c039e</code></a> fix accelerator prepare during eval only mode (<a href="https://redirect.github.com/huggingface/transformers/issues/24014">#24014</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/2c887cf8e0cb1ac96d28361ff3235a77f83c36ee"><code>2c887cf</code></a> Do not prepare lr scheduler as it as the right number of steps (<a href="https://redirect.github.com/huggingface/transformers/issues/24088">#24088</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/12298cb65c7e9d615b749dde935a0b4966f4ae49"><code>12298cb</code></a> fix executable batch size issue (<a href="https://redirect.github.com/huggingface/transformers/issues/24067">#24067</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v3.5.1...v4.30.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24213/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24213",
"html_url": "https://github.com/huggingface/transformers/pull/24213",
"diff_url": "https://github.com/huggingface/transformers/pull/24213.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24213.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24212
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24212/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24212/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24212/events
|
https://github.com/huggingface/transformers/issues/24212
| 1,753,611,240 |
I_kwDOCUB6oc5ohfvo
| 24,212 |
QLoRA Training does not give expected results
|
{
"login": "karths8",
"id": 47289950,
"node_id": "MDQ6VXNlcjQ3Mjg5OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/47289950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/karths8",
"html_url": "https://github.com/karths8",
"followers_url": "https://api.github.com/users/karths8/followers",
"following_url": "https://api.github.com/users/karths8/following{/other_user}",
"gists_url": "https://api.github.com/users/karths8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/karths8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/karths8/subscriptions",
"organizations_url": "https://api.github.com/users/karths8/orgs",
"repos_url": "https://api.github.com/users/karths8/repos",
"events_url": "https://api.github.com/users/karths8/events{/privacy}",
"received_events_url": "https://api.github.com/users/karths8/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @pacman100 ",
"@karths8 did you manage to resolve the above issue?",
"No! Have not been able to solve it yet!",
"@karths8 @amdnsr \r\n\r\nI think the problem is rooted in the `find_all_linear_names` function, which is incompatible with `QLoRA` setting, the correct way to implement this function should be like following\r\n\r\n```\r\ndef find_all_linear_names(args, model):\r\n cls = bnb.nn.Linear4bit if args.bits == 4 else (bnb.nn.Linear8bitLt if args.bits == 8 else torch.nn.Linear)\r\n lora_module_names = set()\r\n for name, module in model.named_modules():\r\n if isinstance(module, cls):\r\n names = name.split('.')\r\n lora_module_names.add(names[0] if len(names) == 1 else names[-1])\r\n\r\n\r\n if 'lm_head' in lora_module_names: # needed for 16-bit\r\n lora_module_names.remove('lm_head')\r\n return list(lora_module_names)\r\n```\r\n\r\nBecause under `QLoRA` setting, the `torch.nn.Linear` module is replaced by `bnb.nn.Linearkbit` to realize the quantilization, the reason why your training loss is always the same is that there is no target module in your model at all.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,692 | 1,692 |
NONE
| null |
### System Info
transformers version: 4.30.0
Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.29
Python version: 3.8.10
Huggingface_hub version: 0.15.1
Safetensors version: 0.3.1
PyTorch version (GPU?): 1.13.1 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I tried fine-tuning the [InstructCodeT5+](https://huggingface.co/Salesforce/instructcodet5p-16b) model using QLoRA and the loss is stuck at a particular value. I followed the steps given in this example [notebook](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing#scrollTo=FuXIFTFapAMI) for QLoRA. Code for the experiment:
```python
import pandas as pd
import os
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, BitsAndBytesConfig
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training, TaskType, prepare_model_for_kbit_training
from transformers import DataCollatorForSeq2Seq
import evaluate
import nltk
import numpy as np
from nltk.tokenize import sent_tokenize
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments
from datasets import Dataset, DatasetDict
import argparse
import pickle
import json
parser = argparse.ArgumentParser(description='Options')
parser.add_argument('--dataset_dir', default='data', type=str, help="folder in which the dataset is stored")
parser.add_argument('--output_dir', default="lora-instructcodet5p", type=str, help="output directory for the model")
parser.add_argument('--results_dir', default="results", type=str, help="where the results should be stored")
args = parser.parse_args()
nltk.download("punkt")
tokenized_dataset = DatasetDict.load_from_disk(args.dataset_dir)
# Metric
metric = evaluate.load("rouge")
pad_tok = 50256
token_id="Salesforce/instructcodet5p-16b"
tokenizer = AutoTokenizer.from_pretrained(token_id)
# helper function to postprocess text
def postprocess_text(preds, labels):
preds = [pred.strip() for pred in preds]
labels = [label.strip() for label in labels]
# rougeLSum expects newline after each sentence
preds = ["\n".join(sent_tokenize(pred)) for pred in preds]
labels = ["\n".join(sent_tokenize(label)) for label in labels]
return preds, labels
def compute_metrics(eval_preds):
preds, labels = eval_preds
if isinstance(preds, tuple):
preds = preds[0]
for idx in range(len(preds)):
for idx2 in range(len(preds[idx])):
if preds[idx][idx2]==-100:
preds[idx][idx2] = 50256
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
# Replace -100 in the labels as we can't decode them.
labels = np.where(labels != pad_tok, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
# Some simple post-processing
decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
result = {k: round(v * 100, 4) for k, v in result.items()}
prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
result["gen_len"] = np.mean(prediction_lens)
return result
def get_dict(predicts):
d = {}
for num in range(len(tokenized_dataset['test'])):
pred = tokenizer.decode([n for n in predicts[0][num] if n!=50256 and n!=-100])[1:]
d[num+1] = {'Question':tokenizer.decode([n for n in tokenized_dataset['test'][num]['input_ids'] if n!=50256]),
'Ground truth solution':tokenizer.decode([n for n in tokenized_dataset['test'][num]['labels'] if n!=50256]),
'Prediction': pred if pred else None}
return d
def find_all_linear_names(model):
cls = torch.nn.Linear
lora_module_names = set()
for name, module in model.named_modules():
if isinstance(module, cls):
names = name.split('.')
lora_module_names.add(names[0] if len(names) == 1 else names[-1])
if 'lm_head' in lora_module_names:
lora_module_names.remove('lm_head')
return list(lora_module_names)
def main():
device = 'cuda'
# huggingface hub model id
model_id="instructcodet5p-16b"
if not os.path.exists(model_id):
model_id=token_id
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
# load model from the hub
model = AutoModelForSeq2SeqLM.from_pretrained(model_id,
# torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True, decoder_start_token_id=1, pad_token_id=pad_tok, device_map="auto", quantization_config=bnb_config)
modules = find_all_linear_names(model)
# Define LoRA Config
lora_config = LoraConfig(
r=16,
lora_alpha=32,
target_modules=modules,
lora_dropout=0.05,
bias="none",
task_type=TaskType.SEQ_2_SEQ_LM
)
model = prepare_model_for_kbit_training(model, False)
# add LoRA adaptor
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# we want to ignore tokenizer pad token in the loss
label_pad_token_id = pad_tok
# Data collator
data_collator = DataCollatorForSeq2Seq(
tokenizer,
model=model,
label_pad_token_id=label_pad_token_id,
pad_to_multiple_of=8
)
output_dir=args.output_dir
training_args = Seq2SeqTrainingArguments(
output_dir=output_dir,
per_device_train_batch_size=1,
# per_device_eval_batch_size=1,
predict_with_generate=True,
weight_decay=0.05,
# warmup_steps=200,
fp16=False, # Overflows with fp16
learning_rate=1e-4,
num_train_epochs=5,
logging_dir=f"{output_dir}/logs",
logging_strategy="epoch",
report_to="tensorboard",
push_to_hub=False,
# generation_max_length=200,
optim="paged_adamw_8bit",
lr_scheduler_type = 'constant'
)
# Create Trainer instance
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=tokenized_dataset["train"],
# eval_dataset=tokenized_dataset["validation"],
# compute_metrics=compute_metrics,
)
# train model
train_result = trainer.train()
if __name__ == '__main__':
main()
```
### Expected behavior
Output when using QLoRA (the generations are empty during the evaluation stage):
```
{'loss': 6.9007, 'learning_rate': 0.0001, 'epoch': 1.0}
{'loss': 6.9007, 'learning_rate': 0.0001, 'epoch': 2.0}
{'loss': 6.9007, 'learning_rate': 0.0001, 'epoch': 3.0}
{'loss': 6.9007, 'learning_rate': 0.0001, 'epoch': 4.0}
{'loss': 6.9007, 'learning_rate': 0.0001, 'epoch': 5.0}
```
The same setup works when I use LoRA instead, where the loss decreases and the generations are much better:
```
{'loss': 0.8144, 'learning_rate': 0.0001, 'epoch': 1.0}
{'loss': 0.0745, 'learning_rate': 0.0001, 'epoch': 2.0}
{'loss': 0.0391, 'learning_rate': 0.0001, 'epoch': 3.0}
{'loss': 0.0189, 'learning_rate': 0.0001, 'epoch': 4.0}
{'loss': 0.007, 'learning_rate': 0.0001, 'epoch': 5.0}
```
I understand that QLoRA can cause a bit of a performance drop, but the results I get with it are close to nothing after fine-tuning. Any suggestions or help with this are greatly appreciated!
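For context, the root cause identified in the comments above is that under 4-bit loading the `torch.nn.Linear` layers are replaced by `bnb.nn.Linear4bit`, so the `find_all_linear_names` helper in this script finds no LoRA target modules. A minimal diagnostic sketch (an editorial illustration, not part of the original report; it assumes the `model` loaded with the `BitsAndBytesConfig` above):

```python
import bitsandbytes as bnb

# Under load_in_4bit=True, the linear layers are bnb.nn.Linear4bit, not torch.nn.Linear,
# so scanning for torch.nn.Linear yields an empty target_modules list for LoRA.
quantized_linear_names = {
    name.split(".")[-1]
    for name, module in model.named_modules()
    if isinstance(module, bnb.nn.Linear4bit)
}
print(quantized_linear_names)  # candidate LoRA target_modules under QLoRA
```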
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24212/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24212/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24211
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24211/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24211/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24211/events
|
https://github.com/huggingface/transformers/pull/24211
| 1,753,535,974 |
PR_kwDOCUB6oc5S0K8K
| 24,211 |
Tied params cleanup
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
# What does this PR do?
This PR is the first in a series that aims to clean up the content of the `_keys_to_ignore_xxx` variables and the logic of the warnings sent to the user as a result. In particular, unless the model implements a hack like RoBERTa, a trained version with no shared weights might not save the decoder...
One use of these variables is to detect "normal" shared weights and have a way to remove all but one of those shared weights for safetensors serialization, but the variable used (`_keys_to_ignore_on_load_unexpected`) is very noisy (it contains names of weights that are not shared). This PR introduces a variable `_tied_weights_keys` which will be used for that purpose (and in other places of the cleanup later on). It contains the names of the weights that are tied to other weights and that are safe to dismiss when saving (we won't dismiss them on the `torch.save` side, since `torch.save` doesn't duplicate the memory, but we will for `safetensors`). This PR also introduces a test that checks that the content of this variable is correct.
Due to the very large number of models with potentially shared weights, this PR limits itself to introducing this variable, filling it properly for all existing models, and adding the test. The rest will follow in subsequent PRs.
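As an illustration (a hedged sketch, not code from this PR; the class and key names are invented), a model whose LM head is tied to its input embeddings would declare something like:

```python
from transformers import PreTrainedModel

class MyModelForCausalLM(PreTrainedModel):
    # Keys listed here point to weights that are tied to other parameters, so they
    # can safely be skipped when serializing with safetensors (torch.save keeps
    # them without duplicating memory anyway).
    _tied_weights_keys = ["lm_head.weight"]
```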
cc @Narsil for info
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24211/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24211/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24211",
"html_url": "https://github.com/huggingface/transformers/pull/24211",
"diff_url": "https://github.com/huggingface/transformers/pull/24211.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24211.patch",
"merged_at": 1686670720000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24210
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24210/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24210/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24210/events
|
https://github.com/huggingface/transformers/pull/24210
| 1,753,425,185 |
PR_kwDOCUB6oc5SzyMo
| 24,210 |
Nah
|
{
"login": "jamesthesnake",
"id": 8227820,
"node_id": "MDQ6VXNlcjgyMjc4MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8227820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesthesnake",
"html_url": "https://github.com/jamesthesnake",
"followers_url": "https://api.github.com/users/jamesthesnake/followers",
"following_url": "https://api.github.com/users/jamesthesnake/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesthesnake/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesthesnake/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesthesnake/subscriptions",
"organizations_url": "https://api.github.com/users/jamesthesnake/orgs",
"repos_url": "https://api.github.com/users/jamesthesnake/repos",
"events_url": "https://api.github.com/users/jamesthesnake/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesthesnake/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,686 | 1,686 | 1,686 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24210/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24210/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24210",
"html_url": "https://github.com/huggingface/transformers/pull/24210",
"diff_url": "https://github.com/huggingface/transformers/pull/24210.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24210.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24209
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24209/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24209/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24209/events
|
https://github.com/huggingface/transformers/pull/24209
| 1,753,418,215 |
PR_kwDOCUB6oc5Szwoh
| 24,209 |
fix(trainer): save the model config, tokenizer, and arguments when FSDP
|
{
"login": "calico-1226",
"id": 93032279,
"node_id": "U_kgDOBYuPVw",
"avatar_url": "https://avatars.githubusercontent.com/u/93032279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/calico-1226",
"html_url": "https://github.com/calico-1226",
"followers_url": "https://api.github.com/users/calico-1226/followers",
"following_url": "https://api.github.com/users/calico-1226/following{/other_user}",
"gists_url": "https://api.github.com/users/calico-1226/gists{/gist_id}",
"starred_url": "https://api.github.com/users/calico-1226/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/calico-1226/subscriptions",
"organizations_url": "https://api.github.com/users/calico-1226/orgs",
"repos_url": "https://api.github.com/users/calico-1226/repos",
"events_url": "https://api.github.com/users/calico-1226/events{/privacy}",
"received_events_url": "https://api.github.com/users/calico-1226/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24209). All of your documentation changes will be reflected on that endpoint.",
"cc @pacman100 "
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #24208
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24209/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/24209/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24209",
"html_url": "https://github.com/huggingface/transformers/pull/24209",
"diff_url": "https://github.com/huggingface/transformers/pull/24209.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24209.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24208
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24208/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24208/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24208/events
|
https://github.com/huggingface/transformers/issues/24208
| 1,753,410,749 |
I_kwDOCUB6oc5oguy9
| 24,208 |
The `Trainer` only saves the model parameters when `is_fsdp_enabled` is True
|
{
"login": "calico-1226",
"id": 93032279,
"node_id": "U_kgDOBYuPVw",
"avatar_url": "https://avatars.githubusercontent.com/u/93032279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/calico-1226",
"html_url": "https://github.com/calico-1226",
"followers_url": "https://api.github.com/users/calico-1226/followers",
"following_url": "https://api.github.com/users/calico-1226/following{/other_user}",
"gists_url": "https://api.github.com/users/calico-1226/gists{/gist_id}",
"starred_url": "https://api.github.com/users/calico-1226/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/calico-1226/subscriptions",
"organizations_url": "https://api.github.com/users/calico-1226/orgs",
"repos_url": "https://api.github.com/users/calico-1226/repos",
"events_url": "https://api.github.com/users/calico-1226/events{/privacy}",
"received_events_url": "https://api.github.com/users/calico-1226/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @pacman100 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,689 | 1,689 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The bug occurred when I was using `transformers.Trainer` to train a `LlamaForSequenceClassification` model with the FSDP arguments `--fsdp "full_shard auto_wrap" --fsdp_transformer_layer_cls_to_wrap "LlamaDecoderLayer"`.
Specifically, when I used the `Trainer.save_model()` function to save the training results to `output_dir`, it only stored the model weights, without the corresponding model config, tokenizer, and training arguments. This issue only occurred when I trained the model using FSDP, but when not using FSDP, all of these components were saved correctly.
I located the corresponding code section in `Trainer.save_model()`
```python
elif (
ShardedDDPOption.ZERO_DP_2 in self.args.sharded_ddp
or ShardedDDPOption.ZERO_DP_3 in self.args.sharded_ddp
or self.fsdp is not None
or self.is_fsdp_enabled
):
if self.is_fsdp_enabled:
os.makedirs(output_dir, exist_ok=True)
self.accelerator.state.fsdp_plugin.save_model(self.accelerator, self.model, output_dir)
```
and section in `FullyShardedDataParallelPlugin.save_model()` of `accelerate-0.20.3`
```python
def save_model(self, accelerator, model, output_dir, model_index=0):
from torch.distributed.fsdp.fully_sharded_data_parallel import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.fully_sharded_data_parallel import StateDictType
if is_torch_version("<=", "1.13.5"):
with FSDP.state_dict_type(model, self.state_dict_type, self.state_dict_config):
state_dict = model.state_dict()
else:
FSDP.set_state_dict_type(model, self.state_dict_type, self.state_dict_config)
state_dict = model.state_dict()
if self.state_dict_type == StateDictType.FULL_STATE_DICT:
weights_name = f"{MODEL_NAME}.bin" if model_index == 0 else f"{MODEL_NAME}_{model_index}.bin"
output_model_file = os.path.join(output_dir, weights_name)
if accelerator.process_index == 0:
print(f"Saving model to {output_model_file}")
torch.save(state_dict, output_model_file)
print(f"Model saved to {output_model_file}")
else:
weights_name = (
f"{MODEL_NAME}_rank{accelerator.process_index}.bin"
if model_index == 0
else f"{MODEL_NAME}_{model_index}_rank{accelerator.process_index}.bin"
)
output_model_file = os.path.join(output_dir, weights_name)
print(f"Saving model to {output_model_file}")
torch.save(state_dict, output_model_file)
print(f"Model saved to {output_model_file}")
```
`FullyShardedDataParallelPlugin.save_model()` only saves the weights of the model, so we need to manually save other components.
### Expected behavior
Save the corresponding model config, tokenizer, and training arguments together with the trained model when using FSDP.
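A minimal sketch of the kind of workaround implied here (an editorial illustration assuming a `trainer`, `tokenizer`, and `output_dir` as in the report above; this is not the upstream fix):

```python
import os
import torch

trainer.save_model(output_dir)  # under FSDP this currently writes only the model weights

if trainer.args.should_save:
    # Save the pieces that FullyShardedDataParallelPlugin.save_model() skips.
    trainer.model.config.save_pretrained(output_dir)  # model config (unwrap first if still FSDP-wrapped)
    tokenizer.save_pretrained(output_dir)             # tokenizer files
    torch.save(trainer.args, os.path.join(output_dir, "training_args.bin"))  # training arguments
```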
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24208/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24207
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24207/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24207/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24207/events
|
https://github.com/huggingface/transformers/pull/24207
| 1,753,384,625 |
PR_kwDOCUB6oc5SzpJP
| 24,207 |
Add the number of `model` test failures to slack CI report
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The screenshot below shows the issue that motivates this PR\r\n\r\n<img width=\"506\" alt=\"Screenshot 2023-06-12 205537\" src=\"https://github.com/huggingface/transformers/assets/2521628/f6b322e2-a51e-48e9-a438-d1d431470637\">\r\n",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
# What does this PR do?
We decided to add a deepspeed (nightly version) CI job to past CI in #22393. `accelerate` is also installed from its `main` branch.
This makes the deepspeed CI job fail quite a lot of the time. Sometimes we need to wait for the DS team to provide a fix, and sometimes a fix is required from HF.
**Let's add information about the `number of model test failures` (i.e. not counting the deepspeed CI job's failures) to the Slack CI report**, so we have a number that indicates the progress on past CI and gives me a bit more sense of fulfillment 🙏 .
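For illustration, the aggregation amounts to something like the sketch below (the function and dict layout are hypothetical, not the report script's actual code):
```python
def count_model_test_failures(failures_by_job: dict) -> int:
    # Sketch: count failures across all jobs except the DeepSpeed one, so progress
    # on past CI stays visible even while DeepSpeed nightly is broken.
    return sum(n for job, n in failures_by_job.items() if "deepspeed" not in job.lower())
```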
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24207/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24207/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24207",
"html_url": "https://github.com/huggingface/transformers/pull/24207",
"diff_url": "https://github.com/huggingface/transformers/pull/24207.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24207.patch",
"merged_at": 1686598031000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24205
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24205/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24205/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24205/events
|
https://github.com/huggingface/transformers/pull/24205
| 1,753,098,474 |
PR_kwDOCUB6oc5SyqCK
| 24,205 |
Fix Debertav2 embed_proj
|
{
"login": "WissamAntoun",
"id": 44616226,
"node_id": "MDQ6VXNlcjQ0NjE2MjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/44616226?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WissamAntoun",
"html_url": "https://github.com/WissamAntoun",
"followers_url": "https://api.github.com/users/WissamAntoun/followers",
"following_url": "https://api.github.com/users/WissamAntoun/following{/other_user}",
"gists_url": "https://api.github.com/users/WissamAntoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WissamAntoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WissamAntoun/subscriptions",
"organizations_url": "https://api.github.com/users/WissamAntoun/orgs",
"repos_url": "https://api.github.com/users/WissamAntoun/repos",
"events_url": "https://api.github.com/users/WissamAntoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/WissamAntoun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @WissamAntoun 👋 \r\n\r\nIf you check the config documentation for deberta, you see that there is no `embedding_size` attribute. There seems to be something wrong with the serialization of `almanach/camemberta-base-generator`",
"@gante Hey!\r\n\r\nThe thing is `camemberta-base-generator` needs to have embedding_size different from the hidden size, since it was trained using ELECTRA style and the generator needs to be smaller without messing the the embedding_size.\r\n\r\nAlso the code had support for the different sizes and already had the projection layer. But since it wasn't used by anyone, the bug with the MLM task was affecting anyone.\r\n\r\nActually it's now consistent with the code in the official deberta repo https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/deberta/mlm.py#LL20C9-L20C84",
"@WissamAntoun makes sense -- since it was in the original implementation, I'll accept this PR 🤗 "
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes an issue where loading a model with different hidden_size and embedding_size (such as `almanach/camemberta-base-generator`) for masked language modeling won't work, due to a size mismatch in the output projection in both TF2 and PyTorch.
To replicate, simply try loading the MLM model with `DebertaV2ForMaskedLM`.
This colab shows that it now works https://colab.research.google.com/drive/1piUbXkmxNIhGdCWiN5Rx-s2-27FNwij7
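Conceptually, when `embedding_size != hidden_size` the MLM head has to project the encoder output back to the embedding width before tying it with the word-embedding matrix. A minimal illustrative sketch (the class and attribute names here are made up, not the model's actual ones):
```python
import torch.nn as nn


class TiedMLMHead(nn.Module):
    # Illustrative only: project the encoder output back to the embedding width so the
    # logits can be computed against the (vocab_size, embedding_size) embedding matrix.
    def __init__(self, hidden_size, embedding_size, word_embeddings: nn.Embedding):
        super().__init__()
        self.proj = nn.Linear(hidden_size, embedding_size) if hidden_size != embedding_size else nn.Identity()
        self.word_embeddings = word_embeddings

    def forward(self, hidden_states):
        hidden_states = self.proj(hidden_states)              # (..., embedding_size)
        return hidden_states @ self.word_embeddings.weight.T  # (..., vocab_size)
```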
Error from before:
<details>
<summary>Error</summary>
```python
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-1-049fda84e52f> in <cell line: 5>()
3 model_name = "almanach/camemberta-base-generator"
4 config = AutoConfig.from_pretrained(model_name)
----> 5 model = AutoModelForMaskedLM.from_pretrained(model_name,from_tf=True)
6 # model = AutoModelForMaskedLM.from_pretrained(model_name,config=config,from_tf=True)
7 tokenizer = AutoTokenizer.from_pretrained(model_name)
7 frames
/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
445 elif type(config) in cls._model_mapping.keys():
446 model_class = _get_model_class(config, cls._model_mapping)
--> 447 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
448 raise ValueError(
449 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1508 from .modeling_tf_pytorch_utils import load_tf2_checkpoint_in_pytorch_model
1509
-> 1510 model = load_tf2_checkpoint_in_pytorch_model(model, resolved_archive_file, allow_missing_keys=True)
1511 except ImportError:
1512 logger.error(
/usr/local/lib/python3.10/dist-packages/transformers/modeling_tf_pytorch_utils.py in load_tf2_checkpoint_in_pytorch_model(pt_model, tf_checkpoint_path, tf_inputs, allow_missing_keys)
316
317 if tf_inputs is not None:
--> 318 tf_model(tf_inputs, training=False) # Make sure model is built
319
320 load_tf_weights(tf_model, tf_checkpoint_path)
/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
/usr/local/lib/python3.10/dist-packages/transformers/modeling_tf_utils.py in run_call_with_unpacked_inputs(self, *args, **kwargs)
375 main_input = fn_args_and_kwargs.pop(main_input_name)
376 unpacked_inputs = input_processing(func, self.config, main_input, **fn_args_and_kwargs)
--> 377 return func(self, **unpacked_inputs)
378
379 # Keras enforces the first layer argument to be passed, and checks it through `inspect.getfullargspec()`. This
/usr/local/lib/python3.10/dist-packages/transformers/models/deberta_v2/modeling_tf_deberta_v2.py in call(self, input_ids, attention_mask, token_type_ids, position_ids, inputs_embeds, output_attentions, output_hidden_states, return_dict, labels, training, **kwargs)
1281 )
1282 sequence_output = outputs[0]
-> 1283 prediction_scores = self.mlm(sequence_output=sequence_output, training=training)
1284 loss = None if labels is None else self.hf_compute_loss(labels=labels, logits=prediction_scores)
1285
/usr/local/lib/python3.10/dist-packages/transformers/models/deberta_v2/modeling_tf_deberta_v2.py in call(self, sequence_output)
987
988 def call(self, sequence_output: tf.Tensor) -> tf.Tensor:
--> 989 prediction_scores = self.predictions(hidden_states=sequence_output)
990
991 return prediction_scores
/usr/local/lib/python3.10/dist-packages/transformers/models/deberta_v2/modeling_tf_deberta_v2.py in call(self, hidden_states)
973 seq_length = shape_list(hidden_states)[1]
974 hidden_states = tf.reshape(tensor=hidden_states, shape=[-1, self.hidden_size])
--> 975 hidden_states = tf.matmul(a=hidden_states, b=self.input_embeddings.weight, transpose_b=True)
976 hidden_states = tf.reshape(tensor=hidden_states, shape=[-1, seq_length, self.vocab_size])
977 hidden_states = tf.nn.bias_add(value=hidden_states, bias=self.bias)
InvalidArgumentError: Exception encountered when calling layer 'predictions' (type TFDebertaV2LMPredictionHead).
{{function_node __wrapped__MatMul_device_/job:localhost/replica:0/task:0/device:CPU:0}} Matrix size-incompatible: In[0]: [15,256], In[1]: [32008,768] [Op:MatMul]
Call arguments received by layer 'predictions' (type TFDebertaV2LMPredictionHead):
• hidden_states=tf.Tensor(shape=(3, 5, 256), dtype=float32)
```
</details>
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (cpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@kamalkraj @Rocketknight1 @ArthurZucker @younesbelkada @gante
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24205/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24205/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24205",
"html_url": "https://github.com/huggingface/transformers/pull/24205",
"diff_url": "https://github.com/huggingface/transformers/pull/24205.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24205.patch",
"merged_at": 1686759893000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24204
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24204/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24204/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24204/events
|
https://github.com/huggingface/transformers/pull/24204
| 1,753,048,236 |
PR_kwDOCUB6oc5Sye7J
| 24,204 |
Skip RWKV test in past CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
# What does this PR do?
In the past CI with torch 1.13 (and older), RWKV tests fail.
- The first failure is
```bash
test_model_parallelism
(line 114) RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
```
- this failure then affects more subsequent tests
- skipping this particular test will greatly reduce the number of failures.
- The docker environment of that past CI uses a base docker image shipped with `cuda 11.6`
- **In a docker image with cuda `11.8` but with `torch==1.13+cu116` installed, there is no failure.**
Let's not spend too much time identifying what exactly causes the failure here, and just skip RWKV tests on past CI with torch < 2.0.
The goal is just to have a clean past CI report.
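For reference, a version-gated skip of this kind typically looks like the sketch below (illustrative, not the exact decorator used in this PR):
```python
import unittest

import torch
from packaging import version


class RwkvPastCiTests(unittest.TestCase):
    @unittest.skipIf(
        version.parse(torch.__version__) < version.parse("2.0"),
        "RWKV tests fail on past CI images with torch < 2.0 (CUBLAS error)",
    )
    def test_model_parallelism(self):
        ...  # the actual test body lives in the common model tester
```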
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24204/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24204/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24204",
"html_url": "https://github.com/huggingface/transformers/pull/24204",
"diff_url": "https://github.com/huggingface/transformers/pull/24204.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24204.patch",
"merged_at": 1686586455000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24203
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24203/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24203/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24203/events
|
https://github.com/huggingface/transformers/pull/24203
| 1,753,044,020 |
PR_kwDOCUB6oc5SyeAg
| 24,203 |
Remove unnecessary aten::to overhead in llama
|
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Thanks @sgugger good catch, edited as suggested."
] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
As per the title, in `LlamaRotaryEmbedding` the [`cos_cached` and `sin_cached` buffers](https://github.com/huggingface/transformers/blob/08ae37c820395e91fc3aa8b801696de5002481d2/src/transformers/models/llama/modeling_llama.py#L94-L104) are not initialized in the right dtype, because `inv_freq` is always created in fp32 regardless of the default PyTorch dtype in use.
The result is that `cos_cached` and `sin_cached` do not obey `torch_dtype=torch.float16` or `torch_dtype=torch.float32`, resulting in unnecessary overheads when running in fp16:

I leave the `to()` in the `forward`, but it may be removed, WDYT? It is also fine as is, as it is now a no-op:

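A minimal sketch of the idea (simplified, not the exact diff): build the cached tables in the currently active default dtype so `torch_dtype=torch.float16` is respected and the per-forward `.to()` becomes a no-op:
```python
import torch
import torch.nn as nn


class RotaryEmbedding(nn.Module):
    # Simplified sketch: inv_freq stays in fp32 for numerical stability, but the cached
    # cos/sin tables are stored in the default dtype (e.g. fp16 when the model is loaded
    # with torch_dtype=torch.float16).
    def __init__(self, dim, max_position_embeddings=2048, base=10000):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        t = torch.arange(max_position_embeddings, dtype=torch.float32)
        freqs = torch.einsum("i,j->ij", t, inv_freq)
        emb = torch.cat((freqs, freqs), dim=-1)
        dtype = torch.get_default_dtype()
        self.register_buffer("cos_cached", emb.cos()[None, None, :, :].to(dtype), persistent=False)
        self.register_buffer("sin_cached", emb.sin()[None, None, :, :].to(dtype), persistent=False)

    def forward(self, x, seq_len):
        # Already stored in the right dtype, so this cast is a no-op in the common case.
        return (
            self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
            self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
        )
```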
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24203/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24203",
"html_url": "https://github.com/huggingface/transformers/pull/24203",
"diff_url": "https://github.com/huggingface/transformers/pull/24203.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24203.patch",
"merged_at": 1686586685000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24202
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24202/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24202/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24202/events
|
https://github.com/huggingface/transformers/pull/24202
| 1,753,041,208 |
PR_kwDOCUB6oc5SydZW
| 24,202 |
Remove unnecessary aten::to overhead in llama
|
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"woops wrong branch",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24202). All of your documentation changes will be reflected on that endpoint."
] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
As per the title, in `LlamaRotaryEmbedding` the [`cos_cached` and `sin_cached` buffers](https://github.com/huggingface/transformers/blob/08ae37c820395e91fc3aa8b801696de5002481d2/src/transformers/models/llama/modeling_llama.py#L94-L104) are not initialized in the right dtype, because `inv_freq` is always created in fp32 regardless of the default PyTorch dtype in use.
The result is that `cos_cached` and `sin_cached` do not obey `torch_dtype=torch.float16` or `torch_dtype=torch.float32`, resulting in unnecessary overheads:

I leave the `to()` in the `forward`, but it may be removed, WDYT? It is also fine as is, as it is now a no-op:

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24202/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24202",
"html_url": "https://github.com/huggingface/transformers/pull/24202",
"diff_url": "https://github.com/huggingface/transformers/pull/24202.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24202.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24201
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24201/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24201/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24201/events
|
https://github.com/huggingface/transformers/pull/24201
| 1,753,007,914 |
PR_kwDOCUB6oc5SyWBV
| 24,201 |
Finish dataloader integration
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2155169140,
"node_id": "MDU6TGFiZWwyMTU1MTY5MTQw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/trainer",
"name": "trainer",
"color": "2ef289",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
Follow-up to https://github.com/huggingface/transformers/pull/24028, which removes the TPU-specific dataloader bits.
The `MpDeviceLoader` already does what `Trainer` was doing before; it is just wrapped:
```python
class MpDeviceLoader(object):
"""Wraps an existing PyTorch DataLoader with background data upload.
This class should only be using with multi-processing data parallelism.
Args:
loader (:class:`torch.utils.data.DataLoader`): The PyTorch DataLoader to be
wrapped.
device (`torch.device`...): The device where the data has to be sent.
kwargs: Named arguments for the `ParallelLoader` constructor.
"""
def __init__(self, loader, device, **kwargs):
self._loader = loader
self._device = device
self._parallel_loader_kwargs = kwargs
def __iter__(self):
parallel_loader = ParallelLoader(self._loader, [self._device],
**self._parallel_loader_kwargs)
return parallel_loader.per_device_loader(self._device)
def __len__(self):
return len(self._loader)
```
So the native Accelerate integration will work just fine
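For context, a hypothetical usage sketch of this wrapper (requires a TPU environment with `torch_xla` installed; `train_dataloader` stands in for any existing `torch.utils.data.DataLoader`):
```python
# Hypothetical usage sketch, not part of this PR's diff.
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl

device = xm.xla_device()
tpu_loader = pl.MpDeviceLoader(train_dataloader, device)  # background upload to the TPU
for batch in tpu_loader:
    ...  # batches arrive already moved to `device`
```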
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24201/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24201",
"html_url": "https://github.com/huggingface/transformers/pull/24201",
"diff_url": "https://github.com/huggingface/transformers/pull/24201.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24201.patch",
"merged_at": 1686590778000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24200
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24200/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24200/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24200/events
|
https://github.com/huggingface/transformers/pull/24200
| 1,752,906,344 |
PR_kwDOCUB6oc5SyAkI
| 24,200 |
Fix `_load_pretrained_model`
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This line what something I added in my previous PR thinking that we forgot to tie the weights. But I guess it was done intentionally as I see that one test failed and generated garbage value. Here the error that we get from the test: \r\n```\r\ntests/models/marian/test_modeling_marian.py:433: in _assert_generated_batch_equal_expected\r\n self.assertListEqual(self.expected_text, generated_words)\r\nE AssertionError: Lists differ: ['I like to read books', 'I like watching football'] != ['obliterat obliterat obliterat obliterat o[1345 chars]ɰɰɰ']\r\nE \r\nE First differing element 0:\r\nE 'I like to read books'\r\nE 'obliterat obliterat obliterat obliterat o[1214 chars]erat'\r\nE \r\nE Diff is 1556 characters long. Set self.maxDiff to None to see it.\r\n```\r\n\r\n\r\n\r\n> Could you explain a bit more why we need to remove this line? Thanks!\r\n\r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"BTW, let's make the PR title a bit more precise, sth like `fix _load_pretrained_model` or anything you think more precise, @SunMarc .\r\nThanks!",
"Yeah, I will explore a little bit more why we have this weird behavior before merging."
] | 1,686 | 1,686 | 1,686 |
MEMBER
| null |
# What does this PR do?
Fixes the following test from my earlier PR "Add check for tied parameters" (#24029):
`RUN_SLOW=1 python3 -m pytest -v tests/models/marian/test_modeling_marian.py::TestMarian_FI_EN_V2::test_batch_generation_en_fr`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24200/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24200",
"html_url": "https://github.com/huggingface/transformers/pull/24200",
"diff_url": "https://github.com/huggingface/transformers/pull/24200.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24200.patch",
"merged_at": 1686583867000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24199
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24199/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24199/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24199/events
|
https://github.com/huggingface/transformers/pull/24199
| 1,752,860,014 |
PR_kwDOCUB6oc5Sx2Po
| 24,199 |
Update `(TF)SamModelIntegrationTest`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Good catch, and sorry! I'll tell GPT-4 to watch that one in future, lol"
] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
# What does this PR do?
- Add `require_tf`: otherwise, past CI's torch job (with only torch installed) will fail for this test class
- Add the `TF` prefix: as per the usual convention
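Roughly, the change amounts to the following (a sketch of the convention, not the literal diff):
```python
import unittest

from transformers.testing_utils import require_tf


@require_tf
class TFSamModelIntegrationTest(unittest.TestCase):
    # `require_tf` makes torch-only CI jobs skip this class instead of failing it,
    # and the `TF` prefix follows the usual naming convention for TF test classes.
    ...
```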
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24199/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24199",
"html_url": "https://github.com/huggingface/transformers/pull/24199",
"diff_url": "https://github.com/huggingface/transformers/pull/24199.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24199.patch",
"merged_at": 1686659295000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24198
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24198/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24198/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24198/events
|
https://github.com/huggingface/transformers/pull/24198
| 1,752,820,904 |
PR_kwDOCUB6oc5SxtfP
| 24,198 |
Generate: detect special architectures when loaded from PEFT
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24198). All of your documentation changes will be reflected on that endpoint."
] | 1,686 | 1,686 | 1,686 |
MEMBER
| null |
# What does this PR do?
Fixes #23686
As identified in [this comment](https://github.com/huggingface/transformers/issues/23686#issuecomment-1587285715), a PEFT-loaded BLOOM can't be used as an assistant with assisted generation.
BLOOM (and GPTBigCode) need special handling due to their different cache API, and the architecture detection code was incompatible with PEFT models. This PR adds the logic to detect these special architectures when loaded with PEFT.
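A sketch of what such detection can look like (illustrative only, not the exact code added here): unwrap the PEFT adapter to reach the underlying transformers model before checking its architecture.
```python
def uses_non_standard_cache(model) -> bool:
    # Illustrative sketch: PEFT wraps the original model, so the architecture check
    # has to look through the wrapper(s) to reach the underlying transformers model.
    base = getattr(model, "base_model", model)   # e.g. PeftModel -> LoraModel
    base = getattr(base, "model", base)          # e.g. LoraModel -> transformers model
    architectures = getattr(base.config, "architectures", None) or []
    return any("bloom" in a.lower() or "gptbigcode" in a.lower() for a in architectures)
```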
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24198/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24198",
"html_url": "https://github.com/huggingface/transformers/pull/24198",
"diff_url": "https://github.com/huggingface/transformers/pull/24198.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24198.patch",
"merged_at": 1686582381000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24197
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24197/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24197/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24197/events
|
https://github.com/huggingface/transformers/pull/24197
| 1,752,782,022 |
PR_kwDOCUB6oc5SxlBZ
| 24,197 |
Fix steps bugs in no trainer examples
|
{
"login": "Ethan-yt",
"id": 9592150,
"node_id": "MDQ6VXNlcjk1OTIxNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9592150?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ethan-yt",
"html_url": "https://github.com/Ethan-yt",
"followers_url": "https://api.github.com/users/Ethan-yt/followers",
"following_url": "https://api.github.com/users/Ethan-yt/following{/other_user}",
"gists_url": "https://api.github.com/users/Ethan-yt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ethan-yt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ethan-yt/subscriptions",
"organizations_url": "https://api.github.com/users/Ethan-yt/orgs",
"repos_url": "https://api.github.com/users/Ethan-yt/repos",
"events_url": "https://api.github.com/users/Ethan-yt/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ethan-yt/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #24186
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24197/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24197/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24197",
"html_url": "https://github.com/huggingface/transformers/pull/24197",
"diff_url": "https://github.com/huggingface/transformers/pull/24197.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24197.patch",
"merged_at": 1686584996000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24195
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24195/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24195/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24195/events
|
https://github.com/huggingface/transformers/pull/24195
| 1,752,654,509 |
PR_kwDOCUB6oc5SxJAW
| 24,195 |
Fix device issue in `OpenLlamaModelTest::test_model_parallelism`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
# What does this PR do?
See the comments in the changes.
Currently, CI has the following failure:
```bash
src/transformers/models/open_llama/modeling_open_llama.py:740: in forward
logits = torch.einsum("blh,vh->blv", hidden_states, self.model.embed_tokens.weight)
...
...
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument mat2 in method wrapper_CUDA_bmm)
```
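The usual fix for this class of error is to make sure both operands live on the same device before the matmul. A minimal sketch (not necessarily the exact change in this PR):
```python
import torch


def tied_lm_logits(hidden_states: torch.Tensor, embed_weight: torch.Tensor) -> torch.Tensor:
    # Sketch: move the tied embedding weight onto the device of the hidden states first,
    # so the einsum does not mix cuda:0 and cuda:1 tensors under model parallelism.
    embed_weight = embed_weight.to(hidden_states.device)
    return torch.einsum("blh,vh->blv", hidden_states, embed_weight)
```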
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24195/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24195",
"html_url": "https://github.com/huggingface/transformers/pull/24195",
"diff_url": "https://github.com/huggingface/transformers/pull/24195.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24195.patch",
"merged_at": 1686576087000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24194
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24194/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24194/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24194/events
|
https://github.com/huggingface/transformers/issues/24194
| 1,752,515,345 |
I_kwDOCUB6oc5odUMR
| 24,194 |
ONNX model conversion error
|
{
"login": "kobiche",
"id": 56874660,
"node_id": "MDQ6VXNlcjU2ODc0NjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/56874660?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kobiche",
"html_url": "https://github.com/kobiche",
"followers_url": "https://api.github.com/users/kobiche/followers",
"following_url": "https://api.github.com/users/kobiche/following{/other_user}",
"gists_url": "https://api.github.com/users/kobiche/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kobiche/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kobiche/subscriptions",
"organizations_url": "https://api.github.com/users/kobiche/orgs",
"repos_url": "https://api.github.com/users/kobiche/repos",
"events_url": "https://api.github.com/users/kobiche/events{/privacy}",
"received_events_url": "https://api.github.com/users/kobiche/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @michaelbenayoun ",
"I ran this script and didn't get any error, maybe because I don't have a GPU. I put a breakpoint before the relevant line and found this:\r\n\r\n```\r\n(Pdb) self\r\nattention_scores defined in (%attention_scores : Float(*, *, *, *, strides=[768, 64, 8, 1], requires_grad=0, device=cpu) = onnx::Reshape(%612, %629), scope: transformers.models.deberta_v2.modeling_deberta_v2.DebertaV2Model::/transformers.models.deberta_v2.modeling_deberta_v2.DebertaV2Encoder::encoder/transformers.models.deberta_v2.modeling_deberta_v2.DebertaV2Layer::layer.0/transformers.models.deberta_v2.modeling_deberta_v2.DebertaV2Attention::attention/transformers.models.deberta_v2.modeling_deberta_v2.DisentangledSelfAttention::self # /home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:735:0\r\n)\r\n(Pdb) self.type().dtype()\r\ntorch.float32\r\n(Pdb) type(self.type()) is torch._C.TensorType is torch.TensorType\r\nTrue\r\n(Pdb) torch._C\r\n<module 'torch._C' from '/home/alex/work/transformers/venv/lib/python3.10/site-packages/torch/_C.cpython-310-x86_64-linux-gnu.so'>\r\n```\r\n\r\n<details>\r\n<summary>Click to show all the warnings I got which may or may not be relevant</summary>\r\n2023-06-20 18:59:48.309993: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory\r\n2023-06-20 18:59:48.310119: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory\r\n2023-06-20 18:59:48.310134: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\r\nSpecial tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\r\n/home/alex/work/transformers/src/transformers/convert_slow_tokenizer.py:457: UserWarning: The sentencepiece tokenizer that you are converting to a fast tokenizer uses the byte fallback option which is not implemented in the fast tokenizers. In practice this means that the fast version of the tokenizer can produce unknown tokens whereas the sentencepiece version would have converted these unknown tokens into a sequence of byte tokens matching the original piece of text.\r\n warnings.warn(\r\nSpecial tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\r\n/home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:561: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.\r\n torch.tensor(mid - 1).type_as(relative_pos),\r\n/home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:565: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. 
In any other case, this might cause the trace to be incorrect.\r\n torch.ceil(torch.log(abs_pos / mid) / torch.log(torch.tensor((max_position - 1) / mid)) * (mid - 1)) + mid\r\n/home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:724: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.\r\n scale = torch.sqrt(torch.tensor(query_layer.size(-1), dtype=torch.float) * scale_factor)\r\n/home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:724: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).\r\n scale = torch.sqrt(torch.tensor(query_layer.size(-1), dtype=torch.float) * scale_factor)\r\n/home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:803: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.\r\n scale = torch.sqrt(torch.tensor(pos_key_layer.size(-1), dtype=torch.float) * scale_factor)\r\n/home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:803: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).\r\n scale = torch.sqrt(torch.tensor(pos_key_layer.size(-1), dtype=torch.float) * scale_factor)\r\n/home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:815: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.\r\n scale = torch.sqrt(torch.tensor(pos_query_layer.size(-1), dtype=torch.float) * scale_factor)\r\n/home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:815: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).\r\n scale = torch.sqrt(torch.tensor(pos_query_layer.size(-1), dtype=torch.float) * scale_factor)\r\n/home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:816: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if key_layer.size(-2) != query_layer.size(-2):\r\n/home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:112: TracerWarning: torch.tensor results are registered as constants in the trace. 
You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.\r\n output = input.masked_fill(rmask, torch.tensor(torch.finfo(input.dtype).min))\r\n</details>",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,690 | 1,690 |
NONE
| null |
### System Info
- `transformers` version: 4.30.1
- Platform: Linux-5.4.0-126-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 1.10.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
I additionally have:
- onnx==1.12.0
- protobuf==3.19.6
### Who can help?
@lewtun
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This is a follow-up to #19320, where I tried to export the mdeberta model to ONNX format.
I saw that there is an issue with similar model configurations (#16841), but it still does not work for me.
Could someone please review this?
This is a sample code
```
from collections import OrderedDict
from typing import Mapping
from pathlib import Path
from transformers.onnx import export
from transformers.onnx import OnnxConfig
from transformers import AutoTokenizer, AutoModel, AutoConfig
config = AutoConfig.from_pretrained('microsoft/mdeberta-v3-base')
base_model = AutoModel.from_pretrained('microsoft/mdeberta-v3-base')
tokenizer = AutoTokenizer.from_pretrained('microsoft/mdeberta-v3-base')
class DebertaConfig(OnnxConfig):
@property
def inputs(self) -> Mapping[str, Mapping[int, str]]:
return OrderedDict(
[
("input_ids", {0: "batch", 1: 'sequ_length'}),
("attention_mask", {0: "batch", 1: 'sequ_length'}),
("token_lengths", {0: 'sent-count'}),
("word_ids", {0: "batch", 1: 'sequ_length'}),
]
)
@property
def outputs(self) -> Mapping[str, Mapping[int, str]]:
return OrderedDict(
[
("token_embeddings", {0: 'sent-count', 1: 'max_token_count', 2: 'token_embedding_size'}),
]
)
onnx_config = DebertaConfig(config)
onnx_path = Path('mdeberta.onxx')
onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, 13, onnx_path)
```
The code raises the following exception:
> File "/.../site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 136, in symbolic
> g, self, r_mask, g.op("Constant", value_t=torch.tensor(torch.finfo(self.type().dtype()).min))
> AttributeError: 'torch._C.TensorType' object has no attribute 'dtype'
### Expected behavior
Serialized model in onnx format
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24194/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24193
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24193/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24193/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24193/events
|
https://github.com/huggingface/transformers/pull/24193
| 1,752,434,386 |
PR_kwDOCUB6oc5SwY1z
| 24,193 |
Update `GPTNeoXLanguageGenerationTest`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24193). All of your documentation changes will be reflected on that endpoint."
] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
# What does this PR do?
Due to the changes in [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped/commits/main), we have to update the expected output value, even though the new value doesn't look great 😭
(The previous revisions seem to have disappeared ... I guess they rewrote the commit history on their Hub repo.)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24193/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24193",
"html_url": "https://github.com/huggingface/transformers/pull/24193",
"diff_url": "https://github.com/huggingface/transformers/pull/24193.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24193.patch",
"merged_at": 1686577033000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24190
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24190/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24190/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24190/events
|
https://github.com/huggingface/transformers/pull/24190
| 1,752,299,330 |
PR_kwDOCUB6oc5Sv7P0
| 24,190 |
Fix `Wav2Vec2` CI OOM
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
# What does this PR do?
After #23813, we get an OOM for `tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelIntegrationTest::test_wav2vec2_with_lm_invalid_pool` (when running the whole `wav2vec2` test suite).
This PR just does some cleanup, and there is no more OOM.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24190/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24190",
"html_url": "https://github.com/huggingface/transformers/pull/24190",
"diff_url": "https://github.com/huggingface/transformers/pull/24190.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24190.patch",
"merged_at": 1686562745000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24189
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24189/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24189/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24189/events
|
https://github.com/huggingface/transformers/issues/24189
| 1,752,295,728 |
I_kwDOCUB6oc5ocekw
| 24,189 |
Problems while Running ImageGPT
|
{
"login": "chinge55",
"id": 33897366,
"node_id": "MDQ6VXNlcjMzODk3MzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/33897366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chinge55",
"html_url": "https://github.com/chinge55",
"followers_url": "https://api.github.com/users/chinge55/followers",
"following_url": "https://api.github.com/users/chinge55/following{/other_user}",
"gists_url": "https://api.github.com/users/chinge55/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chinge55/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chinge55/subscriptions",
"organizations_url": "https://api.github.com/users/chinge55/orgs",
"repos_url": "https://api.github.com/users/chinge55/repos",
"events_url": "https://api.github.com/users/chinge55/events{/privacy}",
"received_events_url": "https://api.github.com/users/chinge55/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@chinge55 Thanks for reporting! Indeed, the example was assuming the clusters were a numpy array, but were stored as a list of lists in the image processor. I've opened a PR to resolve "
] | 1,686 | 1,686 | 1,686 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Linux-4.15.0-197-generic-x86_64-with-glibc2.27
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I tried to run the code from the official example:
https://huggingface.co/docs/transformers/model_doc/imagegpt#transformers.ImageGPTForCausalImageModeling
```python
from transformers import AutoImageProcessor, ImageGPTForCausalImageModeling
import torch
import matplotlib.pyplot as plt
import numpy as np
image_processor = AutoImageProcessor.from_pretrained("openai/imagegpt-small")
model = ImageGPTForCausalImageModeling.from_pretrained("openai/imagegpt-small")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
# unconditional generation of 8 images
batch_size = 8
context = torch.full((batch_size, 1), model.config.vocab_size - 1) # initialize with SOS token
context = torch.tensor(context).to(device)
output = model.generate(
input_ids=context, max_length=model.config.n_positions + 1, temperature=1.0, do_sample=True, top_k=40
)
clusters = image_processor.clusters
height = image_processor.size["height"]
width = image_processor.size["width"]
samples = output[:, 1:].cpu().detach().numpy()
#Error line below
samples_img = [
np.reshape(np.rint(127.5 * (clusters[s] + 1.0)), [height, width, 3]).astype(np.uint8) for s in samples
] # convert color cluster tokens back to pixels
f, axes = plt.subplots(1, batch_size, dpi=300)
for img, ax in zip(samples_img, axes):
ax.axis("off")
ax.imshow(img)
```
The error occurs on the line `samples_img = [...]`, at the `clusters[s]` part.
This is because:
`samples.shape` is `(8, 1024)`.
At that point, `s.shape` is `(1024,)`.
So, `clusters[s]` cannot index properly.
**Error Message**: TypeError: only integer scalar arrays can be converted to a scalar index
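For reference, a minimal workaround sketch (assuming, as noted in the comments above, that the failure is only because `image_processor.clusters` is stored as a plain list of lists rather than a NumPy array):
```python
import numpy as np

# Convert the color clusters to a 2-D NumPy array so that fancy indexing with
# the sampled token ids (shape (1024,)) works as the example intends.
clusters = np.asarray(image_processor.clusters)

samples_img = [
    np.reshape(np.rint(127.5 * (clusters[s] + 1.0)), [height, width, 3]).astype(np.uint8)
    for s in samples
]  # convert color cluster tokens back to pixels
```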
### Expected behavior
It should plot 8 different predictions from the ImageGPT Model.

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24189/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24189/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24188
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24188/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24188/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24188/events
|
https://github.com/huggingface/transformers/pull/24188
| 1,752,170,173 |
PR_kwDOCUB6oc5Sve4i
| 24,188 |
Update `WhisperForAudioClassification` doc example
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
# What does this PR do?
After the config file change in Hub commit `a7a63ecc2bd1015783dead844fced2af7531edd2` on `sanchit-gandhi/whisper-medium-fleurs-lang-id`, the doc example for `WhisperForAudioClassification` has to be updated.
Currently the test fails due to a different expected output value.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24188/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24188",
"html_url": "https://github.com/huggingface/transformers/pull/24188",
"diff_url": "https://github.com/huggingface/transformers/pull/24188.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24188.patch",
"merged_at": 1686589832000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24187
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24187/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24187/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24187/events
|
https://github.com/huggingface/transformers/pull/24187
| 1,752,160,694 |
PR_kwDOCUB6oc5Svczu
| 24,187 |
Fix push to hub
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
Previous PR #23920 wasn't correct; line 715 should not have been included.
This PR fixes that.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24187/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24187",
"html_url": "https://github.com/huggingface/transformers/pull/24187",
"diff_url": "https://github.com/huggingface/transformers/pull/24187.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24187.patch",
"merged_at": 1686574270000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24186
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24186/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24186/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24186/events
|
https://github.com/huggingface/transformers/issues/24186
| 1,752,097,059 |
I_kwDOCUB6oc5obuEj
| 24,186 |
In examples, complete steps not correct when load from checkpoint
|
{
"login": "Ethan-yt",
"id": 9592150,
"node_id": "MDQ6VXNlcjk1OTIxNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9592150?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ethan-yt",
"html_url": "https://github.com/Ethan-yt",
"followers_url": "https://api.github.com/users/Ethan-yt/followers",
"following_url": "https://api.github.com/users/Ethan-yt/following{/other_user}",
"gists_url": "https://api.github.com/users/Ethan-yt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ethan-yt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ethan-yt/subscriptions",
"organizations_url": "https://api.github.com/users/Ethan-yt/orgs",
"repos_url": "https://api.github.com/users/Ethan-yt/repos",
"events_url": "https://api.github.com/users/Ethan-yt/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ethan-yt/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Would you like to make a PR with a fix?",
"> Would you like to make a PR with a fix?\r\n\r\n\r\nDone!\r\nhttps://github.com/huggingface/transformers/pull/24197"
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
### System Info
https://github.com/huggingface/transformers/blob/8f093fb799246f7dd9104ff44728da0c53a9f67a/examples/pytorch/language-modeling/run_clm_no_trainer.py#L575
Should be:
```python
completed_steps = resume_step // args.gradient_accumulation_steps
```
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Enable gradient accumulation.
2. Save a checkpoint.
3. Load from it.
### Expected behavior
`completed_steps` should be divided by the number of gradient accumulation steps.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24186/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24185
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24185/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24185/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24185/events
|
https://github.com/huggingface/transformers/issues/24185
| 1,752,022,937 |
I_kwDOCUB6oc5obb-Z
| 24,185 |
GPT2 jit trace ERROR
|
{
"login": "HLearning",
"id": 11738124,
"node_id": "MDQ6VXNlcjExNzM4MTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/11738124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HLearning",
"html_url": "https://github.com/HLearning",
"followers_url": "https://api.github.com/users/HLearning/followers",
"following_url": "https://api.github.com/users/HLearning/following{/other_user}",
"gists_url": "https://api.github.com/users/HLearning/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HLearning/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HLearning/subscriptions",
"organizations_url": "https://api.github.com/users/HLearning/orgs",
"repos_url": "https://api.github.com/users/HLearning/repos",
"events_url": "https://api.github.com/users/HLearning/events{/privacy}",
"received_events_url": "https://api.github.com/users/HLearning/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"input_tuple = (inputs['input_ids'], inputs['attention_mask'])\r\n\r\nupdata\r\n\r\ninput_tuple = inputs['input_ids']"
] | 1,686 | 1,686 | 1,686 |
NONE
| null |
### System Info
version
transformers: 4.29.1
pytorch: 2.0.0+cu117
python: 3.8.8
### Who can help?
@ArthurZucker @youn
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
code:
```python
from transformers import GPT2Tokenizer, GPT2Model
import torch
import transformers
print(transformers.__version__)
print(torch.__version__)
model_name = 'gpt2'
input_text = ["nice to meet you " * 63 + "hello gpt."]
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
inputs = tokenizer(input_text, return_tensors="pt")
input_tuple = (inputs['input_ids'], inputs['attention_mask'])
model = GPT2Model.from_pretrained(model_name, torchscript=True).eval()
traced_model = torch.jit.trace(model, input_tuple)
print(traced_model.graph)
```
ERROR:
```python
Traceback (most recent call last):
File "test_gpt.py", line 17, in <module>
traced_model = torch.jit.trace(model, input_tuple)
File "/home/hjl/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 794, in trace
return trace_module(
File "/home/hjl/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 1056, in trace_module
module._c._create_method_from_trace(
File "/home/hjl/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hjl/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1488, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/hjl/.local/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 800, in forward
past_length = past_key_values[0][0].size(-2)
IndexError: Dimension specified as -2 but tensor has no dimensions
```
### Expected behavior
torch.jit.trace(gpt2model)
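For reference, a minimal sketch that traces successfully, following the suggestion in the comment above: `GPT2Model.forward` takes `past_key_values` as its second positional argument, so passing `attention_mask` positionally is what makes the trace fail at `past_key_values[0][0].size(-2)`; tracing with `input_ids` only avoids this.
```python
from transformers import GPT2Tokenizer, GPT2Model
import torch

model_name = "gpt2"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
inputs = tokenizer(["nice to meet you " * 63 + "hello gpt."], return_tensors="pt")

model = GPT2Model.from_pretrained(model_name, torchscript=True).eval()

# Trace with input_ids only; a positional attention_mask would be bound to
# past_key_values (the second positional parameter of GPT2Model.forward).
traced_model = torch.jit.trace(model, (inputs["input_ids"],))
print(traced_model.graph)
```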
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24185/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24184
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24184/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24184/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24184/events
|
https://github.com/huggingface/transformers/pull/24184
| 1,751,856,039 |
PR_kwDOCUB6oc5SubBK
| 24,184 |
typo: fix typos in CONTRIBUTING.md and deepspeed.mdx
|
{
"login": "zsj9509",
"id": 28675090,
"node_id": "MDQ6VXNlcjI4Njc1MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/28675090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zsj9509",
"html_url": "https://github.com/zsj9509",
"followers_url": "https://api.github.com/users/zsj9509/followers",
"following_url": "https://api.github.com/users/zsj9509/following{/other_user}",
"gists_url": "https://api.github.com/users/zsj9509/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zsj9509/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zsj9509/subscriptions",
"organizations_url": "https://api.github.com/users/zsj9509/orgs",
"repos_url": "https://api.github.com/users/zsj9509/repos",
"events_url": "https://api.github.com/users/zsj9509/events{/privacy}",
"received_events_url": "https://api.github.com/users/zsj9509/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24184). All of your documentation changes will be reflected on that endpoint."
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
Fix the following two issues:
+ A missing line break after the third item in the Pull Request Checklist section of CONTRIBUTING.md.
+ A typo in the deepspeed.mdx file.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24184/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24184",
"html_url": "https://github.com/huggingface/transformers/pull/24184",
"diff_url": "https://github.com/huggingface/transformers/pull/24184.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24184.patch",
"merged_at": 1686581038000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24183
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24183/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24183/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24183/events
|
https://github.com/huggingface/transformers/issues/24183
| 1,751,829,304 |
I_kwDOCUB6oc5oass4
| 24,183 |
Question in load datasets of train seq2seq model
|
{
"login": "xyx361100238",
"id": 19569322,
"node_id": "MDQ6VXNlcjE5NTY5MzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/19569322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyx361100238",
"html_url": "https://github.com/xyx361100238",
"followers_url": "https://api.github.com/users/xyx361100238/followers",
"following_url": "https://api.github.com/users/xyx361100238/following{/other_user}",
"gists_url": "https://api.github.com/users/xyx361100238/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xyx361100238/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyx361100238/subscriptions",
"organizations_url": "https://api.github.com/users/xyx361100238/orgs",
"repos_url": "https://api.github.com/users/xyx361100238/repos",
"events_url": "https://api.github.com/users/xyx361100238/events{/privacy}",
"received_events_url": "https://api.github.com/users/xyx361100238/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I think this is just CPU workers spinning up, you can verify by asking on the datasets repo where you can get dedicated datasets help: https://github.com/huggingface/datasets",
"Yes,I have already asking on the datasets repo but no response yet",
"Feel free to gently ping `mariosasko` (he's usually very responsive)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,689 | 1,689 |
NONE
| null |
### System Info
- `transformers` version: 4.28.0.dev0
- Platform: Linux-5.4.0-149-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.16
- Huggingface_hub version: 0.13.2
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sanchit-gandhi @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I changed the official example script's data loading to my own code:
```python
if data_args.data_path is not None:
    print(data_args.data_path)
    raw_datasets = load_dataset("audiofolder", data_dir=data_args.data_path, cache_dir=model_args.cache_dir)
    raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16000))
    raw_datasets = raw_datasets["train"].train_test_split(test_size=0.005, shuffle=True)
```
The processing steps are as follows:
1. Resolving data files
2. Downloading data files
3. Computing checksums
4. Downloading data files
5. Extracting data files
6. Generating train split
What causes step 6 (Generating train split) to be so much slower than the others?

### Expected behavior
Loading should be fast; at least 1000+ examples/s is needed:
> Generating train split: 388773 examples [32:24:45, 1574.04 examples/s]
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24183/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24182
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24182/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24182/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24182/events
|
https://github.com/huggingface/transformers/issues/24182
| 1,751,816,538 |
I_kwDOCUB6oc5oapla
| 24,182 |
About finetuning Whisper by multi-GPUs
|
{
"login": "LYPinASR",
"id": 112866899,
"node_id": "U_kgDOBro2Uw",
"avatar_url": "https://avatars.githubusercontent.com/u/112866899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LYPinASR",
"html_url": "https://github.com/LYPinASR",
"followers_url": "https://api.github.com/users/LYPinASR/followers",
"following_url": "https://api.github.com/users/LYPinASR/following{/other_user}",
"gists_url": "https://api.github.com/users/LYPinASR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LYPinASR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LYPinASR/subscriptions",
"organizations_url": "https://api.github.com/users/LYPinASR/orgs",
"repos_url": "https://api.github.com/users/LYPinASR/repos",
"events_url": "https://api.github.com/users/LYPinASR/events{/privacy}",
"received_events_url": "https://api.github.com/users/LYPinASR/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @LYPinASR, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,689 | 1,689 |
NONE
| null |
### Feature request
I want to fine-tune Whisper on 4 GPUs; what should I do?
### Motivation
Fine-tuning Whisper on multiple GPUs.
### Your contribution
Fine-tuning Whisper on multiple GPUs.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24182/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24181
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24181/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24181/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24181/events
|
https://github.com/huggingface/transformers/pull/24181
| 1,751,772,561 |
PR_kwDOCUB6oc5SuI93
| 24,181 |
Update README_zh-hans.md
|
{
"login": "CooperFu",
"id": 9016149,
"node_id": "MDQ6VXNlcjkwMTYxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9016149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CooperFu",
"html_url": "https://github.com/CooperFu",
"followers_url": "https://api.github.com/users/CooperFu/followers",
"following_url": "https://api.github.com/users/CooperFu/following{/other_user}",
"gists_url": "https://api.github.com/users/CooperFu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CooperFu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CooperFu/subscriptions",
"organizations_url": "https://api.github.com/users/CooperFu/orgs",
"repos_url": "https://api.github.com/users/CooperFu/repos",
"events_url": "https://api.github.com/users/CooperFu/events{/privacy}",
"received_events_url": "https://api.github.com/users/CooperFu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,687 | 1,686 |
CONTRIBUTOR
| null |
update document link
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24181/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24181",
"html_url": "https://github.com/huggingface/transformers/pull/24181",
"diff_url": "https://github.com/huggingface/transformers/pull/24181.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24181.patch",
"merged_at": 1686833441000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24180
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24180/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24180/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24180/events
|
https://github.com/huggingface/transformers/issues/24180
| 1,751,757,724 |
I_kwDOCUB6oc5oabOc
| 24,180 |
Bring back Transformer-encoder, LSTM-decoder models
|
{
"login": "Ubadub",
"id": 1286898,
"node_id": "MDQ6VXNlcjEyODY4OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1286898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ubadub",
"html_url": "https://github.com/Ubadub",
"followers_url": "https://api.github.com/users/Ubadub/followers",
"following_url": "https://api.github.com/users/Ubadub/following{/other_user}",
"gists_url": "https://api.github.com/users/Ubadub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ubadub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ubadub/subscriptions",
"organizations_url": "https://api.github.com/users/Ubadub/orgs",
"repos_url": "https://api.github.com/users/Ubadub/repos",
"events_url": "https://api.github.com/users/Ubadub/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ubadub/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @Ubadub, \r\n\r\nThere's many reasons a model might be removed or code deleted. Every piece of code in the library requires maintenance from us, and so it's not possible to support everything. We decide what to keep, remove or deprecate based on maintenance burden and how impactful it is for the community. The linked PRs and issues are from over 3 years ago, and to the best of my knowledge, the LSTM decoder models haven't been requested by other users since their deletion. \r\n\r\nThe great thing about open source is that anyone can build upon this library! If you're interested in adding this capability, you're welcome to develop in your own fork and share it here or on the forums for other users to find and use. It's now also possible to [share models directly on the hub](https://huggingface.co/docs/transformers/custom_models). ",
"Hi @amyeroberts ,\n\nThanks for your reply!\n\n> The great thing about open source is that anyone can build upon this library! If you're interested in adding this capability, you're welcome to develop in your own fork and share it here or on the forums for other users to find and use.\n\nAs mentioned, I have been looking into making the requisite changes myself with the hope of opening a PR for it. I opened this ticket first partly to see if others might express support/interest, and also in case there was some obvious or overt reason to not include such a model that might come up.\n\n> It's now also possible to [share models directly on the hub](https://huggingface.co/docs/transformers/custom_models). \n\nAlso a helpful suggestion, thank you.",
"@Ubadub At the moment, it's unlikely we'll merge this into the library unless we see a lot of demand from the community (which we'll measure through 👍's on this issue). In this case, it's still possible to develop on a fork and share your work by linking to it here, but it's not necessary to open a PR. \r\n\r\nYou might also be interested in our recently [added RWKV model](https://huggingface.co/docs/transformers/model_doc/rwkv), which rework the traditional transformer attention so that it can be used as an RNN. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,689 | 1,689 |
CONTRIBUTOR
| null |
### Feature request
HuggingFace used to support (albeit with bugs) a "Model2LSTM" class that was removed in [this ticket](https://github.com/huggingface/transformers/issues/2849) / [this PR](https://github.com/huggingface/transformers/pull/2968). While [the code](https://github.com/huggingface/transformers/blob/90ab15cb7a8fcf8bf58c05453ddf1aa6a4fa00c1/src/transformers/modeling_encoder_decoder.py#L335) was buggy, I don't think deleting it was the right decision.
### Motivation
Ultimately, I guess, the motivation is... this is a real thing that exists and it seems to be within the scope of the `transformers` project. Concretely, this came up because I wanted to try and implement such a model, and discovered that `transformers` used to support it, but doesn't anymore.
### Your contribution
I'm willing to make code contributions towards this, and have already begun an exploratory analysis of the code base to see if this is possible, but, not being a team member, I'm not fully aware of the reasons this was dropped in the first place, and what limits there might be on its feasibility.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24180/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24179
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24179/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24179/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24179/events
|
https://github.com/huggingface/transformers/issues/24179
| 1,751,673,029 |
I_kwDOCUB6oc5oaGjF
| 24,179 |
Loading a tokenizer from the Tokenizers library doesn't transfer over padding/truncation behavior correctly
|
{
"login": "Ubadub",
"id": 1286898,
"node_id": "MDQ6VXNlcjEyODY4OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1286898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ubadub",
"html_url": "https://github.com/Ubadub",
"followers_url": "https://api.github.com/users/Ubadub/followers",
"following_url": "https://api.github.com/users/Ubadub/following{/other_user}",
"gists_url": "https://api.github.com/users/Ubadub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ubadub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ubadub/subscriptions",
"organizations_url": "https://api.github.com/users/Ubadub/orgs",
"repos_url": "https://api.github.com/users/Ubadub/repos",
"events_url": "https://api.github.com/users/Ubadub/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ubadub/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] |
closed
| false | null |
[] |
[
"The responsible bit of code seems to be [in the `set_truncation_and_padding` function](https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils_fast.py#L319) of `PreTrainedTokenizerFast`. And it affects truncation, too, not just padding. Basically, this function erases the backend fast tokenizer's truncation/padding strategy, and then adds in the user-supplied overrides (e.g. as passed via `encode`).\r\n\r\nThis seems to me to be the opposite of what we want. The truncation/padding strategy should *start* with the backend fast tokenizer's truncation/padding strategy as-is, and if override arguments are provided, then those and only those arguments should be selectively overriden.\r\n\r\nDo the devs have thoughts on this? I am working on implementing the changes that I am proposing, but curious if there is a reason it was done this way.\r\n\r\nAlso, the function docstring for that function says\r\n\r\n> The provided tokenizer has no padding / truncation strategy before the managed section. If your tokenizer set a padding / truncation strategy before, then it will be reset to no padding / truncation when exiting the managed section.\r\n\r\nWhat does \"managed section\" refer to?",
"Hey! Thanks for reporting, I’ll have a look asap ",
"I should be able to get to this in the coming weeks, I don't think the fix is complicated as you isolated well. ",
"Hey! Started working on this, the problem is in the initialization. This is tricky because potentially breaking. Will be adding tests to make sure this is fixed",
"Fixed it! Make sure to use the latest version of transformers"
] | 1,686 | 1,700 | 1,690 |
CONTRIBUTOR
| null |
### System Info
Not especially relevant, but included for completeness:
- `transformers` version: 4.29.2
- Platform: macOS-13.2-x86_64-i386-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
(n.b.: originally posted a similar query in the [transformers forum](https://discuss.huggingface.co/t/padding-not-working-when-loading-a-tokenizer-trained-via-the-tokenizers-library-into-transformers/42326/1) but got no answer there.)
I trained a simple WhitespaceSplit/WordLevel tokenizer using the `tokenizers` library. I added padding by calling `enable_padding(pad_token="<pad>")` on the Tokenizer instance. Then I saved it to a JSON file and then loaded it into transformers using [the instructions here](https://huggingface.co/docs/transformers/fast_tokenizers):
```py
fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")
```
When using the `tokenizers.Tokenizer` object directly, `encode` correctly adds the padding tokens. However, if I try padding when tokenizing using the `PreTrainedTokenizerFast` instance, I get the exception:
```py
ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.
```
Sure enough, if I follow the instructions and add the pad token as a special token, it works. Alternatively, I can pass the argument `pad_token="<pad>"` to the `PreTrainedTokenizerFast` constructor call, to the same effect.
To reproduce the problem, you can use the code below. Most of it is from the [tokenizers Quicktour](https://huggingface.co/docs/tokenizers/quicktour), so you'll need to download the data files as per the instructions there (or modify `files` if using your own files). The rest is from the official transformers docs on [how to load a tokenizer from `tokenizers` into `transformers`](https://huggingface.co/docs/transformers/fast_tokenizers):
```py
from tokenizers import Tokenizer
from tokenizers.trainers import BpeTrainer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from transformers import PreTrainedTokenizerFast
files = [f"data/wikitext-103-raw/wiki.{split}.raw" for split in ["test", "train", "valid"]]
sentences = ["Hello, y'all!", "How are you 😁 ?"]
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.pre_tokenizer = Whitespace()
tokenizer.train(files, trainer)
# Enable padding
tokenizer.enable_padding(pad_id=3, pad_token="[PAD]")
# Now use this tokenizer to tokenize a couple of sentences.
output = tokenizer.encode_batch(sentences)
# The output is padded, as it should be:
print(output[0].tokens)
# ['Hello', ',', 'y', "'", 'all', '!']
print(output[1].tokens)
# ['How', 'are', 'you', '[UNK]', '?', '[PAD]']
# But now let's say we load the tokenizer into transformers- let's try loading it directly from the tokenizer object:
fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer)
# Tokenize two strings of different token length with padding
fast_output = fast_tokenizer(sentences, padding=True)
```
This gives us the error:
```
Using pad_token, but it is not set yet.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/apatil/anaconda3/envs/lm-training/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2548, in __call__
encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
File "/Users/apatil/anaconda3/envs/lm-training/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2634, in _call_one
return self.batch_encode_plus(
File "/Users/apatil/anaconda3/envs/lm-training/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2816, in batch_encode_plus
padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(
File "/Users/apatil/anaconda3/envs/lm-training/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2453, in _get_padding_truncation_strategies
raise ValueError(
ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.
```
We can resolve the issue by explicitly specifying the special tokens when initializing the `PreTrainedTokenizerFast`:
```py
fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer, pad_token="[PAD]", unk_token="[UNK]")
# Now padding works as expected
fast_output = fast_tokenizer(sentences, padding=True)
print(fast_output[0].tokens)
# ['Hello', ',', 'y', "'", 'all', '!']
print(fast_output[1].tokens)
# ['How', 'are', 'you', '[UNK]', '?', '[PAD]']
```
The code above uses the `tokenizer_object` parameter to load the fast tokenizer as a `PreTrainedTokenizerFast` instance, but as you can confirm for yourselves, the same behavior occurs if you first save the tokenizer to file, then load it into `PreTrainedTokenizerFast` using the `tokenizer_file` parameter instead.
**First, I wanted to check- am I doing something wrong/missing something? Or is this just how it works?**
If the latter, as follows, an explanation of how I feel it should work and why.
### Expected behavior
I understand that I can get the desired behavior by either:
1. Adding the pad token as a special token, i.e. `fast_tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.
2. Passing the argument `pad_token='[PAD]'` to the `PreTrainedTokenizerFast` constructor call, to the same effect.
But I want the tokenizer to work *out of the box, identically to how the `tokenizers.Tokenizer` instance does* (to the extent that is reasonably possible), including in terms of padding behavior.
I find it confusing and awkward that I have to enable padding for the `tokenizers.Tokenizer` instance, and then *again* for the `PreTrainedTokenizerFast` instance.
Imagine if your system architecture/workflow has two entirely different processes for tokenizing a document vs. training a model on it using `transformers` (as I imagine is often the case for people). Then you would need to hardcode the pad token in both locations, and if for some reason you wanted to change it, also update it in both locations.
On the other hand, if `PreTrainedTokenizerFast` really behaved exactly like the fast tokenizer it was created from, the training code could be entirely agnostic to how the tokenizer was created. All it would need was a path to the saved tokenizer config, and it could proceed without needing to know anything else. This is the behavior I think most people would naturally expect.
It could make sense to keep the `pad_token` parameter in the `PreTrainedTokenizerFast` *as an optional override*, or for cases where the fast tokenizer didn't have a padding token set, but the default should be to copy over the padding behavior as-is.
Put another way, the tokenizer object/config file should uniquely determine the tokenization behavior of a tokenizer, whether it is a `tokenizers.Tokenizer` instance or its equivalent `PreTrainedTokenizerFast` (to the extent it can; I understand some misalignment is probably inevitable, but this seems not to be one of those cases).
**Bottom line:** If the padding information is already in the tokenizer (or in the saved tokenizer config file), you should not need to explicitly specify the padding token again when transferring the tokenizer. This introduces a lot of totally unnecessary friction and leads to brittle code. The tokenizer object/config should be self-contained (i.e. I should not need to hardcode the pad token in two places), and information already encapsulated in the tokenizer object or its saved config file should be preserved on transfer.
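In the meantime, one way to avoid hardcoding the pad token in two places is to read it back from the backend tokenizer when building the fast tokenizer. A sketch, assuming `tokenizers.Tokenizer.padding` returns the current padding parameters as a dict containing a `"pad_token"` entry (or `None` when padding is disabled):
```python
# Continuing from the snippet above: carry the backend padding config over explicitly.
padding = tokenizer.padding  # assumed: dict of current padding params, or None

fast_tokenizer = PreTrainedTokenizerFast(
    tokenizer_object=tokenizer,
    pad_token=padding["pad_token"] if padding else None,
)

fast_output = fast_tokenizer(sentences, padding=True)  # pads without hardcoding "[PAD]" again here
```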
EDIT: I later observed that the same behavior is true of truncation. See my followup comment for what I believe to be the responsible section of code.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24179/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24177
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24177/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24177/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24177/events
|
https://github.com/huggingface/transformers/pull/24177
| 1,751,538,501 |
PR_kwDOCUB6oc5StYYh
| 24,177 |
Generate: force caching on the main model, in assisted generation
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
MEMBER
| null |
# What does this PR do?
Fixes #23686
Caching is a requirement in assisted generation -- we even check for it ([here](https://github.com/huggingface/transformers/blob/8f093fb799246f7dd9104ff44728da0c53a9f67a/src/transformers/generation/utils.py#L1485)).
However, it was still possible for the main model to run without cache. This PR fixes it.
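For context, a usage sketch of assisted generation (checkpoint names are illustrative; after this fix the main model's forward passes use the KV cache even if `use_cache` isn't set explicitly):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Main model plus a smaller assistant that shares its tokenizer (illustrative checkpoints).
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1.4b-deduped")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1.4b-deduped")
assistant = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m-deduped")

inputs = tokenizer("Alice and Bob", return_tensors="pt")
outputs = model.generate(**inputs, assistant_model=assistant, max_new_tokens=20)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```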
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24177/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24177",
"html_url": "https://github.com/huggingface/transformers/pull/24177",
"diff_url": "https://github.com/huggingface/transformers/pull/24177.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24177.patch",
"merged_at": 1686575449000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24171
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24171/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24171/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24171/events
|
https://github.com/huggingface/transformers/issues/24171
| 1,751,510,378 |
I_kwDOCUB6oc5oZe1q
| 24,171 |
self_attention_mask
|
{
"login": "oroojlooy",
"id": 20797260,
"node_id": "MDQ6VXNlcjIwNzk3MjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/20797260?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oroojlooy",
"html_url": "https://github.com/oroojlooy",
"followers_url": "https://api.github.com/users/oroojlooy/followers",
"following_url": "https://api.github.com/users/oroojlooy/following{/other_user}",
"gists_url": "https://api.github.com/users/oroojlooy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oroojlooy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oroojlooy/subscriptions",
"organizations_url": "https://api.github.com/users/oroojlooy/orgs",
"repos_url": "https://api.github.com/users/oroojlooy/repos",
"events_url": "https://api.github.com/users/oroojlooy/events{/privacy}",
"received_events_url": "https://api.github.com/users/oroojlooy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker @younesbelkada ",
"Hey @oroojlooy thanks for opening an issue. Sorry didn't really have time to dive into this, could you try with the latest version of `transformers`. Could you also share you script on how you called the model! 🤗 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,692 | 1,692 |
NONE
| null |
### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.4.17-2136.318.7.1.el7uek.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.15.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sg
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I call the `bigcode/starcoderbase` model, the output comes out as `nan`. I debugged the code and found that it happens in `modeling_gpt_bigcode.py`, where the attention output is computed in `GPTBigCodeAttention._attn()`.
Tracing back to find the root of the issue, I found that the `attention_mask` is edited in the `GPTBigCodeModel.forward()` function as shown below. I could not understand the reason behind this modification of the `attention_mask`, so I thought it better to ask here to see if it is intended or if there is a bug.
```python
# Self-attention mask.
query_length = input_shape[-1]
key_length = past_length + query_length
self_attention_mask = self.bias[None, key_length - query_length : key_length, :key_length]
if attention_mask is not None:
self_attention_mask = self_attention_mask * attention_mask.view(batch_size, 1, -1).to(
dtype=torch.bool, device=self_attention_mask.device
)
# MQA models: (batch_size, query_length, n_heads, key_length)
# MHA models: (batch_size, n_heads, query_length, key_length)
attention_mask = self_attention_mask.unsqueeze(2 if self.multi_query else 1)
```
Note that this modification can result in an attention mask like `[False, False, ..., False]`, and using it in the `_attn()` function results in:
```
attn_weights = torch.where(attention_mask, attn_weights, mask_value)
# attn_weights[1, 1, 1, :].max() is now -inf (every position in the row is masked)
attn_weights = softmax(attn_weights, dim=-1)
# attn_weights[1, 1, 1, :] is now [nan, ..., nan] (size of 2048)
```
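In case it helps, here is a minimal standalone illustration (my own sketch, not the actual modeling code) of why a fully masked row ends up as NaN after the softmax:
```python
import torch

# A row in which every key position is masked contains only -inf scores;
# softmax over such a row is 0/0 for every entry and therefore NaN.
fully_masked_row = torch.full((1, 2048), float("-inf"))
print(torch.softmax(fully_masked_row, dim=-1))  # tensor([[nan, nan, ..., nan]])
```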
### Expected behavior
As above.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24171/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24170
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24170/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24170/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24170/events
|
https://github.com/huggingface/transformers/issues/24170
| 1,751,425,862 |
I_kwDOCUB6oc5oZKNG
| 24,170 |
AttributeError:` 'str' object `has no attribute 'dtype'
|
{
"login": "flckv",
"id": 103381497,
"node_id": "U_kgDOBil5-Q",
"avatar_url": "https://avatars.githubusercontent.com/u/103381497?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flckv",
"html_url": "https://github.com/flckv",
"followers_url": "https://api.github.com/users/flckv/followers",
"following_url": "https://api.github.com/users/flckv/following{/other_user}",
"gists_url": "https://api.github.com/users/flckv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flckv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flckv/subscriptions",
"organizations_url": "https://api.github.com/users/flckv/orgs",
"repos_url": "https://api.github.com/users/flckv/repos",
"events_url": "https://api.github.com/users/flckv/events{/privacy}",
"received_events_url": "https://api.github.com/users/flckv/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @flckv, \r\n\r\nIn the issue info above, you mention using the official script, however it appears a custom dataset is being used. So that we can best help you, could you share a reproducible code snippet and full traceback of the error encountered? \r\n\r\nAs mentioned in a previous issue - #24143 - for general questions on how to adapt a script to a custom use case, please use the [forums](https://discuss.huggingface.co/). We try to reserve github issues for bugs and feature requests. ",
"@amyeroberts thanks for the reply\r\n\r\n## 1. Reproducible code snippet\r\n\r\nis [here](https://gist.github.com/flckv/0e01c9ee1167f2d1af18b811e98194c6#file-run_audio_classification-py-L258), the highlighted line shows the approach I used to load audio files with metadata csvs \r\n\r\n**Dataset structure:**\r\n\r\n\r\n\r\ncommand:\r\n\r\n> python run_audio_classification.py \\\r\n> --model_name_or_path facebook/wav2vec2-base \\\r\n> --output_dir l/users/flck/outputs/wav2vec2-base-s \\\r\n> --overwrite_output_dir \\\r\n> --remove_unused_columns False \\\r\n> --do_train \\\r\n> --do_eval \\\r\n> --fp16 \\\r\n> --learning_rate 3e-5 \\\r\n> --max_length_seconds 1 \\\r\n> --attention_mask False \\\r\n> --warmup_ratio 0.1 \\\r\n> --num_train_epochs 5 \\\r\n> --per_device_train_batch_size 32 \\\r\n> --gradient_accumulation_steps 4 \\\r\n> --per_device_eval_batch_size 32 \\\r\n> --dataloader_num_workers 4 \\\r\n> --logging_strategy steps \\\r\n> --logging_steps 10 \\\r\n> --evaluation_strategy epoch \\\r\n> --save_strategy epoch \\\r\n> --load_best_model_at_end True \\\r\n> --metric_for_best_model accuracy \\\r\n> --save_total_limit 3 \\\r\n> --seed 0 \\\r\n> --push_to_hub \\\r\n> --use_auth_token True\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n## 2. Full traceback of the error encountered\r\n\r\nis [here](https://gist.github.com/flckv/3cb6179b15e3571ea7421002ab65c5c2#file-error-py-L206 ), the highlighted lines show the log of the loaded dataset example \r\n\r\n",
"Hi @flckv, thanks for sharing that information.\r\n\r\nLooking at this it doesn't look like a bug in our code, but rather the dataset creation. As mentioned above, this is really a question for our forums. I'll give a suggestion below of where to begin, but unfortunately we don't have time to help you debug your own custom code. \r\n\r\nTo get things working I suggest two things: \r\n\r\n* Checking that the unaltered script runs with the default values as shown [in the README](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification#single-gpu). If this doesn't work, then there might be an issue on our side. In which case, report that in a new issue please. \r\n\r\n* Check that your loaded dataset and the dataset used in the example are comparable e.g. \r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# Load in the default dataset from the example\r\nexample_dataset = load_dataset(\"superb\", \"asr\", split=\"train\")\r\n\r\n# Load in my dataset\r\nmy_dataset = load_dataset(\"audiofolder\", data_dir=\"/home/flck/hf38/transformers/examples/pytorch/audio-classification/s/data/s/s/\", split=\"train\")\r\n\r\n# Inspect the datasets\r\nprint(example_dataset)\r\nprint(my_dataset)\r\n\r\n# Check the values for a specific column that will be used during training\r\nprint(example_dataset['audio'])\r\nprint(my_dataset['audio'])\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"What worked for me was casting the labels:\r\n\r\n```\r\n data_files = {'train': 'train/train.csv', 'eval': 'test/test.csv'}\r\n raw_datasets = load_dataset('dataset', data_files=data_files)\r\n raw_datasets = raw_datasets.class_encode_column(\"sentiment\")\r\n```\r\n\r\n1. I call the test set \"eval\" in order to match what the script expects.\r\n\r\n3. Then I encode my label column (\"sentiment) as class.\r\n\r\n "
] | 1,686 | 1,693 | 1,689 |
NONE
| null |
### System Info
- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.4.204-ql-generic-12.0-19-x86_64-with-glibc2.31
- Python version: 3.11.3
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (True)
Versions of relevant libraries:
[pip3] numpy==1.23.0
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.6 py311ha02d727_1
[conda] mkl_random 1.2.2 py311ha02d727_1
[conda] numpy 1.23.0 pypi_0 pypi
[conda] pytorch 2.0.1 py3.11_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.2 py311_cu118 pytorch
[conda] torchtriton 2.0.0 py311 pytorch
[conda] torchvision 0.15.2 py311_cu118 pytorch
### Who can help?
@sgugger @sanchit-gandhi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
## ERROR:
train_result = `trainer.train`(resume_from_checkpoint=checkpoint)
...
python3.11/site-packages/transformers/`feature_extraction_sequence_utils.py`", line 220, in pad
if value.dtype is np.dtype(np.float64):
^^^^^^^^^^^
AttributeError: `'str' object` has no attribute 'dtype'
I am not sure which element of the dataset is read as a 'str'.
### 1. OFFICIAL SCRIPT: [transformers/examples/pytorch/audio-classification/run_audio_classification.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/README.md)
### 2. LOADED DATASET:
DatasetDict({
train: Dataset({
features: [`'audio',` `'label'`],
num_rows: 1280
})
validation: Dataset({
features: ['audio', 'label'],
num_rows: 160
})
test: Dataset({
features: ['audio', 'label'],
num_rows: 160
})
### 3. logger.info(raw_datasets['train'][0])
{'audio': {`'path'`: '/transformers/examples/pytorch/audio-classification/s/data/s/s/train/audio1.wav', `'array'`: array([0.02072144, 0.02767944, 0.03274536, ..., 0.00079346, 0.00088501,
0.00149536]), 'sampling_rate': 16000}, `'label'`: 'happy'}
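One thing worth checking (a sketch based on the label-casting remediation mentioned in the comments above, reusing the `audiofolder` path from this setup) is whether the string labels such as `'happy'` are what reaches `pad()` as a `str`; encoding the label column as a class feature turns them into integer ids:
```python
from datasets import load_dataset

raw_datasets = load_dataset(
    "audiofolder",
    data_dir="/home/flck/hf38/transformers/examples/pytorch/audio-classification/s/data/s/s/",
)
# Cast the string labels ('happy', ...) to a ClassLabel feature with integer ids.
raw_datasets = raw_datasets.class_encode_column("label")
print(raw_datasets["train"].features["label"])
print(raw_datasets["train"][0]["label"])
```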
### Expected behavior
load the dataset to model for training in train_result = `trainer.train`(resume_from_checkpoint=checkpoint)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24170/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24169
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24169/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24169/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24169/events
|
https://github.com/huggingface/transformers/issues/24169
| 1,751,407,744 |
I_kwDOCUB6oc5oZFyA
| 24,169 |
NLLB trunc translation
|
{
"login": "FranPuentes",
"id": 2001456,
"node_id": "MDQ6VXNlcjIwMDE0NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2001456?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FranPuentes",
"html_url": "https://github.com/FranPuentes",
"followers_url": "https://api.github.com/users/FranPuentes/followers",
"following_url": "https://api.github.com/users/FranPuentes/following{/other_user}",
"gists_url": "https://api.github.com/users/FranPuentes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FranPuentes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FranPuentes/subscriptions",
"organizations_url": "https://api.github.com/users/FranPuentes/orgs",
"repos_url": "https://api.github.com/users/FranPuentes/repos",
"events_url": "https://api.github.com/users/FranPuentes/events{/privacy}",
"received_events_url": "https://api.github.com/users/FranPuentes/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @FranPuentes \r\nI tried to play a bit with the model and its behavior is quite interesting. If you managed to fit everything in a single line (without `.`) the model seems to successfully translate the entire sentence but it seems that the model stops generating (in your case) after the second sentence. \r\nAlso I advise you to run the generation in lower precision such as in 4bit so that you can use the largest model (if you run the script under a GPU device)\r\n\r\nBelow is the script I played with (4bit model after installing bitsandbytes)\r\n\r\n```python\r\n# pip install bitsandbytes\r\nimport sys,os;\r\n\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM;\r\n\r\nimport torch;\r\n\r\nNLLB_MODEL=\"facebook/nllb-200-3.3B\";\r\n#NLLB_MODEL=\"facebook/nllb-200-distilled-600M\";\r\n# NLLB_MODEL=\"facebook/nllb-200-distilled-1.3B\";\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(NLLB_MODEL);\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(NLLB_MODEL, torch_dtype=torch.float16, load_in_4bit=True);\r\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\");\r\n\r\n# model = model.to(device);\r\n\r\ndef translate(lang:str, text:str): \r\n inputs = tokenizer(text,return_tensors=\"pt\").to(device);\r\n tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id[lang], max_new_tokens=4096);\r\n texts=tokenizer.batch_decode(tokens, skip_special_tokens=True);\r\n return texts[0];\r\n\r\nif __name__==\"__main__\":\r\n\r\n import readline;\r\n\r\n text=\"\"\"La Voz de Galicia es el cuarto periódico generalista de España. Posee una audiencia de 492.000 lectores en todo el país, según datos de la primera oleada del Estudio General de Medios de 2020. En el territorio gallego es la cabecera hegemónica. Su edición digital es la primera web informativa de la comunidad.\"\"\";\r\n text=translate(\"eng_Latn\", text);\r\n print(text);\r\n >>> La Voz de Galicia is the fourth generalist newspaper in Spain. It has an audience of 492,000 readers in the whole country, according to data from the first wave of the Estudio General de Medios de 2020.\r\n\r\n text=\"\"\"La Voz de Galicia es el cuarto periódico generalista de España. Posee una audiencia de 492.000 lectores en todo el país, según datos de la primera oleada del Estudio General de Medios de 2020, en el territorio gallego es la cabecera hegemónica, su edición digital es la primera web informativa de la comunidad.\"\"\";\r\n text=translate(\"eng_Latn\", text);\r\n print(text);\r\n >>> La Voz de Galicia is the fourth generalist newspaper in Spain. It has an audience of 492,000 readers in the whole country, according to data from the first wave of the Estudio General de Medios de 2020, in the Galician territory it is the hegemonic headquarters, its digital edition is the first informative web of the community.\r\n```\r\nI am really not sure about this behaviour, maybe it is related to the way NLLB models have been trained ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,690 | 1,690 |
NONE
| null |
### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.13.2
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
@younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
#!/bin/python3
import sys,os;
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM;
import torch;
#NLLB_MODEL="facebook/nllb-200-3.3B";
#NLLB_MODEL="facebook/nllb-200-distilled-600M";
NLLB_MODEL="facebook/nllb-200-distilled-1.3B";
tokenizer = AutoTokenizer.from_pretrained(NLLB_MODEL);
model = AutoModelForSeq2SeqLM.from_pretrained(NLLB_MODEL);
device = torch.device("cuda" if torch.cuda.is_available() else "cpu");
model = model.to(device);
def translate(lang:str, text:str):
inputs = tokenizer(text,return_tensors="pt").to(device);
tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id[lang], max_new_tokens=4096);
texts=tokenizer.batch_decode(tokens, skip_special_tokens=True);
return texts[0];
if __name__=="__main__":
import readline;
text="""La Voz de Galicia es el cuarto periódico generalista de España.
Posee una audiencia de 492.000 lectores en todo el país, según datos de la primera oleada del Estudio General de Medios de 2020.
En el territorio gallego es la cabecera hegemónica.
Su edición digital es la primera web informativa de la comunidad.""";
text=translate("eng_Latn", text);
print(text);
```
Its output is "La Voz de Galicia is the fourth generalist newspaper in Spain. It has an audience of 492,000 readers in the whole country, according to data from the first wave of the Estudio General de Medios of 2020.", which is only the first and second lines. The output is the same when the carriage returns are removed.
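For completeness, a minimal workaround sketch (my own, not an official fix), reusing the `translate` helper defined in the script above, is to split the input into sentences and translate them one at a time:
```python
def translate_by_sentence(lang: str, text: str) -> str:
    # Naive split on '.'; good enough for this example text.
    sentences = [s.strip() for s in text.replace("\n", " ").split(".") if s.strip()]
    return " ".join(translate(lang, s + ".") for s in sentences)
```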
### Expected behavior
Translate all the text, not only the first and second lines, for example:
"_**La Voz de Galicia is the fourth largest generalist newspaper in Spain. It has a readership of 492,000 readers throughout the country, according to data from the first wave of the General Media Study 2020.** In Galicia, it is the leading newspaper.
Its digital edition is the leading news website in the region._"
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24169/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24168
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24168/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24168/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24168/events
|
https://github.com/huggingface/transformers/pull/24168
| 1,751,398,788 |
PR_kwDOCUB6oc5Ss7z5
| 24,168 |
Update the RWKV documentation with fixes to spelling and wording
|
{
"login": "DevJake",
"id": 4295059,
"node_id": "MDQ6VXNlcjQyOTUwNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4295059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DevJake",
"html_url": "https://github.com/DevJake",
"followers_url": "https://api.github.com/users/DevJake/followers",
"following_url": "https://api.github.com/users/DevJake/following{/other_user}",
"gists_url": "https://api.github.com/users/DevJake/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DevJake/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DevJake/subscriptions",
"organizations_url": "https://api.github.com/users/DevJake/orgs",
"repos_url": "https://api.github.com/users/DevJake/repos",
"events_url": "https://api.github.com/users/DevJake/events{/privacy}",
"received_events_url": "https://api.github.com/users/DevJake/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24168). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,689 | 1,689 |
NONE
| null |
# What does this PR do?
This PR makes the following fixes to the RWKV documentation:
- Corrects various misspelt words
- Fixes grammatical issues
- Removes unnecessary verbosity in places
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24168/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24168",
"html_url": "https://github.com/huggingface/transformers/pull/24168",
"diff_url": "https://github.com/huggingface/transformers/pull/24168.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24168.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24167
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24167/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24167/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24167/events
|
https://github.com/huggingface/transformers/issues/24167
| 1,751,390,442 |
I_kwDOCUB6oc5oZBjq
| 24,167 |
Transformers
|
{
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,686 | 1,686 | 1,686 |
NONE
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24167/timeline
|
not_planned
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24166
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24166/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24166/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24166/events
|
https://github.com/huggingface/transformers/issues/24166
| 1,751,361,708 |
I_kwDOCUB6oc5oY6is
| 24,166 |
Decoding with skip_special_tokens=True doesn't remove pad token
|
{
"login": "Praful932",
"id": 45713796,
"node_id": "MDQ6VXNlcjQ1NzEzNzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/45713796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Praful932",
"html_url": "https://github.com/Praful932",
"followers_url": "https://api.github.com/users/Praful932/followers",
"following_url": "https://api.github.com/users/Praful932/following{/other_user}",
"gists_url": "https://api.github.com/users/Praful932/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Praful932/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Praful932/subscriptions",
"organizations_url": "https://api.github.com/users/Praful932/orgs",
"repos_url": "https://api.github.com/users/Praful932/repos",
"events_url": "https://api.github.com/users/Praful932/events{/privacy}",
"received_events_url": "https://api.github.com/users/Praful932/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker ",
"@ArthurZucker can confirm, but I suspect the pad token of the tokenizer is probably not defined correctly, in the notebook you have:\r\n\r\n```python\r\nprint(tokenizer.pad_token,tokenizer.pad_token_id)\r\n>>> [PAD] 32100\r\n```\r\n\r\nand in the generated text: \r\n```python\r\nprint(decoded_outputs)\r\n>>> ['<pad> bhubaneswar</s>', '<pad> Hippos are large animals</s><pad>']\r\n```\r\n\r\nYou probably need to update the tokenizer with the correct pad token and eos tokens, in your case respectively `<pad>` and `</s>`",
"As mentioned by Younes, the padding token is not correct, `tokenizer.additional_special_tokens` does not have the `<pad>` token, and the `tokenizier.pad_token` is not set to `<pad>`. Thus the token is an `AddedToken` which is why you can see it in the `tokenizer.added_tokens_encoder` for example, but it is not a special token, this it is skipped. ",
"Thanks for the prompt reply, cool So this is an issue with the model's tokenizer not having the right initial configuration? - and fixing it either by the [author on the model repo](https://huggingface.co/lmsys/fastchat-t5-3b-v1.0) or doing the following step by the user is the immediate remediation for this:\r\n`tokenizer.add_special_tokens({'pad_token' : \"<pad>\"})`\r\n\r\nI can confirm that this fixes the default behaviour"
] | 1,686 | 1,687 | 1,687 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: -
### Who can help?
@younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Colab notebook - https://colab.research.google.com/drive/1VH-ZVITqJ5k6umvMXvQc2Suxzf6KEXyX?usp=sharing
### Expected behavior
I expected the `<pad>` token not to be present in the output, just the processed text.
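A minimal sketch of the immediate remediation discussed in the comments above (assuming the `lmsys/fastchat-t5-3b-v1.0` checkpoint used in the notebook):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lmsys/fastchat-t5-3b-v1.0")
# "<pad>" is only an added (non-special) token in this checkpoint, so
# skip_special_tokens=True does not strip it; register it as the pad token.
tokenizer.add_special_tokens({"pad_token": "<pad>"})
print(tokenizer.pad_token, tokenizer.pad_token_id)
```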
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24166/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24161
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24161/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24161/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24161/events
|
https://github.com/huggingface/transformers/issues/24161
| 1,751,278,827 |
I_kwDOCUB6oc5oYmTr
| 24,161 |
DetrAttention `is_decoder` is not defined
|
{
"login": "JayL0321",
"id": 31190549,
"node_id": "MDQ6VXNlcjMxMTkwNTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/31190549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JayL0321",
"html_url": "https://github.com/JayL0321",
"followers_url": "https://api.github.com/users/JayL0321/followers",
"following_url": "https://api.github.com/users/JayL0321/following{/other_user}",
"gists_url": "https://api.github.com/users/JayL0321/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JayL0321/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JayL0321/subscriptions",
"organizations_url": "https://api.github.com/users/JayL0321/orgs",
"repos_url": "https://api.github.com/users/JayL0321/repos",
"events_url": "https://api.github.com/users/JayL0321/events{/privacy}",
"received_events_url": "https://api.github.com/users/JayL0321/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @NielsRogge ",
"Yes, seems like a left-over from copying from another model. Feel free to open a PR to remove the `is_decoder` parameter"
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/models/detr/modeling_detr.py#LL502C9-L502C19
The `is_decoder` parameter is in the init function, but it is not referenced in the body, while in `DetrDecoderLayer` this parameter is initialized as `True`. I am confused and not sure whether something is missing in `DetrAttention`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24161/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24159
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24159/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24159/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24159/events
|
https://github.com/huggingface/transformers/issues/24159
| 1,751,096,873 |
I_kwDOCUB6oc5oX54p
| 24,159 |
GPT2ForQuestionAnswering: how to use?
|
{
"login": "mkschreder",
"id": 4483721,
"node_id": "MDQ6VXNlcjQ0ODM3MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4483721?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mkschreder",
"html_url": "https://github.com/mkschreder",
"followers_url": "https://api.github.com/users/mkschreder/followers",
"following_url": "https://api.github.com/users/mkschreder/following{/other_user}",
"gists_url": "https://api.github.com/users/mkschreder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mkschreder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mkschreder/subscriptions",
"organizations_url": "https://api.github.com/users/mkschreder/orgs",
"repos_url": "https://api.github.com/users/mkschreder/repos",
"events_url": "https://api.github.com/users/mkschreder/events{/privacy}",
"received_events_url": "https://api.github.com/users/mkschreder/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @mkschreder, thanks for raising this issue. \r\n\r\nIn terms of the issue title - how to use - there's a more in-depth guide about question-answering in the [task documentation](https://huggingface.co/docs/transformers/v4.30.0/en/tasks/question_answering#question-answering) and [NLP course](https://huggingface.co/learn/nlp-course/chapter7/7?fw=pt). \r\n\r\nThe snippets in the documentation are meant to be minimal examples so that users can get started and understand the model's API. Sometimes values are hardcoded to keep the example short, however we certainly don't want them to be incorrect or confusing. If there's particular improvements or fixes you'd like to see, we're always happy to review a PR :) ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,689 | 1,689 |
NONE
| null |
https://github.com/huggingface/transformers/blob/8f093fb799246f7dd9104ff44728da0c53a9f67a/docs/source/en/model_doc/gpt2.mdx?plain=1#L116
The documentation generated for this question answering model is complete nonsense: https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2ForQuestionAnswering.forward.example
It does not produce any meaningful results and the tensors are hardcoded:
```
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
```
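For reference, a sketch of what a runnable end-to-end version could look like (using the plain `gpt2` checkpoint; its newly initialized QA head will not produce a meaningful answer, which is exactly the problem reported here):
```python
import torch
from transformers import AutoTokenizer, GPT2ForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = GPT2ForQuestionAnswering.from_pretrained("gpt2")

question, context = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Recover the predicted span from the start/end logits.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs.input_ids[0, start : end + 1]))
```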
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24159/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24156
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24156/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24156/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24156/events
|
https://github.com/huggingface/transformers/pull/24156
| 1,751,036,452 |
PR_kwDOCUB6oc5Srw8N
| 24,156 |
🌐 [i18n-KO] Fixed `tutorial/preprocessing.mdx`
|
{
"login": "sim-so",
"id": 96299403,
"node_id": "U_kgDOBb1piw",
"avatar_url": "https://avatars.githubusercontent.com/u/96299403?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sim-so",
"html_url": "https://github.com/sim-so",
"followers_url": "https://api.github.com/users/sim-so/followers",
"following_url": "https://api.github.com/users/sim-so/following{/other_user}",
"gists_url": "https://api.github.com/users/sim-so/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sim-so/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sim-so/subscriptions",
"organizations_url": "https://api.github.com/users/sim-so/orgs",
"repos_url": "https://api.github.com/users/sim-so/repos",
"events_url": "https://api.github.com/users/sim-so/events{/privacy}",
"received_events_url": "https://api.github.com/users/sim-so/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,688 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixed some words and expressions to align with the latest translation work.
This PR is a revision to apply key terms and phrases that have been established over the course of translating several documents.
Here are the main changes:
- `dataset` : `데이터셋` -> `데이터 세트`
- `truncation` : `생략` -> `잘라내기`
- `train` : `학습` -> `훈련`
Some other sentences have also been modified to read more naturally.
Thank you in advance for your review!
Fixes #22578
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
May you please review this PR?
@sgugger, @ArthurZucker, @eunseojo
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24156/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24156/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24156",
"html_url": "https://github.com/huggingface/transformers/pull/24156",
"diff_url": "https://github.com/huggingface/transformers/pull/24156.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24156.patch",
"merged_at": 1687171438000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24155
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24155/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24155/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24155/events
|
https://github.com/huggingface/transformers/issues/24155
| 1,750,906,518 |
I_kwDOCUB6oc5oXLaW
| 24,155 |
TypeError: Repository.__init__() got an unexpected keyword argument 'private'
|
{
"login": "heavenkiller2018",
"id": 45555611,
"node_id": "MDQ6VXNlcjQ1NTU1NjEx",
"avatar_url": "https://avatars.githubusercontent.com/u/45555611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/heavenkiller2018",
"html_url": "https://github.com/heavenkiller2018",
"followers_url": "https://api.github.com/users/heavenkiller2018/followers",
"following_url": "https://api.github.com/users/heavenkiller2018/following{/other_user}",
"gists_url": "https://api.github.com/users/heavenkiller2018/gists{/gist_id}",
"starred_url": "https://api.github.com/users/heavenkiller2018/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/heavenkiller2018/subscriptions",
"organizations_url": "https://api.github.com/users/heavenkiller2018/orgs",
"repos_url": "https://api.github.com/users/heavenkiller2018/repos",
"events_url": "https://api.github.com/users/heavenkiller2018/events{/privacy}",
"received_events_url": "https://api.github.com/users/heavenkiller2018/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @heavenkiller2018,\r\n\r\nCould you try updating the version of transformers in your environment to the latest release? \r\n\r\n`pip install -U transformers`\r\n\r\nIt seems there's a mismatch between the huggingface_hub and transformers packages in your environment. \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,689 | 1,689 |
NONE
| null |
### System Info
env:
```
❯ conda list
List of packages in environment: "/home/john/micromamba/envs/nlpcourse"
Name Version Build Channel
──────────────────────────────────────────────────────────────────────────────────────────────
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
abseil-cpp 20211102.0 hd4dd3e8_0 anaconda/pkgs/main
absl-py 1.3.0 py310h06a4308_0 anaconda/pkgs/main
accelerate 0.19.0 pyhd8ed1ab_0 anaconda/cloud/conda-forge
aiohttp 3.8.3 py310h5eee18b_0 anaconda/pkgs/main
aiosignal 1.2.0 pyhd3eb1b0_0 anaconda/pkgs/main
appdirs 1.4.4 pyhd3eb1b0_0 anaconda/pkgs/main
arrow 1.2.3 py310h06a4308_1 anaconda/pkgs/main
arrow-cpp 8.0.0 py310h3098874_1 anaconda/pkgs/main
asttokens 2.0.5 pyhd3eb1b0_0 anaconda/pkgs/main
async-timeout 4.0.2 py310h06a4308_0 anaconda/pkgs/main
attrs 22.1.0 py310h06a4308_0 anaconda/pkgs/main
aws-c-common 0.4.57 he6710b0_1 anaconda/pkgs/main
aws-c-event-stream 0.1.6 h2531618_5 anaconda/pkgs/main
aws-checksums 0.1.9 he6710b0_0 anaconda/pkgs/main
aws-sdk-cpp 1.8.185 hce553d0_0 anaconda/pkgs/main
backcall 0.2.0 pyhd3eb1b0_0 anaconda/pkgs/main
beautifulsoup4 4.12.2 py310h06a4308_0 anaconda/pkgs/main
binaryornot 0.4.4 pyhd3eb1b0_1 anaconda/pkgs/main
blas 1.0 mkl anaconda/pkgs/main
boost-cpp 1.73.0 h7f8727e_12 anaconda/pkgs/main
brotlipy 0.7.0 py310h7f8727e_1002 anaconda/pkgs/main
bzip2 1.0.8 h7b6447c_0 anaconda/pkgs/main
c-ares 1.19.1 hd590300_0 conda-forge
ca-certificates 2023.05.30 h06a4308_0 anaconda/pkgs/main
certifi 2023.5.7 py310h06a4308_0 anaconda/pkgs/main
cffi 1.15.1 py310h5eee18b_3 anaconda/pkgs/main
chardet 4.0.0 py310h06a4308_1003 anaconda/pkgs/main
charset-normalizer 2.0.4 pyhd3eb1b0_0 anaconda/pkgs/main
click 8.1.3 unix_pyhd8ed1ab_2 conda-forge
comm 0.1.2 py310h06a4308_0 anaconda/pkgs/main
cookiecutter 1.7.3 pyhd3eb1b0_0 anaconda/pkgs/main
cryptography 39.0.1 py310h9ce1e76_0 anaconda/pkgs/main
cuda-cudart 11.8.89 0 nvidia
cuda-cupti 11.8.87 0 nvidia
cuda-libraries 11.8.0 0 nvidia
cuda-nvrtc 11.8.89 0 nvidia
cuda-nvtx 11.8.86 0 nvidia
cuda-runtime 11.8.0 0 nvidia
dataclasses 0.8 pyh6d0b6a4_7 anaconda/pkgs/main
datasets 2.12.0 py310h06a4308_0 anaconda/pkgs/main
debugpy 1.5.1 py310h295c915_0 anaconda/pkgs/main
decorator 5.1.1 pyhd3eb1b0_0 anaconda/pkgs/main
dill 0.3.6 pyhd8ed1ab_1 conda-forge
evaluate 0.4.0 py310h06a4308_0 anaconda/pkgs/main
executing 0.8.3 pyhd3eb1b0_0 anaconda/pkgs/main
ffmpeg 4.3 hf484d3e_0 pytorch
filelock 3.9.0 py310h06a4308_0 anaconda/pkgs/main
freetype 2.12.1 h4a9f257_0 anaconda/pkgs/main
frozenlist 1.3.3 py310h5eee18b_0 anaconda/pkgs/main
fsspec 2023.5.0 pyh1a96a4e_0 conda-forge
gflags 2.2.2 he1b5a44_1004 conda-forge
giflib 5.2.1 h5eee18b_3 anaconda/pkgs/main
glog 0.6.0 h6f12383_0 conda-forge
gmp 6.2.1 h295c915_3 anaconda/pkgs/main
gmpy2 2.1.2 py310heeb90bb_0 anaconda/pkgs/main
gnutls 3.6.15 he1e5248_0 anaconda/pkgs/main
grpc-cpp 1.46.1 h33aed49_1 anaconda/pkgs/main
huggingface_hub 0.14.1 py310h06a4308_0 anaconda/pkgs/main
icu 58.2 he6710b0_3 anaconda/pkgs/main
idna 3.4 py310h06a4308_0 anaconda/pkgs/main
importlib-metadata 6.6.0 pyha770c72_0 conda-forge
importlib_metadata 6.6.0 hd8ed1ab_0 conda-forge
intel-openmp 2023.1.0 hdb19cb5_46305 anaconda/pkgs/main
ipykernel 6.19.2 py310h2f386ee_0 anaconda/pkgs/main
ipython 8.12.0 py310h06a4308_0 anaconda/pkgs/main
ipywidgets 8.0.4 py310h06a4308_0 anaconda/pkgs/main
jedi 0.18.1 py310h06a4308_1 anaconda/pkgs/main
jinja2 3.1.2 py310h06a4308_0 anaconda/pkgs/main
jinja2-time 0.2.0 pyhd3eb1b0_3 anaconda/pkgs/main
joblib 1.2.0 pyhd8ed1ab_0 conda-forge
jpeg 9e h5eee18b_1 anaconda/pkgs/main
jupyter_client 8.1.0 py310h06a4308_0 anaconda/pkgs/main
jupyter_core 5.3.0 py310h06a4308_0 anaconda/pkgs/main
jupyterlab_widgets 3.0.5 py310h06a4308_0 anaconda/pkgs/main
keyutils 1.6.1 h166bdaf_0 conda-forge
krb5 1.19.4 h568e23c_0 anaconda/pkgs/main
lame 3.100 h7b6447c_0 anaconda/pkgs/main
lcms2 2.12 h3be6417_0 anaconda/pkgs/main
ld_impl_linux-64 2.38 h1181459_1 anaconda/pkgs/main
lerc 3.0 h295c915_0 anaconda/pkgs/main
libabseil 20211102.0 cxx17_h48a1fff_3 anaconda/cloud/conda-forge
libboost 1.73.0 h28710b8_12 anaconda/pkgs/main
libbrotlicommon 1.0.9 h166bdaf_8 conda-forge
libbrotlidec 1.0.9 h166bdaf_8 conda-forge
libbrotlienc 1.0.9 h166bdaf_8 conda-forge
libcrc32c 1.1.2 h9c3ff4c_0 conda-forge
libcublas 11.11.3.6 0 nvidia
libcufft 10.9.0.58 0 nvidia
libcufile 1.6.1.9 0 nvidia
libcurand 10.3.2.106 0 nvidia
libcurl 7.88.1 h91b91d3_0 anaconda/pkgs/main
libcusolver 11.4.1.48 0 nvidia
libcusparse 11.7.5.86 0 nvidia
libdeflate 1.17 h5eee18b_0 anaconda/pkgs/main
libedit 3.1.20221030 h5eee18b_0 anaconda/pkgs/main
libev 4.33 h516909a_1 conda-forge
libevent 2.1.12 h8f2d780_0 anaconda/pkgs/main
libffi 3.4.4 h6a678d5_0 anaconda/pkgs/main
libgcc-ng 12.2.0 h65d4601_19 conda-forge
libgfortran-ng 11.2.0 h00389a5_1 anaconda/pkgs/main
libgfortran5 11.2.0 h1234567_1 anaconda/pkgs/main
libgomp 12.2.0 h65d4601_19 conda-forge
libiconv 1.16 h7f8727e_2 anaconda/pkgs/main
libidn2 2.3.4 h5eee18b_0 anaconda/pkgs/main
libnghttp2 1.46.0 hce63b2e_0 anaconda/pkgs/main
libnpp 11.8.0.86 0 nvidia
libnsl 2.0.0 h7f98852_0 conda-forge
libnuma 2.0.16 h0b41bf4_1 conda-forge
libnvjpeg 11.9.0.86 0 nvidia
libpng 1.6.39 h5eee18b_0 anaconda/pkgs/main
libprotobuf 3.20.3 he621ea3_0 anaconda/pkgs/main
libsodium 1.0.18 h7b6447c_0 anaconda/pkgs/main
libsqlite 3.42.0 h2797004_0 conda-forge
libssh2 1.10.0 h8f2d780_0 anaconda/pkgs/main
libstdcxx-ng 12.2.0 h46fd767_19 conda-forge
libtasn1 4.19.0 h5eee18b_0 anaconda/pkgs/main
libthrift 0.15.0 hcc01f38_0 anaconda/pkgs/main
libtiff 4.5.0 h6a678d5_2 anaconda/pkgs/main
libunistring 0.9.10 h27cfd23_0 anaconda/pkgs/main
libutf8proc 2.8.0 h166bdaf_0 conda-forge
libuuid 1.41.5 h5eee18b_0 anaconda/pkgs/main
libwebp 1.2.4 h11a3e52_1 anaconda/pkgs/main
libwebp-base 1.2.4 h5eee18b_1 anaconda/pkgs/main
libzlib 1.2.13 h166bdaf_4 conda-forge
lz4-c 1.9.4 h6a678d5_0 anaconda/pkgs/main
markupsafe 2.1.1 py310h7f8727e_0 anaconda/pkgs/main
matplotlib-inline 0.1.6 py310h06a4308_0 anaconda/pkgs/main
mkl 2023.1.0 h6d00ec8_46342 anaconda/pkgs/main
mkl-service 2.4.0 py310h5eee18b_1 anaconda/pkgs/main
mkl_fft 1.3.6 py310h1128e8f_1 anaconda/pkgs/main
mkl_random 1.2.2 py310h1128e8f_1 anaconda/pkgs/main
mpc 1.1.0 h10f8cd9_1 anaconda/pkgs/main
mpfr 4.0.2 hb69a4c5_1 anaconda/pkgs/main
mpmath 1.2.1 py310h06a4308_0 anaconda/pkgs/main
multidict 6.0.2 py310h5eee18b_0 anaconda/pkgs/main
multiprocess 0.70.14 py310h5764c6d_3 conda-forge
ncurses 6.4 h6a678d5_0 anaconda/pkgs/main
nest-asyncio 1.5.6 py310h06a4308_0 anaconda/pkgs/main
nettle 3.7.3 hbbd107a_1 anaconda/pkgs/main
networkx 2.8.4 py310h06a4308_1 anaconda/pkgs/main
nltk 3.7 pyhd3eb1b0_0 anaconda/pkgs/main
numpy 1.24.3 py310h5f9d8c6_1 anaconda/pkgs/main
numpy-base 1.24.3 py310hb5e798b_1 anaconda/pkgs/main
openh264 2.1.1 h4ff587b_0 anaconda/pkgs/main
openssl 1.1.1t h7f8727e_0 anaconda/pkgs/main
orc 1.7.4 hb3bc3d3_1 anaconda/pkgs/main
packaging 23.1 pyhd8ed1ab_0 conda-forge
pandas 2.0.2 py310h7cbd5c2_0 conda-forge
parso 0.8.3 pyhd3eb1b0_0 anaconda/pkgs/main
pexpect 4.8.0 pyhd3eb1b0_3 anaconda/pkgs/main
pickleshare 0.7.5 pyhd3eb1b0_1003 anaconda/pkgs/main
pillow 9.4.0 py310h6a678d5_0 anaconda/pkgs/main
pip 23.0.1 py310h06a4308_0 anaconda/pkgs/main
platformdirs 2.5.2 py310h06a4308_0 anaconda/pkgs/main
pooch 1.4.0 pyhd3eb1b0_0 anaconda/pkgs/main
poyo 0.5.0 pyhd3eb1b0_0 anaconda/pkgs/main
prompt-toolkit 3.0.36 py310h06a4308_0 anaconda/pkgs/main
protobuf 3.20.3 py310h6a678d5_0 anaconda/pkgs/main
psutil 5.9.0 py310h5eee18b_0 anaconda/pkgs/main
ptyprocess 0.7.0 pyhd3eb1b0_2 anaconda/pkgs/main
pure_eval 0.2.2 pyhd3eb1b0_0 anaconda/pkgs/main
pyarrow 8.0.0 py310h468efa6_0 anaconda/pkgs/main
pycparser 2.21 pyhd3eb1b0_0 anaconda/pkgs/main
pygments 2.15.1 py310h06a4308_1 anaconda/pkgs/main
pyopenssl 23.0.0 py310h06a4308_0 anaconda/pkgs/main
pysocks 1.7.1 py310h06a4308_0 anaconda/pkgs/main
python 3.10.11 h7a1cb2a_2 anaconda/pkgs/main
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python-slugify 5.0.2 pyhd3eb1b0_0 anaconda/pkgs/main
python-tzdata 2023.3 pyhd8ed1ab_0 conda-forge
python-xxhash 3.2.0 py310h1fa729e_0 conda-forge
python_abi 3.10 2_cp310 anaconda/cloud/conda-forge
pytorch 2.0.1 py3.10_cuda11.8_cudnn8.7.0_0 pytorch
pytorch-cuda 11.8 h7e8668a_5 pytorch
pytorch-mutex 1.0 cuda pytorch
pytz 2023.3 pyhd8ed1ab_0 conda-forge
pyyaml 6.0 py310h5764c6d_5 conda-forge
pyzmq 25.0.2 py310h6a678d5_0 anaconda/pkgs/main
rdma-core 28.9 h59595ed_1 conda-forge
re2 2022.04.01 h295c915_0 anaconda/pkgs/main
readline 8.2 h5eee18b_0 anaconda/pkgs/main
regex 2023.5.5 py310h2372a71_0 conda-forge
requests 2.29.0 py310h06a4308_0 anaconda/pkgs/main
responses 0.13.3 pyhd3eb1b0_0 anaconda/pkgs/main
rouge-score 0.1.2 pyhd8ed1ab_0 anaconda/cloud/conda-forge
s2n 1.3.33 hae46d1a_0 anaconda/cloud/conda-forge
sacremoses 0.0.53 pyhd8ed1ab_0 conda-forge
scikit-learn 1.2.2 py310h6a678d5_1 anaconda/pkgs/main
scipy 1.10.1 py310h5f9d8c6_1 anaconda/pkgs/main
sentencepiece 0.1.99 py310hdb19cb5_0 anaconda/pkgs/main
setuptools 67.8.0 py310h06a4308_0 anaconda/pkgs/main
six 1.16.0 pyhd3eb1b0_1 anaconda/pkgs/main
snappy 1.1.10 h9fff704_0 conda-forge
soupsieve 2.4 py310h06a4308_0 anaconda/pkgs/main
sqlite 3.41.2 h5eee18b_0 anaconda/pkgs/main
stack_data 0.2.0 pyhd3eb1b0_0 anaconda/pkgs/main
sympy 1.11.1 py310h06a4308_0 anaconda/pkgs/main
tbb 2021.8.0 hdb19cb5_0 anaconda/pkgs/main
text-unidecode 1.3 pyhd3eb1b0_0 anaconda/pkgs/main
threadpoolctl 2.2.0 pyh0d69192_0 anaconda/pkgs/main
tk 8.6.12 h1ccaba5_0 anaconda/pkgs/main
tokenizers 0.11.4 py310h3dcd8bd_1 anaconda/pkgs/main
torchaudio 2.0.2 py310_cu118 pytorch
torchtriton 2.0.0 py310 pytorch
torchvision 0.15.2 py310_cu118 pytorch
tornado 6.2 py310h5eee18b_0 anaconda/pkgs/main
tqdm 4.65.0 py310h2f386ee_0 anaconda/pkgs/main
traitlets 5.7.1 py310h06a4308_0 anaconda/pkgs/main
transformers 4.24.0 py310h06a4308_0 anaconda/pkgs/main
typing-extensions 4.5.0 py310h06a4308_0 anaconda/pkgs/main
typing_extensions 4.5.0 py310h06a4308_0 anaconda/pkgs/main
tzdata 2023c h04d1e81_0 anaconda/pkgs/main
ucx 1.14.1 hf587318_2 anaconda/cloud/conda-forge
unidecode 1.2.0 pyhd3eb1b0_0 anaconda/pkgs/main
urllib3 1.26.16 py310h06a4308_0 anaconda/pkgs/main
utf8proc 2.6.1 h27cfd23_0 anaconda/pkgs/main
wcwidth 0.2.5 pyhd3eb1b0_0 anaconda/pkgs/main
wheel 0.38.4 py310h06a4308_0 anaconda/pkgs/main
widgetsnbextension 4.0.5 py310h06a4308_0 anaconda/pkgs/main
xxhash 0.8.1 h0b41bf4_0 conda-forge
xz 5.4.2 h5eee18b_0 anaconda/pkgs/main
yaml 0.2.5 h7f98852_2 conda-forge
yarl 1.8.1 py310h5eee18b_0 anaconda/pkgs/main
zeromq 4.3.4 h2531618_0 anaconda/pkgs/main
zipp 3.15.0 pyhd8ed1ab_0 conda-forge
zlib 1.2.13 h166bdaf_4 conda-forge
zstd 1.5.5 hc292b87_0 anaconda/pkgs/main
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I have the following problem when practicing the Summarization section of chapter 7 of the NLP Course.
[Summarization - Hugging Face NLP Course](https://huggingface.co/learn/nlp-course/chapter7/5?fw=pt#summarization)
```py
args = Seq2SeqTrainingArguments(
output_dir=f"{model_name}-finetuned-amazon-en-es",
evaluation_strategy="epoch",
learning_rate=5.6e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
weight_decay=0.01,
save_total_limit=3,
num_train_epochs=num_train_epochs,
predict_with_generate=True,
logging_steps=logging_steps,
push_to_hub=True,
)
from transformers import Seq2SeqTrainer
trainer = Seq2SeqTrainer(
model,
args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
```
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[32], line 3
1 from transformers import Seq2SeqTrainer
----> 3 trainer = Seq2SeqTrainer(
4 model,
5 args,
6 train_dataset=tokenized_datasets["train"],
7 eval_dataset=tokenized_datasets["validation"],
8 data_collator=data_collator,
9 tokenizer=tokenizer,
10 compute_metrics=compute_metrics,
11 )
File ~/micromamba/envs/nlpcourse/lib/python3.10/site-packages/transformers/trainer.py:489, in Trainer.__init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers, preprocess_logits_for_metrics)
487 # Create clone of distant repo and output directory if needed
488 if self.args.push_to_hub:
--> 489 self.init_git_repo(at_init=True)
490 # In case of pull, we need to make sure every process has the latest.
491 if is_torch_tpu_available():
File ~/micromamba/envs/nlpcourse/lib/python3.10/site-packages/transformers/trainer.py:3284, in Trainer.init_git_repo(self, at_init)
3281 repo_name = get_full_repo_name(repo_name, token=self.args.hub_token)
3283 try:
-> 3284 self.repo = Repository(
3285 self.args.output_dir,
3286 clone_from=repo_name,
3287 use_auth_token=use_auth_token,
3288 private=self.args.hub_private_repo,
3289 )
3290 except EnvironmentError:
3291 if self.args.overwrite_output_dir and at_init:
3292 # Try again after wiping output_dir
File ~/micromamba/envs/nlpcourse/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:120, in validate_hf_hub_args.._inner_fn(*args, **kwargs)
117 if check_use_auth_token:
118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 120 return fn(*args, **kwargs)
TypeError: Repository.__init__() got an unexpected keyword argument 'private'
```
Could anyone give some help to fix this?
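For what it is worth, a small sanity check (a sketch, based on the version mismatch suggested in the comments above) is to print both library versions, since `transformers` 4.24.0 still passes `private=` to `Repository` (see the traceback) while recent `huggingface_hub` releases no longer accept it:
```python
import huggingface_hub
import transformers

# If transformers is old (4.24.0 here) and huggingface_hub is recent (0.14.1),
# Trainer.init_git_repo() passes a `private=` kwarg that Repository rejects;
# upgrading transformers (pip install -U transformers) resolves the mismatch.
print("transformers:", transformers.__version__)
print("huggingface_hub:", huggingface_hub.__version__)
```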
### Expected behavior
The `Seq2SeqTrainer` should initialize normally, without errors.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24155/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24152
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24152/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24152/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24152/events
|
https://github.com/huggingface/transformers/issues/24152
| 1,750,764,911 |
I_kwDOCUB6oc5oWo1v
| 24,152 |
/gpt2/resolve/main/tokenizer_config.json (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number
|
{
"login": "starlitsky2010",
"id": 10757117,
"node_id": "MDQ6VXNlcjEwNzU3MTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/10757117?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/starlitsky2010",
"html_url": "https://github.com/starlitsky2010",
"followers_url": "https://api.github.com/users/starlitsky2010/followers",
"following_url": "https://api.github.com/users/starlitsky2010/following{/other_user}",
"gists_url": "https://api.github.com/users/starlitsky2010/gists{/gist_id}",
"starred_url": "https://api.github.com/users/starlitsky2010/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/starlitsky2010/subscriptions",
"organizations_url": "https://api.github.com/users/starlitsky2010/orgs",
"repos_url": "https://api.github.com/users/starlitsky2010/repos",
"events_url": "https://api.github.com/users/starlitsky2010/events{/privacy}",
"received_events_url": "https://api.github.com/users/starlitsky2010/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The error seems like a temporary failure on the Hub, your code does not error on my side. As for downloading in another folder you need to use the `cache_dir` argument to change the cache location.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,689 | 1,689 |
NONE
| null |
### System Info
/usr/local/lib/python3.8/dist-packages/pandas/core/computation/expressions.py:20: UserWarning: Pandas requires version '2.7.3' or newer of 'numexpr' (version '2.7.2' currently installed).
from pandas.core.computation.check import NUMEXPR_INSTALLED
- `transformers` version: 4.29.2
- Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0a0+1767026 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
Hi @ArthurZucker @younesbelkada @sgugger,
Environment
```
enroot 3.4.1
pyxis 0.7.0
slurm slurm-wlm 19.05.5
Ubuntu 20.04
NeMo docker image: nvcr.io+ea-bignlp+nemofw-training+23.04.1-py3.sqsh
```
When I run the Pile dataset preprocessing, the following error occurs.
```
Traceback (most recent call last):
File "/opt/NeMo/nemo/collections/common/tokenizers/huggingface/auto_tokenizer.py", line 74, in __init__
self.tokenizer = AUTOTOKENIZER.from_pretrained(
File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/tokenization_auto.py", line 643, in from_pretrained
tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/tokenization_auto.py", line 487, in get_tokenizer_config
resolved_config_file = cached_file(
File "/usr/local/lib/python3.8/dist-packages/transformers/utils/hub.py", line 417, in cached_file
resolved_file = hf_hub_download(
File "/usr/local/lib/python3.8/dist-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/huggingface_hub/file_download.py", line 1195, in hf_hub_download
metadata = get_hf_file_metadata(
File "/usr/local/lib/python3.8/dist-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/huggingface_hub/file_download.py", line 1532, in get_hf_file_metadata
r = _request_wrapper(
File "/usr/local/lib/python3.8/dist-packages/huggingface_hub/file_download.py", line 407, in _request_wrapper
response = _request_wrapper(
File "/usr/local/lib/python3.8/dist-packages/huggingface_hub/file_download.py", line 442, in _request_wrapper
return http_backoff(
File "/usr/local/lib/python3.8/dist-packages/huggingface_hub/utils/_http.py", line 212, in http_backoff
response = session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.8/dist-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/requests/adapters.py", line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /gpt2/resolve/main/tokenizer_config.json (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:2635)')))
```
I've tried downgrading to requests==2.19.1 and certifi==2018.8.13, but it still failed.
When I run the script below:
test_ssl.py
```
import ssl
print(ssl.OPENSSL_VERSION)
```
Then,
```
# python test_ssl.py
OpenSSL 1.1.1f 31 Mar 2020
```
Then I execute the command below:
```
python -c "from transformers import AutoTokenizer; tok_gpt=AutoTokenizer.from_pretrained('gpt2');"
```
It will download the gpt2 tokenizer like this:
```
root@nf5688m7-1:/workspace# tree ~/.cache/huggingface/hub/
/root/.cache/huggingface/hub/
├── models--gpt2
│ ├── blobs
│ │ ├── 10c66461e4c109db5a2196bff4bb59be30396ed8
│ │ ├── 1f1d9aaca301414e7f6c9396df506798ff4eb9a6
│ │ ├── 226b0752cac7789c48f0cb3ec53eda48b7be36cc
│ │ └── 4b988bccc9dc5adacd403c00b4704976196548f8
│ ├── refs
│ │ └── main
│ └── snapshots
│ └── e7da7f221d5bf496a48136c0cd264e630fe9fcc8
│ ├── config.json -> ../../blobs/10c66461e4c109db5a2196bff4bb59be30396ed8
│ ├── merges.txt -> ../../blobs/226b0752cac7789c48f0cb3ec53eda48b7be36cc
│ ├── tokenizer.json -> ../../blobs/4b988bccc9dc5adacd403c00b4704976196548f8
│ └── vocab.json -> ../../blobs/1f1d9aaca301414e7f6c9396df506798ff4eb9a6
└── version.txt
```
How can I solve the SSL problem so that the gpt2 tokenizer files (e.g. /gpt2/resolve/main/tokenizer_config.json) download automatically?
Thanks
Aaron
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Environment
```
enroot 3.4.1
pyxis 0.7.0
slurm slurm-wlm 19.05.5
Ubuntu 20.04
NeMo docker image: nvcr.io+ea-bignlp+nemofw-training+23.04.1-py3.sqsh
```
2. python -c "from transformers import AutoTokenizer; tok_gpt=AutoTokenizer.from_pretrained('gpt2');"
3. How can I solve the SSL problem so that the gpt2 tokenizer files (e.g. /gpt2/resolve/main/tokenizer_config.json) download automatically?
### Expected behavior
The gpt2 tokenizer files (including /gpt2/resolve/main/tokenizer_config.json) should download automatically without the SSL error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24152/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24151
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24151/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24151/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24151/events
|
https://github.com/huggingface/transformers/pull/24151
| 1,750,589,923 |
PR_kwDOCUB6oc5SqTIh
| 24,151 |
[tests] fix bitsandbytes import issue
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
This test has been failing whenever the `peft` library is installed, since it tries to access `bitsandbytes.nn`:
```
$ pip install accelerate peft bitsandbytes==0.39.0
$ RUN_SLOW="yes" pytest -sv tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_bnb
[...]
RuntimeError: Failed to import transformers.trainer_seq2seq because of the following error (look up to see its traceback):
E module 'bitsandbytes' has no attribute 'nn'
```
here is why it's happening:
1. We push `transformers/tests` into `sys.path` when running the subprocess-based tests [here](https://github.com/huggingface/transformers/blob/deff5979fee1f989d26e4946c92a5c35ce695af8/src/transformers/testing_utils.py#L1226)
2. but we have `transformers/tests/bitsandbytes` dir under `transformers/tests`
3. so when you do import `bitsandbytes.nn` it finds the wrong `bitsandbytes` dir, which is not the bnb library and it breaks
So this PR renames `transformers/tests/bitsandbytes` to `transformers/tests/bnb` which removes the conflicts and the failing test.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24151/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24151",
"html_url": "https://github.com/huggingface/transformers/pull/24151",
"diff_url": "https://github.com/huggingface/transformers/pull/24151.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24151.patch",
"merged_at": 1686372792000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24150
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24150/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24150/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24150/events
|
https://github.com/huggingface/transformers/issues/24150
| 1,750,427,486 |
I_kwDOCUB6oc5oVWde
| 24,150 |
Training auto cancelling
|
{
"login": "Cirediallo",
"id": 16343565,
"node_id": "MDQ6VXNlcjE2MzQzNTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/16343565?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cirediallo",
"html_url": "https://github.com/Cirediallo",
"followers_url": "https://api.github.com/users/Cirediallo/followers",
"following_url": "https://api.github.com/users/Cirediallo/following{/other_user}",
"gists_url": "https://api.github.com/users/Cirediallo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cirediallo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cirediallo/subscriptions",
"organizations_url": "https://api.github.com/users/Cirediallo/orgs",
"repos_url": "https://api.github.com/users/Cirediallo/repos",
"events_url": "https://api.github.com/users/Cirediallo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cirediallo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey!\r\nWithout a traceback, there's nothing we can really do to help your with this. It seems that the las line has ` 0% 1228/30627537 [1:17:07<29988:35:31, 3.53s/it]^C` with `^C` is similar to when you actually press ctrl+c.\r\nI suggest to post this on the [forum](https://discuss.huggingface.co/) and see if someone already had this issue",
"Actually there is no traceback it just do it like the last line as described above(like someone pressed Ctrl+C) and stop like the cell have been executed without error nothing else",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,689 | 1,689 |
NONE
| null |
### System Info
Transformers version: 4.31.0.dev0
Platform: Google Colab
Environment configuration: TPU, GPU(V100)
Python version: Python 3.10.12
### Who can help?
@patil-suraj @ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
! python ./transformers/examples/pytorch/translation/run_translation.py \
--model_name_or_path t5-large \
--do_train \
--do_eval \
--source_lang fr \
--target_lang en \
--source_prefix "translate French to English: " \
--dataset_name wmt14 \
--dataset_config_name fr-en \
--output_dir ./tmp/T5-large-fr-en \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
--push_to_hub=True
### Expected behavior
The training script should continue, but it cancels itself as if someone pressed Ctrl+C. Here is the output:
```
{'loss': 1.6703, 'learning_rate': 4.9998367482177885e-05, 'epoch': 0.0}
0% 1000/30627537 [1:00:01<26918:05:16, 3.16s/it][INFO|trainer.py:2926] 2023-06-09 19:06:01,861 >> Saving model checkpoint to ./tmp/tst-translation/checkpoint-1000
[INFO|configuration_utils.py:458] 2023-06-09 19:06:01,863 >> Configuration saved in ./tmp/tst-translation/checkpoint-1000/config.json
[INFO|configuration_utils.py:364] 2023-06-09 19:06:01,863 >> Configuration saved in ./tmp/tst-translation/checkpoint-1000/generation_config.json
[INFO|modeling_utils.py:1853] 2023-06-09 19:06:09,160 >> Model weights saved in ./tmp/tst-translation/checkpoint-1000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2194] 2023-06-09 19:06:09,162 >> tokenizer config file saved in ./tmp/tst-translation/checkpoint-1000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2201] 2023-06-09 19:06:09,162 >> Special tokens file saved in ./tmp/tst-translation/checkpoint-1000/special_tokens_map.json
[INFO|tokenization_t5_fast.py:186] 2023-06-09 19:06:09,234 >> Copy vocab file to ./tmp/tst-translation/checkpoint-1000/spiece.model
[INFO|tokenization_utils_base.py:2194] 2023-06-09 19:06:54,528 >> tokenizer config file saved in ./tmp/tst-translation/tokenizer_config.json
[INFO|tokenization_utils_base.py:2201] 2023-06-09 19:06:54,528 >> Special tokens file saved in ./tmp/tst-translation/special_tokens_map.json
[INFO|tokenization_t5_fast.py:186] 2023-06-09 19:06:54,599 >> Copy vocab file to ./tmp/tst-translation/spiece.model
0% 1228/30627537 [1:17:07<29988:35:31, 3.53s/it]^C
```
No matter which configuration I use, this is the behavior I get. Is it related to the model's task-specific params (see below), which cover English-to-French but not French-to-English?
```
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24150/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24149
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24149/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24149/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24149/events
|
https://github.com/huggingface/transformers/issues/24149
| 1,750,331,657 |
I_kwDOCUB6oc5oU_EJ
| 24,149 |
Official Example - BeamSearchScorer always returns just one beam, even I specified 3
|
{
"login": "Oxi84",
"id": 25420033,
"node_id": "MDQ6VXNlcjI1NDIwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oxi84",
"html_url": "https://github.com/Oxi84",
"followers_url": "https://api.github.com/users/Oxi84/followers",
"following_url": "https://api.github.com/users/Oxi84/following{/other_user}",
"gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions",
"organizations_url": "https://api.github.com/users/Oxi84/orgs",
"repos_url": "https://api.github.com/users/Oxi84/repos",
"events_url": "https://api.github.com/users/Oxi84/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oxi84/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@ArthurZucker @younesbelkada @Narsil",
"Hi @Oxi84, thanks for raising this issue!\r\n\r\nCould you provide information about the environment packages? Just run `! transformers-cli env` in a colab cell and copy paste the output. \r\n\r\ncc @gante ",
"Sure:\r\n\r\n- `transformers` version: 4.30.1\r\n- Platform: Linux-5.15.107+-x86_64-with-glibc2.31\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.15.1\r\n- Safetensors version: 0.3.1\r\n- PyTorch version (GPU?): 2.0.1+cu118 (True)\r\n- Tensorflow version (GPU?): 2.12.0 (True)\r\n- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)\r\n- Jax version: 0.4.10\r\n- JaxLib version: 0.4.10\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n",
"I have found the solution actually:\r\n\r\n just add:\r\n \r\n num_beam_hyps_to_keep = 5\r\n\r\n or number you want, then scorer just chooses 5 best hypothesis.\r\n\r\n beam_scorer = BeamSearchScorer(\r\n batch_size=1,\r\n do_early_stopping=True,\r\n num_beams=num_beams,\r\n num_beam_hyps_to_keep = 5,\r\n device=model.device\r\n )",
"I only need to make this use multiple beams. With just one beam it is too slow.",
"This can be done by using this:\r\n\r\n input_ids = torch.ones((num_beams*the_batch_size, 1), device=model.device, dtype=torch.long)\r\n\r\nI guess something related to encoder/decoder",
"Hey @Oxi84 👋 \r\n\r\nThe `.generate()` method does a lot of preprocessing, input preparation, and instance initialization for you. I'd recommend using it, as we don't have the bandwidth to provide usage support for lower-level APIs :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,689 | 1,689 |
NONE
| null |
### System Info
Google Colab (latest)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import (
AutoTokenizer,
AutoModelForSeq2SeqLM,
LogitsProcessorList,
MinLengthLogitsProcessor,
BeamSearchScorer,
)
import torch
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
encoder_input_str = "translate English to German: How old are you?"
encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids
# lets run beam search using 3 beams
num_beams = 3
# define decoder start token ids
input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long)
input_ids = input_ids * model.config.decoder_start_token_id
# add encoder_outputs to model keyword arguments
model_kwargs = {
"encoder_outputs": model.get_encoder()(
encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True
)
}
# instantiate beam scorer
beam_scorer = BeamSearchScorer(
batch_size=1,
num_beams=num_beams,
device=model.device,
)
# instantiate logits processors
logits_processor = LogitsProcessorList(
[
MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id),
]
)
outputs = model.beam_search(input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs)
out = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print("out", out)
```
### Expected behavior
`out` is just one line, even when I pass `num_return_sequences=2` in various places, for example:
outputs = model.beam_search(input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs, num_return_sequences=2)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24149/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24148
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24148/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24148/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24148/events
|
https://github.com/huggingface/transformers/issues/24148
| 1,750,324,682 |
I_kwDOCUB6oc5oU9XK
| 24,148 |
RWKV cuda kernel loading
|
{
"login": "Hannibal046",
"id": 38466901,
"node_id": "MDQ6VXNlcjM4NDY2OTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hannibal046",
"html_url": "https://github.com/Hannibal046",
"followers_url": "https://api.github.com/users/Hannibal046/followers",
"following_url": "https://api.github.com/users/Hannibal046/following{/other_user}",
"gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions",
"organizations_url": "https://api.github.com/users/Hannibal046/orgs",
"repos_url": "https://api.github.com/users/Hannibal046/repos",
"events_url": "https://api.github.com/users/Hannibal046/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hannibal046/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I found the problem was about the `nvcc` and `cuda` mismatch. The installation of `cudatoolkit` from `conda` doesn't necessarily download all relevant cuda things(e.g. `nvcc`).\r\n\r\nSo I solve it on my side by `conda install cuda -c nvidia`",
"Awesome thanks for sharing the solution! "
] | 1,686 | 1,687 | 1,687 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.4.0-147-generic-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
cc @younesbelkada @ArthurZucker @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
My `demo.py`:
```
from transformers import AutoTokenizer, RwkvModel
import torch
device = torch.device("cuda:5")
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-169m-pile")
model = RwkvModel.from_pretrained("RWKV/rwkv-4-169m-pile").to(device)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt").to(device)
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
I commented out the try/except in `modeling_rwkv.py` to force it to load the CUDA kernel for RWKV attention, and I got this:
```
python demo.py
Traceback (most recent call last):
File "/data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1893, in _run_ninja_build
subprocess.run(
File "/data/chengxin/anaconda3/envs/rwkv/lib/python3.8/subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "demo.py", line 6, in <module>
model = RwkvModel.from_pretrained("RWKV/rwkv-4-169m-pile").to(device)
File "/data/chengxin/rwkv/transformers/src/transformers/modeling_utils.py", line 2675, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/data/chengxin/rwkv/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 604, in __init__
self.blocks = nn.ModuleList([RwkvBlock(config, layer_id=idx) for idx in range(config.num_hidden_layers)])
File "/data/chengxin/rwkv/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 604, in <listcomp>
self.blocks = nn.ModuleList([RwkvBlock(config, layer_id=idx) for idx in range(config.num_hidden_layers)])
File "/data/chengxin/rwkv/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 378, in __init__
self.attention = RwkvSelfAttention(config, layer_id)
File "/data/chengxin/rwkv/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 263, in __init__
load_wkv_cuda_kernel(config.context_length)
File "/data/chengxin/rwkv/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 87, in load_wkv_cuda_kernel
rwkv_cuda_kernel = load_kernel(
File "/data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1284, in load
return _jit_compile(
File "/data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1509, in _jit_compile
_write_ninja_file_and_build_library(
File "/data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1624, in _write_ninja_file_and_build_library
_run_ninja_build(
File "/data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1909, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error building extension 'wkv_1024': [1/3] /usr/bin/nvcc -DTORCH_EXTENSION_NAME=wkv_1024 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/TH -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/THC -isystem /data/chengxin/anaconda3/envs/rwkv/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -res-usage --maxrregcount 60 --use_fast_math -O3 -Xptxas -O3 --extra-device-vectorization -DTmax=1024 -std=c++17 -c /data/chengxin/rwkv/transformers/src/transformers/kernels/rwkv/wkv_cuda.cu -o wkv_cuda.cuda.o
FAILED: wkv_cuda.cuda.o
/usr/bin/nvcc -DTORCH_EXTENSION_NAME=wkv_1024 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/TH -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/THC -isystem /data/chengxin/anaconda3/envs/rwkv/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -res-usage --maxrregcount 60 --use_fast_math -O3 -Xptxas -O3 --extra-device-vectorization -DTmax=1024 -std=c++17 -c /data/chengxin/rwkv/transformers/src/transformers/kernels/rwkv/wkv_cuda.cu -o wkv_cuda.cuda.o
nvcc fatal : Unknown option '-extra-device-vectorization'
[2/3] /usr/bin/nvcc -DTORCH_EXTENSION_NAME=wkv_1024 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/TH -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/THC -isystem /data/chengxin/anaconda3/envs/rwkv/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -res-usage --maxrregcount 60 --use_fast_math -O3 -Xptxas -O3 --extra-device-vectorization -DTmax=1024 -std=c++17 -c /data/chengxin/rwkv/transformers/src/transformers/kernels/rwkv/wkv_cuda_bf16.cu -o wkv_cuda_bf16.cuda.o
FAILED: wkv_cuda_bf16.cuda.o
/usr/bin/nvcc -DTORCH_EXTENSION_NAME=wkv_1024 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/TH -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/THC -isystem /data/chengxin/anaconda3/envs/rwkv/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -res-usage --maxrregcount 60 --use_fast_math -O3 -Xptxas -O3 --extra-device-vectorization -DTmax=1024 -std=c++17 -c /data/chengxin/rwkv/transformers/src/transformers/kernels/rwkv/wkv_cuda_bf16.cu -o wkv_cuda_bf16.cuda.o
nvcc fatal : Unknown option '-extra-device-vectorization'
ninja: build stopped: subcommand failed.
```
### Expected behavior
The CUDA kernel should load successfully.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24148/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24148/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24147
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24147/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24147/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24147/events
|
https://github.com/huggingface/transformers/issues/24147
| 1,750,272,707 |
I_kwDOCUB6oc5oUwrD
| 24,147 |
ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`
|
{
"login": "cssndrx",
"id": 1688894,
"node_id": "MDQ6VXNlcjE2ODg4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1688894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cssndrx",
"html_url": "https://github.com/cssndrx",
"followers_url": "https://api.github.com/users/cssndrx/followers",
"following_url": "https://api.github.com/users/cssndrx/following{/other_user}",
"gists_url": "https://api.github.com/users/cssndrx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cssndrx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cssndrx/subscriptions",
"organizations_url": "https://api.github.com/users/cssndrx/orgs",
"repos_url": "https://api.github.com/users/cssndrx/repos",
"events_url": "https://api.github.com/users/cssndrx/events{/privacy}",
"received_events_url": "https://api.github.com/users/cssndrx/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"You need to restart your colab environment after updating the library.",
"thanks @sgugger for the quick fix! I was having the same issue as mentioned by @cssndrx . ",
"Thank you @sgugger, that was the issue! ",
"Thanks, I was having the same issue!\r\n",
"restart virtalenv slove my same issue,thanks",
"wow! Thanks alot @sgugger",
"Restarting the notebook and the kernel resolve it also for me",
"> You need to restart your colab environment after updating the library.\r\n\r\nThank you.",
"Getting this error just by running the cells first time and restart also doesn't work",
"Restart even doesn't work me either",
"I was having the same issue, restarting also worked for me!",
"I am having this issue in my local machine. How to resolve it there?",
"@NavneetSajwan try to restart the notebook.",
"I am having the same issue in my colab environment, after updating the notebook worked for me!",
"Thank you!@sgugger. Same problem solved.",
"wow great restart also worked for me.. mannnnyyy thanks\r\n",
"\"Is there an alternative method? Restarting and rerunning all the cells consumes a significant amount of time.",
"> Restart even doesn't work me either\r\n\r\nStep1:\r\n!pip install transformers[torch] --- run installation\r\nStep2:\r\n#!pip install transformers[torch] --- comment out the installaltion\r\nStep3:\r\nRestart Colab Session"
] | 1,686 | 1,705 | 1,686 |
NONE
| null |
### System Info
I'm running the example code from
https://huggingface.co/docs/transformers/training
on a Colab
and it's failing with
```
/usr/local/lib/python3.10/dist-packages/transformers/training_args.py in _setup_devices(self)
   1670 if not is_sagemaker_mp_enabled():
   1671     if not is_accelerate_available(min_version="0.20.1"):
-> 1672         raise ImportError(
   1673             "Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`"
   1674         )

ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`
```
However, it still gives the error even after I pip install the recommended libraries.
Also, pip freeze shows accelerate==0.20.3, which should satisfy the requirement of `accelerate>=0.20.1`, so I'm not sure why the Trainer is throwing the error.
Thanks for taking a look @sgugger
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Expected behavior
Training should run without the import error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24147/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24147/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24146
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24146/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24146/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24146/events
|
https://github.com/huggingface/transformers/pull/24146
| 1,750,219,729 |
PR_kwDOCUB6oc5SpBzi
| 24,146 |
Stop storing references to bound methods via tf.function
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"no OOM with this PR (for the models involved), but we have some errors regarding `TypeError: Binding inputs to tf.function `eager_serving` failed due to `missing a required argument: 'inputs'`` popping up for several model/tokenizer tests.\r\n\r\nOne example is\r\n```bash\r\nself = <tests.models.gpt2.test_modeling_tf_gpt2.TFGPT2ModelTest testMethod=test_saved_model_creation>\r\n\r\n @slow\r\n def test_saved_model_creation(self):\r\n config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()\r\n config.output_hidden_states = False\r\n config.output_attentions = False\r\n \r\n if hasattr(config, \"use_cache\"):\r\n config.use_cache = False\r\n \r\n model_class = self.all_model_classes[0]\r\n \r\n class_inputs_dict = self._prepare_for_class(inputs_dict, model_class)\r\n model = model_class(config)\r\n \r\n model(class_inputs_dict)\r\n \r\n with tempfile.TemporaryDirectory() as tmpdirname:\r\n> model.save_pretrained(tmpdirname, saved_model=True)\r\n\r\ntests/test_modeling_tf_common.py:268: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\nsrc/transformers/modeling_tf_utils.py:2427: in save_pretrained\r\n self.save(saved_model_dir, include_optimizer=False, signatures=signatures)\r\n/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:70: in error_handler\r\n raise e.with_traceback(filtered_tb) from None\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nself = <tensorflow.python.eager.polymorphic_function.function_spec.FunctionSpec object at 0x7f6e1c4db7f0>\r\nargs = ({'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int32, name='attention_mask'), 'input_ids': TensorSpec(sha...tf.int32, name='input_ids'), 'token_type_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='token_type_ids')},), kwargs = {}\r\n\r\n def bind_function_inputs(self, args, kwargs):\r\n \"\"\"Bind `args` and `kwargs` into a canonicalized signature args, kwargs.\"\"\"\r\n sanitized_kwargs = {\r\n function_type_lib.sanitize_arg_name(k): v for k, v in kwargs.items()\r\n }\r\n if len(kwargs) != len(sanitized_kwargs):\r\n raise ValueError(f\"Name collision after sanitization. Please rename \"\r\n f\"tf.function input parameters. 
Original: \"\r\n f\"{sorted(kwargs.keys())}, Sanitized: \"\r\n f\"{sorted(sanitized_kwargs.keys())}\")\r\n \r\n try:\r\n bound_arguments = self.function_type.bind_with_defaults(\r\n args, sanitized_kwargs, self.default_values)\r\n except Exception as e:\r\n> raise TypeError(\r\n f\"Binding inputs to tf.function `{self._name}` failed due to `{e}`.\"\r\n f\"Received args: {args} and kwargs: {sanitized_kwargs} for signature:\"\r\n f\" {self.function_type}.\"\r\n ) from e\r\nE TypeError: Binding inputs to tf.function `eager_serving` failed due to `missing a required argument: 'inputs'`.Received args: ({'input_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='input_ids'), 'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int32, name='attention_mask'), 'token_type_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='token_type_ids')},) and kwargs: {} for signature: (self, inputs: Dict(mapping={'input_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='input_ids'), 'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int32, name='attention_mask'), 'token_type_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='token_type_ids')})).\r\n```",
"Looks like the way I'm handling the methods fails when we try to save the model with those signatures. I'll figure it out on Monday!",
"This should be ready for review now! The changes are pretty small, but it took me a while to figure out the details.\r\n\r\nIt turns out anything that looks like `self.serving = tf.function(self.eager_serving)` will create a circular reference between `self.serving` and `self` and inhibit cleanup. This does not apply to methods defined at the class (rather than instance) level. Something like this is fine and does not block cleanup:\r\n```\r\[email protected](input_signature=...)\r\ndef serving(self, inputs):\r\n ...\r\n```\r\nThe problem with the construction above, though, is that the `tf.function` decorator has to be called with all of its arguments at the class level, before the model has been initialized with a config. This means it can't read any shapes or details from the config, which means its signature has to be very **very** general. This is why we transitioned to `self.serving = ...` in the first place.\r\n\r\nThe solution I found is the following:\r\n\r\n- Get rid of all helper methods like `self.eager_serving`. These were only used internally anyway, to allow us to compile multiple serving signatures.\r\n- Decorate the base `serving` method with `tf.function` and no signature at all.\r\n- Rely on our control of `self.save_spec` to ensure that base TF methods like `model.save()` will save with the right signature even when we aren't manually defining it (I checked this and it works!)\r\n- When we want to manually specify signatures, we just call `self.serving.get_concrete_signature` with different signatures. No need to keep `eager_serving` around anymore!\r\n\r\nThis should totally preserve functionality and backward compatibility, while resolving the memory cleanup issue and keeping the specific save signatures. The only potentially noticeable change is that `self.serving.input_signature` is no longer defined. We read that value in a couple of tests as a shortcut to find the model input names, so I just replaced it with `self.input_signature` instead. I don't think anyone outside of Hugging Face was using it, and it certainly wasn't part of our public API, so I don't expect any issues!",
"Thanks to @ydshieh for his patience with the tests and to @gante for digging out the old PRs that let me finally understand why a lot of this stuff was ever here in the first place!",
"OK, I will run a few tests and let you know @Rocketknight1 \r\nThank you for trying trying!",
"@ydshieh actually, you're right - I thought it wasn't doing anything anymore, but it's still useful in some cases when we define a broad signature that gets inherited. Let me rework that so we keep it!",
"No warning sign after running tests for 4 involved models. You are very good at TF!",
"@ydshieh I finished rebasing and I removed your `gc.collect()` change to the doctests. Are you okay for me to merge now, or do you want to run any further tests?\r\n\r\nEither way, I think we've finally resolved this one!",
"It's ok, go ahead. If doctest starts to fail, I will `call` you.",
"Also pinging @amyeroberts for core maintainer review",
"> LGTM 🔥\r\n> \r\n> To be safe, can you trigger the slow CI on this branch? Most of the TF serialization tests are slow tests :D\r\n\r\nHello @gante !\r\n\r\nDo you mean enable slow tests but for all the models ..? Or anything else?\r\nCan't run slow tests on CircleCI however, so need to run on a specific VM.",
"Good point, actually - this PR isn't specific to any one model, so we'd need to run slow tests for all models. Since it's a long time to the next release, let's just merge this PR (after review) and see if anything fails overnight?",
"@Rocketknight1 @ydshieh Could we run slow tests on just a handful of models ~5 popular ones from different modalities to make sure any obvious issues have been caught? ",
"@amyeroberts I've been running them locally on the core models - BERT and GPT-2 look good! Are you okay if I try a few more and then merge if there are no issues?",
"Tested BERT, GPT-2, BART, ViT, CLIP and Wav2Vec2 without issues. Merging!"
] | 1,686 | 1,686 | 1,686 |
MEMBER
| null |
This is (hopefully!) the end of a long saga this week.
@ydshieh noticed that our test runners were going OOM after a couple of PRs I made to dummy inputs. I thought the problem was just that the new dummy inputs were too large, but eventually we figured out that the problem was actually quite complicated!
tl;dr **A circular reference exists, which is caused by us calling tf.function() on a model method and then storing the result as a model attribute. Because this reference exists, our TF models are not cleaned up immediately when they are deleted, but only after the next Python garbage collection.**
I believe the PRs triggered the issue by eliminating unnecessary calls and making TF model building much faster. This left less time for garbage collection to happen, and as a result our test suites started a second test before the first test had been cleaned up, which caused the test runner to go OOM.
We tried resolving this problem by manually calling `gc.collect()` before each test, but this made some of the test suites much slower! Obviously the real solution had to be to resolve the circular reference that was slowing down model cleanup.
~The solution is to replace `model.eager_serving` with a method `model._get_eager_serving_fn()`. This returns a function that TensorFlow can compile, but which doesn't create a hard reference to a model method in the returned `tf.function`. I confirmed through manual inspection with `gc.get_referrers` that the reference is removed and models are cleaned up immediately once they go out of scope now.~
See the update below for a full description of the solution I finally went with!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24146/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24146",
"html_url": "https://github.com/huggingface/transformers/pull/24146",
"diff_url": "https://github.com/huggingface/transformers/pull/24146.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24146.patch",
"merged_at": 1686679462000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24145
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24145/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24145/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24145/events
|
https://github.com/huggingface/transformers/pull/24145
| 1,750,117,216 |
PR_kwDOCUB6oc5SorOs
| 24,145 |
Adds LILT to models exportable with ONNX
|
{
"login": "mariababich",
"id": 28024756,
"node_id": "MDQ6VXNlcjI4MDI0NzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/28024756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariababich",
"html_url": "https://github.com/mariababich",
"followers_url": "https://api.github.com/users/mariababich/followers",
"following_url": "https://api.github.com/users/mariababich/following{/other_user}",
"gists_url": "https://api.github.com/users/mariababich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariababich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariababich/subscriptions",
"organizations_url": "https://api.github.com/users/mariababich/orgs",
"repos_url": "https://api.github.com/users/mariababich/repos",
"events_url": "https://api.github.com/users/mariababich/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariababich/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @mariababich , the ONNX export is now supported in Optimum, I merged your PR there: https://github.com/huggingface/optimum/pull/1098"
] | 1,686 | 1,686 | 1,686 |
NONE
| null |
# What does this PR do?
Adds LILT to models exportable with ONNX
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24145/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24145",
"html_url": "https://github.com/huggingface/transformers/pull/24145",
"diff_url": "https://github.com/huggingface/transformers/pull/24145.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24145.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24144
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24144/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24144/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24144/events
|
https://github.com/huggingface/transformers/pull/24144
| 1,750,051,674 |
PR_kwDOCUB6oc5Soc4E
| 24,144 |
Fix typo in streamers.py
|
{
"login": "freddiev4",
"id": 8339018,
"node_id": "MDQ6VXNlcjgzMzkwMTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8339018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/freddiev4",
"html_url": "https://github.com/freddiev4",
"followers_url": "https://api.github.com/users/freddiev4/followers",
"following_url": "https://api.github.com/users/freddiev4/following{/other_user}",
"gists_url": "https://api.github.com/users/freddiev4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/freddiev4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/freddiev4/subscriptions",
"organizations_url": "https://api.github.com/users/freddiev4/orgs",
"repos_url": "https://api.github.com/users/freddiev4/repos",
"events_url": "https://api.github.com/users/freddiev4/events{/privacy}",
"received_events_url": "https://api.github.com/users/freddiev4/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24144). All of your documentation changes will be reflected on that endpoint."
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes a typo in `transformers/generation/streamers.py`. Caught it while browsing some of the streaming code 😃
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24144/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24144",
"html_url": "https://github.com/huggingface/transformers/pull/24144",
"diff_url": "https://github.com/huggingface/transformers/pull/24144.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24144.patch",
"merged_at": 1686328066000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24143
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24143/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24143/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24143/events
|
https://github.com/huggingface/transformers/issues/24143
| 1,749,957,013 |
I_kwDOCUB6oc5oTjmV
| 24,143 |
audio classification official script on local own dataset
|
{
"login": "flckv",
"id": 103381497,
"node_id": "U_kgDOBil5-Q",
"avatar_url": "https://avatars.githubusercontent.com/u/103381497?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flckv",
"html_url": "https://github.com/flckv",
"followers_url": "https://api.github.com/users/flckv/followers",
"following_url": "https://api.github.com/users/flckv/following{/other_user}",
"gists_url": "https://api.github.com/users/flckv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flckv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flckv/subscriptions",
"organizations_url": "https://api.github.com/users/flckv/orgs",
"repos_url": "https://api.github.com/users/flckv/repos",
"events_url": "https://api.github.com/users/flckv/events{/privacy}",
"received_events_url": "https://api.github.com/users/flckv/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @flckv, thanks for raising an issue! \r\n\r\nThe error messages are telling you what the issues are. \r\n\r\n1. The feature `audio` isn't in the csv. The csv has two column names: `train` and `label`. You should either update the csv to have `audio` as a column name, or passing in `--audio_column_name train` when you run the script\r\n\r\n2. The dataset created is a `DatasetDict` with `DatasetDict` objects as its keys rather than the expected `Dataset` instance. This should be resolved by doing:\r\n\r\n```python\r\ndata_files = {'train': 'train/train.csv', 'test': 'test/test.csv', 'valid': 'valid/valid.csv'}\r\nraw_datasets = load_dataset(\"s/data/s/s\", data_files=data_files)\r\n```\r\n\r\nFor further questions about how to customise a script, please ask in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\n",
"Thank you, @amyeroberts ",
"See related: https://discuss.huggingface.co/t/custom-local-data-loading-generating-split-with-load-dataset-not-working-values-in-datasetdict-should-be-of-type-dataset-but-got-type-class-datasets-dataset-dict-datasetdict/42740/2?u=sanchit-gandhi"
] | 1,686 | 1,686 | 1,686 |
NONE
| null |
### System Info
- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.4.204-ql-generic-12.0-19-x86_64-with-glibc2.17
- Python version: 3.8.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[conda] numpy 1.24.3 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
### Who can help?
@sanchit-gandhi @sgugger @albertvillanova
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. I want to run this model but not on superb dataset:
https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/README.md
2. I want to load a dataset from local:
> - here is the local data structure for splits:
> 
>
> - here is the csv file structure containing the path to the audio file and the audio label:
> 
>
with this command (I don't specify the superb dataset):
> python `run_audio_classification.py` \
> --model_name_or_path facebook/wav2vec2-base \
> --output_dir wav2vec2-base-s \
> --overwrite_output_dir \
> --remove_unused_columns False \
> --do_train \
> --do_eval \
> --fp16 \
> --learning_rate 3e-5 \
> --max_length_seconds 1 \
> --attention_mask False \
> --warmup_ratio 0.1 \
> --num_train_epochs 5 \
> --per_device_train_batch_size 32 \
> --gradient_accumulation_steps 4 \
> --per_device_eval_batch_size 32 \
> --dataloader_num_workers 4 \
> --logging_strategy steps \
> --logging_steps 10 \
> --evaluation_strategy epoch \
> --save_strategy epoch \
> --load_best_model_at_end True \
> --metric_for_best_model accuracy \
> --save_total_limit 3 \
> --seed 0 \
> --push_to_hub \
> --use_auth_token True
3. Changes I made in the `run_audio_classification.py` script to load audio from csv file:
3.1 I specify the location of the csv files:
> so I replace lines [249 - 261](https://github.com/huggingface/transformers/blob/b8fe259f163c48a18c9b27428b72b2ac104de346/examples/pytorch/audio-classification/run_audio_classification.py#LL247C1-L260C6)
with:
>
> data_files = {'train': 'train.csv', 'test': 'test.csv', 'valid': 'valid.csv'}
>
> raw_datasets["train"] = load_dataset('s/data/s/s/train', data_files=data_files["train"])
> raw_datasets["test"] = load_dataset('s/data/s/s/test', data_files=data_files["test"])
> raw_datasets["valid"] = load_dataset('s/data/s/s/valid', data_files=data_files["valid"])
>
>
### It seems that loading the csv files is successful. I get the message: "Dataset csv downloaded and prepared".
# But these are the errors:
4. I comment out lines [262 -274 ](https://github.com/huggingface/transformers/blob/b8fe259f163c48a18c9b27428b72b2ac104de346/examples/pytorch/audio-classification/run_audio_classification.py#L262)
>
> Because no matter how I change the audio column name in the csv files to `audio`, `file_name`, or `train`/`test`/`valid`, it still gives me the error:
> `ValueError: --audio_column_name audio not found in dataset 'None'. Make sure to set `--audio_column_name` to the correct audio column - one of train.`
>
> even though I successfully load the csv file with `'audio'` and `'label'` headers (I also tried `'file_name'` instead of `'audio'`). The csv files are **"Dataset csv downloaded and prepared".** However, the error says that _--audio_column_name `audio` is not found_.
5. Then I receive error:
> on raw_datasets = raw_datasets.cast_column(
> python3.8/site-packages/datasets/dataset_dict.py line 309, in cast_column self._check_values_type()
> line 45, in _check_values_type
> raise TypeError(f"Values in `DatasetDict` should be of type `Dataset` but got type '{type(dataset)}'")
>
>
> TypeError: Values in `DatasetDict` should be of type `Dataset` but got type '<class 'datasets.dataset_dict.DatasetDict'>'
>
(I am loading it locally because I have not received a reply on how to load private hub datasets when I raised the issue: https://github.com/huggingface/datasets/issues/5930 ) @albertvillanova
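As a minimal sketch of the fix suggested in the comments above (the `s/data/s/s` root and per-split csv paths are taken from this thread; the `audio` column handling and sampling rate are assumptions for illustration), loading all splits in a single `load_dataset` call yields one `DatasetDict` whose values are `Dataset` objects, so `cast_column` works on it:
```python
from datasets import Audio, load_dataset

# One load_dataset call with all splits returns a single DatasetDict
# whose values are Dataset objects (not nested DatasetDicts).
data_files = {
    "train": "train/train.csv",
    "test": "test/test.csv",
    "valid": "valid/valid.csv",
}
raw_datasets = load_dataset("s/data/s/s", data_files=data_files)

# Assuming the "audio" column holds file paths, cast it so the paths
# are decoded into audio arrays (the sampling rate is an assumption).
raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16_000))
```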
### Expected behavior
I want to be able to run [the official example script run_audio_classification.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/README.md) not on the predefined superb dataset, but on my own local dataset, so I can train the model on my data.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24143/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24142
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24142/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24142/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24142/events
|
https://github.com/huggingface/transformers/issues/24142
| 1,749,933,135 |
I_kwDOCUB6oc5oTdxP
| 24,142 |
Add MQTTS
|
{
"login": "susnato",
"id": 56069179,
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/susnato",
"html_url": "https://github.com/susnato",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"repos_url": "https://api.github.com/users/susnato/repos",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"cc: @sanchit-gandhi and @ArthurZucker ",
"I think this is a cool model - whether it outperforms Bark (#24086) is up for debate. My only concerns are:\r\n1. The NC license which is not super permissive\r\n2. The low-visibility of the original repo: with only 130 GH stars, it seems like the community is not super excited by the model (and thus are unlikely to use it in the library)\r\n\r\nWhile the voice prompting feature would be cool and inference much faster than a hierarchical transformer model like Bark, I think the lack of visibility / excitement around the model means it would be a big effort to add with maybe little usage as a result\r\n\r\ncc @Vaibhavs10 who has had more experience with MQTTS, @ylacombe who's adding Bark and @hollance who's adding VITS MMS\r\n\r\nWhat do you all think?",
"IMO for MQTTS - doesn't make as much sense, purely from a licensing standpoint. Plus it uses a non-standard quantizer, which makes it difficult to maintain (primarily because it'll be used only for MQTTS).\r\n\r\nI think a more ambitious idea would be to add tortoise-tts - https://github.com/neonbjb/tortoise-tts (Was released a while back but still is the king) - the original repo is not as optimised so with the transformers bells and whistles we can make sure that it works faster and better?\r\n\r\nAnother idea would be to add StyleTTS - https://github.com/yl4579/StyleTTS, the results are quite promising and given there is training code as well, it opens up the opportunity to train a bigger model.",
"Tortoise TTS would probably go in the [`diffusers`](https://github.com/huggingface/diffusers) repo (since we could build it as a diffusion pipeline with a transformer encoder) - since the purpose of `diffusers` is more pure performance (which is not the objective of `transformers`) it would be a good fit here\r\n\r\nWould you like to open a feature request for Tortoise TTS on the diffusers repo and tag myself and @Vaibhavs10? We can then discuss how feasible a new pipeline addition would be!",
"thanks a lot for all the insights! \r\n\r\nAlso I opened an issue for Tortoise TTS on the diffusers repo. It is [here](https://github.com/huggingface/diffusers/issues/3891)\r\n\r\n",
"Perfect, thanks @susnato! Going to close this then since we're in agreement that MQTTS is not a good addition for transformers. Tortoise TTS issue in diffusers: https://github.com/huggingface/diffusers/issues/3891"
] | 1,686 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
### Model description
MQTTS is a Text to Speech model which was introduced in the paper [A Vector Quantized Approach for Text to Speech Synthesis on Real-World Spontaneous Speech](https://arxiv.org/pdf/2302.04215.pdf). Their work explores the use of more abundant real-world data for building speech synthesizers. Its architecture is designed for multiple code generation and monotonic alignment, along with the use of a clean silence prompt to improve synthesis quality. They show that MQTTS outperforms existing TTS systems in several objective and subjective measures.
I would like to add this model to HF.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Implementation - https://github.com/b04901014/MQTTS
Checkpoints -
1. Config - https://cmu.box.com/s/hvv06w3yr8mob4csjjaigu5szq2qcjab
2. Quantize - https://cmu.box.com/s/966rcxkyjps80p7thu0r6lo2udk1ezdm
3. Transformer model - https://cmu.box.com/s/xuen9o8wxsmyaz32a65fu25cz92a2jei
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24142/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24141
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24141/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24141/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24141/events
|
https://github.com/huggingface/transformers/pull/24141
| 1,749,913,580 |
PR_kwDOCUB6oc5Sn-up
| 24,141 |
[documentation] grammatical fixes in image_classification.mdx
|
{
"login": "LiamSwayne",
"id": 108629034,
"node_id": "U_kgDOBnmMKg",
"avatar_url": "https://avatars.githubusercontent.com/u/108629034?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LiamSwayne",
"html_url": "https://github.com/LiamSwayne",
"followers_url": "https://api.github.com/users/LiamSwayne/followers",
"following_url": "https://api.github.com/users/LiamSwayne/following{/other_user}",
"gists_url": "https://api.github.com/users/LiamSwayne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LiamSwayne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LiamSwayne/subscriptions",
"organizations_url": "https://api.github.com/users/LiamSwayne/orgs",
"repos_url": "https://api.github.com/users/LiamSwayne/repos",
"events_url": "https://api.github.com/users/LiamSwayne/events{/privacy}",
"received_events_url": "https://api.github.com/users/LiamSwayne/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24141). All of your documentation changes will be reflected on that endpoint."
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
The changes made are grammatical and do not affect the ideas communicated in the file.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24141/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24141",
"html_url": "https://github.com/huggingface/transformers/pull/24141",
"diff_url": "https://github.com/huggingface/transformers/pull/24141.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24141.patch",
"merged_at": 1686326384000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24140
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24140/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24140/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24140/events
|
https://github.com/huggingface/transformers/pull/24140
| 1,749,896,782 |
PR_kwDOCUB6oc5Sn7Vu
| 24,140 |
[`SAM`] Fix sam slow test
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@younesbelkada Could you update the type hint for the pipeline too? \r\n\r\nnb: from pytorch it seems [we shouldn't use torch.cude.set_device()](https://pytorch.org/docs/stable/generated/torch.cuda.set_device.html) cc @Narsil ",
"Sure yes just updated it, we probably need also to address a proper fix for that in a separate PR, not sure though ",
"Code is from 3 years ago (and I'm pretty sure I just moved it from somewhere else).\r\n\r\nHappy to refactor to something more up-to-date. Accepting `strings` as device should be supported imo.",
"@Narsil I also agree we should support str for pipeline, let me know if you want me to work on this, I am happy to have a look and ping you once something is ready"
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes SAM slow test, link to failing job: https://github.com/huggingface/transformers/actions/runs/5206799863
* Why is this fix relevant?
Before the PR, it seems I was initializing the pipeline in the wrong way. Passing a string to the `device` argument of pipeline leads to an error. To reproduce:
```python
from transformers import pipeline
pipe = pipeline("text-generation", device="cuda")
pipe("Hello")
>>> ValueError: Expected a torch.device with a specified index or an integer, but got:cuda
```
That error is raised here: https://github.com/huggingface/transformers/blob/847b47c0eed4e6ab904f584fb415e3d3a397867f/src/transformers/pipelines/base.py#L905
Whereas the type hint of pipeline says: https://github.com/huggingface/transformers/blob/847b47c0eed4e6ab904f584fb415e3d3a397867f/src/transformers/pipelines/base.py#L763
The fix seems to be to pass an int for the device if cuda is available and -1 if not
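A minimal sketch of that workaround (the model name is only there to make the snippet self-contained; it is not part of this PR's diff):
```python
import torch
from transformers import pipeline

# Pass an integer device index: 0 for the first GPU, -1 for CPU,
# instead of the string "cuda" that currently raises a ValueError.
device = 0 if torch.cuda.is_available() else -1
pipe = pipeline("text-generation", model="gpt2", device=device)
print(pipe("Hello", max_new_tokens=5))
```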
cc @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24140/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24140",
"html_url": "https://github.com/huggingface/transformers/pull/24140",
"diff_url": "https://github.com/huggingface/transformers/pull/24140.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24140.patch",
"merged_at": 1686320530000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24139
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24139/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24139/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24139/events
|
https://github.com/huggingface/transformers/pull/24139
| 1,749,846,592 |
PR_kwDOCUB6oc5Snv9u
| 24,139 |
Avoid OOM in doctest CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Update: after a correction, only 30 minutes longer or a bit more.\r\n\r\nThe job runs 2x slower ...",
"Running the whole doctest: No more OOM. Only one test failure (lucky!)\r\n\r\nTag @sgugger so we can have doctest run until @Rocketknight1 have a complete solution in #24146.\r\n(We haven't received any doctest report for 2 weeks)"
] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
# What does this PR do?
This is done by injecting `gc.collect()` into the source code while pytest/doctest collects the tests to run.
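The PR does this at test-collection time; purely as an illustration of the idea (this is not the code in the diff), a similar effect could be approximated with an autouse fixture in a `conftest.py`:
```python
# conftest.py -- illustration only, not the approach taken in this PR
import gc

import pytest


@pytest.fixture(autouse=True)
def collect_garbage_after_each_test():
    # Let the test (or doctest) run first, then reclaim memory so models
    # loaded by one example do not accumulate across tests and cause OOM.
    yield
    gc.collect()
```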
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24139/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24139",
"html_url": "https://github.com/huggingface/transformers/pull/24139",
"diff_url": "https://github.com/huggingface/transformers/pull/24139.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24139.patch",
"merged_at": 1686383258000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24138
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24138/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24138/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24138/events
|
https://github.com/huggingface/transformers/pull/24138
| 1,749,698,143 |
PR_kwDOCUB6oc5SnPJK
| 24,138 |
Correctly build models and import call_context for older TF versions
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@amyeroberts The code is in a conditional block that checks TF versions. I'm currently testing all TF versions since 2.4 to make sure this works for all of them - give me a few minutes to finish that before we merge the PR!",
"Version testing looks good!",
"_The documentation is not available anymore as the PR was closed or merged._",
"Merging this now and will begin discussions about a patch release"
] | 1,686 | 1,686 | 1,686 |
MEMBER
| null |
Our import for `call_context` was wrong for older TF versions - this unfortunately makes it quite hard to load models on older TF versions! This PR fixes it, and sorry about the issue!
Fixes #24133
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24138/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24138",
"html_url": "https://github.com/huggingface/transformers/pull/24138",
"diff_url": "https://github.com/huggingface/transformers/pull/24138.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24138.patch",
"merged_at": 1686312661000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24137
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24137/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24137/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24137/events
|
https://github.com/huggingface/transformers/pull/24137
| 1,749,619,280 |
PR_kwDOCUB6oc5Sm9g9
| 24,137 |
[`bnb`] Fix bnb config json serialization
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,689 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #24131
Fixes https://github.com/huggingface/peft/issues/558
Replaces the PR: https://github.com/huggingface/transformers/pull/24094
To reproduce:
```python
import torch
from transformers import BitsAndBytesConfig, AutoModelForVision2Seq
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForVision2Seq.from_pretrained("Salesforce/blip2-opt-2.7b", quantization_config=bnb_config, device_map='auto')
print(model.config.to_json_string())
```
(or use any causal LM/ text models)
Also adds a nice test
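Until the fix is in a release, a crude workaround for inspecting the config (a sketch only, reusing the `model` from the snippet above; `default=str` simply stringifies whatever `json` cannot handle, such as the `BitsAndBytesConfig` object or a `torch.dtype`):
```python
import json

# Dump the config dict with a fallback that stringifies any object
# json.dumps cannot serialize, so the dump no longer crashes.
config_dict = model.config.to_dict()
print(json.dumps(config_dict, indent=2, default=str))
```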
cc @amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24137/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24137/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24137",
"html_url": "https://github.com/huggingface/transformers/pull/24137",
"diff_url": "https://github.com/huggingface/transformers/pull/24137.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24137.patch",
"merged_at": 1686310874000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24136
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24136/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24136/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24136/events
|
https://github.com/huggingface/transformers/pull/24136
| 1,749,585,729 |
PR_kwDOCUB6oc5Sm2C3
| 24,136 |
Update urls in warnings for rich rendering
|
{
"login": "IvanReznikov",
"id": 25007854,
"node_id": "MDQ6VXNlcjI1MDA3ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/25007854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IvanReznikov",
"html_url": "https://github.com/IvanReznikov",
"followers_url": "https://api.github.com/users/IvanReznikov/followers",
"following_url": "https://api.github.com/users/IvanReznikov/following{/other_user}",
"gists_url": "https://api.github.com/users/IvanReznikov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IvanReznikov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IvanReznikov/subscriptions",
"organizations_url": "https://api.github.com/users/IvanReznikov/orgs",
"repos_url": "https://api.github.com/users/IvanReznikov/repos",
"events_url": "https://api.github.com/users/IvanReznikov/events{/privacy}",
"received_events_url": "https://api.github.com/users/IvanReznikov/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @IvanReznikov - thanks for opening a PR. Could you expand a bit on the issue this is addressing? All I'm seeing in the diff is splitting the `\")\"` bracket onto a new line. ",
"@amyeroberts, the bracket was part of the URL, what leads to 404 obviously.",
"@IvanReznikov The bracket is just the closing bracket the opened with `\"(see...\"` in the line above, it's not part of the url. Splitting like this won't change the evaluated string because of python's implicit line continuation behaviour. ",
"\r\n",
"@amyeroberts yep, fixed it above",
"@amyeroberts , sure",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24136/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24136",
"html_url": "https://github.com/huggingface/transformers/pull/24136",
"diff_url": "https://github.com/huggingface/transformers/pull/24136.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24136.patch",
"merged_at": 1686677010000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24135
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24135/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24135/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24135/events
|
https://github.com/huggingface/transformers/issues/24135
| 1,749,573,532 |
I_kwDOCUB6oc5oSF-c
| 24,135 |
Invalid hyperlink error 404 in Transformer Doc for RayTune
|
{
"login": "JoshuaEPSamuel",
"id": 66880119,
"node_id": "MDQ6VXNlcjY2ODgwMTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/66880119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoshuaEPSamuel",
"html_url": "https://github.com/JoshuaEPSamuel",
"followers_url": "https://api.github.com/users/JoshuaEPSamuel/followers",
"following_url": "https://api.github.com/users/JoshuaEPSamuel/following{/other_user}",
"gists_url": "https://api.github.com/users/JoshuaEPSamuel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoshuaEPSamuel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoshuaEPSamuel/subscriptions",
"organizations_url": "https://api.github.com/users/JoshuaEPSamuel/orgs",
"repos_url": "https://api.github.com/users/JoshuaEPSamuel/repos",
"events_url": "https://api.github.com/users/JoshuaEPSamuel/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoshuaEPSamuel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"feel free to open a PR to fix if you can:)"
] | 1,686 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
Greetings,
Small issue here: following the HuggingFace Transformers docs on Hyperparameter Search using the Trainer API, the hyperlink for 'object_parameter' in the raytune section is now invalid and should be updated.
The other backends (sigopt, optuna, wandb) have correctly working hyperlinks for the 'object_parameter'.
Link to doc page:
https://huggingface.co/docs/transformers/hpo_train#how-to-enable-hyperparameter-search-in-example
Link to git markdown file:
https://github.com/huggingface/transformers/blob/main/docs/source/en/hpo_train.mdx
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24135/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24134
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24134/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24134/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24134/events
|
https://github.com/huggingface/transformers/pull/24134
| 1,749,491,044 |
PR_kwDOCUB6oc5Smhxp
| 24,134 |
fix bugs with trainer
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
Context:
1. Issue 1 - Currently, when the lr_scheduler is specified in the deepspeed config file, we pass a DummyScheduler to `accelerator.prepare` to get the correct scheduler after the prepare call. A prior PR removed the preparation of the lr_scheduler, leading to a lot of DeepSpeed tests failing.
2. Issue 2 - when using apex, we shouldn't prepare the optimizer, otherwise we get `AttributeError: 'AcceleratedOptimizer' object has no attribute '_amp_stash'`
3. Issue 3 - FSDP ckpt logic should create ckpt dir if not present. Fixes https://github.com/huggingface/transformers/issues/24130
This PR fixes the above issues.
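For issue 3, the gist of the checkpoint fix is simply ensuring the directory exists before writing into it; a minimal sketch (the path is hypothetical, not the exact variable used in the Trainer code):
```python
import os

# Hypothetical checkpoint path, shown only for illustration.
checkpoint_dir = "output/checkpoint-500"

# Create the directory (and any parents) if it is missing before the
# FSDP checkpoint files are written into it.
os.makedirs(checkpoint_dir, exist_ok=True)
```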
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24134/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24134",
"html_url": "https://github.com/huggingface/transformers/pull/24134",
"diff_url": "https://github.com/huggingface/transformers/pull/24134.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24134.patch",
"merged_at": 1686313494000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24133
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24133/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24133/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24133/events
|
https://github.com/huggingface/transformers/issues/24133
| 1,749,413,780 |
I_kwDOCUB6oc5oRe-U
| 24,133 |
Transformers can not load dependency of tensorflow - `cannot import name 'call_context' from 'tensorflow.***.keras.engine'`
|
{
"login": "MaximeChurin",
"id": 11043047,
"node_id": "MDQ6VXNlcjExMDQzMDQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/11043047?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaximeChurin",
"html_url": "https://github.com/MaximeChurin",
"followers_url": "https://api.github.com/users/MaximeChurin/followers",
"following_url": "https://api.github.com/users/MaximeChurin/following{/other_user}",
"gists_url": "https://api.github.com/users/MaximeChurin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaximeChurin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaximeChurin/subscriptions",
"organizations_url": "https://api.github.com/users/MaximeChurin/orgs",
"repos_url": "https://api.github.com/users/MaximeChurin/repos",
"events_url": "https://api.github.com/users/MaximeChurin/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaximeChurin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Agh, sorry about that. PR to fix it is open at #24138 ",
"thanks feel free to close it when it is merged, if the PR does not close it automatically",
"Fixed on `main`, but we'll need to organize a patch release before you can unpin. Sorry for the trouble!",
"@MaximeChurin This should be resolved by the 4.30.1 hotfix release, along with a couple of other release bugs. You should be able to unpin now!"
] | 1,686 | 1,686 | 1,686 |
NONE
| null |
### System Info
- `transformers` version: 4.30.0
- Platform: Linux-5.19.0-43-generic-x86_64-with-glibc2.34
- Python version: 3.8.16
- Tensorflow version (GPU?): 2.8.2 (False)
### Who can help?
@gante and @Rocketknight1
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The problem arises due to this [PR](https://github.com/huggingface/transformers/pull/23760) and the [case](https://github.com/huggingface/transformers/blame/main/src/transformers/modeling_tf_utils.py#L84) that selects the tensorflow import of `call_context` based on the minor version. In tf < 2.11, the import should be `from tensorflow.python.keras.engine.base_layer_utils import call_context`
Steps to reproduce:
1. Install the System Info dependencies mentionned above
2. `from transformers import DistilBertTokenizerFast, TFDistilBertModel`
3. Then you should see
```bash
File "/opt/hostedtoolcache/Python/3.8.16/x64/lib/***3.8/site-packages/transformers/models/distilbert/modeling_tf_distilbert.py", line 37, in <module>
from ...modeling_tf_utils import (
File "/opt/hostedtoolcache/Python/3.8.16/x64/lib/***3.8/site-packages/transformers/modeling_tf_utils.py", line 84, in <module>
from tensorflow.***.keras.engine import call_context
ImportError: cannot import name 'call_context' from 'tensorflow.***.keras.engine' (/opt/hostedtoolcache/Python/3.8.16/x64/lib/***3.8/site-packages/tensorflow/***/keras/engine/__init__.py)
```
### Expected behavior
Update the case where the import is [located](https://github.com/huggingface/transformers/blame/main/src/transformers/modeling_tf_utils.py#L84) to use the correct module path inside tensorflow: `from tensorflow.python.keras.engine.base_layer_utils import call_context`
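A minimal sketch of such a version-guarded import (the `>= 2.11` branch is an assumption about where newer Keras releases expose `call_context` and may need adjusting; only the `< 2.11` branch is the import quoted above):
```python
import tensorflow as tf
from packaging import version

if version.parse(tf.__version__) < version.parse("2.11"):
    # For TF < 2.11 (the case in this report), call_context lives here.
    from tensorflow.python.keras.engine.base_layer_utils import call_context
else:
    # Newer TF/Keras releases expose it from the keras package instead;
    # the exact module path depends on the installed keras version.
    from keras.engine.base_layer_utils import call_context
```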
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24133/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24132
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24132/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24132/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24132/events
|
https://github.com/huggingface/transformers/pull/24132
| 1,749,315,292 |
PR_kwDOCUB6oc5Sl61G
| 24,132 |
[lamaTokenizerFast] Update documentation
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
# What does this PR do?
Updates the documentation for llamaFast.
I think that, long term, it would make more sense for the rust tokenizer to update its internals when the special tokens are updated as well; this would allow us to remove the python layer that takes care of it.
Related to #23889
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24132/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24132",
"html_url": "https://github.com/huggingface/transformers/pull/24132",
"diff_url": "https://github.com/huggingface/transformers/pull/24132.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24132.patch",
"merged_at": 1686321021000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24131
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24131/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24131/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24131/events
|
https://github.com/huggingface/transformers/issues/24131
| 1,749,189,219 |
I_kwDOCUB6oc5oQoJj
| 24,131 |
Object of type 'BitsAndBytesConfig' is not JSON serializable
|
{
"login": "karths8",
"id": 47289950,
"node_id": "MDQ6VXNlcjQ3Mjg5OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/47289950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/karths8",
"html_url": "https://github.com/karths8",
"followers_url": "https://api.github.com/users/karths8/followers",
"following_url": "https://api.github.com/users/karths8/following{/other_user}",
"gists_url": "https://api.github.com/users/karths8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/karths8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/karths8/subscriptions",
"organizations_url": "https://api.github.com/users/karths8/orgs",
"repos_url": "https://api.github.com/users/karths8/repos",
"events_url": "https://api.github.com/users/karths8/events{/privacy}",
"received_events_url": "https://api.github.com/users/karths8/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Thanks for reporting, see the comment here: https://github.com/huggingface/transformers/pull/24094#pullrequestreview-1471475968 \r\nThat suggestion should solve the issue",
"@younesbelkada, I am running a finetuning of Llama2 7b and I needed to use the **gradient_checkpointing=True** in the **TrainingArguments** to handle CUDA out of memory.\r\nAdding this configuration caused the error **TypeError: Object of type BitsAndBytesConfig is not JSON serializable** when **save_checkpoint**.\r\n\r\n#24137 not solved this issue.\r\n\r\nFollow bellow my log:\r\n\r\n`\r\nINFO: fuse: warning: library too old, some operations may not work\r\n\r\n==========\r\n== CUDA ==\r\n==========\r\n\r\nCUDA Version 11.8.0\r\n\r\nContainer image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.\r\n\r\nThis container image and its contents are governed by the NVIDIA Deep Learning Container License.\r\nBy pulling and using the container, you accept the terms and conditions of this license:\r\nhttps://developer.nvidia.com/ngc/nvidia-deep-learning-container-license\r\n\r\nA copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.\r\n\r\n\r\nLoading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]\r\nLoading checkpoint shards: 50%|█████ | 1/2 [00:29<00:29, 29.73s/it]\r\nLoading checkpoint shards: 100%|██████████| 2/2 [00:39<00:00, 17.96s/it]\r\nLoading checkpoint shards: 100%|██████████| 2/2 [00:39<00:00, 19.73s/it]\r\n{\r\n \"_name_or_path\": \"/scratch/LLM/LLAMA2/Llama-2-7b-chat-hf\",\r\n \"architectures\": [\r\n \"LlamaForCausalLM\"\r\n ],\r\n \"attention_bias\": false,\r\n \"bos_token_id\": 1,\r\n \"eos_token_id\": 2,\r\n \"hidden_act\": \"silu\",\r\n \"hidden_size\": 4096,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 11008,\r\n \"max_position_embeddings\": 4096,\r\n \"model_type\": \"llama\",\r\n \"num_attention_heads\": 32,\r\n \"num_hidden_layers\": 32,\r\n \"num_key_value_heads\": 32,\r\n \"pretraining_tp\": 1,\r\n \"quantization_config\": {\r\n \"bnb_4bit_compute_dtype\": \"bfloat16\",\r\n \"bnb_4bit_quant_type\": \"nf4\",\r\n \"bnb_4bit_use_double_quant\": true,\r\n \"llm_int8_enable_fp32_cpu_offload\": false,\r\n \"llm_int8_has_fp16_weight\": false,\r\n \"llm_int8_skip_modules\": null,\r\n \"llm_int8_threshold\": 6.0,\r\n \"load_in_4bit\": true,\r\n \"load_in_8bit\": false,\r\n \"quant_method\": \"bitsandbytes\"\r\n },\r\n \"rms_norm_eps\": 1e-06,\r\n \"rope_scaling\": null,\r\n \"rope_theta\": 10000.0,\r\n \"tie_word_embeddings\": false,\r\n \"torch_dtype\": \"float16\",\r\n \"transformers_version\": \"4.35.0.dev0\",\r\n \"use_cache\": false,\r\n \"vocab_size\": 32000\r\n}\r\n\r\n\r\n 0%| | 0/750 [00:00<?, ?it/s]You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\n/usr/local/lib/python3.10/dist-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. 
Refer to docs for more details on the differences between the two variants.\r\n warnings.warn(\r\n\r\n 0%| | 1/750 [00:07<1:33:01, 7.45s/it]\r\n 0%| | 2/750 [00:10<1:03:41, 5.11s/it]\r\n 0%| | 3/750 [00:14<53:38, 4.31s/it] \r\n 1%| | 4/750 [00:17<48:40, 3.92s/it]\r\n 1%| | 5/750 [00:20<44:54, 3.62s/it]\r\n 1%| | 6/750 [00:23<42:28, 3.43s/it]\r\n 1%| | 7/750 [00:26<40:52, 3.30s/it]\r\n 1%| | 8/750 [00:29<39:40, 3.21s/it]\r\n 1%| | 9/750 [00:32<38:46, 3.14s/it]\r\n 1%|▏ | 10/750 [00:35<38:06, 3.09s/it]\r\n 1%|▏ | 11/750 [00:38<37:39, 3.06s/it]\r\n 2%|▏ | 12/750 [00:41<37:18, 3.03s/it]\r\n 2%|▏ | 13/750 [00:44<37:00, 3.01s/it]\r\n 2%|▏ | 14/750 [00:47<36:44, 2.99s/it]\r\n 2%|▏ | 15/750 [00:50<36:28, 2.98s/it]\r\n 2%|▏ | 16/750 [00:53<36:15, 2.96s/it]\r\n 2%|▏ | 17/750 [00:56<36:03, 2.95s/it]\r\n 2%|▏ | 18/750 [00:59<35:52, 2.94s/it]\r\n 3%|▎ | 19/750 [01:02<35:37, 2.92s/it]\r\n 3%|▎ | 20/750 [01:05<35:03, 2.88s/it]\r\n 3%|▎ | 21/750 [01:07<34:07, 2.81s/it]\r\n 3%|▎ | 22/750 [01:10<32:55, 2.71s/it]\r\n 3%|▎ | 23/750 [01:12<32:05, 2.65s/it]\r\n 3%|▎ | 24/750 [01:15<31:28, 2.60s/it]\r\n 3%|▎ | 25/750 [01:17<30:59, 2.57s/it]\r\n \r\n{'loss': 1.7484, 'learning_rate': 0.0002, 'epoch': 0.1}\r\n\r\n 3%|▎ | 25/750 [01:17<30:59, 2.57s/it]Traceback (most recent call last):\r\n File \"/home/andre/ondemand/data/sys/myjobs/projects/default/4/finetuning.py\", line 92, in <module>\r\n configured_trainer.train()\r\n File \"/usr/local/lib/python3.10/dist-packages/transformers/trainer.py\", line 1506, in train\r\n return inner_training_loop(\r\n File \"/usr/local/lib/python3.10/dist-packages/transformers/trainer.py\", line 1869, in _inner_training_loop\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\n File \"/usr/local/lib/python3.10/dist-packages/transformers/trainer.py\", line 2224, in _maybe_log_save_evaluate\r\n self._save_checkpoint(model, trial, metrics=metrics)\r\n File \"/usr/local/lib/python3.10/dist-packages/transformers/trainer.py\", line 2281, in _save_checkpoint\r\n self.save_model(output_dir, _internal_call=True)\r\n File \"/usr/local/lib/python3.10/dist-packages/transformers/trainer.py\", line 2768, in save_model\r\n self._save(output_dir)\r\n File \"/usr/local/lib/python3.10/dist-packages/transformers/trainer.py\", line 2831, in _save\r\n self.tokenizer.save_pretrained(output_dir)\r\n File \"/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py\", line 2445, in save_pretrained\r\n out_str = json.dumps(tokenizer_config, indent=2, sort_keys=True, ensure_ascii=False) + \"\\n\"\r\n File \"/usr/lib/python3.10/json/__init__.py\", line 238, in dumps\r\n **kw).encode(obj)\r\n File \"/usr/lib/python3.10/json/encoder.py\", line 201, in encode\r\n chunks = list(chunks)\r\n File \"/usr/lib/python3.10/json/encoder.py\", line 431, in _iterencode\r\n yield from _iterencode_dict(o, _current_indent_level)\r\n File \"/usr/lib/python3.10/json/encoder.py\", line 405, in _iterencode_dict\r\n yield from chunks\r\n File \"/usr/lib/python3.10/json/encoder.py\", line 438, in _iterencode\r\n o = _default(o)\r\n File \"/usr/lib/python3.10/json/encoder.py\", line 179, in default\r\n raise TypeError(f'Object of type {o.__class__.__name__} '\r\nTypeError: Object of type BitsAndBytesConfig is not JSON serializable\r\n`",
"Hi @andreducfer \r\nThanks for the ping, can you open a new ticket for that issue with ideally a simple reproducer of the issue? "
] | 1,686 | 1,697 | 1,686 |
NONE
| null |
### System Info
- `transformers` version: 4.30.0
- Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This is the script I'm using:
```
import pandas as pd
import os
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, BitsAndBytesConfig
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training, TaskType, prepare_model_for_kbit_training
from transformers import DataCollatorForSeq2Seq
import evaluate
import nltk
import numpy as np
from nltk.tokenize import sent_tokenize
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments
from datasets import Dataset, DatasetDict
import argparse
import pickle
import json
parser = argparse.ArgumentParser(description='Options')
parser.add_argument('--dataset_dir', default='data', type=str, help="folder in which the dataset is stored")
parser.add_argument('--output_dir', default="lora-instructcodet5p", type=str, help="output directory for the model")
parser.add_argument('--results_dir', default="results", type=str, help="where the results should be stored")
args = parser.parse_args()
nltk.download("punkt")
tokenized_dataset = DatasetDict.load_from_disk(args.dataset_dir)
# Metric
metric = evaluate.load("rouge")
pad_tok = 50256
token_id="Salesforce/instructcodet5p-16b"
tokenizer = AutoTokenizer.from_pretrained(token_id)
# helper function to postprocess text
def postprocess_text(preds, labels):
preds = [pred.strip() for pred in preds]
labels = [label.strip() for label in labels]
# rougeLSum expects newline after each sentence
preds = ["\n".join(sent_tokenize(pred)) for pred in preds]
labels = ["\n".join(sent_tokenize(label)) for label in labels]
return preds, labels
def compute_metrics(eval_preds):
preds, labels = eval_preds
if isinstance(preds, tuple):
preds = preds[0]
for idx in range(len(preds)):
for idx2 in range(len(preds[idx])):
if preds[idx][idx2]==-100:
preds[idx][idx2] = 50256
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
# Replace -100 in the labels as we can't decode them.
labels = np.where(labels != pad_tok, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
# Some simple post-processing
decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
result = {k: round(v * 100, 4) for k, v in result.items()}
prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
result["gen_len"] = np.mean(prediction_lens)
return result
def get_dict(predicts):
d = {}
for num in range(len(tokenized_dataset['test'])):
pred = tokenizer.decode([n for n in predicts[0][num] if n!=50256 and n!=-100])[1:]
d[num+1] = {'Question':tokenizer.decode([n for n in tokenized_dataset['test'][num]['input_ids'] if n!=50256]),
'Ground truth solution':tokenizer.decode([n for n in tokenized_dataset['test'][num]['labels'] if n!=50256]),
'Prediction': pred if pred else None}
return d
def find_all_linear_names(model):
cls = torch.nn.Linear
lora_module_names = set()
for name, module in model.named_modules():
if isinstance(module, cls):
names = name.split('.')
lora_module_names.add(names[0] if len(names) == 1 else names[-1])
if 'lm_head' in lora_module_names:
lora_module_names.remove('lm_head')
return list(lora_module_names)
def main():
device = 'cuda'
# huggingface hub model id
model_id="instructcodet5p-16b"
if not os.path.exists(model_id):
model_id=token_id
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
# load model from the hub
model = AutoModelForSeq2SeqLM.from_pretrained(model_id,
# torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True, decoder_start_token_id=1, pad_token_id=pad_tok, device_map="auto", quantization_config=bnb_config)
modules = find_all_linear_names(model)
# Define LoRA Config
lora_config = LoraConfig(
r=8,
lora_alpha=32,
target_modules=modules,
lora_dropout=0.05,
bias="none",
task_type=TaskType.SEQ_2_SEQ_LM
)
# prepare int-8 model for training
model = prepare_model_for_kbit_training(model, False)
# add LoRA adaptor
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# we want to ignore tokenizer pad token in the loss
label_pad_token_id = pad_tok
# Data collator
data_collator = DataCollatorForSeq2Seq(
tokenizer,
model=model,
label_pad_token_id=label_pad_token_id,
pad_to_multiple_of=8
)
output_dir=args.output_dir
training_args = Seq2SeqTrainingArguments(
output_dir=output_dir,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
predict_with_generate=True,
weight_decay=0.05,
# warmup_steps=200,
fp16=False, # Overflows with fp16
learning_rate=1e-3,
num_train_epochs=5,
# logging & evaluation strategies
logging_dir=f"{output_dir}/logs",
logging_strategy="epoch",
# logging_steps=500,
evaluation_strategy="epoch",
save_strategy="epoch",
save_total_limit=20,
# load_best_model_at_end=True,
# metric_for_best_model="overall_f1",
# push to hub parameters
report_to="tensorboard",
push_to_hub=False,
generation_max_length=200,
optim="paged_adamw_8bit"
)
# Create Trainer instance
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=tokenized_dataset["train"],
eval_dataset=tokenized_dataset["validation"],
compute_metrics=compute_metrics,
)
# train model
trainer.train()
# Save our LoRA model & tokenizer results
predicts = trainer.predict(tokenized_dataset['test'], max_length=200)
with open('predicts.pkl', 'wb') as file:
pickle.dump(predicts, file)
d = get_dict(predicts)
for num in d:
print("Question:\n%s"%(d[num]['Question']))
print('Ground Truth Solution:\n')
print(d[num]['Ground truth solution'])
print()
print('Prediction:\n')
print(d[num]['Prediction'])
print()
peft_model_id=args.results_dir
trainer.model.save_pretrained(peft_model_id)
tokenizer.save_pretrained(peft_model_id)
# if you want to save the base model to call
# trainer.model.base_model.save_pretrained(peft_model_id)
with open('generations.json', "w") as json_file:
json.dump(d, json_file)
#Evaluate on test data
# trainer.evaluate()
if __name__ == '__main__':
main()
```
### Expected behavior
I'm trying to use QLoRA for fine-tuning on a Seq2Seq Task using [InstructCodeT5+](https://huggingface.co/Salesforce/instructcodet5p-16b) guided by this example [notebook](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing#scrollTo=jq0nX33BmfaC).
I am getting the following error:
```
Traceback (most recent call last):
File "training.py", line 242, in <module>
main()
File "training.py", line 215, in main
trainer.train()
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1853, in _inner_training_loop
self.control = self.callback_handler.on_train_begin(args, self.state, self.control)
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer_callback.py", line 353, in on_train_begin
return self.call_event("on_train_begin", args, state, control)
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer_callback.py", line 397, in call_event
result = getattr(callback, event)(
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/integrations.py", line 640, in on_train_begin
model_config_json = model.config.to_json_string()
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/configuration_utils.py", line 836, in to_json_string
return json.dumps(config_dict, indent=2, sort_keys=True) + "\n"
File "/usr/lib/python3.8/json/__init__.py", line 234, in dumps
return cls(
File "/usr/lib/python3.8/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/lib/python3.8/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.8/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.8/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/usr/lib/python3.8/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type BitsAndBytesConfig is not JSON serializable
```
Expecting the model to run and train as per the example notebook referenced above. Any help is appreciated!
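As a stopgap, a minimal workaround sketch (not from the original report, and assuming `BitsAndBytesConfig` exposes a `to_dict()` method): replace the config object stored on `model.config` with its plain-dict form before training, so that `to_json_string()` in the logging callback only sees JSON-serializable types.
```python
from transformers import PreTrainedModel


def make_config_json_safe(model: PreTrainedModel) -> None:
    """Replace a BitsAndBytesConfig object on model.config with its dict form
    so that model.config.to_json_string() (called by logging callbacks) works."""
    quant_config = getattr(model.config, "quantization_config", None)
    if quant_config is not None and not isinstance(quant_config, dict):
        model.config.quantization_config = quant_config.to_dict()
```
Calling `make_config_json_safe(model)` right after `get_peft_model(...)` and before `trainer.train()` would be the intended usage with the script above; whether this interacts badly with later checkpoint reloads is untested.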
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24131/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24131/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24130
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24130/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24130/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24130/events
|
https://github.com/huggingface/transformers/issues/24130
| 1,749,009,513 |
I_kwDOCUB6oc5oP8Rp
| 24,130 |
seems to be a bug related to saving model
|
{
"login": "jeffchy",
"id": 17855610,
"node_id": "MDQ6VXNlcjE3ODU1NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/17855610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeffchy",
"html_url": "https://github.com/jeffchy",
"followers_url": "https://api.github.com/users/jeffchy/followers",
"following_url": "https://api.github.com/users/jeffchy/following{/other_user}",
"gists_url": "https://api.github.com/users/jeffchy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeffchy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeffchy/subscriptions",
"organizations_url": "https://api.github.com/users/jeffchy/orgs",
"repos_url": "https://api.github.com/users/jeffchy/repos",
"events_url": "https://api.github.com/users/jeffchy/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeffchy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @pacman100 ",
"Hello @jeffchy, Thank you for the thorough issue, can you please confirm if the above PR resolves your issue?"
] | 1,686 | 1,686 | 1,686 |
NONE
| null |
### System Info
I use pytorch==2.0 with FSDP full_shard.
With transformers==4.29.1 and accelerate==0.19.0, things work well:
```
[INFO|trainer.py:2904] 2023-06-09 10:35:25,236 >> Saving model checkpoint to ../outputs/tigerbot-7b/full/2023-06-09-10-33-49/ckpt/checkpoint-4
[INFO|configuration_utils.py:458] 2023-06-09 10:35:25,237 >> Configuration saved in ../outputs/tigerbot-7b/full/2023-06-09-10-33-49/ckpt/checkpoint-4/config.json
[INFO|configuration_utils.py:364] 2023-06-09 10:35:25,237 >> Configuration saved in ../outputs/tigerbot-7b/full/2023-06-09-10-33-49/ckpt/checkpoint-4/generation_config.json
```
When I switch to transformers==4.30 and accelerate==0.20.0, I get the following error when the model is saved:
```
│ 285 │
│ 286 class _open_zipfile_writer_file(_opener): │
│ 287 │ def __init__(self, name) -> None: │
│ ❱ 288 │ │ super().__init__(torch._C.PyTorchFileWriter(str(name))) │
│ 289 │ │
│ 290 │ def __exit__(self, *args) -> None: │
│ 291 │ │ self.file_like.write_end_of_file() │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: Parent directory ../outputs/tigerbot-7b/full/2023-06-09-10-21-30/ckpt/checkpoint-4 does not exist.
```
It seems that when I save an FSDP model, transformers/accelerate does not create the parent folder 'xxxx/checkpoint-4' for me. When I downgrade the transformers and accelerate versions it works, and when I manually create 'xxx/checkpoint-4' before saving it also works.
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
pytorch==2.0
transformers==4.30.0
accelerate==0.20.3
Trainer using FSDP fully shard, modified from train_clm.py example
```
CUDA_VISIBLE_DEVICES=0,1,2,3,7 $BASE_ENV/torchrun --nproc_per_node 5 --nnodes=1 --node_rank=0 --master_port $MASTER_PORT main_sft.py \
--model_name_or_path $MODEL \
--model_type $MODEL_TYPE \
--dataset_config_file config/data/tiger.yaml \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--do_train \
--do_eval \
--output_dir $OUTPUT_DIR \
--fp16 \
--cutoff_len 2048 \
--save_steps 500 \
--logging_steps 50 \
--max_steps 6000 \
--eval_steps 500 \
--warmup_steps 5 \
--gradient_accumulation_steps 32 \
--lr_scheduler_type "cosine" \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'BloomBlock' \
--gradient_checkpointing True \
--overwrite_cache \
--learning_rate 1e-5 \
| tee $LOG_DIR/train.log \
2> $LOG_DIR/train.err
```
### Expected behavior
Saving the checkpoint should succeed the same way it does with transformers==4.29.1 and accelerate==0.19.0 (see the log above): the 'checkpoint-4' parent folder should be created automatically before the model is written out, instead of failing with the `RuntimeError` shown above or requiring me to create the folder manually (a sketch of that manual workaround follows below).
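For reference, pre-creating the folder looks roughly like this; the path and step number are placeholders, not values taken from this run:
```python
import os

# Placeholder values; in the failing run the folder followed the usual
# "<output_dir>/checkpoint-<global_step>" layout used by Trainer.
output_dir = "../outputs/tigerbot-7b/full/ckpt"
global_step = 4

checkpoint_dir = os.path.join(output_dir, f"checkpoint-{global_step}")
os.makedirs(checkpoint_dir, exist_ok=True)  # create the parent folder up front
```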
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24130/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24129
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24129/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24129/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24129/events
|
https://github.com/huggingface/transformers/pull/24129
| 1,748,987,462 |
PR_kwDOCUB6oc5Skzc4
| 24,129 |
PLAM => PaLM
|
{
"login": "xingener",
"id": 3382402,
"node_id": "MDQ6VXNlcjMzODI0MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3382402?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xingener",
"html_url": "https://github.com/xingener",
"followers_url": "https://api.github.com/users/xingener/followers",
"following_url": "https://api.github.com/users/xingener/following{/other_user}",
"gists_url": "https://api.github.com/users/xingener/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xingener/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xingener/subscriptions",
"organizations_url": "https://api.github.com/users/xingener/orgs",
"repos_url": "https://api.github.com/users/xingener/repos",
"events_url": "https://api.github.com/users/xingener/events{/privacy}",
"received_events_url": "https://api.github.com/users/xingener/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I have no idea is it applicable that mr to main branch?",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #24114 (issue)
doc fix: PLAM => PaLM
@amyeroberts, @sgugger, please review
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24129/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24129",
"html_url": "https://github.com/huggingface/transformers/pull/24129",
"diff_url": "https://github.com/huggingface/transformers/pull/24129.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24129.patch",
"merged_at": 1686310337000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24128
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24128/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24128/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24128/events
|
https://github.com/huggingface/transformers/pull/24128
| 1,748,803,370 |
PR_kwDOCUB6oc5SkRBL
| 24,128 |
Nah
|
{
"login": "jamesthesnake",
"id": 8227820,
"node_id": "MDQ6VXNlcjgyMjc4MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8227820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesthesnake",
"html_url": "https://github.com/jamesthesnake",
"followers_url": "https://api.github.com/users/jamesthesnake/followers",
"following_url": "https://api.github.com/users/jamesthesnake/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesthesnake/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesthesnake/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesthesnake/subscriptions",
"organizations_url": "https://api.github.com/users/jamesthesnake/orgs",
"repos_url": "https://api.github.com/users/jamesthesnake/repos",
"events_url": "https://api.github.com/users/jamesthesnake/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesthesnake/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,686 | 1,686 | 1,686 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24128/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24128",
"html_url": "https://github.com/huggingface/transformers/pull/24128",
"diff_url": "https://github.com/huggingface/transformers/pull/24128.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24128.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24127
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24127/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24127/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24127/events
|
https://github.com/huggingface/transformers/pull/24127
| 1,748,800,992 |
PR_kwDOCUB6oc5SkQgp
| 24,127 |
Adding a padding token to the GPT-J config for the tokenizer.
|
{
"login": "jojivk73",
"id": 14943401,
"node_id": "MDQ6VXNlcjE0OTQzNDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/14943401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jojivk73",
"html_url": "https://github.com/jojivk73",
"followers_url": "https://api.github.com/users/jojivk73/followers",
"following_url": "https://api.github.com/users/jojivk73/following{/other_user}",
"gists_url": "https://api.github.com/users/jojivk73/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jojivk73/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jojivk73/subscriptions",
"organizations_url": "https://api.github.com/users/jojivk73/orgs",
"repos_url": "https://api.github.com/users/jojivk73/repos",
"events_url": "https://api.github.com/users/jojivk73/events{/privacy}",
"received_events_url": "https://api.github.com/users/jojivk73/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@sgugger , The GPTJOnnxConfig uses it though. Is there a better resolution for this in run_glue.py as it requires padding.",
"The `GPTJOnnxConfig` is not used anymore and only there for backward compatibility. Like I said, you can manually add that `pad_token_id` in the config to suit your needs, but it shouldn't be there by default.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,689 | 1,689 |
NONE
| null |
# What does this PR do?
This PR adds a pad_token to the GPT-J tokenizer configuration. This was seen as needed when GPT-J uses the GPT-2 (or another) tokenizer for GLUE fine-tuning tasks. The change is in line with the ONNX GPT-J config setting in the same file. If there is an alternate fix that would solve this, it can be added as well.
Fixes # (issue)
Adding padding for GLUE fine-tuning Tasks.
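A hedged sketch of the alternative suggested in the review comments above (set the padding token per run instead of changing the default GPT-J config); reusing EOS as PAD is an assumption made here for illustration, not part of this PR:
```python
from transformers import AutoConfig, AutoTokenizer

# Reuse the EOS token as the padding token for GLUE-style fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
tokenizer.pad_token = tokenizer.eos_token

config = AutoConfig.from_pretrained("EleutherAI/gpt-j-6B")
config.pad_token_id = tokenizer.eos_token_id
```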
## Before submitting
## Who can review?
Models:
GPTJ
Library:
Integrations:
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24127/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24127",
"html_url": "https://github.com/huggingface/transformers/pull/24127",
"diff_url": "https://github.com/huggingface/transformers/pull/24127.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24127.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24126
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24126/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24126/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24126/events
|
https://github.com/huggingface/transformers/issues/24126
| 1,748,734,109 |
I_kwDOCUB6oc5oO5Cd
| 24,126 |
Audio classification example using the "facebook/hubert-base-ls960" model gets stuck when using deepspeed, but works for wav2vec2
|
{
"login": "Hertin",
"id": 14032494,
"node_id": "MDQ6VXNlcjE0MDMyNDk0",
"avatar_url": "https://avatars.githubusercontent.com/u/14032494?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hertin",
"html_url": "https://github.com/Hertin",
"followers_url": "https://api.github.com/users/Hertin/followers",
"following_url": "https://api.github.com/users/Hertin/following{/other_user}",
"gists_url": "https://api.github.com/users/Hertin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hertin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hertin/subscriptions",
"organizations_url": "https://api.github.com/users/Hertin/orgs",
"repos_url": "https://api.github.com/users/Hertin/repos",
"events_url": "https://api.github.com/users/Hertin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hertin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @sanchit-gandhi @pacman100 ",
"That's super weird - is there any activity on your GPU? Thought it might be an issue with SpecAug + DeepSpeed, but we have the config params set the same in HuBERT and Wav2Vec2. So it must be a DeepSpeed bug. Are you able to see with any more detail where it hands? E.g. maybe with a torch profile, or just killing the programme when it hangs and seeing which line it got to?",
"When it gets stuck, the GPU utility goes to 100%. When I Ctrl+C to interrupt, I see the following traceback:\r\n```\r\n 0%| | 0/9250 [00:00<?, ?it/s]\r\n\r\nC[2023-06-12 10:59:44,481] [INFO] [launch.py:314:sigkill_handler] Killing subprocess 1234292\r\n2023-06-12 10:59:44,581] [INFO] [launch.py:314:sigkill_handler] Killing subprocess 1234292\r\nraceback (most recent call last):\r\n File \"/nws/user/hertin/softwares/miniconda3/envs/slullm/bin/deepspeed\", line 6, in <module>\r\n main()\r\n File \"/nws/user/hertin/softwares/miniconda3/envs/slullm/lib/python3.9/site-packages/deepspeed/launcher/runner\r\npy\", line 570, in main\r\n result.wait()\r\n File \"/nws/user/hertin/softwares/miniconda3/envs/slullm/lib/python3.9/subprocess.py\", line 1189, in wait\r\n return self._wait(timeout=timeout)\r\n File \"/nws/user/hertin/softwares/miniconda3/envs/slullm/lib/python3.9/subprocess.py\", line 1917, in _wait\r\n (pid, sts) = self._try_wait(0)\r\n File \"/nws/user/hertin/softwares/miniconda3/envs/slullm/lib/python3.9/subprocess.py\", line 1875, in _try_wait\r\n (pid, sts) = os.waitpid(self.pid, wait_flags)\r\n File \"/nws/user/hertin/softwares/miniconda3/envs/slullm/lib/python3.9/site-packages/deepspeed/launcher/runner\r\npy\", line 562, in sigkill_handler\r\n result_kill = subprocess.Popen(kill_cmd, env=env)\r\nameError: free variable 'kill_cmd' referenced before assignment in enclosing scope\r\n2023-06-12 10:59:44,867] [INFO] [launch.py:314:sigkill_handler] Killing subprocess 1234293\r\n [2023-06-12 10:59:45,152] [INFO] [launch.py:314:sigkill_handler] Killing subprocess 1234294\r\n2023-06-12 10:59:45,515] [INFO] [launch.py:323:sigkill_handler] Main process received SIGTERM, exiting\r\n```",
"Hey @Hertin, this indeed looks like a DeepSpeed bug (i.e. we see deepspeed hang before launching the process). Not sure why we're getting it for HuBERT and not Wav2Vec2 though 🤔 Could you verify your transformers version + deepspeed version please?\r\n```\r\ntransformers-cli env\r\npython -c \"import deepspeed; print(deepspeed.__version__)\"\r\n```",
"Thanks for the reply. This is my transformer and deepspeed version\r\n```\r\ntransformers-cli env\r\n```\r\n```\r\nSetting ds_accelerator to cuda (auto detect)\r\n\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- `transformers` version: 4.31.0.dev0\r\n- Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.31\r\n- Python version: 3.9.16\r\n- Huggingface_hub version: 0.15.1\r\n- Safetensors version: 0.3.1\r\n- PyTorch version (GPU?): 2.0.1+cu117 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n```\r\n\r\n```\r\npython -c \"import deepspeed; print(deepspeed.__version__)\"\r\n```\r\n```\r\nSetting ds_accelerator to cuda (auto detect)\r\n0.9.3\r\n```",
"We can try installing to the latest version of deepspeed (0.9.4) but I don't think this is going to fix it... I'm not sure here - I think this is a question for the deepspeed repo to be honest! If you could share with them a reproducible code snippet of the issue they should be able to dive deeper on why deepspeed is hanging in the way it is. Unfortunately this is out of the scope of transformers at this point",
"Thanks for your time. I will close this issue as it is more likely a deepspeed issue.",
"Thanks @Hertin and sorry we were not able to find a fix! Hope it goes well asking on the DS repo"
] | 1,686 | 1,687 | 1,687 |
NONE
| null |
### System Info
I was trying to run the audio classification example using "facebook/hubert-base-ls960" with deepspeed on a 3-GPU node. The trainer and the model got stuck at the first training step. However, if I change only `--model_name_or_path=facebook/hubert-base-ls960` to `--model_name_or_path=facebook/wav2vec2-base`, I am able to run the audio classification example with "facebook/wav2vec2-base" without a problem. Not sure if other people also have this issue.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. clone and install huggingface repo `https://github.com/huggingface/transformers.git`
2. go to `examples/pytorch/audio-classification` folder
3. save the config into config/stage2.json
```
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"weight_decay": "auto",
"torch_adam": true,
"adam_w_mode": true
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": "auto",
"contiguous_gradients": true
},
"gradient_accumulation_steps": 1,
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
4. execute the following command line; the script gets stuck at the first step of training
```
deepspeed --num_gpus=3 run_audio_classification.py \
--model_name_or_path facebook/hubert-base-ls960 \
--dataset_name common_language \
--audio_column_name audio \
--label_column_name language \
--output_dir ac-base-lang-id \
--overwrite_output_dir \
--remove_unused_columns False \
--do_train \
--do_eval \
--fp16 \
--learning_rate 3e-4 \
--max_length_seconds 16 \
--attention_mask True \
--warmup_ratio 0.1 \
--num_train_epochs 10 \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 1 \
--per_device_eval_batch_size 1 \
--dataloader_num_workers 8 \
--logging_strategy steps \
--logging_steps 10 \
--evaluation_strategy epoch \
--save_strategy epoch \
--load_best_model_at_end True \
--metric_for_best_model accuracy \
--save_total_limit 3 \
--seed 0 \
--cache_dir /nws/user/hertin/.cache/huggingface \
--deepspeed config/stage2.json
```
### Expected behavior
5. The tail of the output looks like this when the training gets stuck:
```
No modifications detected for re-loaded extension module utils, skipping build step...
Loading extension module utils...
Time to load utils op: 0.0006883144378662109 seconds
[INFO|trainer.py:1777] 2023-06-08 15:09:13,470 >> ***** Running training *****
[INFO|trainer.py:1778] 2023-06-08 15:09:13,470 >> Num examples = 22,194
[INFO|trainer.py:1779] 2023-06-08 15:09:13,470 >> Num Epochs = 10
[INFO|trainer.py:1780] 2023-06-08 15:09:13,470 >> Instantaneous batch size per device = 8
[INFO|trainer.py:1781] 2023-06-08 15:09:13,470 >> Total train batch size (w. parallel, distributed & accumulation) = 24
[INFO|trainer.py:1782] 2023-06-08 15:09:13,470 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1783] 2023-06-08 15:09:13,470 >> Total optimization steps = 9,250
[INFO|trainer.py:1784] 2023-06-08 15:09:13,471 >> Number of trainable parameters = 90,379,693
0%| | 0/9250 [00:00<?, ?it/s]
```
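One generic way to see where each rank is stuck once the run hangs (a debugging sketch, not something from the original report): register a fault handler in the training script and dump the Python stacks from outside the process.
```python
import faulthandler
import signal

# Added near the top of run_audio_classification.py (sketch only): sending
# SIGUSR1 to a hung worker with `kill -USR1 <pid>` prints every thread's
# Python stack to stderr, showing the line each rank is blocked on.
faulthandler.register(signal.SIGUSR1, all_threads=True)
```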
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24126/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24125
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24125/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24125/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24125/events
|
https://github.com/huggingface/transformers/pull/24125
| 1,748,604,983 |
PR_kwDOCUB6oc5SjsMd
| 24,125 |
Fix SAM OOM issue on CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
# What does this PR do?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24125/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24125",
"html_url": "https://github.com/huggingface/transformers/pull/24125",
"diff_url": "https://github.com/huggingface/transformers/pull/24125.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24125.patch",
"merged_at": 1686316029000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24124
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24124/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24124/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24124/events
|
https://github.com/huggingface/transformers/pull/24124
| 1,748,603,469 |
PR_kwDOCUB6oc5Sjr2O
| 24,124 |
Fix Pipeline CI OOM issue
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> 1. Do you think this memory handling is something we should extend to most model tests by default? Is there too much of an overhead repeatedly clearing the cache / GC or other reasons it's not suitable?\r\n\r\nThis cleanup is only necessary for integration tests, but yes, I think it's best to apply this to all model (integration) tests.\r\n\r\n> 2. Could we create a general utility in testing utils to avoid some of the repeated code e.g.\r\n\r\nYes! I am also thinking if we should define a subclass of unittest.TestCase and have a common `def tearDown`.\r\nLet's talk this and/or your suggestion above later.\r\n"
] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
# What does this PR do?
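The review comments above discuss extending the cache-clearing / `gc.collect()` cleanup to all integration tests via a shared `tearDown`; a minimal sketch of that pattern (the class name is hypothetical):
```python
import gc
import unittest

import torch


class CleanGPUMemoryTestCase(unittest.TestCase):
    """Hypothetical base class: drop references and the CUDA cache between
    integration tests so GPU memory does not accumulate across test cases."""

    def tearDown(self):
        super().tearDown()
        gc.collect()
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
```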
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24124/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24124",
"html_url": "https://github.com/huggingface/transformers/pull/24124",
"diff_url": "https://github.com/huggingface/transformers/pull/24124.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24124.patch",
"merged_at": 1686322143000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24123
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24123/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24123/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24123/events
|
https://github.com/huggingface/transformers/pull/24123
| 1,748,564,105 |
PR_kwDOCUB6oc5SjjYt
| 24,123 |
Fix XGLM OOM on CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Going to merge as this is just the same as the other PRs. Don't need to bother too much the core maintainers."
] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
# What does this PR do?
same as in #24122 and #24106
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24123/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24123",
"html_url": "https://github.com/huggingface/transformers/pull/24123",
"diff_url": "https://github.com/huggingface/transformers/pull/24123.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24123.patch",
"merged_at": 1686316860000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24122
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24122/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24122/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24122/events
|
https://github.com/huggingface/transformers/pull/24122
| 1,748,514,942 |
PR_kwDOCUB6oc5SjZPM
| 24,122 |
Fix TF Rag OOM issue
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
# What does this PR do?
It seems the only thing needed is to call `gc.collect()`. Thanks @Rocketknight1 for the continued attempts.
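A short sketch of how that cleanup could look in a TF integration test (the class name is hypothetical, and clearing the Keras session is an extra precaution rather than something stated in this PR):
```python
import gc
import unittest

import tensorflow as tf


class TFRagIntegrationTests(unittest.TestCase):
    def tearDown(self):
        super().tearDown()
        # Drop cached Keras graph state, then force a collection pass,
        # mirroring the gc.collect() fix described above.
        tf.keras.backend.clear_session()
        gc.collect()
```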
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24122/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24122",
"html_url": "https://github.com/huggingface/transformers/pull/24122",
"diff_url": "https://github.com/huggingface/transformers/pull/24122.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24122.patch",
"merged_at": 1686315791000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24121
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24121/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24121/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24121/events
|
https://github.com/huggingface/transformers/pull/24121
| 1,748,477,710 |
PR_kwDOCUB6oc5SjRHl
| 24,121 |
Slow (i.e., Python) Tokenizer `batch_encode_plus` for Input as `List[PreTokenizedInput]` or `List[PreTokenizedInputPair]`
|
{
"login": "sniperyyc",
"id": 22042183,
"node_id": "MDQ6VXNlcjIyMDQyMTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/22042183?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sniperyyc",
"html_url": "https://github.com/sniperyyc",
"followers_url": "https://api.github.com/users/sniperyyc/followers",
"following_url": "https://api.github.com/users/sniperyyc/following{/other_user}",
"gists_url": "https://api.github.com/users/sniperyyc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sniperyyc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sniperyyc/subscriptions",
"organizations_url": "https://api.github.com/users/sniperyyc/orgs",
"repos_url": "https://api.github.com/users/sniperyyc/repos",
"events_url": "https://api.github.com/users/sniperyyc/events{/privacy}",
"received_events_url": "https://api.github.com/users/sniperyyc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24121). All of your documentation changes will be reflected on that endpoint.",
"> Hey! Thanks a lot for adding this 😉 Would you mind adding a test? To make sure that List of input pairs/list works? (in this state I think a list of pairs is passed as a list instead or a pair of list no?\r\n\r\nHi @ArthurZucker, thanks a lot for the quick reply! Sure, I will add a test to this. I am not very sure about the current state/assumption, but will take a look and get back to you on this!\r\n\r\n(Separately I think the fast tokenizer suffers from the same problem here -- I will try to make another PR regarding that or at least open a feature request so that the community can contribute)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,689 | 1,689 |
NONE
| null |
# What does this PR do?
According to the docs, `batch_encode_plus()` should support input types including `List[PreTokenizedInput]` and `List[PreTokenizedInputPair]`. However, a simple example currently triggers the error below:
```
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=False)
>>> tokenizer([['I', 'love', 'you'], ['I', 'love', 'you']])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/anaconda3/envs/fid-bert/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2556, in __call__
encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
File "/home/ubuntu/anaconda3/envs/fid-bert/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2642, in _call_one
return self.batch_encode_plus(
File "/home/ubuntu/anaconda3/envs/fid-bert/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2833, in batch_encode_plus
return self._batch_encode_plus(
File "/home/ubuntu/anaconda3/envs/fid-bert/lib/python3.9/site-packages/transformers/tokenization_utils.py", line 731, in _batch_encode_plus
ids, pair_ids = ids_or_pair_ids
ValueError: too many values to unpack (expected 2)
```
The fixed version would properly tokenize such inputs without errors:
```
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=False)
>>> tokenizer([['I', 'love', 'you'], ['I', 'love', 'you']])
{'input_ids': [[101, 100, 2293, 2017, 102], [101, 100, 2293, 2017, 102]], 'token_type_ids': [[0, 0, 0, 0, 0], [0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1], [1, 1, 1, 1, 1]]}
```
The "fast" tokenizer, written in Rust, should be fixed accordingly as well.
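For the pair case raised in the review comments above, a hedged sketch of the already-documented route (parallel `text`/`text_pair` lists with `is_split_into_words=True`); the example word lists are made up:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)

# Pre-tokenized sentence pairs passed as two parallel lists of word lists.
first = [["I", "love", "you"], ["hello", "world"]]
second = [["me", "too"], ["good", "bye"]]
encoded = tokenizer(text=first, text_pair=second, is_split_into_words=True)
print(encoded["input_ids"])
```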
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24121/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24121",
"html_url": "https://github.com/huggingface/transformers/pull/24121",
"diff_url": "https://github.com/huggingface/transformers/pull/24121.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24121.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24120
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24120/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24120/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24120/events
|
https://github.com/huggingface/transformers/pull/24120
| 1,748,384,657 |
PR_kwDOCUB6oc5Si8uO
| 24,120 |
Contrastive Search peak memory reduction
|
{
"login": "blbadger",
"id": 54602201,
"node_id": "MDQ6VXNlcjU0NjAyMjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/54602201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blbadger",
"html_url": "https://github.com/blbadger",
"followers_url": "https://api.github.com/users/blbadger/followers",
"following_url": "https://api.github.com/users/blbadger/following{/other_user}",
"gists_url": "https://api.github.com/users/blbadger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blbadger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blbadger/subscriptions",
"organizations_url": "https://api.github.com/users/blbadger/orgs",
"repos_url": "https://api.github.com/users/blbadger/repos",
"events_url": "https://api.github.com/users/blbadger/events{/privacy}",
"received_events_url": "https://api.github.com/users/blbadger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"To make our CI go green, you may need to:\r\n1. rebase with `main` (LMK if you need instructions)\r\n2. run `make fixup` and commit the changes",
"Thanks very much for the review and good comments and edits @gante ! I have committed your changes and am adding the tests to `transformers/tests/generation/test_utils.py` and will commit that when done. I might need some pointers for rebasing the PR after:)",
"Instructions to rebase:\r\n1. get the latest `main`. If your fork is not synched with upstream, you may need to follow [these instructions](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork#syncing-a-fork-branch-from-the-web-ui) before running the commands below.\r\n```\r\ngit checkout main\r\ngit pull\r\n```\r\n\r\n2. rebase your branch\r\n```\r\ngit checkout your_branch\r\ngit rebase origin/main\r\n```\r\n\r\n3. force push your changes\r\n```\r\ngit push origin your_branch -f\r\n```",
"Thanks for the rebase instructions! I have rebased and added the required tests. \r\n\r\nWhile testing, I found and removed a bug that caused the contrastive search to degenerate into greedy generation. After fixing this issue, however, the low-memory contrastive search still does not yield exactly the same output as batched (normal) contrastive search. \r\n\r\nAfter looking into this today, it seems that this issue is caused by numerical errors in forward passes between batched versus unbatched inputs. These errors seem to propegate in the generation process (via saved hidden layers) such that a dozen tokens or so after generation starts, one starts to get different tokens.\r\n\r\nI am also not sure how the memory test should be performed with many models sequentially. I typically have been measuring the footprint on GPU, and with one model the low-memory version has a smaller footprint for any given model tested. But footprint typically does not decrease predictably between model initializations, so this test will likely fail as written.\r\n\r\nI'd be happy to go into the former issue in more detail, or I can simply substitute a test to show that the output is approximately the same for batched versus unbatched contrastive search. And would be happy to change the memory test to include only one model too",
"Although with some more testing it seems that the numerical errors are not propegating, but there is a problem with the past key value cache in the low memory approach. Will work on fixing this!",
"OK all is fixed and ready to go. Selecting `low_memory` does not change the output tokens for contrastive search but reduces the memory footprint for longer (>1k tokens) sequence generation by the amount mentioned above. Generally the longer the sequence length, the larger the difference between low memory and normal contrastive search.\r\n\r\nI am not sure how to write the memory test in the style of the other tests which iterate through a set of models, as unless the models are sorted by increasing size then the test will fail (memory measurement via `torch.cuda.max_memory_allocated()` does not reliably decrease between model re-initializations or even cache cleaning).\r\n\r\nFinal note, the added code is a bit messy and could be reduced to around 20 lines if https://github.com/huggingface/transformers/issues/17016 were to be implemented. I can clean it somewhat if required.",
"> I am not sure how to write the memory test in the style of the other tests which iterate through a set of models, as unless the models are sorted by increasing size then the test will fail (memory measurement via torch.cuda.max_memory_allocated() does not reliably decrease between model re-initializations or even cache cleaning).\r\n\r\nCould you share a memory measurement script then, so we can keep it here in the PR for future reference? :)",
"No problem, for a newly spun up single GPU the following can be used to check the memory reduction.\r\n\r\n```python\r\n!pip install -q -U git+https://github.com/blbadger/transformers.git\r\n\r\nimport torch\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, AutoModelForSeq2SeqLM\r\n# any model compatible with contrastive search\r\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\") \r\nmodel = AutoModelForCausalLM.from_pretrained(\"gpt2\")\r\n\r\ninput_ids = tokenizer.encode('This is a new prompt', return_tensors='pt').to('cuda')\r\nmodel = model.to('cuda')\r\n\r\nlow_output = model.generate(input_ids, top_k=4, penalty_alpha=0.8, low_memory=True, max_new_tokens=600)\r\nlow_mem = torch.cuda.max_memory_allocated()\r\n\r\nhigh_output = model.generate(input_ids, top_k=4, penalty_alpha=0.8, low_memory=False, max_new_tokens=600)\r\nhigh_mem = torch.cuda.max_memory_allocated()\r\n\r\nprint (torch.all(low_output == high_output))\r\nprint (low_mem, high_mem)\r\n```\r\n\r\nI went ahead and removed the memory portion of the contrastive search test module, but we can add it back if necessary.",
"@blbadger Thank you for the script! (I've confirmed that it works as expected on my end 👍 )\r\n\r\nTo get the CI green you'll need to run `make fixup` and then commit the changes. You also have errors on the test that's being added on this PR :)\r\n\r\nAs soon as our CI becomes green, I'll tag a core maintainer so we can merge the PR 🙌 ",
"@gante you are most welcome, thanks for flagging the test failing too. I have moved the `low_memory` kwarg to the config, cleaned up the low memory code, and fixed up the test module. \r\n\r\nIt looks like we are all set with respect to CI requirements except for the code formatting. Is there a way you recommend fixing this so that `black` does not throw an error?",
"@blbadger try running `make fixup` and then committing :) If that fails, then try rebasing.",
"@gante thanks! `make fixup` completes but there were no changes to commit, same goes for rebasing after syncing the branch but unfortunately `black` still fails with\r\n```\r\nOh no! 💥 💔 💥\r\n2 files would be reformatted, 2482 files would be left unchanged.\r\n\r\nExited with code exit status 1\r\n```\r\n",
"Just heads up: with (much) more testing it appears that the batch versus no-batch numerical errors mentioned above do indeed propegate such that choosing `low_memory` results in divergence from normal contrastive search for long outputs. It seems this divergence is not picked up by the existing test suite because these outputs are limited in length, and because the models tested are relatively small.\r\n\r\nFor an example of this divergence, for Guanaco 33b loaded in `torch.bfloat16` with an input prompt `Write a full Shakespearean sonnet about the virtues of apples` with `low_memory=False` gives\r\n\r\n```\r\nWhen like a ripe and ruddy cherub's face,\r\nThe apple doth in greenness shine so bright,\r\nIts luscious juice and crispness doth embrace,\r\nA feast for taste, both sweet and mild of might.\r\n\r\nYet more than this, it doth our health impart,\r\nWith vitamins rich, and fiber to sustain,\r\nOur bodies nourished, and our spirits cheered,\r\nIn vigor and in joy, we're once again regained.\r\n\r\nOh, let us praise the apple, in each part,\r\nFor its delights so pure, so fresh and fair,\r\nA gift from Nature's bounty, without art,\r\nA treasure true, that ne'er shall disappear.\r\n```\r\n\r\nbut `low_memory=True` gives\r\n\r\n```\r\nWhen like a ripe and ruddy cherub's face,\r\nThe apple doth in greenness shine so bright,\r\nIts luscious juice and crispness doth embrace,\r\nA feast for taste, both sweet and mild of might.\r\n\r\nYet more than this, it doth our health impart,\r\nIts vitals rich, a cure for many ills,\r\nIn olden days, 'twas deemed a symbol art,\r\nOf love and beauty, a fruit most fulfills.\r\n\r\nO then, let us rejoice in its bounty great,\r\nAnd praise the Lord who doth such gifts bestow,\r\nWhose handiwork doth manifest such treat,\r\nA blessed fruit, the apple, let us bow.\r\n```\r\n\r\nfor the same penalty_alpha and top_k.\r\n\r\nI would not really classify this as an issue (as the low memory output is likely more numerically accurate than standard contrastive search) but we might want to include this in the documentation for to avoid confusion.\r\n",
"@gante OK looks like the CI is green, running `make quality` after `make fixup` was able to reformat the code properly.",
"_The documentation is not available anymore as the PR was closed or merged._",
"@amyeroberts quick TL;DR -- This PR adds a new flag that, when turned on, reduces the memory requirements of contrastive search. There are small numerical, which is expected -- the order of operations with floating point may produce minor variations, as usual in this type of changes",
"@gante Happy to do so, I appreciate all your help with this PR! \r\n\r\nWould it be better to keep the low memory flag optional and then cast to a bool, or to enforce the low memory flag as a bool? I can see benefits of both options.\r\n \r\n@amyeroberts Any time, thanks for the detailed review!\r\n1. Perhaps a better name would be `low_cache_memory` or `single_pkv` as we are here avoiding the generation of past key value caches for all `top_k` tokens. Is that on the right track?\r\n2. Agreed that the code can be refactored, I will work on integrating your suggestions and will try to simplify the logic. \r\n\r\n",
"@amyeroberts I'd rather keep the generic name (`low_memory`) with an unequivocally unset state by default (`None`) unless you strongly oppose :) \r\n\r\nHere's my reasoning:\r\n- Naming: `.generate()` already has a very big number of configuration parameters, to the point of being one of the most challenging problems to manage. It's very hard to add new options without flags, making the flag discoverability process even harder. I'll gladly take any chance to consolidate flag names, even if it comes at the cost of possible extra code logic in the future 🤗 If some new `.generate()`-level memory-reduction technique comes out, and if it is mutually exclusive with the existing techniques, then we can bump the flag complexity by allowing it to be a string.\r\n- Default: In the recent transition from `model.config` to `model.generate_config`, all non-`None` defaults were a massive pain -- we can't distinguish a default value from an intentionally set value that matches the default. A `None` default protects us from that :)\r\n\r\n_____________________________\r\n\r\nRegarding complex logic, it won't be a problem as soon as I get my hands on refactoring generate 💪 ",
"@gante OK, understood :) \r\n\r\n* For `low_memory` I agree for the generation config that generic is better. For the `contrastive_search` method, I'd still prefer the kwarg to be something clearer e.g. `sequential`. WDYT?\r\n* Completely agree - I'd prefer `None` as default! \r\n* Complex logic - We're sweeping things under the rug a bit but OK if a refactor is happening soon. Only thing I'd mention is that the code at the moment makes reviewing hard and longer. The sooner it's tidied up, the quicker new features can (safely) be added! ",
"> For low_memory I agree for the generation config that generic is better. For the contrastive_search method, I'd still prefer the kwarg to be something clearer e.g. sequential. WDYT?\r\n\r\nSounds good 👍 ",
"Sounds great to me too! @gante would you like me to go ahead and change the kwarg in contrastive_search and assign it the value of `low_memory` in the generation config?",
"@blbadger yes 👍 ",
"@gante @amyeroberts just renamed the flag, looks like we are ready to go!",
"Awesome, merging the PR 🔥 Thank you for the cool contribution @blbadger! ",
"My pleasure, and thanks so much for all your help @gante @amyeroberts!",
"Thanks for the PR! Do we know if set `low_memory=True` will make the generation how much slower than before? And if `top_k` is small enough like 5, do you still suggest using this method? Thanks!",
"That is a good question, thanks @yuchenlin! In the worst case the generation time per token is `top_k` times what it would be with `low_memory=False`, and the memory required is asymptotically 1/top_k (as the number of generated tokens increases to infinity). \r\n\r\nIn practice for generation around the 1k-2k token length for a medium sized 4-bit quantized (~10b parameter) model I have found that choosing `top_k=3` or `top_k=4` leads to approximately 2x the time required per token, and the peak memory required is around 60% of what is needed with `low_memory=False`. This is somewhat model-dependent, however.\r\n\r\nGenerally the smaller the `top_k` the faster generation will be, but the less memory savings you will see:)",
"Hi @blbadger,\r\n\r\nThank you sooo much for your help! It is really insightful! I have another quick question about contrastive searching in general, and hope you would please help! \r\n\r\nFrom the description of this contrastive searching method, it should generate a deterministic output by setting `top_k=x` and `penalty_alpha=y`. Do you need to set `do_sample=True`? I initially thought we should not because this is a `searching` method and the penalty helps us do `argmax` for selecting from the top_k, which should not involve any randomness via sampling. But there will be a warning message telling me that I should enable `do_sample` when I set `top_k` with something. \r\n\r\nI tried both enabling `do_sample` and not, and I found that if you set `do_sample=True`, there will be randomness in multiple runs, while if I set `do_sample=False`, the results are the same as the greedy decoding results. I'm not sure if I did this correctly. Thank you so much in advance! :D \r\n",
"No worries @yuchenlin! You can leave the default value `do_sample=False` as setting this to True will over-ride other arguments and contrastive search will not be used. Make sure you provide argument values for both `penalty_alpha` and `top_k` as giving only a `top_k` while setting `do_sample=True` will raise the warning you mention and generation will revert to greedy decoding. You are correct that the contrastive search outputs should be deterministic, or very nearly so.\r\n\r\nAs an example, below is a typical call to `model.generate` that activates the low-memory version of contrastive search:\r\n\r\n```python\r\nmodel.generate(input_ids, top_k=4, penalty_alpha=0.6, low_memory=True, max_new_tokens=200)\r\n```\r\n\r\nHope that is helpful!",
"hi @blbadger thank you sooo much for clarifying this! I really appreciate it! :D "
] | 1,686 | 1,698 | 1,689 |
CONTRIBUTOR
| null |
# Contrastive Search memory reduction via sequential topk embedding recovery
This PR describes a new feature for contrastive search generation that may be of interest to the community.
Problem:
Contrastive search is an effective method for LLM text generation, but as currently implemented it requires far more peak memory (measured as VRAM) than comparable methods such as nucleus sampling.
Solution:
The extra memory contrastive search requires (i.e., beyond greedy generation) comes primarily from two sources: storing the last hidden layer states per token, and the parallelized computation of the last hidden layer embeddings for all `top_k` candidate tokens. This PR addresses the second source by providing a switch to sequential last hidden layer computation.
The result is that far less maximum memory is required during generation: for example, generation max memory usage for Llama 13b using Int4 quantization (@qlora) for 1k tokens reduces from >15GB to ~9GB. The generation process necessarily becomes somewhat slower as well.
Code Snippet:
Pass the kwarg `low_memory` to `generate` as follows:
```python
model.generate(input_ids, top_k=4, penalty_alpha=0.6, low_memory=True)
```
No additional modules are required for use.
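For intuition, here is a minimal sketch (not the PR's actual code) of what "sequential last hidden layer computation" means: the `top_k` candidate tokens are run through the model one at a time against the shared prefix cache instead of as a single batch of size `top_k`, so only one candidate's activations are resident at a time. The helper name and call pattern below are illustrative assumptions.
```python
import torch

def sequential_candidate_hidden_states(model, candidate_ids, past_key_values):
    # candidate_ids: 1D tensor with the top_k candidate token ids for this step (batch size 1 assumed)
    # past_key_values: cache of the shared prefix, reused as-is for every candidate
    last_hidden = []
    for token_id in candidate_ids:
        out = model(
            input_ids=token_id.view(1, 1),
            past_key_values=past_key_values,
            output_hidden_states=True,
            use_cache=True,
        )
        # keep only the last layer's hidden state of the newly appended token
        last_hidden.append(out.hidden_states[-1][:, -1, :])
    return torch.cat(last_hidden, dim=0)  # (top_k, hidden_size)
```
The batched variant instead expands the cache `top_k` times and runs a single forward pass, which is faster but holds `top_k` copies of the per-step activations (and an expanded cache) in memory at once.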
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante
Anyone who would like to review is welcome!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24120/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24120/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24120",
"html_url": "https://github.com/huggingface/transformers/pull/24120",
"diff_url": "https://github.com/huggingface/transformers/pull/24120.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24120.patch",
"merged_at": 1689875213000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24119
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24119/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24119/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24119/events
|
https://github.com/huggingface/transformers/issues/24119
| 1,748,337,351 |
I_kwDOCUB6oc5oNYLH
| 24,119 |
Different results when using `__call__` and `encode` and `encode_plus` of (fast/slow) bert tokenizers
|
{
"login": "sniperyyc",
"id": 22042183,
"node_id": "MDQ6VXNlcjIyMDQyMTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/22042183?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sniperyyc",
"html_url": "https://github.com/sniperyyc",
"followers_url": "https://api.github.com/users/sniperyyc/followers",
"following_url": "https://api.github.com/users/sniperyyc/following{/other_user}",
"gists_url": "https://api.github.com/users/sniperyyc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sniperyyc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sniperyyc/subscriptions",
"organizations_url": "https://api.github.com/users/sniperyyc/orgs",
"repos_url": "https://api.github.com/users/sniperyyc/repos",
"events_url": "https://api.github.com/users/sniperyyc/events{/privacy}",
"received_events_url": "https://api.github.com/users/sniperyyc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"It seems that the misunderstanding of `is_split_into_words` from #8217 is the main issue here.\r\n\r\nBut the fix in #24121 should also be helpful to align the expected outputs of slow tokenizer when input is a list of list of pretokenized strings.",
"cc @ArthurZucker :D",
"Pr reviewed 😉 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,691 | 1,691 |
NONE
| null |
### System Info
- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.15.0-1033-aws-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the differences:
[hf_github_issue.zip](https://github.com/huggingface/transformers/files/11691030/hf_github_issue.zip) contains all the input data used to reproduce the issue. Please download and unzip.
1. I created my customized token2id mapping and stored the three files in one folder, let's say "hf_github_issue":
1. `vocab.txt`: my own vocabulary
2. `special_tokens_map.json`
3. `tokenizer_config.json`
2. `raw_transformed_uncased.csv`: each row in this csv is a **pre-tokenized** sequence (i.e., each cell is a word that does not need further tokenization)
```
import pandas as pd
from datasets import Dataset
raw_transformed_df = pd.read_csv("hf_github_issue/raw_transformed.csv")
training_dataset = Dataset.from_pandas(raw_transformed_df, preserve_index=False)
```
3. Load as a fast BertTokenizer:
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("hf_github_issue", use_fast=True)
# method 1
print(tokenizer.encode(list(training_dataset[0].values()), is_split_into_words=True))
# method 2
print(tokenizer.encode_plus(list(training_dataset[0].values()), is_split_into_words=True))
# method 3
print(tokenizer(list(training_dataset[0].values()), is_split_into_words=True))
```
4. Load as a slow BertTokenizer:
```
tokenizer = AutoTokenizer.from_pretrained("hf_github_issue", use_fast=False)
# method 4
print(tokenizer.encode(list(training_dataset[0].values()), is_split_into_words=True))
# method 5
print(tokenizer.encode_plus(list(training_dataset[0].values()), is_split_into_words=True))
# method 6
print(tokenizer(list(training_dataset[0].values()), is_split_into_words=True))
# method 7
print(tokenizer.encode(list(training_dataset[0].values())))
```
### Expected behavior
I suspect this is not a bug but rather a misunderstanding on my part, either in creating the custom tokenizer or in choosing the right API(s). Any help is appreciated.
I thought all seven methods here would give the same output. However, only `method 7` gives my expected output:
```
[3, 8, 16, 29, 32, 48, 55, 61, 77, 84, 90, 103, 111, 115, 131, 139, 145, 161, 167, 174, 187, 194, 201, 215, 225, 234, 244, 255, 263, 276, 285, 288, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 307, 317, 327, 338, 343, 354, 359, 369, 370, 371, 374, 384, 391, 404, 412, 418, 429, 430, 431, 440, 448, 453, 461, 470, 479, 493, 4]
```
Methods 1-6 give the same (unexpected) output:
```
{'input_ids': [3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```
In particular, I would like to get the `__call__` function to work as expected so that I may use the example [run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py)
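If it helps frame the question, my current (possibly wrong) understanding of `is_split_into_words` is sketched below: with the flag set, each list item is treated as a word that still goes through WordPiece tokenization (so items that WordPiece cannot match in the custom vocab become `[UNK]`, id 0 here), whereas passing the list to the slow tokenizer's `encode` without the flag treats the items as already-final tokens and converts them to ids directly:
```python
# illustrative only; the strings below stand in for entries of my custom vocab
words = ["some_custom_token", "another_custom_token"]

# flag set: each item is re-tokenized by WordPiece, unmatched pieces -> [UNK] (id 0)
print(tokenizer(words, is_split_into_words=True)["input_ids"])

# slow tokenizer, no flag: items are treated as tokens and looked up directly (method 7)
print(tokenizer.encode(words))
```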
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24119/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24118
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24118/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24118/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24118/events
|
https://github.com/huggingface/transformers/pull/24118
| 1,748,186,870 |
PR_kwDOCUB6oc5SiS4i
| 24,118 |
Remove decoder_input_ids from RAG dummy inputs
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"unfortunately, the problematic situation (GPU usage) I described on Slack channel is still the same. It still takes 5 - 6 G extra memory at some point, compared to the commit before `814de8fa`.",
"Closing this because the dummies turned out not to be the problem after all!"
] | 1,686 | 1,686 | 1,686 |
MEMBER
| null |
cc @ydshieh
The old RAG dummy inputs didn't have `decoder_input_ids` but the new ones do - this seems like the most likely cause of the memory blowup, because RAG probably does a lot of weird retrieval stuff.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24118/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24118",
"html_url": "https://github.com/huggingface/transformers/pull/24118",
"diff_url": "https://github.com/huggingface/transformers/pull/24118.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24118.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24117
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24117/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24117/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24117/events
|
https://github.com/huggingface/transformers/issues/24117
| 1,748,180,459 |
I_kwDOCUB6oc5oMx3r
| 24,117 |
RuntimeError: CUDA driver error: invalid argument
|
{
"login": "murthyrudra",
"id": 14203368,
"node_id": "MDQ6VXNlcjE0MjAzMzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/14203368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/murthyrudra",
"html_url": "https://github.com/murthyrudra",
"followers_url": "https://api.github.com/users/murthyrudra/followers",
"following_url": "https://api.github.com/users/murthyrudra/following{/other_user}",
"gists_url": "https://api.github.com/users/murthyrudra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/murthyrudra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/murthyrudra/subscriptions",
"organizations_url": "https://api.github.com/users/murthyrudra/orgs",
"repos_url": "https://api.github.com/users/murthyrudra/repos",
"events_url": "https://api.github.com/users/murthyrudra/events{/privacy}",
"received_events_url": "https://api.github.com/users/murthyrudra/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I can reproduce this with the much simpler:\r\n\r\n```\r\nimport transformers\r\ngpt2_generator = transformers.pipeline('text-generation', model='gpt2', device=1)\r\nsentences = gpt2_generator(\"To be honest, neural networks\", do_sample=True, top_k=50, temperature=0.6, max_length=128, num_return_sequences=3)\r\nfor sentence in sentences:\r\n print(sentence[\"generated_text\"])\r\n print(\"=\"*50)\r\n```\r\n\r\nThis is example code from the transformers docs and should \"just work\". It feels like an environment issue, but the error is super indistinct. `CUDA_LAUNCH_BLOCKING=1` does not produce a clearer error. I've tried in a fresh environment, with different versions of transformers, torch, and cuda to no avail. ",
"Hey @blucz @murthyrudra 👋 \r\n\r\nI am unable to reproduce the issue on my end, which seems to indicate this is an environment issue 🤔 \r\n\r\nThis is my current env:\r\n```\r\n- `transformers` version: 4.31.0.dev0\r\n- Platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.6\r\n- Huggingface_hub version: 0.14.1\r\n- Safetensors version: 0.3.1\r\n- PyTorch version (GPU?): 2.0.0+cu118 (True)\r\n- Tensorflow version (GPU?): 2.10.1 (True)\r\n- Flax version (CPU?/GPU?/TPU?): 0.5.3 (gpu)\r\n- Jax version: 0.3.6\r\n- JaxLib version: 0.3.5\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,689 | 1,689 |
NONE
| null |
### System Info
- `transformers` version: 4.29.1
- Platform: Linux-3.10.0-1160.59.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.13
- Huggingface_hub version: 0.15.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained('gpt2')
model = AutoModelForCausalLM.from_pretrained('gpt2', device_map = 'auto')
tokenizer.padding_side = "left"
tokenizer.truncation_side = "left"
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
if torch.cuda.is_available():
model = model.to(device=device)
prompt = "What is the capital of India?"
input_ids = tokenizer(prompt, return_tensors="pt", max_length=512, truncation=True, add_special_tokens=False).input_ids.to(
dtype=torch.long, device=device
)
max_new_tokens = 10
model.eval()
with torch.no_grad():
generated_ids = model.generate(
input_ids,
max_new_tokens=max_new_tokens,
pad_token_id=tokenizer.eos_token_id,
)
preds = [
tokenizer.decode(
g, skip_special_tokens=True, clean_up_tokenization_spaces=True
).strip()
for g in generated_ids
]
```
Running the above script produces the following error
```
Using pad_token, but it is not set yet.
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/scripts/temp.py:28 in <module> │
│ │
│ 25 │
│ 26 model.eval() │
│ 27 with torch.no_grad(): │
│ ❱ 28 │ generated_ids = model.generate( │
│ 29 │ │ input_ids, │
│ 30 │ │ max_new_tokens=max_new_tokens, │
│ 31 │ │ pad_token_id=tokenizer.eos_token_id, │
│ │
│ /home/rudra/miniconda3/envs/FM/lib/python3.9/site-packages/torch/utils/_c │
│ ontextlib.py:115 in decorate_context │
│ │
│ 112 │ @functools.wraps(func) │
│ 113 │ def decorate_context(*args, **kwargs): │
│ 114 │ │ with ctx_factory(): │
│ ❱ 115 │ │ │ return func(*args, **kwargs) │
│ 116 │ │
│ 117 │ return decorate_context │
│ 118 │
│ │
│ /home/rudra//miniconda3/envs/FM/lib/python3.9/site-packages/transformers/g │
│ eneration/utils.py:1515 in generate │
│ │
│ 1512 │ │ │ │ ) │
│ 1513 │ │ │ │
│ 1514 │ │ │ # 11. run greedy search │
│ ❱ 1515 │ │ │ return self.greedy_search( │
│ 1516 │ │ │ │ input_ids, │
│ 1517 │ │ │ │ logits_processor=logits_processor, │
│ 1518 │ │ │ │ stopping_criteria=stopping_criteria, │
│ │
│ /home/rudra//miniconda3/envs/FM/lib/python3.9/site-packages/transformers/g │
│ eneration/utils.py:2385 in greedy_search │
│ │
│ 2382 │ │ │ # if eos_token was found in one sentence, set sentence to finished │
│ 2383 │ │ │ if eos_token_id_tensor is not None: │
│ 2384 │ │ │ │ unfinished_sequences = unfinished_sequences.mul( │
│ ❱ 2385 │ │ │ │ │ next_tokens.tile(eos_token_id_tensor.shape[0], 1).ne(eos_token_id_te │
│ 2386 │ │ │ │ ) │
│ 2387 │ │ │ │ │
│ 2388 │ │ │ │ # stop when each sentence is finished │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: CUDA driver error: invalid argument
```
### Expected behavior
No error and some text generated by the model. Works perfectly on CPU.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24117/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24116
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24116/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24116/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24116/events
|
https://github.com/huggingface/transformers/pull/24116
| 1,748,175,660 |
PR_kwDOCUB6oc5SiQcZ
| 24,116 |
fix overflow when training mDeberta in fp16
|
{
"login": "sjrl",
"id": 10526848,
"node_id": "MDQ6VXNlcjEwNTI2ODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/10526848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sjrl",
"html_url": "https://github.com/sjrl",
"followers_url": "https://api.github.com/users/sjrl/followers",
"following_url": "https://api.github.com/users/sjrl/following{/other_user}",
"gists_url": "https://api.github.com/users/sjrl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sjrl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sjrl/subscriptions",
"organizations_url": "https://api.github.com/users/sjrl/orgs",
"repos_url": "https://api.github.com/users/sjrl/repos",
"events_url": "https://api.github.com/users/sjrl/events{/privacy}",
"received_events_url": "https://api.github.com/users/sjrl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada @ArthurZucker ",
"_The documentation is not available anymore as the PR was closed or merged._",
"I used this code block to check results.\r\nThis was run on:\r\n- Ubuntu 20.04.4 LTS\r\n- NVIDIA 3070\r\n- CUDA Version: 11.7\r\n```python\r\nimport torch\r\nfrom transformers import AutoTokenizer, AutoModelForQuestionAnswering\r\nfrom transformers.pipelines import QuestionAnsweringPipeline\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"sjrhuschlee/mdeberta-v3-base-squad2\")\r\nmodel = AutoModelForQuestionAnswering.from_pretrained(\r\n \"sjrhuschlee/mdeberta-v3-base-squad2\",\r\n# torch_dtype=torch.float16,\r\n# torch_dtype=torch.bfloat16,\r\n# load_in_8bit=True,\r\n)\r\npipe = QuestionAnsweringPipeline(model, tokenizer, device=torch.device(\"cuda:0\")) # device=... was removed for 8bit\r\n```\r\n\r\n**Running on Main Branch**\r\n\r\nRunning the above code using `torch.float16` on the main branch gives me no answer\r\n```python\r\n# with torch.float16\r\npipe = QuestionAnsweringPipeline(model, tokenizer, device=torch.device(\"cuda:0\"))\r\n# []\r\n```\r\nRunning with `torch.bfloat16` and `torch.float32` gives me the expected answer\r\n```python\r\n# with torch.bfloat16\r\npipe = QuestionAnsweringPipeline(model, tokenizer, device=torch.device(\"cuda:0\"))\r\n# {'score': 0.98369300365448, 'start': 33, 'end': 41, 'answer': ' Berlin.'}\r\n\r\n# with torch.float32\r\npipe = QuestionAnsweringPipeline(model, tokenizer, device=torch.device(\"cuda:0\"))\r\n# {'score': 0.9850791096687317, 'start': 33, 'end': 41, 'answer': ' Berlin.'}\r\n```\r\nAlso running in `8bit` works\r\n```python\r\n# with load_in_8bit=True\r\npipe = QuestionAnsweringPipeline(model, tokenizer)\r\n# {'score': 0.9868391752243042, 'start': 33, 'end': 41, 'answer': ' Berlin.'}\r\n```\r\n\r\n**Running on the PR**\r\nThe change in this PR also enables mDeberta models to run at inference in `torch.float16` which wasn't possible before. And it doesn't look to affect any of the other dtypes.\r\n```python\r\n# with torch.float16\r\npipe = QuestionAnsweringPipeline(model, tokenizer, device=torch.device(\"cuda:0\"))\r\n# {'score': 0.9848804473876953, 'start': 33, 'end': 41, 'answer': ' Berlin.'}\r\n\r\n# with torch.bfloat16\r\npipe = QuestionAnsweringPipeline(model, tokenizer, device=torch.device(\"cuda:0\"))\r\n# {'score': 0.9841369986534119, 'start': 33, 'end': 41, 'answer': ' Berlin.'}\r\n\r\n# with torch.float32\r\npipe = QuestionAnsweringPipeline(model, tokenizer, device=torch.device(\"cuda:0\"))\r\n# {'score': 0.9850791096687317, 'start': 33, 'end': 41, 'answer': ' Berlin.'}\r\n\r\n# with load_in_8bit=True\r\npipe = QuestionAnsweringPipeline(model, tokenizer)\r\n# {'score': 0.9870386719703674, 'start': 33, 'end': 41, 'answer': ' Berlin.'}\r\n```",
"I also noticed that the TF implementation in DebertaV2 has the same line https://github.com/huggingface/transformers/blob/2e2088f24b60d8817c74c32a0ac6bb1c5d39544d/src/transformers/models/deberta_v2/modeling_tf_deberta_v2.py#L678-L679\r\nI'm not too familiar with TF though so I'm not sure if this change should be made there as well. ",
"@sjrl To the best of my knowledge, we don't support training in fp16 in TF, so less of a risk here. I'd be pro updating in TF, so that the implementations are aligned and it's potentially safer. cc @Rocketknight1 for his thoughts. ",
"Yes, we support mixed-precision float16/bfloat16 training in TensorFlow, but in general we still expect a 'master' copy of the weights to remain in float32. We're planning some exploration to see if we can get Keras to accept full (b)float16 training, but it might require some refactoring!",
"@Rocketknight1 should I go ahead and update the TF implementation as well then? ",
"@sjrl Yes please! Better numerical stability will be nice to have once we've enabled full float16 training",
"@sjrl - Are there any other changes to add? Otherwise I think we're good to merge :) ",
"@amyeroberts You're welcome, and that's it for the changes!"
] | 1,686 | 1,690 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/microsoft/DeBERTa/issues/77 (issue about transformers opened in Microsoft repo)
- This issue was originally raised in the https://github.com/microsoft/DeBERTa repo and concerned mDeberta not being trainable in fp16. A fix was implemented in the Microsoft repo by @BigBird01 but had not yet made it into HuggingFace. I was interested in training mDeberta models on small hardware (e.g. a 3070, T4), so I updated the HF implementation with the changes from the Microsoft repo, bringing over only the minimal changes needed to get fp16 training to work (a simplified sketch of the idea is shown below).
- I checked that the existing tests pass and also used this code to successfully train an mDeberta model in fp16 on SQuAD 2.0, which can be found [here](https://huggingface.co/sjrhuschlee/mdeberta-v3-base-squad2); this is not currently possible with the main branch of transformers. I'm unsure whether there is a good way to add a CI test that ensures mDeberta-V3 training works in fp16.
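To illustrate the general idea (a simplified sketch, not the exact diff in this PR): applying the `1/sqrt(head_dim * scale_factor)` scaling to one operand *before* the `QK^T` matmul keeps the products within fp16 range, whereas scaling the attention scores *after* the matmul lets the intermediate values overflow to `inf` in fp16. Function name and shapes below are illustrative.
```python
import math
import torch

def attention_scores_fp16_safe(query_layer, key_layer, scale_factor=1):
    # query_layer, key_layer: (batch * heads, seq_len, head_dim), possibly torch.float16
    scale = math.sqrt(query_layer.size(-1) * scale_factor)
    # dividing one operand before the bmm keeps the intermediate products in fp16 range;
    # dividing the bmm output afterwards would let the raw scores overflow to inf first
    return torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale)
```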
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Hey, based on the recommendations from the PR template (and git blame) I decided to tag @ArthurZucker and @sgugger in case you may be interested.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24116/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24116",
"html_url": "https://github.com/huggingface/transformers/pull/24116",
"diff_url": "https://github.com/huggingface/transformers/pull/24116.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24116.patch",
"merged_at": 1686665068000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24115
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24115/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24115/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24115/events
|
https://github.com/huggingface/transformers/pull/24115
| 1,748,131,860 |
PR_kwDOCUB6oc5SiGjf
| 24,115 |
Experiment with static past key/value buffer
|
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24115). All of your documentation changes will be reflected on that endpoint.",
"In order to efficiently work with the PyTorch team here and to figure out exactly what needs to be done for a super fast generate method, I'd suggest to open a new benchmark repo that includes this PR for say three important LLM models:\r\n- GPT2\r\n- Llama\r\n- starcoder\r\n\r\nThe repo \"maybe just called `transformers-generate-benchmark` should have:\r\n- copied the modeling files of GPT2, Llama, Starcoder\r\n- Use the SDPA attention for all models (just like `betterTransformers` does - you could copy it from here: https://github.com/huggingface/optimum/blob/main/optimum/bettertransformer/models/attention.py)\r\n- A super small generate function that we write ourselves (just greedy without any logit processor)\r\n\r\nThen in this repo we run all the benchmarking and also make it easy for the PyTorch team to reproduce the benchmarking numbers.",
"Sounds good - I'll continue on this PR for easy diff."
] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
This PR is just to see if this could reside in transformers or not.
## Motivation
We suspect that [the concatenations of the key/value buffer](https://github.com/huggingface/transformers/blob/ba695c1efd55091e394eb59c90fb33ac3f9f0d41/src/transformers/models/gpt2/modeling_gpt2.py#L321-L322) at each generation step are expensive. The idea is to modify the buffer in place after a single allocation (a rough sketch of the in-place update is included below).
FasterTransformer [does preallocate the kv cache](https://github.com/NVIDIA/FasterTransformer/blob/c6e8f60ec40da218804a60e6aa986903e7fa8594/src/fastertransformer/models/decoding/Decoding.cc#L83C9-L84) (among others). Preallocating may also help `torch.compile` according to @Chillee, although I don't quite get why yet (there are still dynamic shapes in the model itself, so why care about model I/O static shapes?).
For reference: https://huggingface.slack.com/archives/C055NT312LW/p1683038064467109 & https://huggingface.slack.com/archives/C055NT312LW/p1685570357532069
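As a rough sketch of the idea (not this PR's exact implementation), an in-place update writes the new key/value slice into a preallocated buffer instead of concatenating, so no new cache tensor is allocated per step. The buffer layout and the `valid_past_index` name mirror what is used here, but the snippet is only illustrative:
```python
import torch

def static_cache_update(key_cache, value_cache, new_key, new_value, valid_past_index):
    # key_cache / value_cache: (batch, num_heads, max_cache_len, head_dim), allocated once
    # new_key / new_value:     (batch, num_heads, new_len, head_dim) for the current step
    new_len = new_key.shape[2]
    key_cache[:, :, valid_past_index : valid_past_index + new_len] = new_key
    value_cache[:, :, valid_past_index : valid_past_index + new_len] = new_value
    # attention then only reads key_cache[:, :, : valid_past_index + new_len]
    return valid_past_index + new_len  # next step's valid_past_index

# versus the usual dynamic cache, which reallocates at every step:
# key_cache = torch.cat([key_cache, new_key], dim=2)
```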
## Current (ugly) workflow
Very temporary
```python
cache_size = max_new_tokens
past_key_values = tuple([
    tuple([
        torch.empty(
            batch_size,
            model.config.n_head,
            cache_size,
            model.config.n_embd // model.config.n_head,  # head dimension
            dtype=dtype,
            device=device,
        )
        for _ in range(2)  # key and value
    ])
    for _ in range(model.config.n_layer)  # assumed: one (key, value) pair per layer
])
model.enable_static_kv_cache()
res = model.generate(
    **inputs,
    num_beams=1,
    min_new_tokens=max_new_tokens,
    max_new_tokens=max_new_tokens,
    past_key_values=past_key_values,
)
```
## Results
Some optimizations may still be missing; right now, in the small model / small batch size setting, this is not interesting.
Results are with PyTorch 2.0.1 eager.
Script: https://github.com/fxmarty/transformers-preallocate-kv-cache/blob/main/run.py
Raw results: https://docs.google.com/spreadsheets/d/15P1o9vDcXOSeLAwLUWiqXQatHxtQYBa_xcyLZanOjQQ/edit?usp=sharing






## Misc
The current implementation of `valid_past_index` being `Optional[int]` is quite debatable and may hamper readability.
I'm also not sure whether adding methods to `PreTrainedModel` that are specific to decoders is OK.
A test that is still missing is generating a sequence shorter than the KV buffer.
Some todos:
- [ ] The `past_key_values` buffer should be initialized in the background, instead of requiring the user to initialize it manually and pass `generate(**inputs, past_key_values=past_key_values)` (as is currently the case).
- [ ] Current implementation is likely to break with `accelerate` with naive pipeline parallelism, as the buffer is currently initialized on a single device
- [ ] Preallocated kv cache still does not help with small models / batch size.
- [ ] Support an iterative buffer, e.g. one that auto-extends every 512 tokens, instead of initializing a buffer of size `max_new_tokens`. This may help reduce memory usage (and speed? probably not).
- [ ] Implement tests
- [ ] Should this be in optimum or transformers?
- [ ] Support all (is it possible?) decoding strategies, instead of currently only `greedy_search`
- [ ] Have it work with cross-attention
- [ ] Preallocate `attention_mask` as well
- [ ] Test on CPU as well
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24115/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24115/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24115",
"html_url": "https://github.com/huggingface/transformers/pull/24115",
"diff_url": "https://github.com/huggingface/transformers/pull/24115.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24115.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/24114
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24114/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24114/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24114/events
|
https://github.com/huggingface/transformers/issues/24114
| 1,748,058,107 |
I_kwDOCUB6oc5oMT_7
| 24,114 |
doc issue from docs/source/en/model_doc/open-llama.mdx
|
{
"login": "xingener",
"id": 3382402,
"node_id": "MDQ6VXNlcjMzODI0MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3382402?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xingener",
"html_url": "https://github.com/xingener",
"followers_url": "https://api.github.com/users/xingener/followers",
"following_url": "https://api.github.com/users/xingener/following{/other_user}",
"gists_url": "https://api.github.com/users/xingener/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xingener/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xingener/subscriptions",
"organizations_url": "https://api.github.com/users/xingener/orgs",
"repos_url": "https://api.github.com/users/xingener/repos",
"events_url": "https://api.github.com/users/xingener/events{/privacy}",
"received_events_url": "https://api.github.com/users/xingener/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@xingener Good spot - would you like to open a PR to fix it? ",
"> @xingener Good spot - would you like to open a PR to fix it?\r\n\r\nSure."
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
In the Overview section, 2nd paragraph:
The model is mainly based on LLaMA with some modifications, incorporating memory-efficient attention from Xformers, stable embedding from Bloom, and shared input-output embedding from PLAM. And the model is pre-trained on both Chinese and English, which gives it better performance on Chinese language tasks.
Shouldn't the shared input-output embedding be from PaLM (rather than "PLAM")?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24114/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24113
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24113/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24113/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24113/events
|
https://github.com/huggingface/transformers/pull/24113
| 1,747,962,024 |
PR_kwDOCUB6oc5ShhQ3
| 24,113 |
[`GPT2`] Add correct keys on `_keys_to_ignore_on_load_unexpected` on all child classes of `GPT2PreTrainedModel`
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24113). All of your documentation changes will be reflected on that endpoint."
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
As per the title: these keys were forgotten in https://github.com/huggingface/transformers/pull/23256.
Currently this snippet:
```python
from transformers import GPT2Model
model = GPT2Model.from_pretrained("gpt2")
```
Gives a big warning:
```bash
Some weights of the model checkpoint at gpt2 were not used when initializing GPT2Model: ['h.10.attn.bias', 'h.5.attn.bias', 'h.7.attn.bias', 'h.0.attn.bias', 'h.11.attn.bias', 'h.8.attn.bias', 'h.1.attn.bias', 'h.9.attn.bias', 'h.2.attn.bias', 'h.4.attn.bias', 'h.6.attn.bias', 'h.3.attn.bias']
- This IS expected if you are initializing GPT2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing GPT2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
This PR fixes it by adding the correct regex expressions on `_keys_to_ignore_on_load_unexpected` for all child classes that inherit from `GPT2PreTrainedModel`
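For illustration, the change on each child class boils down to something like the snippet below (a sketch only - the exact regex patterns are in the diff):
```python
from transformers.models.gpt2.modeling_gpt2 import GPT2PreTrainedModel


class MyGPT2Model(GPT2PreTrainedModel):  # illustrative subclass, not the actual diff
    # ignore the non-persistent attention bias buffers stored in older checkpoints,
    # so they no longer trigger the "weights not used" warning
    _keys_to_ignore_on_load_unexpected = [r"h\.\d+\.attn\.bias", r"h\.\d+\.attn\.masked_bias"]
```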
cc @sgugger @patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24113/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24113",
"html_url": "https://github.com/huggingface/transformers/pull/24113",
"diff_url": "https://github.com/huggingface/transformers/pull/24113.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24113.patch",
"merged_at": 1686234102000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24112
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24112/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24112/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24112/events
|
https://github.com/huggingface/transformers/issues/24112
| 1,747,954,853 |
I_kwDOCUB6oc5oL6yl
| 24,112 |
LogitsProcessor - are there any examples of how to use it?
|
{
"login": "Oxi84",
"id": 25420033,
"node_id": "MDQ6VXNlcjI1NDIwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oxi84",
"html_url": "https://github.com/Oxi84",
"followers_url": "https://api.github.com/users/Oxi84/followers",
"following_url": "https://api.github.com/users/Oxi84/following{/other_user}",
"gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions",
"organizations_url": "https://api.github.com/users/Oxi84/orgs",
"repos_url": "https://api.github.com/users/Oxi84/repos",
"events_url": "https://api.github.com/users/Oxi84/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oxi84/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @gante ",
"I have found one more example for T5. Only I do not know how would I add multiple inputs, not just one, when i increase the list seq1 to more elements it does not work as sizes do not match.\r\n\r\nhttps://stackoverflow.com/questions/72180737/beam-search-and-generate-are-not-consistent\r\n\r\n\r\n seq1 = [ \"summarize: beamsearch and generate does not give the same result\"]\r\n \r\n \r\n encoding = tokenizer(\r\n seq1,\r\n padding=\"longest\",\r\n max_length=128,\r\n truncation=True,\r\n return_tensors=\"pt\",\r\n )\r\n \r\n encoder_input_ids, attention_mask = encoding.input_ids.to(\"cuda\"), encoding.attention_mask.to(\"cuda\")\r\n num_beams = 2\r\n input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long)\r\n input_ids = input_ids * model.config.decoder_start_token_id\r\n model_kwargs = {\r\n \"encoder_outputs\": model.get_encoder()(\r\n encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True\r\n )\r\n }\r\n beam_scorer = BeamSearchScorer(\r\n batch_size=1,\r\n do_early_stopping=True,\r\n num_beams=num_beams,\r\n device=model.device,\r\n )\r\n \r\n outputs = model.beam_search(input_ids, beam_scorer,\r\n logits_processor=None,\r\n early_stopping=True,\r\n no_repeat_ngram_size=4,\r\n max_length=64,\r\n **model_kwargs,\r\n output_scores=True,\r\n return_dict_in_generate=True)\r\n \r\n # beam_search result\":\r\n\r\n\r\n\r\n\r\n",
"@Oxi84 If there are errors in the examples, could you share the errors with a full stack trace and information about the environment being run? Keep in mind that examples are not meant to be exhaustive and there may be cases they don't cover. ",
"Here is an example on colab - https://colab.research.google.com/drive/1TR6PWwKK4SuD7RluN_f82lOZQi_SOJnf#scrollTo=Uj4YddOGq2ee\r\n\r\nThis is a basic example, where logits_processor=None from https://stackoverflow.com/questions/72180737/beam-search-and-generate-are-not-consistent.\r\n\r\nWhen seq1 = [\"paraphrase: Beamsearch and generate give the same result.\"] , it works fine, but when \r\n\r\n seq1 = [\"paraphrase: Beamsearch and generate give the same result.\",\"paraphrase: Beamsearch and generate give the same result.\"] \r\n\r\nI do not know how to make it work. \r\nError is: RuntimeError: The size of tensor a (28) must match the size of tensor b (14) at non-singleton dimension 3.\r\n\r\n\r\n\r\n\r\n import torch\r\n import torch.nn as nn\r\n import torch.optim as optim\r\n import torch.nn.functional as F\r\n from transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\n from transformers import LogitsProcessorList, MinLengthLogitsProcessor, BeamSearchScorer,MaxLengthCriteria, StoppingCriteriaList\r\n \r\n \r\n from transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\n tokenizer = AutoTokenizer.from_pretrained(\"humarin/chatgpt_paraphraser_on_T5_base\")\r\n model = AutoModelForSeq2SeqLM.from_pretrained(\"humarin/chatgpt_paraphraser_on_T5_base\")\r\n \r\n model.resize_token_embeddings(len(tokenizer))\r\n model.to(\"cuda\")\r\n \r\n seq1 = [\"paraphrase: Beamsearch and generate give the same result.\"]\r\n \r\n \r\n encoding = tokenizer(\r\n seq1,\r\n padding=\"longest\",\r\n max_length=128,\r\n truncation=True,\r\n return_tensors=\"pt\",\r\n )\r\n \r\n encoder_input_ids, attention_mask = encoding.input_ids.to(\"cuda\"), encoding.attention_mask.to(\"cuda\")\r\n num_beams = 2\r\n input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long)\r\n input_ids = input_ids * model.config.decoder_start_token_id\r\n model_kwargs = {\r\n \"encoder_outputs\": model.get_encoder()(\r\n encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True\r\n )\r\n }\r\n \r\n #print(\"input_ids\",input_ids)\r\n #print(\"model_kwargs\",model_kwargs)\r\n \r\n beam_scorer = BeamSearchScorer(\r\n batch_size=1,\r\n do_early_stopping=True,\r\n num_beams=num_beams,\r\n device=model.device,\r\n )\r\n \r\n outputs = model.beam_search(input_ids, beam_scorer,\r\n logits_processor=None,\r\n early_stopping=True,\r\n max_length=64,\r\n **model_kwargs,\r\n output_scores=True,\r\n return_dict_in_generate=True)\r\n \r\n # beam_search result\":\r\n out = tokenizer.batch_decode(outputs.sequences, skip_special_tokens=True)\r\n \r\n print(\"out\",out)\r\n \r\n #generate results:\r\n out = model.generate(encoder_input_ids,\r\n max_length=64,\r\n \r\n early_stopping=True,\r\n num_beams=2,\r\n do_sample=False,\r\n num_return_sequences=1)\r\n \r\n out1 = tokenizer.batch_decode(out, skip_special_tokens=True)\r\n \r\n print(\"out1\",out1)\r\n\r\n\r\n\r\n\r\n\r\n",
"Hey @Oxi84 👋 \r\n\r\nYou are absolutely right, our docs and examples for generation are quite poor atm. It is my highest priority for the next 1-2 months -- stay tuned 💪 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,689 | 1,689 |
NONE
| null |
### Feature request
I could not find any more examples than this one, and this one does not work - it reports some errors when used with T5. https://colab.research.google.com/drive/1ezT24sogpVyr2HJLOvXHzjv61JZJ1gMT?usp=sharing#scrollTo=0MJJZEylVO-x
Are there any more examples of usage?
I need to lower the scores of chosen tokens by a specified value, but any example will do.
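To make it concrete, here is a minimal sketch of the kind of example I am after - a custom `LogitsProcessor` that lowers the scores of chosen tokens by a fixed value (the model name, token ids and penalty below are placeholders for illustration only):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LogitsProcessor, LogitsProcessorList


class LowerScoreLogitsProcessor(LogitsProcessor):
    """Subtracts a fixed penalty from the logits of the chosen token ids."""

    def __init__(self, token_ids, penalty):
        self.token_ids = token_ids
        self.penalty = penalty

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        # scores has shape (batch_size, vocab_size); lower only the chosen columns
        scores[:, self.token_ids] = scores[:, self.token_ids] - self.penalty
        return scores


tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The weather today is", return_tensors="pt")
processors = LogitsProcessorList([LowerScoreLogitsProcessor(token_ids=[262, 373], penalty=5.0)])
out = model.generate(**inputs, logits_processor=processors, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```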
### Motivation
We need usage examples for every feature of transformers; otherwise these features are not very useful for 90 percent of people.
### Your contribution
I found this example: https://colab.research.google.com/drive/1ezT24sogpVyr2HJLOvXHzjv61JZJ1gMT?usp=sharing#scrollTo=0MJJZEylVO-x
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24112/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24111
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24111/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24111/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24111/events
|
https://github.com/huggingface/transformers/pull/24111
| 1,747,910,817 |
PR_kwDOCUB6oc5ShWFN
| 24,111 |
Generate: PT's `top_p` enforces `min_tokens_to_keep` when it is `1`
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> top_p was not enforcing min_tokens_to_keep when it was 1 \r\n\r\nFrom the diff - I don't see how this is resolved. The checks ensure the value of `min_tokens_to_keeps` but doesn't seem to be conditional on `top_p`. Am I missing something? ",
"@gante I hit an issue related to this in the prior version of transformers, glad to see that it's fixed thanks! However why don't we enforce `min_tokens_to_keep >= 1`. 0 makes no sense right?",
"@njhill true, the initial check should be against `>=1`, patching it"
] | 1,686 | 1,687 | 1,686 |
MEMBER
| null |
# What does this PR do?
Fixes #23688
Contrary to our description in the docstring, PT's `top_p` was not enforcing `min_tokens_to_keep` when it was 1 (the default). TF and FLAX were fine. This PR corrects it, and adds a check on `min_tokens_to_keep` (must be a non-negative integer)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24111/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24111",
"html_url": "https://github.com/huggingface/transformers/pull/24111",
"diff_url": "https://github.com/huggingface/transformers/pull/24111.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24111.patch",
"merged_at": 1686313205000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24110
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24110/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24110/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24110/events
|
https://github.com/huggingface/transformers/pull/24110
| 1,747,910,074 |
PR_kwDOCUB6oc5ShV7A
| 24,110 |
Update the pin on Accelerate
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
# What does this PR do?
Move to the second patch as the minimum required version. cc @muellerzr
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24110/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24110/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24110",
"html_url": "https://github.com/huggingface/transformers/pull/24110",
"diff_url": "https://github.com/huggingface/transformers/pull/24110.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24110.patch",
"merged_at": 1686233462000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24109
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24109/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24109/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24109/events
|
https://github.com/huggingface/transformers/pull/24109
| 1,747,901,128 |
PR_kwDOCUB6oc5ShT9d
| 24,109 |
Add Musicgen
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Would be great to hear your thoughts on the design here @patrickvonplaten (adding the tests otherwise now)\r\n\r\nTODO:\r\n- [x] convert m/l checkpoints\r\n- [x] handle padded tokens from encodec (in delay pattern mask, then again when we decode)\r\n- [x] fast tests\r\n- [x] integration tests\r\n- [x] add method for unconditional generation (no need to use processor to get input ids)\r\n- [x] finish docs / docstrings",
"This is ready for review (fyi @ArthurZucker / @patrickvonplaten) - kindly requesting review from @sgugger!"
] | 1,686 | 1,688 | 1,688 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds the MusicGen model by fairseq to transformers.
This model is made of three components:
1. T5Encoder (which we import as `AutoModelForTextEncoding`)
2. MusicgenDecoder (which we copy as much as possible from [`modeling_bart.py`](https://github.com/huggingface/transformers/blob/1e9da2b0a6ef964c2cf72dd715dbee991a3f49fa/src/transformers/models/bart/modeling_bart.py#L142))
3. Encodec (which we import as `AutoModel`)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24109/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24109",
"html_url": "https://github.com/huggingface/transformers/pull/24109",
"diff_url": "https://github.com/huggingface/transformers/pull/24109.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24109.patch",
"merged_at": 1688046540000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24108
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24108/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24108/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24108/events
|
https://github.com/huggingface/transformers/pull/24108
| 1,747,879,316 |
PR_kwDOCUB6oc5ShPLw
| 24,108 |
Fix a bug in saving distributed optim state when using data parallel
|
{
"login": "xshaun",
"id": 8446322,
"node_id": "MDQ6VXNlcjg0NDYzMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8446322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xshaun",
"html_url": "https://github.com/xshaun",
"followers_url": "https://api.github.com/users/xshaun/followers",
"following_url": "https://api.github.com/users/xshaun/following{/other_user}",
"gists_url": "https://api.github.com/users/xshaun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xshaun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xshaun/subscriptions",
"organizations_url": "https://api.github.com/users/xshaun/orgs",
"repos_url": "https://api.github.com/users/xshaun/repos",
"events_url": "https://api.github.com/users/xshaun/events{/privacy}",
"received_events_url": "https://api.github.com/users/xshaun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @pacman100 ",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
The indexing typo causes a wrong result when saving the distributed optimizer state with data parallelism enabled.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24108/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24108",
"html_url": "https://github.com/huggingface/transformers/pull/24108",
"diff_url": "https://github.com/huggingface/transformers/pull/24108.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24108.patch",
"merged_at": 1687170861000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24107
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24107/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24107/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24107/events
|
https://github.com/huggingface/transformers/pull/24107
| 1,747,826,927 |
PR_kwDOCUB6oc5ShDte
| 24,107 |
reset accelerate env variables after each test
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
## Context:
For the failing tests in `test_trainer_ext`, the reason is given below:
1. These tests are run via the following command:
```
python -m pytest -v --make-reports=multi-gpu_tests_torch_cuda_extensions_gpu tests/deepspeed tests/extended
```
2. Now, as the DeepSpeed tests are run first, even though the `AcceleratorState` is reset during teardown, the env variable set by Accelerate, `ACCELERATE_USE_DEEPSPEED`, isn't deleted (if the test isn't a script run as a subprocess). As a result, the `Accelerator` object initialization in the extended tests creates a `DeepSpeedPlugin`, leading to them failing because HFDeepSpeedPlugin raises a config mismatch error.
3. Simple reproducer:
```
cd transformers
export CUDA_VISIBLE_DEVICES="0,1"
export RUN_SLOW="yes"
pytest -sv tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_hf_ds_config_mismatch tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq
```
This PR fixes it by deleting, during test `tearDown`, all the environment variables that contain `ACCELERATE`.
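As a rough illustration, the cleanup is along the following lines (a sketch with a hypothetical test class, not the exact diff):
```python
import os
import unittest


class SomeTrainerTest(unittest.TestCase):  # hypothetical test class, for illustration only
    def tearDown(self):
        super().tearDown()
        # delete every environment variable containing "ACCELERATE" (e.g. ACCELERATE_USE_DEEPSPEED)
        # so that a later test's Accelerator does not pick up state left by a DeepSpeed test
        for key in list(os.environ.keys()):
            if "ACCELERATE" in key:
                del os.environ[key]
```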
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24107/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24107",
"html_url": "https://github.com/huggingface/transformers/pull/24107",
"diff_url": "https://github.com/huggingface/transformers/pull/24107.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24107.patch",
"merged_at": 1686230348000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24106
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24106/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24106/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24106/events
|
https://github.com/huggingface/transformers/pull/24106
| 1,747,730,804 |
PR_kwDOCUB6oc5Sgumt
| 24,106 |
Avoid `GPT-2` daily CI job OOM (in TF tests)
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
COLLABORATOR
| null |
# What does this PR do?
Clear (as much as possible) GPU memory usage allocated by torch, so the TF tests (GPT-2) get more room and make @Rocketknight1 's life easier 😆.
Some (TF) tests get OOM after #23234.
Note, the changes in this PR are in torch test files instead of TF test files! This is similar to #16881, which has more details mentioned.
Running the tests manually, all GPT-2 tests pass now.
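For reference, the cleanup is essentially of this kind (a sketch of the idea, not the exact helper added in the PR):
```python
import gc

import torch


def cleanup_torch_gpu_memory():
    # drop Python references and release cached CUDA memory held by torch,
    # so that the TensorFlow tests that run afterwards have more GPU room
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```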
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24106/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24106",
"html_url": "https://github.com/huggingface/transformers/pull/24106",
"diff_url": "https://github.com/huggingface/transformers/pull/24106.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24106.patch",
"merged_at": 1686241269000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24105
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24105/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24105/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24105/events
|
https://github.com/huggingface/transformers/pull/24105
| 1,747,636,574 |
PR_kwDOCUB6oc5SgZ-Z
| 24,105 |
[Whisper] Make tests faster
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24105). All of your documentation changes will be reflected on that endpoint.",
"Sorry, I am asking not because I see a test being slow, but just I saw some more Whisper test failures on daily CI, which is `tests/models/whisper/test_modeling_whisper.py::WhisperModelTest::test_cpu_offload`.\r\n\r\nBut yes, in general, it's best to use low number. I will take a look.",
"Note that the Whisper tests have already been flagged as being slow (#23736) so this should help combat this issue!",
"It's not because it's slow test that we use large value without really valid reason :-). Always better to make them use low values is the goal, unless it's absolute necessary.\r\n\r\nI still have questions on why we don't need to pass `input_shape` in the corresponding flax test file.",
"OK, in flax test file, I see\r\n\r\n```\r\n self.all_model_classes = (\r\n make_partial_class(model_class, input_shape=self.init_shape) for model_class in self.all_model_classes\r\n )\r\n```\r\nprobably it's the reason.",
"Yep agreed - the seq len was unnecessarily high here :) You're spot on regarding the init shape: we have to change this based on the sequence length since Flax Whisper initialises the positional embeddings based on the context window, so if we change the seq len (= context window) we need to init the weights with the new shape"
] | 1,686 | 1,687 | 1,687 |
CONTRIBUTOR
| null |
# What does this PR do?
Reduces the input seq length of the Whisper tests from 1500 -> 60 frames. This in turn should speed up the tests quite considerably.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24105/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24105",
"html_url": "https://github.com/huggingface/transformers/pull/24105",
"diff_url": "https://github.com/huggingface/transformers/pull/24105.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24105.patch",
"merged_at": 1687273317000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24104
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24104/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24104/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24104/events
|
https://github.com/huggingface/transformers/issues/24104
| 1,747,401,465 |
I_kwDOCUB6oc5oJzr5
| 24,104 |
Error when overriding generation config: GenerationConfig() got multiple values for keyword argument 'num_beams'
|
{
"login": "Taytay",
"id": 1330693,
"node_id": "MDQ6VXNlcjEzMzA2OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1330693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Taytay",
"html_url": "https://github.com/Taytay",
"followers_url": "https://api.github.com/users/Taytay/followers",
"following_url": "https://api.github.com/users/Taytay/following{/other_user}",
"gists_url": "https://api.github.com/users/Taytay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Taytay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Taytay/subscriptions",
"organizations_url": "https://api.github.com/users/Taytay/orgs",
"repos_url": "https://api.github.com/users/Taytay/repos",
"events_url": "https://api.github.com/users/Taytay/events{/privacy}",
"received_events_url": "https://api.github.com/users/Taytay/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @Taytay 👋 \r\n\r\nThank you for raising this issue! This is indeed a bug, I'll open a PR ASAP"
] | 1,686 | 1,686 | 1,686 |
NONE
| null |
### System Info
- `transformers` version: 4.30.0.dev0 (commit: 4aa13224a5bca560147a29c06b2e0597137caf3e)
- Platform: Linux-5.15.0-1013-oracle-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes (launching with `accelerate`)
### Who can help?
@gante @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Calling `GenerationConfig.from_pretrained` with a model that already defines `num_beams` in its configuration, and attempting to override the `num_beams` parameter (and presumably any other parameter), results in a runtime exception `got multiple values for keyword argument 'num_beams'`
```python
generation_config: GenerationConfig = GenerationConfig.from_pretrained(
"My-private-model",
num_beams=num_beams)
```
Results in :
```
File "/app/scripts/fine_tune/./fine_tune_and_evaluate.py", line 1481, in <module>
main()
File "/app/scripts/fine_tune/./fine_tune_and_evaluate.py", line 1267, in main
generation_config: GenerationConfig = GenerationConfig.from_pretrained(
File "/app/ai_categorize_env/lib/python3.10/site-packages/transformers/generation/configuration_utils.py", line 541, in from_pretrained
return cls.from_dict(config_dict, **kwargs)
File "/app/ai_categorize_env/lib/python3.10/site-packages/transformers/generation/configuration_utils.py", line 574, in from_dict
config = cls(**config_dict, **kwargs)
TypeError: transformers.generation.configuration_utils.GenerationConfig() got multiple values for keyword argument 'num_beams'
```
This appears to be because of this code:
https://github.com/huggingface/transformers/blob/ba695c1efd55091e394eb59c90fb33ac3f9f0d41/src/transformers/generation/configuration_utils.py#L572-L576
That is calling `cls(**config_dict, **kwargs)`, which might pass the same keyword values in twice if `config_dict` has a property that `kwargs` also has, right? I don't see a step where we remove the properties from `config_dict` that are mentioned in `kwargs`, although there is a comment right above that says: `# remove all the arguments that are in the config_dict`
Wouldn't the code need to do something more like this?
```python
config_dict_copy = config_dict.copy()
config_dict_copy.update(kwargs)
config = cls(**config_dict_copy)
```
My generation_config.json from my model is this:
```json
{
"decoder_start_token_id": 0,
"eos_token_id": 1,
"length_penalty": 0,
"max_length": 32,
"num_beams": 2,
"num_return_sequences": 2,
"output_scores": true,
"pad_token_id": 0,
"return_dict_in_generate": true,
"transformers_version": "4.30.0.dev0"
}
```
### Expected behavior
This should not throw an exception:
```python
generation_config: GenerationConfig = GenerationConfig.from_pretrained(
"My-model",
num_beams=num_beams)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24104/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24103
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24103/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24103/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24103/events
|
https://github.com/huggingface/transformers/pull/24103
| 1,747,286,593 |
PR_kwDOCUB6oc5SfOle
| 24,103 |
[`Trainer`] Correct behavior of `_load_best_model` for PEFT models
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/24096
This PR fixes the bugs related to PEFT models and `load_best_model_at_end`. It also slightly refactors the current logic to extend it to all LoRA models in general, not only 8-bit base models + LoRA.
<details><summary>Repro script</summary>
```python
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import TrainingArguments
dataset = load_dataset("imdb", split="train")
peft_config = LoraConfig(
r=16,
lora_alpha=32,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
)
args = TrainingArguments(
max_steps=1,
save_steps=1,
eval_steps=1,
evaluation_strategy="steps",
per_device_train_batch_size=1,
resume_from_checkpoint=True,
output_dir="test_trainer",
load_best_model_at_end=True,
)
trainer = SFTTrainer(
"EleutherAI/gpt-neo-125m",
train_dataset=dataset,
eval_dataset=dataset,
dataset_text_field="text",
peft_config=peft_config,
max_seq_length=128,
args=args,
)
trainer.train()
```
</details>
cc @sgugger @pacman100
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24103/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24103",
"html_url": "https://github.com/huggingface/transformers/pull/24103",
"diff_url": "https://github.com/huggingface/transformers/pull/24103.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24103.patch",
"merged_at": 1686231511000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24102
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24102/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24102/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24102/events
|
https://github.com/huggingface/transformers/issues/24102
| 1,747,262,370 |
I_kwDOCUB6oc5oJRui
| 24,102 |
Memory leak when using GIT for image captioning in inference
|
{
"login": "XingyuZhu-Pamela",
"id": 65881232,
"node_id": "MDQ6VXNlcjY1ODgxMjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/65881232?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XingyuZhu-Pamela",
"html_url": "https://github.com/XingyuZhu-Pamela",
"followers_url": "https://api.github.com/users/XingyuZhu-Pamela/followers",
"following_url": "https://api.github.com/users/XingyuZhu-Pamela/following{/other_user}",
"gists_url": "https://api.github.com/users/XingyuZhu-Pamela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XingyuZhu-Pamela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XingyuZhu-Pamela/subscriptions",
"organizations_url": "https://api.github.com/users/XingyuZhu-Pamela/orgs",
"repos_url": "https://api.github.com/users/XingyuZhu-Pamela/repos",
"events_url": "https://api.github.com/users/XingyuZhu-Pamela/events{/privacy}",
"received_events_url": "https://api.github.com/users/XingyuZhu-Pamela/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @XingyuZhu-Pamela, thanks for raising this issue!\r\n\r\nA few questions from my side so that I can best try and help: \r\n* In the above snippet, what values does `device` have (I'm assuming `\"cuda\"`)? \r\n* Are there any patterns to when the memory leak occurs e.g. certain images, running inside certain scripts, running after a long time? How often do you observe this - is it every time or more sporadic? \r\n* Would you be able to share an example image being used so that I can run the script? \r\n* Is there any chance that you're using `decord` as part of the pipeline? There was a known issue where [decord would cause crashes when moving the models to GPU](https://github.com/huggingface/transformers/issues/21085).\r\n\r\nAs a side note - soon python 3.7 will no longer be officially support by the transformer library. I'd suggest upgrading the python version to make sure your code remains compatible with the library. ",
"> \r\n\r\nThank you! Here are my answers for your questions:\r\n\r\n- The value of device is cuda, but when I try to use cpu, the memory leak problem still exists.\r\n- This problem happened no matter what image I use, and the memory increase is small for each call and continues to increase after multiple calls.\r\n- Here is a example url of image : http://p6.music.126.net/obj/w5zDmcODw6PDjj7DiMOi/4497423202/f55d/1ece/b350/7d32fea2646ce85fba6756f84761a702.jpg\r\nI'm sorry that there is a slight problem with the code provided above, which has been fixed as follows:\r\n**import Image\r\nfrom transformers import AutoProcessor, AutoModelForCausalLM\r\ndevice = \"cuda\"\r\nprocessor = AutoProcessor.from_pretrained(\"microsoft/git-large-textcaps\"))\r\nmodel = AutoModelForCausalLM.from_pretrained(\"microsoft/git-large-textcaps\")).to(device)\r\ndef test(image):\r\n image_cv2 = Image.open(requests.get(pic_url, stream=True).raw).convert('RGB')\r\n with torch.no_grad():\r\n pixel_values_org = processor(images=image_cv2, return_tensors=\"pt\").pixel_values\r\n pixel_values1 = pixel_values_org.to(device).detach()\r\n generated_ids = model.generate(pixel_values=pixel_values1, max_length=20)\r\n generated_ids1 = generated_ids.to(device).detach()\r\n generated_caption = processor.batch_decode(generated_ids1, skip_special_tokens=True)[0]\r\n return generated_caption**\r\n\r\n- I don't use decord in my code\r\n",
"Along with @amyeroberts 's comments, I would suggest:\r\n- First, please format the code snippet in a better way\r\n - you can enclose the code inside like done in the following screenshop\r\n \r\n <img width=\"500\" alt=\"Screenshot 2023-06-27 111112\" src=\"https://github.com/huggingface/transformers/assets/2521628/baadbd25-f2b0-4d80-9d7a-68fa413de429\">\r\n\r\n\r\n - with proper indent (currently it's difficult to read the code snippet)\r\n- provide a for loop that run the model and print some cpu/gpu memory usage \r\n\r\nThat would help us to provide help 🙏 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,690 | 1,690 |
NONE
| null |
### System Info
transformers version: 4.28.1
Platform: Linux-5.8.0+-x86_64-with-Ubuntu-20.04
Python version: 3.7.11
PyTorch version (GPU?): 1.10.0+cu113 (True)
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I run the following script a number of times, the memory usage rises and a memory leak happens.
```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

device = "cuda"
processor = AutoProcessor.from_pretrained("microsoft/git-large-textcaps")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-large-textcaps").to(device)

def test(pic_url):
    image_cv2 = Image.open(requests.get(pic_url, stream=True).raw).convert("RGB")
    with torch.no_grad():
        pixel_values_org = processor(images=image_cv2, return_tensors="pt").pixel_values
        pixel_values1 = pixel_values_org.to(device).detach()
        generated_ids = model.generate(pixel_values=pixel_values1, max_length=20)
        generated_ids1 = generated_ids.to(device).detach()
        generated_caption = processor.batch_decode(generated_ids1, skip_special_tokens=True)[0]
    return generated_caption
```
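One way to observe the growth is to call the function in a loop and print the process RSS (a sketch - `psutil` and the placeholder URL are only for illustration, and `test` is the function defined above):
```python
import os

import psutil

proc = psutil.Process(os.getpid())
url = "https://example.com/image.jpg"  # placeholder - any image URL works

for i in range(100):
    test(url)  # the captioning function from the snippet above
    print(i, f"{proc.memory_info().rss / 1024 ** 2:.1f} MiB RSS")
```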
### Expected behavior
The memory usage should stay nearly stable.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24102/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24101
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24101/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24101/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24101/events
|
https://github.com/huggingface/transformers/pull/24101
| 1,747,182,354 |
PR_kwDOCUB6oc5Se3jJ
| 24,101 |
Change ProgressCallback to use dynamic_ncols=True
|
{
"login": "gmlwns2000",
"id": 4879345,
"node_id": "MDQ6VXNlcjQ4NzkzNDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4879345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gmlwns2000",
"html_url": "https://github.com/gmlwns2000",
"followers_url": "https://api.github.com/users/gmlwns2000/followers",
"following_url": "https://api.github.com/users/gmlwns2000/following{/other_user}",
"gists_url": "https://api.github.com/users/gmlwns2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gmlwns2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gmlwns2000/subscriptions",
"organizations_url": "https://api.github.com/users/gmlwns2000/orgs",
"repos_url": "https://api.github.com/users/gmlwns2000/repos",
"events_url": "https://api.github.com/users/gmlwns2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/gmlwns2000/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @sgugger,\r\n\r\nI merge from `upstream/main`, and then run `make style`!\r\n\r\nI hope this will be fine.",
"Mm the diff now shows 43 files. COuld you limit your changes to the file you are actually modifying?",
"@sgugger \r\n\r\nSorry for the late reply, and it looks fine now!",
"Thanks! Failures are unrelated and due to a down time on the Hub."
] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #24100
## Who can review?
This is about the trainer module, therefore I will tag,
@sgugger
Thank you
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24101/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24101",
"html_url": "https://github.com/huggingface/transformers/pull/24101",
"diff_url": "https://github.com/huggingface/transformers/pull/24101.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24101.patch",
"merged_at": 1686574609000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24100
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24100/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24100/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24100/events
|
https://github.com/huggingface/transformers/issues/24100
| 1,747,181,989 |
I_kwDOCUB6oc5oI-Gl
| 24,100 |
[Trainer] Why not use `tqdm`'s `dynamic_ncols=True` option?
|
{
"login": "gmlwns2000",
"id": 4879345,
"node_id": "MDQ6VXNlcjQ4NzkzNDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4879345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gmlwns2000",
"html_url": "https://github.com/gmlwns2000",
"followers_url": "https://api.github.com/users/gmlwns2000/followers",
"following_url": "https://api.github.com/users/gmlwns2000/following{/other_user}",
"gists_url": "https://api.github.com/users/gmlwns2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gmlwns2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gmlwns2000/subscriptions",
"organizations_url": "https://api.github.com/users/gmlwns2000/orgs",
"repos_url": "https://api.github.com/users/gmlwns2000/repos",
"events_url": "https://api.github.com/users/gmlwns2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/gmlwns2000/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,686 | 1,686 | 1,686 |
CONTRIBUTOR
| null |
### Feature request
# Problem
The tqdm progress bar gets ugly when the terminal width is shrunk!

The progress bar prints a new line on every update! It is very ugly...
# Solution
Simply add the `dynamic_ncols=True` option to `tqdm`. The `tqdm` call is located in `trainer_callbacks.ProgressCallback`.

You can check that the progress bar is now dynamically resized when the terminal size is updated.
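If you want to see the effect in isolation, here is a tiny standalone snippet (plain `tqdm`, not Trainer code) - resize the terminal while it runs:
```python
import time

from tqdm.auto import tqdm

# with dynamic_ncols=True the bar re-measures the terminal width on every refresh,
# so shrinking or widening the terminal no longer produces one line per update
for _ in tqdm(range(100), dynamic_ncols=True):
    time.sleep(0.05)
```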
### Motivation
When I attach to a `tmux` session from terminals with different widths, the `tqdm` output gets ugly.
### Your contribution
Please check the PR #24101
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24100/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24099
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24099/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24099/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24099/events
|
https://github.com/huggingface/transformers/issues/24099
| 1,747,094,942 |
I_kwDOCUB6oc5oIo2e
| 24,099 |
Generation error when the same token is in the "forced_eos_token_id" and "suppress_tokens" parameters
|
{
"login": "idisuu",
"id": 85441953,
"node_id": "MDQ6VXNlcjg1NDQxOTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/85441953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/idisuu",
"html_url": "https://github.com/idisuu",
"followers_url": "https://api.github.com/users/idisuu/followers",
"following_url": "https://api.github.com/users/idisuu/following{/other_user}",
"gists_url": "https://api.github.com/users/idisuu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/idisuu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idisuu/subscriptions",
"organizations_url": "https://api.github.com/users/idisuu/orgs",
"repos_url": "https://api.github.com/users/idisuu/repos",
"events_url": "https://api.github.com/users/idisuu/events{/privacy}",
"received_events_url": "https://api.github.com/users/idisuu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false |
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hey @idisuu 👋 \r\n\r\nThis is a tricky one that we haven't considered. I'm leaning towards not allowing it -- if you want to suppress the eos token until a certain point, you can also set `min_length`. Suppressing and forcing at the same time is ambiguous 😅 \r\n\r\nWe have very little argument verification atm. Over the next month, I'll be working on validation of `generate` arguments. I'll make sure this one goes in! (and keep the issue open until then)\r\n\r\nThank you for raising this issue!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,689 | null |
NONE
| null |
### System Info
- `transformers` version: 4.28.0.dev0
- Platform: Linux-5.4.0-139-generic-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Hugging Face's generation method was being used in the Open-Assistant library, which uses Hugging Face's model class.
https://github.com/LAION-AI/Open-Assistant/blob/0fcf3e08fe62295d4696e590005b0f33383342ea/model/model_training/utils/ppo_utils.py#L264-L267
2. The generate method throws an error: RuntimeError: probability tensor contains either `inf`, `nan` or element < 0

3. I found that the error occurs if the same token is in both "forced_eos_token_id" and "suppress_tokens".
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-70m-deduped"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Sampling-based generation settings.
gen_kwargs = {
    "max_new_tokens": 10,
    "top_k": 0,
    "top_p": 0.7,
    "do_sample": True,
    "temperature": 1.0,
}
# The EOS token is both forced at the final position and suppressed everywhere:
# this contradictory combination is what triggers the RuntimeError.
gen_kwargs["forced_eos_token_id"] = tokenizer.eos_token_id
gen_kwargs["suppress_tokens"] = [tokenizer.eos_token_id]
print(gen_kwargs)

question = """
Where is Gangnam?
"""
batch = tokenizer.encode(f"<|prompter|>{question}<|assistant|>", return_tensors="pt")
out = model.generate(
    input_ids=batch.to(model.device),
    **gen_kwargs,
)
```
### Expected behavior
I would like to know whether using the same token in "forced_eos_token_id" and "suppress_tokens" is a restricted usage, because it seems a bit contradictory to me.
If this is a general error, I think something like a warning should be raised.
Or it might be a mistake on my side.
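As one of the comments above suggests, a possible workaround (a minimal sketch, assuming the intent is only to delay the EOS token rather than to force and suppress it at the same time) is to drop `suppress_tokens` and rely on `min_new_tokens` instead:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-70m-deduped"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Require a few new tokens before EOS may be sampled, instead of suppressing
# the EOS token while simultaneously forcing it at the last position.
gen_kwargs = {
    "max_new_tokens": 10,
    "min_new_tokens": 5,
    "do_sample": True,
    "top_p": 0.7,
    "temperature": 1.0,
}

batch = tokenizer.encode("<|prompter|>\nWhere is Gangnam?\n<|assistant|>", return_tensors="pt")
out = model.generate(input_ids=batch.to(model.device), **gen_kwargs)
print(tokenizer.decode(out[0]))
```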
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24099/timeline
|
reopened
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24098
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24098/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24098/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24098/events
|
https://github.com/huggingface/transformers/issues/24098
| 1,747,086,097 |
I_kwDOCUB6oc5oImsR
| 24,098 |
Trainer throws error when using torchrun and small dataset
|
{
"login": "Joshuaclymer",
"id": 35242201,
"node_id": "MDQ6VXNlcjM1MjQyMjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/35242201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Joshuaclymer",
"html_url": "https://github.com/Joshuaclymer",
"followers_url": "https://api.github.com/users/Joshuaclymer/followers",
"following_url": "https://api.github.com/users/Joshuaclymer/following{/other_user}",
"gists_url": "https://api.github.com/users/Joshuaclymer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Joshuaclymer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Joshuaclymer/subscriptions",
"organizations_url": "https://api.github.com/users/Joshuaclymer/orgs",
"repos_url": "https://api.github.com/users/Joshuaclymer/repos",
"events_url": "https://api.github.com/users/Joshuaclymer/events{/privacy}",
"received_events_url": "https://api.github.com/users/Joshuaclymer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"> Dataset size should not affect whether the code throws an error.\r\n\r\nI disagree here. The dataset is too small so you will have an error anyway, but it could be clearer I agree. This might be linked to FSDP so tagging @pacman100 ",
"For anyone who happens to be fine-tuning on tiny datasets: you can fix this issue by reducing `gradient_accumulation_steps`. \r\n\r\nI think num_examples has to be greater than `gradient_accumulation_steps*num_GPUs*per_device_train_batch_size`\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,686 | 1,689 | 1,689 |
NONE
| null |
### System Info
- `transformers` version: 4.28.0.dev0
- Platform: Linux-5.4.0-149-generic-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.15.1
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes using torchrun
- Using distributed or parallel set-up in script?: Yes, using torchrun
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
## Command
```
torchrun --nproc_per_node=2 --master_port=8080 train/train_old_2.py \
    --model_name_or_path /workspace/models/llama7b \
    --data_path /workspace/instruct-generalize/data/test.json \
    --bf16 True \
    --output_dir /workspace/fine-tune-result \
    --num_train_epochs 3 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2000 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
    --tf32 True
```
## Stacktrace excerpt
Note: these are actually two stack traces printed over each other, because there are two worker processes.
File "/usr/local/lib/python3.10/dist-packages/torch/optim/optimizer.py", line 33, in _use_grad
ret = func(self, *args, **kwargs)
exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
File "/usr/local/lib/python3.10/dist-packages/torch/optim/adamw.py", line 171, in step
adamw(
RuntimeError: The size of tensor a (131078144) must match the size of tensor b (262156288) at non-singleton dimension 0
File "/usr/local/lib/python3.10/dist-packages/torch/optim/adamw.py", line 321, in adamw
func(
File "/usr/local/lib/python3.10/dist-packages/torch/optim/adamw.py", line 389, in _single_tensor_adamw
exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
RuntimeError: The size of tensor a (131078144) must match the size of tensor b (262156288) at non-singleton dimension
## Dataset that throws error:
```json
[{"instruction":"Do the addition.", "input": "Hi. What is zero plus zero?","output": "zero"},
{"instruction":"Do the addition.","input": "Hi. What is zero plus one?","output": "one"},
{"instruction":"Do the addition.","input": "Hi. What is zero plus two?","output": "two"},
{"instruction":"Do the addition.","input": "Hi. What is zero plus three?","output": "three"},
{"instruction":"Do the addition.","input": "Hi. What is zero plus four?","output": "four"},
{"instruction":"Do the addition.","input": "Hi. What is zero plus five?","output": "five"},
{"instruction":"Do the addition.","input": "Hi. What is zero plus six?","output": "six"}]
```
## Dataset that doesn't throw error:
Just copy and paste the entries above about 10 times to increase the size of the dataset.
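For instance, the larger dataset can be generated programmatically rather than pasted by hand (a small sketch with hypothetical file paths):
```python
import json

# Hypothetical paths: replicate the seven tiny examples about 10x so the
# dataset comfortably exceeds the effective batch size discussed in the comments.
with open("data/test.json") as f:
    examples = json.load(f)

with open("data/test_large.json", "w") as f:
    json.dump(examples * 10, f, indent=2)
```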
## Training code
```python
import copy
import logging
from dataclasses import dataclass, field
from typing import Dict, Optional, Sequence
import torch
import transformers
import utils
from torch.utils.data import Dataset
from transformers import Trainer
IGNORE_INDEX = -100
DEFAULT_PAD_TOKEN = "[PAD]"
DEFAULT_EOS_TOKEN = "</s>"
DEFAULT_BOS_TOKEN = "<s>"
DEFAULT_UNK_TOKEN = "<unk>"
PROMPT_DICT = {
"prompt_input": (
"Below is an instruction that describes a task, paired with an input that provides further context. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
),
"prompt_no_input": (
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response:"
),
}
@dataclass
class ModelArguments:
model_name_or_path: Optional[str] = field(default="facebook/opt-125m")
@dataclass
class DataArguments:
data_path: str = field(default=None, metadata={"help": "Path to the training data."})
@dataclass
class TrainingArguments(transformers.TrainingArguments):
cache_dir: Optional[str] = field(default=None)
optim: str = field(default="adamw_torch")
model_max_length: int = field(
default=512,
metadata={"help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."},
)
def smart_tokenizer_and_embedding_resize(
special_tokens_dict: Dict,
tokenizer: transformers.PreTrainedTokenizer,
model: transformers.PreTrainedModel,
):
"""Resize tokenizer and embedding.
Note: This is the unoptimized version that may make your embedding size not be divisible by 64.
"""
num_new_tokens = tokenizer.add_special_tokens(special_tokens_dict)
model.resize_token_embeddings(len(tokenizer))
if num_new_tokens > 0:
input_embeddings = model.get_input_embeddings().weight.data
output_embeddings = model.get_output_embeddings().weight.data
input_embeddings_avg = input_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True)
output_embeddings_avg = output_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True)
input_embeddings[-num_new_tokens:] = input_embeddings_avg
output_embeddings[-num_new_tokens:] = output_embeddings_avg
def _tokenize_fn(strings: Sequence[str], tokenizer: transformers.PreTrainedTokenizer) -> Dict:
"""Tokenize a list of strings."""
tokenized_list = [
tokenizer(
text,
return_tensors="pt",
padding="longest",
max_length=tokenizer.model_max_length,
truncation=True,
)
for text in strings
]
input_ids = labels = [tokenized.input_ids[0] for tokenized in tokenized_list]
input_ids_lens = labels_lens = [
tokenized.input_ids.ne(tokenizer.pad_token_id).sum().item() for tokenized in tokenized_list
]
return dict(
input_ids=input_ids,
labels=labels,
input_ids_lens=input_ids_lens,
labels_lens=labels_lens,
)
def preprocess(
sources: Sequence[str],
targets: Sequence[str],
tokenizer: transformers.PreTrainedTokenizer,
) -> Dict:
"""Preprocess the data by tokenizing."""
examples = [s + t for s, t in zip(sources, targets)]
examples_tokenized, sources_tokenized = [_tokenize_fn(strings, tokenizer) for strings in (examples, sources)]
input_ids = examples_tokenized["input_ids"]
labels = copy.deepcopy(input_ids)
for label, source_len in zip(labels, sources_tokenized["input_ids_lens"]):
label[:source_len] = IGNORE_INDEX
return dict(input_ids=input_ids, labels=labels)
class SupervisedDataset(Dataset):
"""Dataset for supervised fine-tuning."""
def __init__(self, data_path: str, tokenizer: transformers.PreTrainedTokenizer):
super(SupervisedDataset, self).__init__()
logging.warning("Loading data...")
list_data_dict = utils.jload(data_path)
logging.warning("Formatting inputs...")
sources = [f"{example['input']}{tokenizer.eos_token}" for example in list_data_dict["examples"]]
targets = [f"{example['target']}{tokenizer.eos_token}" for example in list_data_dict["examples"]]
logging.warning("Tokenizing inputs... This may take some time...")
data_dict = preprocess(sources, targets, tokenizer)
self.input_ids = data_dict["input_ids"]
self.labels = data_dict["labels"]
def __len__(self):
return len(self.input_ids)
def __getitem__(self, i) -> Dict[str, torch.Tensor]:
return dict(input_ids=self.input_ids[i], labels=self.labels[i])
@dataclass
class DataCollatorForSupervisedDataset(object):
"""Collate examples for supervised fine-tuning."""
tokenizer: transformers.PreTrainedTokenizer
def __call__(self, instances: Sequence[Dict]) -> Dict[str, torch.Tensor]:
input_ids, labels = tuple([instance[key] for instance in instances] for key in ("input_ids", "labels"))
input_ids = torch.nn.utils.rnn.pad_sequence(
input_ids, batch_first=True, padding_value=self.tokenizer.pad_token_id
)
labels = torch.nn.utils.rnn.pad_sequence(labels, batch_first=True, padding_value=IGNORE_INDEX)
return dict(
input_ids=input_ids,
labels=labels,
attention_mask=input_ids.ne(self.tokenizer.pad_token_id),
)
def make_supervised_data_module(tokenizer: transformers.PreTrainedTokenizer, task_name) -> Dict:
"""Make dataset and collator for supervised fine-tuning."""
train_dataset = SupervisedDataset(f"/workspace/instruct-generalize/data/{task_name}.json", tokenizer)
data_collator = DataCollatorForSupervisedDataset(tokenizer=tokenizer)
return dict(train_dataset=train_dataset, eval_dataset=None, data_collator=data_collator)
@dataclass
class DataCollatorBCAllTargets(object):
tokenizer: transformers.PreTrainedTokenizer
def __call__(self, instances: Sequence[Dict]):
# Get inputs and labels as strings
input_strings = []
label_strings = []
for item in instances:
if "target" not in item:
raise Exception(f"The bc_all_targets training strategy requires every example to have a 'target' key. But the following example does not:\n{item}")
input_strings.append(item["input"])
if isinstance(item["target"], list):
for target in item["target"]:
label_strings.append(target)
else:
label_strings.append(item["target"])
# Combine them and tokenize to produce tokenized inputs
combined = [input_strings[i] + label_strings[i] for i in range(len(input_strings))]
encoded_combined = self.tokenizer.batch_encode_plus(combined, padding="longest", return_tensors="pt")
# append eos tokens
ones = torch.ones((encoded_combined["input_ids"].shape[0], 1), dtype=int)
encoded_combined["input_ids"] = torch.cat((encoded_combined["input_ids"], self.tokenizer.eos_token_id*ones), dim=1)
encoded_combined["attention_mask"] = torch.cat((encoded_combined["attention_mask"], ones), dim=1)
encoded_labels = self.tokenizer.batch_encode_plus(label_strings, padding="longest", return_tensors="pt")
label_lengths = [sum(mask) + 1 for mask in encoded_labels["attention_mask"]]
labels = copy.deepcopy(encoded_combined)["input_ids"]
for label, label_len in zip(labels, label_lengths):
label[:-label_len] = IGNORE_INDEX
print(encoded_combined["input_ids"].shape)
print(labels.shape)
print(encoded_combined["attention_mask"].shape)
return dict(
input_ids = encoded_combined["input_ids"],
labels=labels,
attention_mask=encoded_combined["attention_mask"]
)
from classes import Model, Task
def train():
parser = transformers.HfArgumentParser((ModelArguments, DataArguments, TrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
training_args.remove_unused_columns = False
yoyo = Model(model_args.model_name_or_path)
model = yoyo.model
tokenizer = yoyo.tokenizer
special_tokens_dict = dict()
if tokenizer.pad_token is None:
special_tokens_dict["pad_token"] = DEFAULT_PAD_TOKEN
if tokenizer.eos_token is None:
special_tokens_dict["eos_token"] = DEFAULT_EOS_TOKEN
if tokenizer.bos_token is None:
special_tokens_dict["bos_token"] = DEFAULT_BOS_TOKEN
if tokenizer.unk_token is None:
special_tokens_dict["unk_token"] = DEFAULT_UNK_TOKEN
#tokenizer.add_special_tokens(special_tokens_dict)
smart_tokenizer_and_embedding_resize(
special_tokens_dict=special_tokens_dict,
tokenizer=tokenizer,
model=model,
)
data_module = make_supervised_data_module(tokenizer=tokenizer, task_name=data_args.data_path)
trainer = Trainer(model=model, tokenizer=tokenizer, args=training_args, **data_module)
trainer.train()
trainer.save_state()
trainer.save_model(output_dir=training_args.output_dir)
if __name__ == "__main__":
train()
```
### Expected behavior
Dataset size should not affect whether the code throws an error.
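For anyone hitting the same error, a quick pre-flight check based on the rough condition reported in the comments above (an approximation, not an official Trainer guarantee) could look like this sketch:
```python
# Hypothetical helper: checks the reported condition
# num_examples > gradient_accumulation_steps * num_gpus * per_device_train_batch_size.
def check_effective_batch(num_examples: int,
                          gradient_accumulation_steps: int,
                          num_gpus: int,
                          per_device_train_batch_size: int) -> bool:
    effective = gradient_accumulation_steps * num_gpus * per_device_train_batch_size
    if num_examples <= effective:
        print(f"Dataset too small: {num_examples} examples <= effective batch {effective}; "
              f"reduce gradient_accumulation_steps or the batch size.")
        return False
    return True

# With the command above: 7 examples vs. 8 * 2 * 1 = 16 -> fails the check.
check_effective_batch(num_examples=7, gradient_accumulation_steps=8,
                      num_gpus=2, per_device_train_batch_size=1)
```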
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24098/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/24097
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24097/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24097/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24097/events
|
https://github.com/huggingface/transformers/pull/24097
| 1,747,080,815 |
PR_kwDOCUB6oc5SehZc
| 24,097 |
add trust_remote_code option to CLI download cmd
|
{
"login": "radames",
"id": 102277,
"node_id": "MDQ6VXNlcjEwMjI3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/102277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/radames",
"html_url": "https://github.com/radames",
"followers_url": "https://api.github.com/users/radames/followers",
"following_url": "https://api.github.com/users/radames/following{/other_user}",
"gists_url": "https://api.github.com/users/radames/gists{/gist_id}",
"starred_url": "https://api.github.com/users/radames/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/radames/subscriptions",
"organizations_url": "https://api.github.com/users/radames/orgs",
"repos_url": "https://api.github.com/users/radames/repos",
"events_url": "https://api.github.com/users/radames/events{/privacy}",
"received_events_url": "https://api.github.com/users/radames/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger add the decorator, waiting to se if the tests here are passing. on my local env all the 4 test passes successfully 🤔 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24097). All of your documentation changes will be reflected on that endpoint."
] | 1,686 | 1,686 | 1,686 |
MEMBER
| null |
# What does this PR do?
Add an option to allow trusting remote code when downloading via the CLI command.
Address #24063
Fixes # (issue)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). #24063
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. #24063
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
Couldn't find much documentation about `transformers-cli download`.
- [X] Did you write any new necessary tests?
Added two tests: one for a simple download pointing to `tmp` that checks whether the `blobs`, `snapshots`, and `refs` folders are present, and one for `--trust-remote-code`.
Duplicated the testing model https://huggingface.co/hf-internal-testing/test_dynamic_model_with_tokenizer, adding a tokenizer so it works as expected.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
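For reference, the intended usage after this change is presumably something like the following (a sketch based on the flag name mentioned above; the exact CLI surface is an assumption, not verified against the merged code):
```python
import subprocess

# Hypothetical invocation: download a repo that ships custom code, explicitly
# opting in to trusting that remote code via the new CLI flag.
subprocess.run(
    [
        "transformers-cli", "download", "--trust-remote-code",
        "hf-internal-testing/test_dynamic_model_with_tokenizer",
    ],
    check=True,
)
```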
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24097/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/24097",
"html_url": "https://github.com/huggingface/transformers/pull/24097",
"diff_url": "https://github.com/huggingface/transformers/pull/24097.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/24097.patch",
"merged_at": 1686237238000
}
|
https://api.github.com/repos/huggingface/transformers/issues/24096
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/24096/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/24096/comments
|
https://api.github.com/repos/huggingface/transformers/issues/24096/events
|
https://github.com/huggingface/transformers/issues/24096
| 1,746,834,831 |
I_kwDOCUB6oc5oHpWP
| 24,096 |
Exception when saving weights from QLORA due to UnboundLocalError
|
{
"login": "ethanhs",
"id": 9504279,
"node_id": "MDQ6VXNlcjk1MDQyNzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9504279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ethanhs",
"html_url": "https://github.com/ethanhs",
"followers_url": "https://api.github.com/users/ethanhs/followers",
"following_url": "https://api.github.com/users/ethanhs/following{/other_user}",
"gists_url": "https://api.github.com/users/ethanhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ethanhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ethanhs/subscriptions",
"organizations_url": "https://api.github.com/users/ethanhs/orgs",
"repos_url": "https://api.github.com/users/ethanhs/repos",
"events_url": "https://api.github.com/users/ethanhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/ethanhs/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada ",
"Hi @ethanhs \r\nThanks for reporting #24103 should solve the issue",
"Thanks for the quick response and quick fix!"
] | 1,686 | 1,686 | 1,686 |
NONE
| null |
### System Info
- `transformers` version: 4.30.0.dev0
- Platform: Linux-6.1.0-9-amd64-x86_64-with-glibc2.36
- Python version: 3.10.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger you reviewed the PR so it looks like your eyes were on this most recently. Relevant commit: https://github.com/huggingface/transformers/commit/357f281ba24af8d49afd84c7628329d99868f411
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run the following notebook: https://colab.research.google.com/github/utensil/llm-playground/blob/main/notebooks/axolotl/colab/axolotl_falcon_1b_qlora_gsm8k.ipynb.
I get the following error:
```
Traceback (most recent call last):
File "/home/e/et/ethanhs/axolotl/axolotl/scripts/finetune.py", line 295, in <module>
fire.Fire(train)
File "/home/e/et/ethanhs/miniconda3/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/e/et/ethanhs/miniconda3/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/home/e/et/ethanhs/miniconda3/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/home/e/et/ethanhs/axolotl/axolotl/scripts/finetune.py", line 282, in train
trainer.train(resume_from_checkpoint=resume_from_checkpoint)
File "/home/e/et/ethanhs/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 1661, in train
return inner_training_loop(
File "/home/e/et/ethanhs/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 2070, in _inner_training_loop
self._load_best_model()
File "/home/e/et/ethanhs/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 2258, in _load_best_model
self._issue_warnings_after_load(load_result)
UnboundLocalError: local variable 'load_result' referenced before assignment
```
### Expected behavior
I expect transformers not to raise an exception.
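To illustrate the failure mode (a minimal sketch of the error pattern only, not the actual Trainer code or the fix in #24103):
```python
# Sketch: `load_result` is only assigned on the branch that loads full model
# weights, but is referenced unconditionally afterwards.
def load_best_model_sketch(has_full_state_dict: bool) -> None:
    if has_full_state_dict:
        load_result = "weights loaded"  # only assigned on this branch
    # With adapter-only checkpoints (e.g. QLoRA/PEFT), the branch above is
    # skipped, so this reference raises UnboundLocalError.
    print(load_result)

try:
    load_best_model_sketch(has_full_state_dict=False)
except UnboundLocalError as err:
    print(f"Reproduced the error pattern: {err}")
```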
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/24096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/24096/timeline
|
completed
| null | null |