Dataset schema (column name, dtype, and observed range or number of classes):

| Column | Dtype | Range / classes |
|---|---|---|
| url | string | lengths 62-66 |
| repository_url | string | 1 class |
| labels_url | string | lengths 76-80 |
| comments_url | string | lengths 71-75 |
| events_url | string | lengths 69-73 |
| html_url | string | lengths 50-56 |
| id | int64 | 377M-2.15B |
| node_id | string | lengths 18-32 |
| number | int64 | 1-29.2k |
| title | string | lengths 1-487 |
| user | dict | |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k (nullable) |
| author_association | string | 4 classes |
| active_lock_reason | string | 2 classes |
| body | string | lengths 0-234k (nullable) |
| reactions | dict | |
| timeline_url | string | lengths 71-75 |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | |

Example rows follow, with fields in the column order above, separated by `|`.
https://api.github.com/repos/huggingface/transformers/issues/7121 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7121/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7121/comments | https://api.github.com/repos/huggingface/transformers/issues/7121/events | https://github.com/huggingface/transformers/pull/7121 | 701,197,363 | MDExOlB1bGxSZXF1ZXN0NDg2Njk1NDcx | 7,121 | [blk] backtranslation script | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,600 | 1,601 | 1,601 | CONTRIBUTOR | null |
Fixes #{issue number}
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7121/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7121",
"html_url": "https://github.com/huggingface/transformers/pull/7121",
"diff_url": "https://github.com/huggingface/transformers/pull/7121.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7121.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7120 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7120/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7120/comments | https://api.github.com/repos/huggingface/transformers/issues/7120/events | https://github.com/huggingface/transformers/issues/7120 | 701,171,438 | MDU6SXNzdWU3MDExNzE0Mzg= | 7,120 | modeling_xlnet.py:283: UserWarning: Mixed memory format inputs detected while calling the operator. | {
"login": "fhamborg",
"id": 18700166,
"node_id": "MDQ6VXNlcjE4NzAwMTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/18700166?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fhamborg",
"html_url": "https://github.com/fhamborg",
"followers_url": "https://api.github.com/users/fhamborg/followers",
"following_url": "https://api.github.com/users/fhamborg/following{/other_user}",
"gists_url": "https://api.github.com/users/fhamborg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fhamborg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fhamborg/subscriptions",
"organizations_url": "https://api.github.com/users/fhamborg/orgs",
"repos_url": "https://api.github.com/users/fhamborg/repos",
"events_url": "https://api.github.com/users/fhamborg/events{/privacy}",
"received_events_url": "https://api.github.com/users/fhamborg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey! I haven't been able to reproduce this with what you've given, maybe it is linked to your 'token_type_ids' ? I don't really have enough info on what you're trying to do with only this line. In any case:\r\n\r\n - The warning text seems innocuous.\r\n - I don't see this line in the current (master) codebase, so I'd suggest upgrading and seeing if this issue still crops up.",
"@TevenLeScao FYI: I have the same warning message, on Colab,using Xlnet-base-cased (custom script , custom dataset)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,600 | 1,608 | 1,608 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@TevenLeScao
## Information
Model I am using (Bert, XLNet ...): xlnet-base-uncased
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Simply invoke XLNet using the following
```
last_hidden_state, mems = lm(
input_ids=input_ids,
token_type_ids=token_type_ids,
output_hidden_states=True,
)
```
I receive a warning as follows:
```
...\anaconda3\envs\newstsc2\lib\site-packages\transformers\modeling_xlnet.py:283: UserWarning: Mixed memory format inputs detected while calling the operator. The operator will output contiguous tensor even if some of the inputs are in channels_last format. (Triggered internally at ..\aten\src\ATen\native\TensorIterator.cpp:918.)
attn_score = (ac + bd + ef) * self.scale
```
## Expected behavior
no warning | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7120/timeline | completed | null | null |
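The maintainer's reply above calls the warning text innocuous. For a reader who still wants a quiet log while staying on the affected versions, a standard Python warnings filter works; the sketch below is an illustration (not code from the thread), and the message regex is an assumption based on the warning quoted above.

```python
# A minimal sketch (not from the thread) for silencing this specific warning;
# the message regex is an assumption based on the warning text quoted above.
import warnings

import torch
from transformers import XLNetModel, XLNetTokenizer

warnings.filterwarnings(
    "ignore",
    message=r".*Mixed memory format inputs detected.*",
    category=UserWarning,
)

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetModel.from_pretrained("xlnet-base-cased")

inputs = tokenizer("A short test sentence.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)  # the mixed-memory-format warning is now filtered
```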
https://api.github.com/repos/huggingface/transformers/issues/7119 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7119/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7119/comments | https://api.github.com/repos/huggingface/transformers/issues/7119/events | https://github.com/huggingface/transformers/pull/7119 | 701,142,340 | MDExOlB1bGxSZXF1ZXN0NDg2NjQ5NTk0 | 7,119 | Fix reproducible tests in Trainer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7119?src=pr&el=h1) Report\n> Merging [#7119](https://codecov.io/gh/huggingface/transformers/pull/7119?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5636cbb25d248b61bb9027d026dddcd6d1599b0b?el=desc) will **decrease** coverage by `1.26%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7119?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7119 +/- ##\n==========================================\n- Coverage 79.48% 78.22% -1.27% \n==========================================\n Files 168 168 \n Lines 32281 32281 \n==========================================\n- Hits 25660 25251 -409 \n- Misses 6621 7030 +409 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7119?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.05% <0.00%> (-63.52%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.24% <0.00%> (-55.76%)` | :arrow_down: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `34.28% <0.00%> (-48.00%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `57.28% <0.00%> (-15.08%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.10% <0.00%> (-12.67%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `74.18% <0.00%> (-12.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.33% <0.00%> (-11.97%)` | :arrow_down: |\n| ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7119?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7119?src=pr&el=footer). Last update [5636cbb...5d7c2ae](https://codecov.io/gh/huggingface/transformers/pull/7119?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"with both gpus seen:\r\n\r\n```\r\npytest tests/test_trainer.py\r\nTest session starts (platform: linux, Python 3.8.5, pytest 6.0.1, pytest-sugar 0.9.4)\r\nUsing --randomly-seed=3075405078\r\nrootdir: /mnt/nvme1/code/huggingface/transformers-master\r\nplugins: xdist-2.1.0, forked-1.3.0, hydra-core-1.0.0, pspec-0.0.4, sugar-0.9.4, randomly-3.4.1, cov-2.10.1, flakefinder-1.0.0\r\ncollecting ... \r\n tests/test_trainer.py βββββ 45% βββββ \r\n\r\nβββββββββββββββββββββββββββββββββββββββββββββββ TrainerIntegrationTest.test_train_and_eval_dataloaders βββββββββββββββββββββββββββββββββββββββββββββββ\r\n\r\nself = <tests.test_trainer.TrainerIntegrationTest testMethod=test_train_and_eval_dataloaders>\r\n\r\n def test_train_and_eval_dataloaders(self):\r\n trainer = get_regression_trainer(learning_rate=0.1, per_device_train_batch_size=16)\r\n> self.assertEqual(trainer.get_train_dataloader().batch_size, 16)\r\nE AssertionError: 32 != 16\r\n\r\ntests/test_trainer.py:144: AssertionError\r\n```\r\n\r\nwith zero or 1 gpu it passes.\r\n",
"Could you retest now?",
"Success!",
"Thanks for checking @stas00 !"
] | 1,600 | 1,600 | 1,600 | COLLABORATOR | null | This fixes the tests that were using hardcoded values for reproducible training: they now check that training yields the same result as a run with the same seed performed during setup. As a result, they can also run in a multi-GPU environment (which is why I removed the decorator).
I also fixed the batch size used to count the number of steps, which should also fix those tests in multi-GPU environments. @stas00, when you get a chance to run the tests on that file, it would be great to know if the fix worked as intended.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7119/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7119",
"html_url": "https://github.com/huggingface/transformers/pull/7119",
"diff_url": "https://github.com/huggingface/transformers/pull/7119.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7119.patch",
"merged_at": 1600155164000
} |
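To make the pattern in the PR body concrete, here is an illustrative, self-contained sketch with a toy model rather than the actual diff; `train_tiny_model` is a hypothetical stand-in for the regression trainer, and it assumes `transformers.set_seed` (which seeds Python, NumPy and torch in one call).

```python
# Illustrative sketch of the testing pattern described above: compute a
# reference result with a fixed seed during setup, then assert that a fresh
# seeded run reproduces it, instead of asserting hardcoded values.
import torch
from transformers import set_seed


def train_tiny_model(seed: int = 42) -> torch.Tensor:
    set_seed(seed)  # seeds python, numpy and torch
    model = torch.nn.Linear(4, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    data, target = torch.randn(64, 4), torch.randn(64, 1)
    for _ in range(10):
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(data), target)
        loss.backward()
        optimizer.step()
    return model.weight.detach().clone()


reference = train_tiny_model(seed=42)  # would live in the test's setUp
assert torch.allclose(train_tiny_model(seed=42), reference)
```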
https://api.github.com/repos/huggingface/transformers/issues/7118 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7118/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7118/comments | https://api.github.com/repos/huggingface/transformers/issues/7118/events | https://github.com/huggingface/transformers/pull/7118 | 701,018,050 | MDExOlB1bGxSZXF1ZXN0NDg2NTQ1MjQx | 7,118 | Temporarily skip failing tests due to dependency change | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@julien-c wrote that test\r\nhttps://github.com/huggingface/transformers/pull/3800/files#diff-d524f2b47fd9e3af8aea82b9fac55079R88-R90\r\nso it's probably a good idea to ping him instead, as I'm unfamiliar, yet, with that part of `transformers`."
] | 1,600 | 1,600 | 1,600 | MEMBER | null | cc @sgugger @stas00 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7118/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7118",
"html_url": "https://github.com/huggingface/transformers/pull/7118",
"diff_url": "https://github.com/huggingface/transformers/pull/7118.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7118.patch",
"merged_at": 1600083733000
} |
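As a generic illustration of what "temporarily skipping" a failing test looks like in a unittest-based suite (the PR's actual diff is not shown in this row; the class and test names below are placeholders):

```python
# Placeholder names; the reason string documents why the test is disabled.
import unittest


class ModelCardTests(unittest.TestCase):
    @unittest.skip("Temporarily disabled: broken by an upstream dependency change")
    def test_model_card_from_pretrained(self):
        ...
```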
https://api.github.com/repos/huggingface/transformers/issues/7117 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7117/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7117/comments | https://api.github.com/repos/huggingface/transformers/issues/7117/events | https://github.com/huggingface/transformers/issues/7117 | 700,983,454 | MDU6SXNzdWU3MDA5ODM0NTQ= | 7,117 | Feature request: State the goals and none goals of the library in the README | {
"login": "talolard",
"id": 5352830,
"node_id": "MDQ6VXNlcjUzNTI4MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5352830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/talolard",
"html_url": "https://github.com/talolard",
"followers_url": "https://api.github.com/users/talolard/followers",
"following_url": "https://api.github.com/users/talolard/following{/other_user}",
"gists_url": "https://api.github.com/users/talolard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/talolard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/talolard/subscriptions",
"organizations_url": "https://api.github.com/users/talolard/orgs",
"repos_url": "https://api.github.com/users/talolard/repos",
"events_url": "https://api.github.com/users/talolard/events{/privacy}",
"received_events_url": "https://api.github.com/users/talolard/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"That's a good idea! The closest document we have to this right now is our [philosophy](https://huggingface.co/transformers/philosophy.html), have you checked it out?",
"No, I just saw it now. \r\nIt in fact has exactly what I would have wanted to find. Particularly the sentence\r\n> As a consequence, this library is NOT a modular toolbox of building blocks for neural nets. If you want to extend/build-upon the library, just use regular Python/PyTorch/TensorFlow/Keras modules and inherit from the base classes of the library to reuse functionalities like model loading/saving.\r\n\r\n",
"We could clean up the readme indeed. It's probably time to remove the migration sections about `pytorh-pretrained-bert` and `pytorch-transformers` and the quick-tour about the examples could be shortened.",
"Cleaning the README has been on my TODO for a long time, but I have been actively procrastinating. Will do that this week.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,600 | 1,606 | 1,606 | NONE | null | # 🚀 Feature request
I'd like the library's README to say what its goals and non-goals are.
For example, as a (new) user I really like [TypeScript's non-goals](https://github.com/microsoft/TypeScript/wiki/TypeScript-Design-Goals) because they made it clear what to expect from TypeScript and thus how to best use it. I'd like the same for transformers.
## Motivation
As a new user I look at the README and think this library can and wants to do everything that I do.
But I think it has a tighter, more well-defined (implicit) scope, and that peripheral use cases are better handled elsewhere, such as aligning [offset annotations](https://github.com/huggingface/transformers/issues/7019#issuecomment-691965153) or having trainers [support other data structures](https://github.com/huggingface/transformers/issues/6860).
I wish that were stated (as clearly as possible) so that I could make the best use of transformers instead of forcing it to be something it doesn't want to be.
## Your contribution
I think it's up to the Hugging Face team to decide what transformers is and what it will be. After digging around for a few days, I would suggest something like:
> Transformers makes pre-trained language models available to everyone through clear, consistent and cross-platform APIs. Transformers' goals are to make pretrained models accessible and plug into end users' existing infrastructure, as well as to support common academic tasks.
> Transformers does not aim to
* Implement end to end applications of NLP
* Implement domain specific models (Custom entities in Swahili, joint classification and entity annotation)
* ....
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7117/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7116 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7116/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7116/comments | https://api.github.com/repos/huggingface/transformers/issues/7116/events | https://github.com/huggingface/transformers/pull/7116 | 700,958,861 | MDExOlB1bGxSZXF1ZXN0NDg2NDk1MzY1 | 7,116 | fix link to paper | {
"login": "btel",
"id": 41565,
"node_id": "MDQ6VXNlcjQxNTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/41565?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/btel",
"html_url": "https://github.com/btel",
"followers_url": "https://api.github.com/users/btel/followers",
"following_url": "https://api.github.com/users/btel/following{/other_user}",
"gists_url": "https://api.github.com/users/btel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/btel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/btel/subscriptions",
"organizations_url": "https://api.github.com/users/btel/orgs",
"repos_url": "https://api.github.com/users/btel/repos",
"events_url": "https://api.github.com/users/btel/events{/privacy}",
"received_events_url": "https://api.github.com/users/btel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,600 | 1,600 | 1,600 | CONTRIBUTOR | null | change to url of the cited paper (the previous one linked to a different paper) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7116/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7116",
"html_url": "https://github.com/huggingface/transformers/pull/7116",
"diff_url": "https://github.com/huggingface/transformers/pull/7116.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7116.patch",
"merged_at": 1600083820000
} |
https://api.github.com/repos/huggingface/transformers/issues/7115 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7115/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7115/comments | https://api.github.com/repos/huggingface/transformers/issues/7115/events | https://github.com/huggingface/transformers/issues/7115 | 700,944,470 | MDU6SXNzdWU3MDA5NDQ0NzA= | 7,115 | can i self-define the decoder when i user EncoderDecoderModel? | {
"login": "lonelydancer",
"id": 548443,
"node_id": "MDQ6VXNlcjU0ODQ0Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/548443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lonelydancer",
"html_url": "https://github.com/lonelydancer",
"followers_url": "https://api.github.com/users/lonelydancer/followers",
"following_url": "https://api.github.com/users/lonelydancer/following{/other_user}",
"gists_url": "https://api.github.com/users/lonelydancer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lonelydancer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lonelydancer/subscriptions",
"organizations_url": "https://api.github.com/users/lonelydancer/orgs",
"repos_url": "https://api.github.com/users/lonelydancer/repos",
"events_url": "https://api.github.com/users/lonelydancer/events{/privacy}",
"received_events_url": "https://api.github.com/users/lonelydancer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey @lonelydancer, \r\n\r\nyou can define which decoder you want to use. It can be either a pretrained model or a randomly initialized model. The model just has to be part of `AutoModelForCausalLM` to be loaded into the EncoderDecoderModel. If you give me some more details on your use-case I can probably help you :-) ",
"@patrickvonplaten \r\nHi,here is my code.It seems ok.\r\n\r\ntokenizer = BertTokenizer.from_pretrained(model_name)\r\ntokenizer.bos_token = tokenizer.cls_token\r\ntokenizer2 = BertTokenizer(vocab_file=voc_file, tokenize_chinese_chars=False) # Initialize tokenizer use my own vocab\r\ntokenizer2.bos_token = tokenizer2.cls_token\r\ndecoder_model = AutoModelForCausalLM.from_config(config=decoder_config)\r\nconfig = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)\r\n\r\n",
"Why would you need two tokenizers ? \r\nShould the model encode in English and decode to Chinese?",
"@patrickvonplaten inmy taskοΌencoder model tokenizer is char levelοΌ and the decoder model is a word sequence.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,600 | 1,607 | 1,607 | NONE | null | # ❓ Questions & Help
In my task, I have to define the decoder myself, because the vocabulary consists of Chinese words.
But it seems the class method does not support that; I would have to init the decoder from a pretrained model.
## Details
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7115/timeline | completed | null | null |
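Expanding on the exchange above, here is a minimal sketch of a pretrained encoder paired with a randomly initialized decoder sized to a custom (e.g. Chinese word-level) vocabulary; the checkpoint name and vocab size are placeholders, not part of the thread.

```python
# Sketch of the setup discussed in this thread: the decoder must be a causal
# LM with cross-attention so it can attend to the encoder.
from transformers import BertConfig, BertLMHeadModel, BertModel, EncoderDecoderModel

encoder = BertModel.from_pretrained("bert-base-chinese")

decoder_config = BertConfig(
    vocab_size=30000,  # placeholder: size of the custom word-level vocab
    is_decoder=True,
    add_cross_attention=True,
)
decoder = BertLMHeadModel(decoder_config)  # randomly initialized weights

model = EncoderDecoderModel(encoder=encoder, decoder=decoder)
```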
https://api.github.com/repos/huggingface/transformers/issues/7114 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7114/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7114/comments | https://api.github.com/repos/huggingface/transformers/issues/7114/events | https://github.com/huggingface/transformers/issues/7114 | 700,869,582 | MDU6SXNzdWU3MDA4Njk1ODI= | 7,114 | How to return the word embeddings and how to understand the hidden_states in return? | {
"login": "island99",
"id": 18048381,
"node_id": "MDQ6VXNlcjE4MDQ4Mzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/18048381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/island99",
"html_url": "https://github.com/island99",
"followers_url": "https://api.github.com/users/island99/followers",
"following_url": "https://api.github.com/users/island99/following{/other_user}",
"gists_url": "https://api.github.com/users/island99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/island99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/island99/subscriptions",
"organizations_url": "https://api.github.com/users/island99/orgs",
"repos_url": "https://api.github.com/users/island99/repos",
"events_url": "https://api.github.com/users/island99/events{/privacy}",
"received_events_url": "https://api.github.com/users/island99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Well if you want the embeddings of words with their context, your best bet is to take the entire output of the model. If you take only the word embeddings, these are contextless. \r\n\r\nYou should take the model with no head, i.e., the base model: `BertModel`, `GPT2Model`, etc. output to get this value.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,600 | 1,606 | 1,606 | NONE | null | # ❓ Questions & Help
## Details
I am new here; transformers is amazing. What I want is to get the embeddings of words with context, and "logits" is obviously a bit thin for that. I think hidden_states may hold the word embeddings, but I cannot tell apart the output of the embeddings and the output of each layer.
Which one should I choose?
Thanks very much!
> hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7114/timeline | completed | null | null |
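To make the advice above concrete, a minimal sketch: take the headless base model and request hidden_states, a tuple with the embedding output first and one tensor per layer after it (13 tensors in total for bert-base). The checkpoint and input are placeholders.

```python
# hidden_states[0] is the contextless embedding-layer output;
# hidden_states[-1] is the last layer's context-aware representation.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello world", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True, return_dict=True)

embedding_output = outputs.hidden_states[0]
contextual = outputs.hidden_states[-1]
print(embedding_output.shape, contextual.shape)  # both (1, seq_len, 768)
```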
https://api.github.com/repos/huggingface/transformers/issues/7113 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7113/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7113/comments | https://api.github.com/repos/huggingface/transformers/issues/7113/events | https://github.com/huggingface/transformers/issues/7113 | 700,808,780 | MDU6SXNzdWU3MDA4MDg3ODA= | 7,113 | Generate coherent text with T5 | {
"login": "parthplc",
"id": 35425925,
"node_id": "MDQ6VXNlcjM1NDI1OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/35425925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parthplc",
"html_url": "https://github.com/parthplc",
"followers_url": "https://api.github.com/users/parthplc/followers",
"following_url": "https://api.github.com/users/parthplc/following{/other_user}",
"gists_url": "https://api.github.com/users/parthplc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parthplc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parthplc/subscriptions",
"organizations_url": "https://api.github.com/users/parthplc/orgs",
"repos_url": "https://api.github.com/users/parthplc/repos",
"events_url": "https://api.github.com/users/parthplc/events{/privacy}",
"received_events_url": "https://api.github.com/users/parthplc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@parthplc Did you find an answer to this question?",
"@jsrozner yeah, I was able to finetune t5 for text generation.",
"> @jsrozner yeah, I was able to finetune t5 for text generation.\r\n\r\nIs your work publicly visible anywhere? (colab or a repository) I was looking for some examples to adapt to my use case. Although I got a helpful response here: https://discuss.huggingface.co/t/t5-for-conditional-generation-getting-started/1284/2",
"I used the same @jsrozner ",
"@parthplc @jsrozner can someone share a nb of using finetune.py + t5, would love to see some examples for reference. ",
"Check out \r\nhttps://discuss.huggingface.co/t/t5-seq2seq-custom-fine-tuning/1497/6 (details on tweaking your vocab if you need it)\r\n\r\nThe link above (https://discuss.huggingface.co/t/t5-for-conditional-generation-getting-started/1284) has links to other useful T5 posts on huggingface.\r\n\r\nAnd I am looking for answers to this post for some other tips and tricks: \r\nhttps://discuss.huggingface.co/t/t5-tips-for-finetuning-on-crossword-clues-clue-answer/1514\r\n\r\nTo actually run, you should do this (called \"installing from source):\r\n- clone transformers repository\r\n- create new conda or other env\r\n- (from top level directory), pip install -e .\r\n- cd examples && pip install -r requirements.txt\r\n- cd seq2seq\r\n- ./finetune_t5_bart_tiny.sh (or whatever the name is) -- will run two test epochs\r\n\r\nAfter that it's just a matter of choosing the model, tokenizer, setting up your data, and then tweaking params. Most of that is in the posts I linked!\r\n"
] | 1,600 | 1,602 | 1,600 | NONE | null | Can we use T5 to generate coherent text given an input line, as in the case of GPT-2? If yes, how do we fine-tune T5 for such a task?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7113/timeline | completed | null | null |
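A sketch (not code from this thread) of the fine-tuning setup the answers allude to: treat continuation as a seq2seq task where the input line maps to the next span of text. The "continue:" prefix, the checkpoint, and the training pair are placeholder assumptions, not a definitive recipe.

```python
# One training step for T5 as a conditional text generator; in practice, loop
# over a dataset of (line, continuation) pairs.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

source = tokenizer("continue: The storm rolled in", return_tensors="pt")
target = tokenizer("over the hills and the rain began to fall.", return_tensors="pt")

outputs = model(input_ids=source.input_ids, labels=target.input_ids, return_dict=True)
outputs.loss.backward()
optimizer.step()

# After fine-tuning, generate a continuation for a new input line:
generated = model.generate(source.input_ids, max_length=32, num_beams=4)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```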
https://api.github.com/repos/huggingface/transformers/issues/7112 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7112/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7112/comments | https://api.github.com/repos/huggingface/transformers/issues/7112/events | https://github.com/huggingface/transformers/issues/7112 | 700,731,019 | MDU6SXNzdWU3MDA3MzEwMTk= | 7,112 | prepare for the label for EncoderDecoderModel | {
"login": "lonelydancer",
"id": 548443,
"node_id": "MDQ6VXNlcjU0ODQ0Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/548443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lonelydancer",
"html_url": "https://github.com/lonelydancer",
"followers_url": "https://api.github.com/users/lonelydancer/followers",
"following_url": "https://api.github.com/users/lonelydancer/following{/other_user}",
"gists_url": "https://api.github.com/users/lonelydancer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lonelydancer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lonelydancer/subscriptions",
"organizations_url": "https://api.github.com/users/lonelydancer/orgs",
"repos_url": "https://api.github.com/users/lonelydancer/repos",
"events_url": "https://api.github.com/users/lonelydancer/events{/privacy}",
"received_events_url": "https://api.github.com/users/lonelydancer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"In BertLMHeadModel ,\r\nshift the label\r\n*prediction_scores = prediction_scores[:, :-1, :].contiguous()\r\n*labels = labels[:, 1:].contiguous()"
] | 1,600 | 1,600 | 1,600 | NONE | null | # ❓ Questions & Help
When using an EncoderDecoderModel for a translation task, do I have to right-shift the target tokens to generate the labels?
The [tutorial](https://github.com/huggingface/transformers/tree/master/model_cards/patrickvonplaten/bert2bert-cnn_dailymail-fp16) does not do that, and neither does the source code of the forward function.
## Details
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7112/timeline | completed | null | null |
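A small sketch tying the thread together: pass the target sequence unshifted as both decoder_input_ids and labels, and let the decoder's loss perform the one-token shift quoted in the answer above. The checkpoints and sentences are illustrative stand-ins.

```python
# bert2bert-style step: decoder_input_ids and labels are the same unshifted
# target tokens; the model shifts predictions vs. labels internally.
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

src = tokenizer("I am a small frog.", return_tensors="pt")
tgt = tokenizer("Ich bin ein kleiner Frosch.", return_tensors="pt")

outputs = model(
    input_ids=src.input_ids,
    decoder_input_ids=tgt.input_ids,  # unshifted target tokens
    labels=tgt.input_ids,             # same tokens; the model shifts internally
    return_dict=True,
)
print(outputs.loss)
```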
https://api.github.com/repos/huggingface/transformers/issues/7111 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7111/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7111/comments | https://api.github.com/repos/huggingface/transformers/issues/7111/events | https://github.com/huggingface/transformers/issues/7111 | 700,728,197 | MDU6SXNzdWU3MDA3MjgxOTc= | 7,111 | MBART/Marian for low resource/backtranslation | {
"login": "tuhinjubcse",
"id": 3104771,
"node_id": "MDQ6VXNlcjMxMDQ3NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3104771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuhinjubcse",
"html_url": "https://github.com/tuhinjubcse",
"followers_url": "https://api.github.com/users/tuhinjubcse/followers",
"following_url": "https://api.github.com/users/tuhinjubcse/following{/other_user}",
"gists_url": "https://api.github.com/users/tuhinjubcse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuhinjubcse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuhinjubcse/subscriptions",
"organizations_url": "https://api.github.com/users/tuhinjubcse/orgs",
"repos_url": "https://api.github.com/users/tuhinjubcse/repos",
"events_url": "https://api.github.com/users/tuhinjubcse/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuhinjubcse/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Nope, use Helsinki-NLP!",
"LMK if you have trouble finding your language.",
"**Swahili / Uzbek / Telugu / Gujrati** any of them will do. Also, I just need to do back-translation, do u have a code snippet to do translation using Helsinki-NLP",
"```python\r\nmname = 'Helsinki-NLP/opus-mt-en-sw'\r\nsrc_text = ['I am a small frog with tiny legs.']\r\ntorch_device = 'cpu'#'cuda' if torch.cuda.is_available() else 'cpu'\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(mname)\r\ntok = AutoTokenizer.from_pretrained(mname)\r\ntranslated = model.generate(**tok(src_text, return_tensors='pt'))\r\ntok.batch_decode(translated, skip_special_tokens=True)\r\n# ['Mimi ni chura mdogo mwenye miguu midogo.']\r\n```\r\nFull list of models [here](https://huggingface.co/Helsinki-NLP)\r\n\r\nlet's move to [here](https://discuss.huggingface.co/t/marian-language-discovery-questions/739/3) for further language discovery questions! Thanks!",
"Backtranslation snippet:\r\n```python\r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer\r\nmname_fwd = 'Helsinki-NLP/opus-mt-en-ceb'\r\nmname_bwd = 'Helsinki-NLP/opus-mt-ceb-en'\r\nsrc_text = ['I am a small frog with tiny legs.']\r\ntorch_device = 'cpu'#'cuda' if torch.cuda.is_available() else 'cpu'\r\nfwd = AutoModelForSeq2SeqLM.from_pretrained(mname_fwd).to(torch_device)\r\nfwd_tok = AutoTokenizer.from_pretrained(mname_fwd)\r\nbwd_tok = AutoTokenizer.from_pretrained(mname_bwd)\r\nbwd = AutoModelForSeq2SeqLM.from_pretrained(mname_bwd).to(torch_device)\r\nif torch_device == 'cuda':\r\n fwd = fwd.half()\r\n bwd = bwd.half()\r\n\r\nfwd_batch = fwd_tok(src_text, return_tensors='pt').to(torch_device)\r\ntranslated = fwd.generate(**fwd_batch)\r\ntranslated_txt = fwd_tok.batch_decode(translated, skip_special_tokens=True)\r\nbwd_batch = bwd_tok(translated_txt, return_tensors='pt').to(torch_device)\r\nbacktranslated = bwd.generate(**bwd_batch)\r\nbwd_tok.batch_decode(backtranslated, skip_special_tokens=True)\r\n# ['I am a small toad with small feet.']\r\n```",
"```\r\nimport ast\r\nimport torch\r\nimport os\r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"]=\"2\"\r\nmodel1 = 'Helsinki-NLP/opus-mt-en-ceb'\r\nmodel2 = 'Helsinki-NLP/opus-mt-ceb-en'\r\ntorch_device = 'cuda' if torch.cuda.is_available() else 'cpu'\r\nfwd = AutoModelForSeq2SeqLM.from_pretrained(model1).to(torch_device)\r\nfwd_tok = AutoTokenizer.from_pretrained(model1)\r\nbwd_tok = AutoTokenizer.from_pretrained(model2)\r\nbwd = AutoModelForSeq2SeqLM.from_pretrained(model2).to(torch_device)\r\nif torch_device == 'cuda':\r\n fwd = fwd.half()\r\n bwd = bwd.half()\r\n\r\n\r\nfor line in open('train_rep.txt'):\r\n line = ast.literal_eval(line.strip())\r\n fwd_batch = fwd_tok(line, return_tensors='pt').to(torch_device)\r\n translated = fwd.generate(**fwd_batch)\r\n translated_txt = fwd_tok.batch_decode(translated, skip_special_tokens=True)\r\n bwd_batch = bwd_tok(translated_txt, return_tensors='pt').to(torch_device)\r\n backtranslated = bwd.generate(**bwd_batch)\r\n orginal_text = bwd_tok.batch_decode(backtranslated, skip_special_tokens=True)\r\n print(orginal_text)\r\n break\r\n\r\n```\r\n\r\ngot\r\n\r\n\r\n\r\n ```\r\n File \"/nas/home/tuhinc/miniconda3/envs/advattack/lib/python3.6/site-packages/transformers/tokenization_utils_base.py\", line 555, in convert_to_tensors\r\n tensor = as_tensor(value)\r\nValueError: expected sequence of length 27 at dim 1 (got 47)\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"bt.py\", line 25, in <module>\r\n fwd_batch = fwd_tok(line, return_tensors='pt').to(torch_device)\r\n File \"/nas/home/tuhinc/miniconda3/envs/advattack/lib/python3.6/site-packages/transformers/tokenization_utils_base.py\", line 1954, in __call__\r\n **kwargs,\r\n File \"/nas/home/tuhinc/miniconda3/envs/advattack/lib/python3.6/site-packages/transformers/tokenization_utils_base.py\", line 2139, in batch_encode_plus\r\n **kwargs,\r\n File \"/nas/home/tuhinc/miniconda3/envs/advattack/lib/python3.6/site-packages/transformers/tokenization_utils.py\", line 548, in _batch_encode_plus\r\n verbose=verbose,\r\n File \"/nas/home/tuhinc/miniconda3/envs/advattack/lib/python3.6/site-packages/transformers/tokenization_utils.py\", line 614, in _batch_prepare_for_model\r\n batch_outputs = BatchEncoding(batch_outputs, tensor_type=return_tensors)\r\n File \"/nas/home/tuhinc/miniconda3/envs/advattack/lib/python3.6/site-packages/transformers/tokenization_utils_base.py\", line 186, in __init__\r\n self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)\r\n File \"/nas/home/tuhinc/miniconda3/envs/advattack/lib/python3.6/site-packages/transformers/tokenization_utils_base.py\", line 572, in convert_to_tensors\r\n \"Unable to create tensor, you should probably activate truncation and/or padding \"\r\nValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.\r\n\r\n\r\n```\r\n\r\n\r\nIt works for single sentences but fails for an array or sentences",
"Got it had to do padding=True",
"`padding='longest'` is the best setting IMO."
] | 1,600 | 1,600 | 1,600 | NONE | null | Are there any finetuned checkpoints of mBART on any low-resource language released by Hugging Face? I can see English-Romanian, but was wondering if there is a way to access other languages. Did Facebook actually release them?
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7111/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7110 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7110/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7110/comments | https://api.github.com/repos/huggingface/transformers/issues/7110/events | https://github.com/huggingface/transformers/pull/7110 | 700,723,859 | MDExOlB1bGxSZXF1ZXN0NDg2Mjk3OTgy | 7,110 | [s2s] distributed eval cleanup | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Failure is spurious, merging."
] | 1,600 | 1,600 | 1,600 | CONTRIBUTOR | null |
Fixes #{issue number}
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7110/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7110",
"html_url": "https://github.com/huggingface/transformers/pull/7110",
"diff_url": "https://github.com/huggingface/transformers/pull/7110.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7110.patch",
"merged_at": 1600054839000
} |
https://api.github.com/repos/huggingface/transformers/issues/7109 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7109/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7109/comments | https://api.github.com/repos/huggingface/transformers/issues/7109/events | https://github.com/huggingface/transformers/pull/7109 | 700,707,215 | MDExOlB1bGxSZXF1ZXN0NDg2Mjg1MDk0 | 7,109 | [s2s run_eval] new features | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"After signature discussion is complete, lets add some test coverage.\r\nCan copy/modify [`test_run_eval`](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/test_seq2seq_examples.py#L287)",
"> * Important that we can `utils.json_load` the dumped files.\r\n\r\nDo you see a reason why it should fail to do so? \r\n\r\nbut the point is moot if we merge into one dict as discussed above.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7109?src=pr&el=h1) Report\n> Merging [#7109](https://codecov.io/gh/huggingface/transformers/pull/7109?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/90cde2e938638e64a8696a12b79ee5f52364b162?el=desc) will **increase** coverage by `1.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7109?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7109 +/- ##\n==========================================\n+ Coverage 79.62% 80.63% +1.00% \n==========================================\n Files 168 168 \n Lines 32284 32284 \n==========================================\n+ Hits 25706 26031 +325 \n+ Misses 6578 6253 -325 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7109?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `20.53% <0.00%> (-21.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.10% <0.00%> (-12.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.52% <0.00%> (-3.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (-1.26%)` | :arrow_down: |\n| ... 
and [7 more](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7109?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7109?src=pr&el=footer). Last update [90cde2e...875eac8](https://codecov.io/gh/huggingface/transformers/pull/7109?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"(after we sort out these few points you pointed out)\r\n\r\nI have discovered more work/functionality is needed for the new script - need to be able to recover from CUDA OOM, want to tap into `--info`, etc. So probably, let's merge this first round, and I will continue working on improving it in the future PRs. The changes of this PR for run_eval are all good to go and can be used right away. unless you want to merge all the return values from `run_generate` into one dict, instead of returning a tuple of 2 separate ones. Let me know and I will attend to that first then.\r\n\r\nAnd, yes, all the tests will be added too - it's on my short todo list.",
"Yes one dict please!",
"OK, test added.\r\n\r\n`test_run_eval_search` is about 3 times slower than `test_run_eval` (it runs eval 4 times). Still OK to leave it as a normal test (no slow)?\r\n\r\n`test_run_eval`:\r\n```\r\nResults (15.45s):\r\n 3 passed\r\n\r\nreal 0m16.193s\r\nuser 0m13.079s\r\nsys 0m1.010s\r\n```\r\n`test_run_eval_search`\r\n```\r\nResults (51.18s):\r\n 3 passed\r\n\r\nreal 0m51.962s\r\nuser 0m42.695s\r\nsys 0m1.688s\r\n```\r\n\r\nand currently supporting:\r\n```\r\ntask_score_names = {\r\n \"translation\": [\"bleu\"],\r\n \"translation_en_to_de\": [\"bleu\"],\r\n \"summarization\": [\"rouge1\", \"rouge2\", \"rougeL\"],\r\n}\r\n```\r\n",
"How do I go about creating a `TINY_FSMT` model, @sshleifer?",
"this looks like a hack:\r\n```\r\ntry:\r\n from .utils import calculate_bleu, calculate_rouge, parse_numeric_n_bool_cl_kwargs, use_task_specific_params\r\nexcept ImportError:\r\n from utils import calculate_bleu, calculate_rouge, parse_numeric_n_bool_cl_kwargs, use_task_specific_params\r\n```\r\nwould it be better to use `dirname(__file__)` and insert that into `sys.path` so it works no matter where it gets invoked from?\r\n\r\nActually adding `examples/seq2seq/conftest.py` that will insert a resolved `examples/seq2seq` into `sys.path` is probably a better fix as it'll solve this issue for all tests under that folder. (see `examples/conftest.py`). and then copy that into each of the examples folders that will tweak their corresponding path. (`pytest` always runs any `conftest.py` it finds first)\r\n\r\nIf you're in agreement I will make another PR to fix those.",
"\r\n\r\nPlease mark your test `@slow` mine should probably be `@slow` too. Ideally we have one fast one.\r\n\r\n\r\n`conftest` great idea! If you change imports, its important to verify that scripts actually run afterwards (you can get the unittests passing without try/except).",
"Tiny model code (roughly):\r\n```python\r\ntok = FSMTTokenizer.from_pretrained('facebook/wmt19-en-de')\r\nconfig = FMSTConfig(decoder_layers=1, encoder_layers=1, vocab_size=vocab_size, d_model=4, encoder_ffn_dim=4, decoder_ffn_dim=4, encoder_attention_heads=1, decoder_attention_heads=1)\r\ntiny_model = FSMTModel(config)\r\nprint(tiny_model.num_parameters())\r\n# Test it\r\nsrc_text, tgt_text = ['I am a small frog.'], ['Ich bin ein kleiner frosch.']\r\nbatch = tok.prepare_seq2seq_batch(src_text, tgt_texts=tgt_text)\r\noutputs = tiny_model(**batch, return_dict=True)\r\nprint(outputs.loss)\r\ntiny_model.save_pretrained('tiny_fsmt_en_de')\r\n# transformers-cli upload tiny_fsmt_en_de stas\r\n```",
"> Please mark your test `@slow` mine should probably be `@slow` too. Ideally we have one fast one.\r\n\r\nProbably having just one of of the models is good enough for a fast test, since it checks that the script works. And then do all the others slow?\r\n\r\n> `conftest` great idea! If you change imports, its important to verify that scripts actually run afterwards (you can get the unittests passing without try/except).\r\n\r\nWill do.\r\n",
"π ",
"> Tiny model code (roughly):\r\n\r\nThank you!\r\n\r\ndon't we want to:\r\n1. include the tokenizer files in the s3 model? (vocabs and merges?)\r\n2. would it make sense to create a tiny vocab and use that for tokenizer? Otherwise with these small settings I still get 336552 params. ",
"1) yes\r\n2) tiny vocab would definitely be better, but is not required! tiny_mbart vocab size is around 300K also.",
"OK, here is the full script:\r\n```\r\nfrom transformers import FSMTTokenizer, FSMTConfig, FSMTForConditionalGeneration\r\nmname = \"facebook/wmt19-en-de\"\r\ntokenizer = FSMTTokenizer.from_pretrained(mname)\r\n# get the correct vocab sizes, etc. from the master model\r\nconfig = FSMTConfig.from_pretrained(mname)\r\nconfig.update(dict(\r\n d_model=4,\r\n encoder_layers=1, decoder_layers=1,\r\n encoder_ffn_dim=4, decoder_ffn_dim=4,\r\n encoder_attention_heads=1, decoder_attention_heads=1))\r\ntiny_model = FSMTForConditionalGeneration(config)\r\nprint(f\"num of params {tiny_model.num_parameters()}\")\r\n# Test it\r\nbatch = tokenizer.prepare_seq2seq_batch([\"Making tiny model\"])\r\noutputs = tiny_model(**batch, return_dict=True)\r\n# Save\r\ntiny_model.save_pretrained('tiny-wmt19-en-de')\r\ntokenizer.save_pretrained('tiny-wmt19-en-de')\r\n```\r\n\r\nI end up with 3.1MB of files:\r\n\r\n```\r\nl tiny-wmt19-en-de/\r\ntotal 3.1M\r\n-rw-rw-r-- 1 stas stas 2.0K Sep 17 11:30 config.json\r\n-rw-rw-r-- 1 stas stas 308K Sep 17 11:30 merges.txt\r\n-rw-rw-r-- 1 stas stas 1.4M Sep 17 11:30 pytorch_model.bin\r\n-rw-rw-r-- 1 stas stas 85 Sep 17 11:30 special_tokens_map.json\r\n-rw-rw-r-- 1 stas stas 111 Sep 17 11:30 tokenizer_config.json\r\n-rw-rw-r-- 1 stas stas 747K Sep 17 11:30 vocab-src.json\r\n-rw-rw-r-- 1 stas stas 747K Sep 17 11:30 vocab-tgt.json\r\n```\r\n\r\nIs this reasonable for non-slow tests? \r\n> tiny vocab would definitely be better, but is not required! tiny_mbart vocab size is around 300K also.\r\n\r\nOr should I make a custom small vocab? It'd shave off about half the size. Probably letter-long bpe codes, so that it could tokenize any input still.\r\n\r\nHere currently we have 1.5M in dict files.",
"I would try it and give up if it's hard.\r\nI've given up on trying to shrink sentencepiece vocabs."
] | 1,600 | 1,600 | 1,600 | CONTRIBUTOR | null | This PR adds several features to `run_eval.py` and adds a new script, `run_eval_search.py`:
* added 2 new args:
```
--dump-args print the custom hparams with the results
--info [INFO] use in conjunction w/ --dump-args to print with the results whatever other info you'd like,
e.g. lang=en-ru. If no value is passed, the current datetime string will be used.
```
* changed `parse_numeric_n_bool_cl_kwargs` to support bool args, so now we can pass `--early_stopping true` (a sketch of this kind of parsing follows this list)
* added a new wrapper script `run_eval_search.py` that performs parametric search using `run_eval.py`.
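To make the bool support concrete, here is a rough sketch of the kind of parsing involved (illustrative only; the real `parse_numeric_n_bool_cl_kwargs` in `examples/seq2seq/utils.py` may differ):

```python
from typing import Any, Dict, List

def parse_numeric_n_bool_kwargs(unparsed_args: List[str]) -> Dict[str, Any]:
    """Turn ['--num_beams', '8', '--early_stopping', 'true'] into
    {'num_beams': 8, 'early_stopping': True}."""
    assert len(unparsed_args) % 2 == 0, f"got an odd number of extra args: {unparsed_args}"
    result = {}
    for key, value in zip(unparsed_args[::2], unparsed_args[1::2]):
        assert key.startswith("--"), f"expected --key, got {key}"
        key = key[2:]
        if value.lower() in ("true", "false"):  # the new bool support
            result[key] = value.lower() == "true"
        elif value.isdigit():
            result[key] = int(value)
        else:
            result[key] = float(value)  # anything else must be numeric
    return result

print(parse_numeric_n_bool_kwargs(["--num_beams", "8", "--early_stopping", "true"]))
# {'num_beams': 8, 'early_stopping': True}
```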
Here is the new section in `README.md` that explains all the additions:
#### run_eval tips and tricks
When using `run_eval.py`, the following features can be useful:
* if you're running the script multiple times and want to make it easier to track what arguments produced a given output, use `--dump-args`. Along with the results, it will also dump any custom params that were passed to the script. For example, if you used `--num_beams 8 --early_stopping true`, the output will be:
```
{'bleu': 26.887, 'n_obs': 10, 'runtime': 1, 'seconds_per_sample': 0.1, 'num_beams': 8, 'early_stopping': True}
```
`--info` is an additional argument available for the same purpose of tracking the conditions of the experiment. It's useful for passing things that weren't in the argument list, e.g. a language pair: `--info "lang:en-ru"`. Also, if you pass `--info` without a value, it will fall back to the current date/time string, e.g. `2020-09-13 18:44:43`.
If using `--dump-args --info`, the output will be:
```
{'bleu': 26.887, 'n_obs': 10, 'runtime': 1, 'seconds_per_sample': 0.1, 'num_beams': 8, 'early_stopping': True, 'info': '2020-09-13 18:44:43'}
```
If using `--dump-args --info "pair:en-ru chkpt=best`, the output will be:
```
{'bleu': 26.887, 'n_obs': 10, 'runtime': 1, 'seconds_per_sample': 0.1, 'num_beams': 8, 'early_stopping': True, 'info': 'pair=en-ru chkpt=best'}
```
* if you need to perform a parametric search in order to find the hparam values that lead to the highest BLEU score, let `run_eval_search.py` do the searching for you.
The script accepts the exact same arguments as `run_eval.py`, plus an additional argument `--search`. The value of `--search` is parsed, reformatted and fed to `run_eval.py` as additional args.
The format for the `--search` value is a simple string with hparams and the values to try, e.g.:
```
--search "num_beams=5:10 length_penalty=0.8:1.0:1.2 early_stopping=true:false"
```
which will generate `12` `(2*3*2)` searches over the cartesian product of the hparam values. For example, the search string above will invoke `run_eval.py` repeatedly with:
```
--num_beams 5 --length_penalty 0.8 --early_stopping true
--num_beams 5 --length_penalty 0.8 --early_stopping false
[...]
--num_beams 10 --length_penalty 1.2 --early_stopping false
```
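Under the hood this expansion is just a cartesian product over the parsed hparam values; a minimal sketch (the actual implementation in `run_eval_search.py` may differ):

```python
from itertools import product

def expand_search(search: str):
    """Expand 'num_beams=5:10 early_stopping=true:false' into per-run arg lists."""
    hparams = {}
    for group in search.split():
        key, values = group.split("=")
        hparams[key] = values.split(":")
    keys = list(hparams)
    for combo in product(*hparams.values()):
        # each combo becomes one run_eval.py invocation
        yield [arg for key, val in zip(keys, combo) for arg in (f"--{key}", val)]

for args in expand_search("num_beams=5:10 early_stopping=true:false"):
    print(" ".join(args))
# --num_beams 5 --early_stopping true
# --num_beams 5 --early_stopping false
# --num_beams 10 --early_stopping true
# --num_beams 10 --early_stopping false
```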
On completion, the script prints a markdown table of the results sorted by BLEU score, along with the winning arguments.
```
bleu | num_beams | length_penalty | early_stopping
----- | --------- | -------------- | --------------
26.71 | 5 | 1.1 | 1
26.66 | 5 | 0.9 | 1
26.66 | 5 | 0.9 | 0
26.41 | 5 | 1.1 | 0
21.94 | 1 | 0.9 | 1
21.94 | 1 | 0.9 | 0
21.94 | 1 | 1.1 | 1
21.94 | 1 | 1.1 | 0
Best score args:
stas/wmt19-en-ru data/en-ru/val.source data/en-ru/test_translations.txt --reference_path data/en-ru/val.target --score_path data/en-ru/test_bleu.json --bs 8 --task translation --num_beams 5 --length_penalty 1.1 --early_stopping True
```
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7109/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7109/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7109",
"html_url": "https://github.com/huggingface/transformers/pull/7109",
"diff_url": "https://github.com/huggingface/transformers/pull/7109.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7109.patch",
"merged_at": 1600279198000
} |
https://api.github.com/repos/huggingface/transformers/issues/7108 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7108/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7108/comments | https://api.github.com/repos/huggingface/transformers/issues/7108/events | https://github.com/huggingface/transformers/pull/7108 | 700,686,552 | MDExOlB1bGxSZXF1ZXN0NDg2MjY5MjY4 | 7,108 | [QOL] add signature for prepare_seq2seq_batch | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Didn't know docstring inheritance worked like that. Very cool!",
"Docstring looks great to me, thanks for adding!",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7108?src=pr&el=h1) Report\n> Merging [#7108](https://codecov.io/gh/huggingface/transformers/pull/7108?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/206b78d4850d3c6fe85a015654293fc4b803ed7b?el=desc) will **increase** coverage by `0.02%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7108?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7108 +/- ##\n==========================================\n+ Coverage 80.84% 80.87% +0.02% \n==========================================\n Files 168 168 \n Lines 32284 32285 +1 \n==========================================\n+ Hits 26099 26109 +10 \n+ Misses 6185 6176 -9 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7108?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/7108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `96.82% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.04% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `99.15% <100.00%> (ΓΈ)` | |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.88% <100.00%> (+0.03%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.83% <0.00%> (-0.36%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.08% <0.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+2.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7108?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7108?src=pr&el=footer). Last update [206b78d...6adf419](https://codecov.io/gh/huggingface/transformers/pull/7108?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,600 | 1,600 | 1,600 | CONTRIBUTOR | null | This PR adds the signature, but no implementation, of `prepare_seq2seq_batch` to `PreTrainedTokenizer` to allow IDE autocompletion when `AutoTokenizer` is used.
+ The signature is quite large and it is nice to see what needs to be supplied.
+ The signature is enforced by the unittests, so it won't be misleading.
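For context, a signature-only stub on the base class might look roughly like this (argument names are assumptions based on the seq2seq tokenizers, not a verbatim copy of the PR):

```python
from typing import List, Optional

class PreTrainedTokenizer:
    # ...existing tokenizer methods...

    def prepare_seq2seq_batch(
        self,
        src_texts: List[str],
        tgt_texts: Optional[List[str]] = None,
        max_length: Optional[int] = None,
        max_target_length: Optional[int] = None,
        padding: str = "longest",
        return_tensors: Optional[str] = None,
        truncation: bool = True,
        **kwargs,
    ):
        """Prepare a batch for a seq2seq model; seq2seq tokenizers override this."""
        raise NotImplementedError("prepare_seq2seq_batch is not implemented for this tokenizer")
```

Subclasses keep the signature (so the unittests can enforce it) while providing the real implementation.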
On this branch, where `self.tokenizer = AutoTokenizer(...)`:

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7108/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7108",
"html_url": "https://github.com/huggingface/transformers/pull/7108",
"diff_url": "https://github.com/huggingface/transformers/pull/7108.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7108.patch",
"merged_at": 1600129988000
} |
https://api.github.com/repos/huggingface/transformers/issues/7107 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7107/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7107/comments | https://api.github.com/repos/huggingface/transformers/issues/7107/events | https://github.com/huggingface/transformers/pull/7107 | 700,684,706 | MDExOlB1bGxSZXF1ZXN0NDg2MjY3Nzc2 | 7,107 | Update xsum length penalty to better values | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,600 | 1,600 | 1,600 | CONTRIBUTOR | null | + Improves ROUGE by 0.4 pts.
+ Already reflected on S3, so this is just a unittest fix. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7107/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7107",
"html_url": "https://github.com/huggingface/transformers/pull/7107",
"diff_url": "https://github.com/huggingface/transformers/pull/7107.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7107.patch",
"merged_at": 1600044528000
} |
https://api.github.com/repos/huggingface/transformers/issues/7106 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7106/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7106/comments | https://api.github.com/repos/huggingface/transformers/issues/7106/events | https://github.com/huggingface/transformers/issues/7106 | 700,650,900 | MDU6SXNzdWU3MDA2NTA5MDA= | 7,106 | One command to run+aggregate distributed evaluation results | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"@stas00 let me know if this interests you, no pressure",
"Yes, please. Especially since I asked for it ;)",
"Cleaned up a bit [here](https://github.com/huggingface/transformers/pull/7110)\r\nIt is super fast!\r\n\r\nThis is the current workflow though :(\r\n\r\n```\r\npython -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py --model_name sshleifer/distilbart-xsum-12-3 --save_dir tmp_gen --input_path xsum --type_path test --max_source_length 1024 --length_penalty 0.6\r\npython aggregate_distributed_results.py tmp_gen tmp_gen --just_metrics\r\nmv tmp_gen/metrics.json test_rouge.json\r\nrm -rf tmp_gen\r\n```\r\n",
"Also metrics diverge a bit from 1 GPU, hopefully because `DistributedSortishSampler` adds extra examples here:\r\nhttps://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py#L258\r\n\r\nThat issue is out of scope for this PR, just a note. I may PR a kwarg to the sampler to not add extra examples separately in a non conflicting way.\r\n\r\nAlso the code is still 4/10 clean, feel free to rename variables/improve readability as you see fit.",
"Still todo:\r\n\r\n[ ] better solution than writing a bunch of json files and hoping they all complete within 5 mins of each other.\r\n[ ] diagnosing differences in generations.\r\n\r\n\r\nI will flesh these out later on."
] | 1,600 | 1,600 | 1,600 | CONTRIBUTOR | null | ### Current Situation
In https://github.com/huggingface/transformers/pull/7105, I wrote a three command combo to run distributed eval.
The three commands are:
```
python -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py --fp16 --bs 16
python aggregate_distributed_results.py tmp_gen tmp_gen2
rm -rf tmp_gen
```
+ The first command splits up the data and runs `generate` on a chunk for each GPU, saving results to `rank_{rank}.json`
+ The second command combines the JSON results, either just calculating metrics or resaving the generations to disk as `{save_dir}.pred_target`, `{save_dir}.source`, and (optionally) `{save_dir}.target`.
+ the third command deletes the `rank_{rank}.json` files.
+ the saving of these independent files to disk in the second command is useful for pseudolabeling, where we train a small model on the predictions of a big model. We have to do more book-keeping than in run_eval.py because I haven't yet determined how to reorder predictions to match the original data. So I just save the original data (roughly, there might be truncation issues) and then write it back to disk (see the aggregation sketch right after this list).
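For reference, the aggregation step boils down to something like the sketch below (the record field names and the `rank_*.json` layout are assumptions; the real `aggregate_distributed_results.py` may differ):

```python
import json
from pathlib import Path

def aggregate(save_dir: str, out_prefix: str) -> None:
    # merge per-rank outputs; cross-rank order need not match the original
    # dataset, hence source/target are re-saved next to the predictions
    records = []
    for path in sorted(Path(save_dir).glob("rank_*.json")):
        records.extend(json.loads(path.read_text()))
    for field, suffix in [("pred", "pred_target"), ("source", "source"), ("target", "target")]:
        lines = [r[field] for r in records if field in r]
        if lines:
            Path(f"{out_prefix}.{suffix}").write_text("\n".join(lines) + "\n")
```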
Goal: 1 command that uses multiple GPUs, saves **aggregated** files, and computes metrics (`metrics.json`).
The design choices in my dirty first attempt do not need to be continued, as long as we can compute metrics and train a second model with the predictions as the new labels.
There are many ways to accomplish this; here are a few ideas (not mutually exclusive).
Ideally, this would be 1 command with the order "figured out" somehow, possibly by returning ids from `Seq2SeqDataset`, so that
```
python run_eval.py (existing_args) --gpus 2
```
would just work. No need to save source/labels since they are in the correct order.
To me this sounds hard to implement. I tried briefly and gave up.
The goal now is one command; it doesn't need to be the `run_eval.py` command.
### Figuring out ordering by having Seq2Seq dataset return ids
- Ideally, this would be 1 command with the order "figured out" somehow, possibly by returning ids from `Seq2SeqDataset`. Then you don't need to save labels/source documents to disk; you can just reorder the predictions.
### Launch n processes in the code rather than from the command line
If we call `torch.multiprocessing.spawn` ourselves, as in
https://github.com/facebookresearch/ParlAI/blob/00efcbebb49524918692638ab580cadeebe70cf8/parlai/scripts/multiprocessing_eval.py#L49
we can wait for the results, join them, and do the reordering in one command.
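A rough sketch of that approach (`run_shard` and the file layout are made up for illustration):

```python
import torch
import torch.multiprocessing as mp

def eval_worker(rank: int, world_size: int, args):
    # each worker generates on its shard and dumps rank-local results
    preds = run_shard(rank, world_size, args)  # hypothetical helper
    torch.save(preds, f"{args.save_dir}/rank_{rank}.pt")

def main(args, world_size: int = 2):
    # spawn blocks until all workers finish (join=True), so we can merge in-process
    mp.spawn(eval_worker, args=(world_size, args), nprocs=world_size, join=True)
    shards = [torch.load(f"{args.save_dir}/rank_{r}.pt") for r in range(world_size)]
    return [p for shard in shards for p in shard]
```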
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7106/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7105 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7105/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7105/comments | https://api.github.com/repos/huggingface/transformers/issues/7105/events | https://github.com/huggingface/transformers/pull/7105 | 700,632,215 | MDExOlB1bGxSZXF1ZXN0NDg2MjI4Njk1 | 7,105 | [s2s] two stage run_distributed_eval.py | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,600 | 1,600 | 1,600 | CONTRIBUTOR | null | ```bash
python -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py --fp16 --bs 16
python cleanup_distributed.py tmp_gen tmp_gen2
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7105/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7105",
"html_url": "https://github.com/huggingface/transformers/pull/7105",
"diff_url": "https://github.com/huggingface/transformers/pull/7105.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7105.patch",
"merged_at": 1600032498000
} |
https://api.github.com/repos/huggingface/transformers/issues/7104 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7104/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7104/comments | https://api.github.com/repos/huggingface/transformers/issues/7104/events | https://github.com/huggingface/transformers/pull/7104 | 700,631,655 | MDExOlB1bGxSZXF1ZXN0NDg2MjI4MjU4 | 7,104 | [s2s distill] allow pegasus-12-12 | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,600 | 1,600 | 1,600 | CONTRIBUTOR | null | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #{issue number}
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7104/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7104",
"html_url": "https://github.com/huggingface/transformers/pull/7104",
"diff_url": "https://github.com/huggingface/transformers/pull/7104.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7104.patch",
"merged_at": 1600056240000
} |
https://api.github.com/repos/huggingface/transformers/issues/7103 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7103/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7103/comments | https://api.github.com/repos/huggingface/transformers/issues/7103/events | https://github.com/huggingface/transformers/issues/7103 | 700,607,949 | MDU6SXNzdWU3MDA2MDc5NDk= | 7,103 | ValueError: Wrong shape for input_ids (shape torch.Size([18])) or attention_mask (shape torch.Size([18])) | {
"login": "youssefavx",
"id": 56129524,
"node_id": "MDQ6VXNlcjU2MTI5NTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/56129524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/youssefavx",
"html_url": "https://github.com/youssefavx",
"followers_url": "https://api.github.com/users/youssefavx/followers",
"following_url": "https://api.github.com/users/youssefavx/following{/other_user}",
"gists_url": "https://api.github.com/users/youssefavx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/youssefavx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/youssefavx/subscriptions",
"organizations_url": "https://api.github.com/users/youssefavx/orgs",
"repos_url": "https://api.github.com/users/youssefavx/repos",
"events_url": "https://api.github.com/users/youssefavx/events{/privacy}",
"received_events_url": "https://api.github.com/users/youssefavx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I had the same error, I had it downgraded `pip install transformers==3.0.2` in order to work.",
"My problem was not that I needed to downgrade but the fact that I needed new functionality from the newest transformers to work while also working with the features of simalign. I solved this bug here: https://github.com/cisnlp/simalign/issues/10#issuecomment-694407502"
] | 1,600 | 1,600 | 1,600 | NONE | null | ## Environment info
- `transformers` version: 3.1.0
- Platform: Darwin-18.0.0-x86_64-i386-64bit
- Python version: 3.7.2
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No (Running on Macbook Pro MacOS Mojave 10.14)
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
## Information
Model I am using (Bert, XLNet ...): Bert Multilingual
The problem arises when using:
* [ ] the official example scripts: (give details below)
Hi, this is an error I'm getting running a package someone else created called [simalign](https://github.com/cisnlp/simalign/). I believe their package works with transformers 2.3.0 (I tried it and it worked).
But after trying to upgrade to the newest transformers because I really wanted the fill-mask feature which was more recently released.
(Actually I wanted a way to get the probability of a word in a certain position given the words around it (left and right) in a sentence and have yet to find a way to do that).
At the same time I also want to run simalign in the same application (so I can't downgrade to the previous transformers make it work again).
I'm not exactly sure what is causing the error (as I'm not experienced enough) but I'll share the traceback below here:
```
>>> import simalign
>>>
>>> source_sentence = "Sir Nils Olav III. was knighted by the norwegian king ."
>>> target_sentence = "Nils Olav der Dritte wurde vom norwegischen KΓΆnig zum Ritter geschlagen ."
>>> model = simalign.SentenceAligner()
2020-09-13 18:02:40,806 - simalign.simalign - INFO - Initialized the EmbeddingLoader with model: bert-base-multilingual-cased
I0913 18:02:40.806071 4394976704 simalign.py:47] Initialized the EmbeddingLoader with model: bert-base-multilingual-cased
>>> result = model.get_word_aligns(source_sentence.split(), target_sentence.split())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/simalign/simalign.py", line 181, in get_word_aligns
vectors = self.embed_loader.get_embed_list(list(bpe_lists))
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/simalign/simalign.py", line 65, in get_embed_list
outputs = [self.emb_model(in_ids.to(self.device)) for in_ids in inputs]
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/simalign/simalign.py", line 65, in <listcomp>
outputs = [self.emb_model(in_ids.to(self.device)) for in_ids in inputs]
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/transformers/modeling_bert.py", line 806, in forward
extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, device)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/transformers/modeling_utils.py", line 248, in get_extended_attention_mask
input_shape, attention_mask.shape
ValueError: Wrong shape for input_ids (shape torch.Size([18])) or attention_mask (shape torch.Size([18]))
```
I'd love it if anyone could tell me if I could replace a line in the code somewhere to fix this.
Here's a sample of that code region where that error occurs:
```
def get_embed_list(self, sent_pair):
    if self.emb_model is not None:
        sent_ids = [self.tokenizer.convert_tokens_to_ids(x) for x in sent_pair]
        inputs = [self.tokenizer.prepare_for_model(sent, return_token_type_ids=True, return_tensors='pt')['input_ids'] for sent in sent_ids]
        outputs = [self.emb_model(in_ids.to(self.device)) for in_ids in inputs]
        # use vectors from layer 8
        vectors = [x[2][self.layer].cpu().detach().numpy()[0][1:-1] for x in outputs]
        return vectors
    else:
        return None
```
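For what it's worth, the traceback suggests the model is being fed 1-D `input_ids` where recent `transformers` versions expect a leading batch dimension. One plausible (untested) one-line change would be to unsqueeze before the forward pass:

```python
# shape [18] -> [1, 18]; the downstream `.numpy()[0]` indexing already assumes a batch dim
outputs = [self.emb_model(in_ids.unsqueeze(0).to(self.device)) for in_ids in inputs]
```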
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
I'm trying to use simalign in conjunction with finding the probabilities of a word in a sentence given its position in that sentence
(meaning I want to assess the probability of the words 'to' and 'too' in this sentence: "I went to the store").
Given a function like this:
```
find_probability_of_word_in_given_sentence('to', f'I went {given_word} the store')
find_probability_of_word_in_given_sentence('too', f'I went {given_word} the store')
```
I'd want an output like this:
```
to: 0.849283
too: 0.021412
```
And I don't know if there's a way to do that with transformers version 2.3.0, since it does not have the `fill-mask` feature (although that feature does not return word probabilities for a given word, but word predictions).
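That said, a masked-LM head can score candidate words at a position. A rough sketch with `bert-base-cased` (this only handles words that are single tokens in BERT's vocab; multi-subword words would need their subword probabilities combined):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

def word_probability(word, left, right):
    # mask the slot, run the MLM head, and read off the softmax prob of `word`
    text = f"{left} {tokenizer.mask_token} {right}"
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0]
    with torch.no_grad():
        logits = model(**inputs)[0]  # (1, seq_len, vocab_size)
    probs = logits[0, mask_pos].softmax(dim=-1)
    return probs[0, tokenizer.convert_tokens_to_ids(word)].item()

print(word_probability("to", "I went", "the store"))
print(word_probability("too", "I went", "the store"))
```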
## To reproduce
Steps to reproduce the behavior:
1. Install simalign
2. Upgrade transformers
3. Run simalign example here: https://github.com/cisnlp/simalign/blob/master/examples/align_example.py
## Expected behavior
Simalign works as it does with transformers 2.3.0, returning a list of tuples with numbers.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7103/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7103/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7102 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7102/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7102/comments | https://api.github.com/repos/huggingface/transformers/issues/7102/events | https://github.com/huggingface/transformers/issues/7102 | 700,570,237 | MDU6SXNzdWU3MDA1NzAyMzc= | 7,102 | Longformer inference time | {
"login": "cmdllx",
"id": 50104519,
"node_id": "MDQ6VXNlcjUwMTA0NTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/50104519?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cmdllx",
"html_url": "https://github.com/cmdllx",
"followers_url": "https://api.github.com/users/cmdllx/followers",
"following_url": "https://api.github.com/users/cmdllx/following{/other_user}",
"gists_url": "https://api.github.com/users/cmdllx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cmdllx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmdllx/subscriptions",
"organizations_url": "https://api.github.com/users/cmdllx/orgs",
"repos_url": "https://api.github.com/users/cmdllx/repos",
"events_url": "https://api.github.com/users/cmdllx/events{/privacy}",
"received_events_url": "https://api.github.com/users/cmdllx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"hi @cmdllx , I don't think longformer will be necessarily faster than bert-base-cased. The goal of longformer is to make memory complexity of attention layer liner w.r.t to seq length instead of quadratic, so it saves memory, not compute. More the seq-length , more will be inference time.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,600 | 1,606 | 1,606 | NONE | null | The longformer-base-4096 model should be faster than bert-base-cased. However, I find that the former takes more time for inference. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7102/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7101 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7101/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7101/comments | https://api.github.com/repos/huggingface/transformers/issues/7101/events | https://github.com/huggingface/transformers/pull/7101 | 700,503,762 | MDExOlB1bGxSZXF1ZXN0NDg2MTMzNzIw | 7,101 | [docs] add testing documentation | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7101?src=pr&el=h1) Report\n> Merging [#7101](https://codecov.io/gh/huggingface/transformers/pull/7101?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/90cde2e938638e64a8696a12b79ee5f52364b162?el=desc) will **increase** coverage by `0.31%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7101?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7101 +/- ##\n==========================================\n+ Coverage 79.62% 79.94% +0.31% \n==========================================\n Files 168 168 \n Lines 32284 32284 \n==========================================\n+ Hits 25706 25809 +103 \n+ Misses 6578 6475 -103 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7101?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7101/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7101/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `19.71% <0.00%> (-72.34%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert\\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/7101/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9nZW5lcmF0aW9uLnB5) | `39.28% <0.00%> (-55.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/7101/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `57.14% <0.00%> (-39.69%)` | :arrow_down: |\n| [src/transformers/tokenization\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7101/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-34.89%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7101/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7101/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `75.91% <0.00%> (-21.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7101/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7101/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/7101/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `80.75% <0.00%> (-0.25%)` | :arrow_down: |\n| ... 
and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7101/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7101?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7101?src=pr&el=footer). Last update [90cde2e...52a9e54](https://codecov.io/gh/huggingface/transformers/pull/7101?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I'm curious - why is it OK to have XXX in the code, but not in the docs? especially developer-oriented (i.e. end users most likely won't read this). Is it XXX that's the problem and not the fact that there is a note suggesting more to come in the future/this section is incomplete? Would TODO be acceptable?",
"@sgugger, thank you for much for this very detailed feedback. I think I got it all. I will make another path on back quotes to catch any that you and I have missed that are in markdown style.\r\n\r\ngithub sucked big way on this PR with multiple code suggestions - every time I approved the suggested change, it'd reload the page, hide all the other suggestion and scroll away - I had to unhide suggestions, scroll to find the next item, and so on - about 20 times! :( May I suggest that in such a situation of dozens of proposed doc changes, it'd be faster for both of us, if you were to just commit the changes directly. (please correct me if I'm wrong and this is not faster for you) I will learn from the diffs, the changes are mostly self-explanatory.\r\n",
"> Note that you can check a preview of the docs built on top of this PR [here](https://84533-155220641-gh.circle-artifacts.com/0/docs/_build/html/testing.html).\r\n\r\nI just run `make docs` and checked the result - but obviously i didn't see many things you did see.\r\n\r\nLet me know if anything else needs to be changed. The doc will have extra additions later I just had to stop somewhere for the first pass.\r\n",
"Thanks for updating, this looks great so merging!"
] | 1,599 | 1,600 | 1,600 | CONTRIBUTOR | null | This PR adds the initial version of the testing doc.
* This is partially based on the work I did for [fastai last year](https://fastai1.fast.ai/dev/test.html), but rewritten to match the `transformers` environment. It's full of useful tips and tools for running tests, and should be very useful for those who do a lot of testing.
* then documenting most of the helpers in `testing_utils.py`
* adding CI information
More work is surely needed, but this is a start.
Thanks to @sshleifer for the detailed info on CIs.
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7101/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7101",
"html_url": "https://github.com/huggingface/transformers/pull/7101",
"diff_url": "https://github.com/huggingface/transformers/pull/7101.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7101.patch",
"merged_at": 1600212327000
} |
https://api.github.com/repos/huggingface/transformers/issues/7100 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7100/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7100/comments | https://api.github.com/repos/huggingface/transformers/issues/7100/events | https://github.com/huggingface/transformers/pull/7100 | 700,496,293 | MDExOlB1bGxSZXF1ZXN0NDg2MTI3NDI5 | 7,100 | [logging] remove no longer needed verbosity override | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7100?src=pr&el=h1) Report\n> Merging [#7100](https://codecov.io/gh/huggingface/transformers/pull/7100?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/90cde2e938638e64a8696a12b79ee5f52364b162?el=desc) will **decrease** coverage by `1.23%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7100?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7100 +/- ##\n==========================================\n- Coverage 79.62% 78.39% -1.24% \n==========================================\n Files 168 168 \n Lines 32284 32284 \n==========================================\n- Hits 25706 25308 -398 \n- Misses 6578 6976 +398 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7100?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7100/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7100/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7100/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.94% <0.00%> (-74.32%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7100/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7100/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7100/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.91% <0.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7100/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.18% <0.00%> (ΓΈ)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7100/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.33% <0.00%> (+0.64%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7100/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.90% <0.00%> (+69.43%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7100/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `92.17% <0.00%> (+71.04%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7100?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7100?src=pr&el=footer). Last update [90cde2e...1935a23](https://codecov.io/gh/huggingface/transformers/pull/7100?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,599 | 1,600 | 1,600 | CONTRIBUTOR | null | As the project's verbosity level is now at `logging.WARN` this PR removes the now redundant code.
p.s. another change probably is needed to switch from `logging.getLogger()` to the new API
```
grep -Ir getLogger tests examples | wc -l
56
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7100/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7100",
"html_url": "https://github.com/huggingface/transformers/pull/7100",
"diff_url": "https://github.com/huggingface/transformers/pull/7100.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7100.patch",
"merged_at": 1600156874000
} |
https://api.github.com/repos/huggingface/transformers/issues/7099 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7099/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7099/comments | https://api.github.com/repos/huggingface/transformers/issues/7099/events | https://github.com/huggingface/transformers/pull/7099 | 700,433,534 | MDExOlB1bGxSZXF1ZXN0NDg2MDczMTg3 | 7,099 | [examples testing] restore code | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorry, @stas00 I didn't notice that.",
"All is good, it was easy to miss, @Joel-hanson. "
] | 1,599 | 1,600 | 1,600 | CONTRIBUTOR | null | For some reason https://github.com/huggingface/transformers/pull/5512 re-added temp dir creation code that was removed by
https://github.com/huggingface/transformers/pull/6494 in the process undoing what the latter PR did those tests, leading to the temp dir created twice in a row.
I see now that it was an older PR that was committed much later, so that explains why the new code was not noticed.
@Joel-hanson | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7099/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7099",
"html_url": "https://github.com/huggingface/transformers/pull/7099",
"diff_url": "https://github.com/huggingface/transformers/pull/7099.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7099.patch",
"merged_at": 1600088063000
} |
https://api.github.com/repos/huggingface/transformers/issues/7098 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7098/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7098/comments | https://api.github.com/repos/huggingface/transformers/issues/7098/events | https://github.com/huggingface/transformers/issues/7098 | 700,431,402 | MDU6SXNzdWU3MDA0MzE0MDI= | 7,098 | broken pypi scipy package that affects tests under `examples` | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The solution is above, so closing this - as it was a FYI post if someone else runs into this."
] | 1,599 | 1,599 | 1,599 | CONTRIBUTOR | null | In case someone runs into this:
```
PYTHONPATH="src" pytest examples/test_examples.py
ImportError while importing test module '/mnt/nvme1/code/huggingface/transformers-examples/examples/test_examples.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/home/stas/anaconda3/envs/main-38/lib/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
examples/test_examples.py:38: in <module>
import run_glue
examples/text-classification/run_glue.py:30: in <module>
from transformers import (
E ImportError: cannot import name 'glue_compute_metrics' from 'transformers' (/mnt/nvme1/code/huggingface/transformers-examples/src/transformers/__init__.py)
```
Looking into the code, the problem was coming from:
```
if is_sklearn_available():
from .data import glue_compute_metrics, xnli_compute_metrics
```
for some reason, it was returning `False`.
Looking deeper, I got here:
```
try:
from sklearn.metrics import f1_score, matthews_corrcoef
from scipy.stats import pearsonr, spearmanr
_has_sklearn = True
except (AttributeError, ImportError):
_has_sklearn = False
def is_sklearn_available():
return _has_sklearn
```
So next I tried:
```
python -c "from sklearn.metrics import f1_score, matthews_corrcoef"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/sklearn/__init__.py", line 80, in <module>
from .base import clone
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/sklearn/base.py", line 21, in <module>
from .utils import _IS_32BIT
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/sklearn/utils/__init__.py", line 20, in <module>
from scipy.sparse import issparse
ModuleNotFoundError: No module named 'scipy.sparse'
```
But the requirement had already been installed with:
```
pip install scipy
```
A search gave this answer: https://stackoverflow.com/a/59692528/9201239 which solved the problem.
```
pip uninstall scipy
conda install scipy
```
So there is something wrong with that `scipy` package on pypi.
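After the conda reinstall, a quick sanity check that the previously-broken submodules import cleanly again (a generic snippet, not specific to this repo):

```python
# Verify that the scipy/sklearn modules that failed above now import.
import importlib

for mod in ("scipy", "scipy.sparse", "scipy.stats", "sklearn.metrics"):
    importlib.import_module(mod)
print("all imports OK")
```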
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7098/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7097 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7097/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7097/comments | https://api.github.com/repos/huggingface/transformers/issues/7097/events | https://github.com/huggingface/transformers/pull/7097 | 700,420,562 | MDExOlB1bGxSZXF1ZXN0NDg2MDYxODIw | 7,097 | Create README.md | {
"login": "tuner007",
"id": 46425391,
"node_id": "MDQ6VXNlcjQ2NDI1Mzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/46425391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuner007",
"html_url": "https://github.com/tuner007",
"followers_url": "https://api.github.com/users/tuner007/followers",
"following_url": "https://api.github.com/users/tuner007/following{/other_user}",
"gists_url": "https://api.github.com/users/tuner007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuner007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuner007/subscriptions",
"organizations_url": "https://api.github.com/users/tuner007/orgs",
"repos_url": "https://api.github.com/users/tuner007/repos",
"events_url": "https://api.github.com/users/tuner007/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuner007/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,599 | 1,600 | 1,600 | CONTRIBUTOR | null | Model card for PEGASUS model finetuned for paraphrasing task
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7097/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7097",
"html_url": "https://github.com/huggingface/transformers/pull/7097",
"diff_url": "https://github.com/huggingface/transformers/pull/7097.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7097.patch",
"merged_at": 1600174106000
} |
https://api.github.com/repos/huggingface/transformers/issues/7096 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7096/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7096/comments | https://api.github.com/repos/huggingface/transformers/issues/7096/events | https://github.com/huggingface/transformers/pull/7096 | 700,419,652 | MDExOlB1bGxSZXF1ZXN0NDg2MDYxMDE2 | 7,096 | Trying to speed up lost speed of tokenizer.encode | {
"login": "LSinev",
"id": 12072891,
"node_id": "MDQ6VXNlcjEyMDcyODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/12072891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LSinev",
"html_url": "https://github.com/LSinev",
"followers_url": "https://api.github.com/users/LSinev/followers",
"following_url": "https://api.github.com/users/LSinev/following{/other_user}",
"gists_url": "https://api.github.com/users/LSinev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LSinev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LSinev/subscriptions",
"organizations_url": "https://api.github.com/users/LSinev/orgs",
"repos_url": "https://api.github.com/users/LSinev/repos",
"events_url": "https://api.github.com/users/LSinev/events{/privacy}",
"received_events_url": "https://api.github.com/users/LSinev/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7096?src=pr&el=h1) Report\n> Merging [#7096](https://codecov.io/gh/huggingface/transformers/pull/7096?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/26d5475d4b6644528956df3020dbaa436b443706?el=desc) will **increase** coverage by `3.91%`.\n> The diff coverage is `94.59%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7096?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7096 +/- ##\n==========================================\n+ Coverage 75.91% 79.82% +3.91% \n==========================================\n Files 195 168 -27 \n Lines 39827 32326 -7501 \n==========================================\n- Hits 30233 25803 -4430 \n+ Misses 9594 6523 -3071 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7096?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7096/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.70% <87.50%> (+0.76%)` | :arrow_up: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7096/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `80.00% <100.00%> (-16.12%)` | :arrow_down: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7096/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `97.31% <100.00%> (+3.80%)` | :arrow_up: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7096/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `99.20% <100.00%> (+0.06%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7096/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.89% <100.00%> (+1.18%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7096/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `83.39% <100.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7096/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.22%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7096/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `19.71% <0.00%> (-54.44%)` | :arrow_down: |\n| [src/transformers/tokenization\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/7096/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY2FtZW1iZXJ0LnB5) | `37.03% <0.00%> (-53.13%)` | :arrow_down: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/7096/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `34.28% <0.00%> (-48.00%)` | :arrow_down: |\n| ... 
and [163 more](https://codecov.io/gh/huggingface/transformers/pull/7096/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7096?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7096?src=pr&el=footer). Last update [26d5475...6ca458e](https://codecov.io/gh/huggingface/transformers/pull/7096?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,599 | 1,619 | 1,614 | CONTRIBUTOR | null | Fixes #6962 (at least tries to fix)
@mfuntowicz | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7096/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7096",
"html_url": "https://github.com/huggingface/transformers/pull/7096",
"diff_url": "https://github.com/huggingface/transformers/pull/7096.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7096.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7095 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7095/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7095/comments | https://api.github.com/repos/huggingface/transformers/issues/7095/events | https://github.com/huggingface/transformers/pull/7095 | 700,377,935 | MDExOlB1bGxSZXF1ZXN0NDg2MDI1NDAx | 7,095 | Create README.md | {
"login": "tuner007",
"id": 46425391,
"node_id": "MDQ6VXNlcjQ2NDI1Mzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/46425391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuner007",
"html_url": "https://github.com/tuner007",
"followers_url": "https://api.github.com/users/tuner007/followers",
"following_url": "https://api.github.com/users/tuner007/following{/other_user}",
"gists_url": "https://api.github.com/users/tuner007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuner007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuner007/subscriptions",
"organizations_url": "https://api.github.com/users/tuner007/orgs",
"repos_url": "https://api.github.com/users/tuner007/repos",
"events_url": "https://api.github.com/users/tuner007/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuner007/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7095?src=pr&el=h1) Report\n> Merging [#7095](https://codecov.io/gh/huggingface/transformers/pull/7095?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b76cb1c3dfc64d1dcaddc3d6d9313dddeb626d05?el=desc) will **decrease** coverage by `1.28%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7095?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7095 +/- ##\n==========================================\n- Coverage 81.63% 80.34% -1.29% \n==========================================\n Files 168 168 \n Lines 32257 32257 \n==========================================\n- Hits 26333 25918 -415 \n- Misses 5924 6339 +415 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7095?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7095/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7095/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7095/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7095/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7095/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7095/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.77% <0.00%> (-0.68%)` | :arrow_down: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/7095/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `92.94% <0.00%> (-0.59%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7095/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <0.00%> (-0.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7095/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.77% <0.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7095/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.18% <0.00%> (ΓΈ)` | |\n| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/7095/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7095?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7095?src=pr&el=footer). Last update [b76cb1c...bfde495](https://codecov.io/gh/huggingface/transformers/pull/7095?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks!"
] | 1,599 | 1,600 | 1,600 | CONTRIBUTOR | null | Create model card for Pegasus QA
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7095/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7095",
"html_url": "https://github.com/huggingface/transformers/pull/7095",
"diff_url": "https://github.com/huggingface/transformers/pull/7095.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7095.patch",
"merged_at": 1600413586000
} |
https://api.github.com/repos/huggingface/transformers/issues/7094 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7094/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7094/comments | https://api.github.com/repos/huggingface/transformers/issues/7094/events | https://github.com/huggingface/transformers/pull/7094 | 700,370,214 | MDExOlB1bGxSZXF1ZXN0NDg2MDE4NzUx | 7,094 | fix bug in pegasus converter | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,599 | 1,600 | 1,600 | CONTRIBUTOR | null | Reported https://discuss.huggingface.co/t/pegasus-questions/838/14
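For context, a hedged sketch of the kwargs mix-up described just below (the surrounding function and its defaults are hypothetical; only the two variable names come from this PR):

```python
# Illustration only: cfg_kwargs merges defaults with updates, while
# passing cfg_updates alone silently drops every default value.
def build_config(defaults: dict, cfg_updates: dict) -> dict:
    cfg_kwargs = {**defaults, **cfg_updates}
    # buggy call shape: SomeConfig(**cfg_updates)  # loses the defaults
    # fixed call shape: SomeConfig(**cfg_kwargs)
    return cfg_kwargs
```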
The bug was that I was passing `**cfg_updates` instead of `**cfg_kwargs`. Then I did some extra cleanup to make the code refer to bart less. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7094/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7094",
"html_url": "https://github.com/huggingface/transformers/pull/7094",
"diff_url": "https://github.com/huggingface/transformers/pull/7094.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7094.patch",
"merged_at": 1600024307000
} |
https://api.github.com/repos/huggingface/transformers/issues/7093 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7093/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7093/comments | https://api.github.com/repos/huggingface/transformers/issues/7093/events | https://github.com/huggingface/transformers/pull/7093 | 700,334,040 | MDExOlB1bGxSZXF1ZXN0NDg1OTg3MTMw | 7,093 | Update convert_pegasus_tf_to_pytorch.py | {
"login": "tuner007",
"id": 46425391,
"node_id": "MDQ6VXNlcjQ2NDI1Mzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/46425391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuner007",
"html_url": "https://github.com/tuner007",
"followers_url": "https://api.github.com/users/tuner007/followers",
"following_url": "https://api.github.com/users/tuner007/following{/other_user}",
"gists_url": "https://api.github.com/users/tuner007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuner007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuner007/subscriptions",
"organizations_url": "https://api.github.com/users/tuner007/orgs",
"repos_url": "https://api.github.com/users/tuner007/repos",
"events_url": "https://api.github.com/users/tuner007/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuner007/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7093?src=pr&el=h1) Report\n> Merging [#7093](https://codecov.io/gh/huggingface/transformers/pull/7093?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b76cb1c3dfc64d1dcaddc3d6d9313dddeb626d05?el=desc) will **decrease** coverage by `0.75%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7093?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7093 +/- ##\n==========================================\n- Coverage 81.63% 80.87% -0.76% \n==========================================\n Files 168 168 \n Lines 32257 32257 \n==========================================\n- Hits 26333 26088 -245 \n- Misses 5924 6169 +245 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7093?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.77% <0.00%> (-0.68%)` | :arrow_down: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/7093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `92.94% <0.00%> (-0.59%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <0.00%> (-0.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.77% <0.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.18% <0.00%> (ΓΈ)` | |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7093/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7093?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7093?src=pr&el=footer). Last update [b76cb1c...c0db32e](https://codecov.io/gh/huggingface/transformers/pull/7093?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,599 | 1,599 | 1,599 | CONTRIBUTOR | null | Instead of the full config, we were only sending the updated config dict.
Changed "sshleifer/pegasus" to "google/pegasus-aeslc" in the tokenizer.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7093/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7093",
"html_url": "https://github.com/huggingface/transformers/pull/7093",
"diff_url": "https://github.com/huggingface/transformers/pull/7093.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7093.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7092 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7092/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7092/comments | https://api.github.com/repos/huggingface/transformers/issues/7092/events | https://github.com/huggingface/transformers/issues/7092 | 700,333,578 | MDU6SXNzdWU3MDAzMzM1Nzg= | 7,092 | needing area to put download/convert/eval scripts | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Feel free to PR `examples/seq2seq/scripts/fsmt`, `examples/seq2seq/fsmt_scripts`, or another git repo that you link to from model cards.\r\nI have some bulk marian converters I can check in also.\r\n\r\n",
"Thank you for the suggestions, @sshleifer.\r\n\r\nFor eval scripts I can see some place under `examples`, but for conversion scripts - these are core models we are talking about.\r\n\r\nI think those should be close to the code that generates/converts models - so that anybody in the future could regenerate things - if any issues were found. That's why I propose a dedicated area under `transformers` repo root directory.\r\n\r\nSome of those scripts are somewhat complex - it's not just downloading a single tar ball and running convert on it. In the case of allenai models that are about to be added different tarballs are needed and they need to be combined in a certain way. Hence I believe it'll save time to the project in the future.\r\n\r\nAnd the eval scripts now feed into the conversion scripts - so no longer examples either. we can now search the hparams and get the model config include `generate` params that are pre-optimized - so core, not examples.",
"OK, you've convinced me. @julien-c, @LysandreJik @sgugger what do you guys think about \r\n\r\n`transformers/scripts/{model_name}/` as a place to checkin end to end (possibly bulk) conversion scripts?\r\n\r\nRationale:\r\nMarian + FSMT require a few steps before `transformers-cli convert` + `transformers-cli upload` to\r\n \r\n+ (a) fetch correct tarballs \r\n+ (b) name them correctly\r\n+ (c) (just fsmt) decide on correct beam search parameters\r\n\r\nand it would aid reproducibility to have all that logic/knowledge checked in.",
"I have another set of scripts - automatic model card writers - useful for when we have sets of models, which are mainly the same, but the sample code/scores are unique. \r\n\r\nSo currently for 9 `fsmt` models that were just put on s3 I have 9 scripts:\r\n\r\n- 3 conversion scripts (bash)\r\n- 3 model_card scripts (python)\r\n- 3 hparam search eval scripts (bash)\r\n\r\nand I currently have 3 sets of the above (a set for 4 wmt19 fairseq models, a set for 3 wmt16 allenai models, a set for 2 wmt19 allenai models), so 9 scripts in total.\r\n",
"Made a PR: https://github.com/huggingface/transformers/pull/7155"
] | 1,599 | 1,600 | 1,600 | CONTRIBUTOR | null | # 🚀 Feature request
Would it be useful to allocate a sub-dir in the source code for conversion/eval bash scripts? Some of them are quite complex, including a bunch of downloads, moving files around, etc. It'd be good to have those in the repo, so that it'd be easy to re-build the data if there were a change/mistake/etc.
note: I'm not proposing to move `src/transformers/convert*.py`.
Let the data speak for itself.
I currently have 2 scripts for fairseq transformer models:
```
# Convert fairseq transform wmt19 checkpoint.
# To convert run:
# assuming the fairseq data is under data/wmt19.ru-en.ensemble, data/wmt19.en-ru.ensemble, etc
export ROOT=/code/huggingface/transformers-fair-wmt
cd $ROOT
mkdir data
# get data (run once)
wget https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-de.joined-dict.ensemble.tar.gz
wget https://dl.fbaipublicfiles.com/fairseq/models/wmt19.de-en.joined-dict.ensemble.tar.gz
wget https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-ru.ensemble.tar.gz
wget https://dl.fbaipublicfiles.com/fairseq/models/wmt19.ru-en.ensemble.tar.gz
tar -xvzf wmt19.en-de.joined-dict.ensemble.tar.gz
tar -xvzf wmt19.de-en.joined-dict.ensemble.tar.gz
tar -xvzf wmt19.en-ru.ensemble.tar.gz
tar -xvzf wmt19.ru-en.ensemble.tar.gz
# run conversions and uploads
export PAIR=ru-en
PYTHONPATH="src" python src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py --fsmt_checkpoint_path data/wmt19.$PAIR.ensemble --pytorch_dump_folder_path data/fsmt-wmt19-$PAIR
export PAIR=en-ru
PYTHONPATH="src" python src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py --fsmt_checkpoint_path data/wmt19.$PAIR.ensemble --pytorch_dump_folder_path data/fsmt-wmt19-$PAIR
export PAIR=de-en
PYTHONPATH="src" python src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py --fsmt_checkpoint_path data/wmt19.$PAIR.joined-dict.ensemble --pytorch_dump_folder_path data/fsmt-wmt19-$PAIR
export PAIR=en-de
PYTHONPATH="src" python src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py --fsmt_checkpoint_path data/wmt19.$PAIR.joined-dict.ensemble --pytorch_dump_folder_path data/fsmt-wmt19-$PAIR
# upload
cd data
transformers-cli upload -y fsmt-wmt19-ru-en
transformers-cli upload -y fsmt-wmt19-en-ru
transformers-cli upload -y fsmt-wmt19-de-en
transformers-cli upload -y fsmt-wmt19-en-de
cd -
# if updating just small files and not the large models, here is a script to generate the right commands:
perl -le 'for $f (@ARGV) { print qq[transformers-cli upload -y $_/$f --filename $_/$f] for map { "fsmt-wmt19-$_" } ("en-ru", "ru-en", "de-en", "en-de")}' vocab-src.json vocab-tgt.json tokenizer_config.json config.json
# add/remove files as needed
```
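For readers who prefer Python over perl, a rough equivalent of the command-generating one-liner above (same file list and language pairs; purely illustrative):

```python
# Print the per-model upload commands for a set of small files,
# mirroring the perl one-liner in the script above.
files = ["vocab-src.json", "vocab-tgt.json", "tokenizer_config.json", "config.json"]
pairs = ["en-ru", "ru-en", "de-en", "en-de"]
for f in files:
    for p in pairs:
        d = f"fsmt-wmt19-{p}"
        print(f"transformers-cli upload -y {d}/{f} --filename {d}/{f}")
```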
Eval script:
```
# to match fairseq you need to set num_beams=50 in `configuration_fsmt.py` and lower BS
# quick estimate version for quick testing
export PAIR=en-ru
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
export NUM_BEAMS=8
mkdir -p $DATA_DIR
sacrebleu -t wmt19 -l $PAIR --echo src | head -100 > $DATA_DIR/val.source
sacrebleu -t wmt19 -l $PAIR --echo ref | head -100 > $DATA_DIR/val.target
echo $PAIR
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py stas/fsmt-wmt19-$PAIR $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
# ru-en
export PAIR=ru-en
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
export NUM_BEAMS=50
mkdir -p $DATA_DIR
sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py stas/fsmt-wmt19-$PAIR $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
# (expected BLEU: 41.3 http://matrix.statmt.org/matrix/output/1907?run_id=6937)
# en-ru
export PAIR=en-ru
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
export NUM_BEAMS=50
mkdir -p $DATA_DIR
sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target
echo $PAIR
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py stas/fsmt-wmt19-$PAIR $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
# (expected BLEU: 36.4 http://matrix.statmt.org/matrix/output/1914?score_id=37605)
# en-de
export PAIR=en-de
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
mkdir -p $DATA_DIR
sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target
echo $PAIR
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py stas/fsmt-wmt19-$PAIR $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
# (expected BLEU: 43.1 http://matrix.statmt.org/matrix/output/1909?run_id=6862)
# de-en
export PAIR=de-en
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
export NUM_BEAMS=50
mkdir -p $DATA_DIR
sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target
echo $PAIR
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py stas/fsmt-wmt19-$PAIR $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
# (expected BLEU: 42.3 http://matrix.statmt.org/matrix/output/1902?run_id=6750)
```
Then I have a different script for 2 sets of other models for wmt from allen nlp, with 2 scripts each:
```
# Convert fairseq transform wmt16 en-de checkpoints from https://github.com/jungokasai/deep-shallow
pip install gdown
# get data (run once)
cd data
gdown 'https://drive.google.com/uc?id=1x_G2cjvM1nW5hjAB8-vWxRqtQTlmIaQU'
gdown 'https://drive.google.com/uc?id=1oA2aqZlVNj5FarxBlNXEHpBS4lRetTzU'
gdown 'https://drive.google.com/uc?id=1Wup2D318QYBFPW_NKI1mfP_hXOfmUI9r'
tar -xvzf trans_ende_12-1_0.2.tar.gz
tar -xvzf trans_ende-dist_12-1_0.2.tar.gz
tar -xvzf trans_ende-dist_6-1_0.2.tar.gz
gdown 'https://drive.google.com/uc?id=1mNufoynJ9-Zy1kJh2TA_lHm2squji0i9'
gdown 'https://drive.google.com/uc?id=1iO7um-HWoNoRKDtw27YUSgyeubn9uXqj'
tar -xvzf wmt16.en-de.deep-shallow.dist.tar.gz
tar -xvzf wmt16.en-de.deep-shallow.tar.gz
cp wmt16.en-de.deep-shallow/data-bin/dict.*.txt trans_ende_12-1_0.2
cp wmt16.en-de.deep-shallow.dist/data-bin/dict.*.txt trans_ende-dist_12-1_0.2
cp wmt16.en-de.deep-shallow.dist/data-bin/dict.*.txt trans_ende-dist_6-1_0.2
cp wmt16.en-de.deep-shallow/bpecodes trans_ende_12-1_0.2
cp wmt16.en-de.deep-shallow.dist/bpecodes trans_ende-dist_12-1_0.2
cp wmt16.en-de.deep-shallow.dist/bpecodes trans_ende-dist_6-1_0.2
# another set wmt19-6-6-de-en
gdown 'https://drive.google.com/uc?id=1j6z9fYdlUyOYsh7KJoumRlr1yHczxR5T'
gdown 'https://drive.google.com/uc?id=1yT7ZjqfvUYOBXvMjeY8uGRHQFWoSo8Q5'
gdown 'https://drive.google.com/uc?id=15gAzHeRUCs-QV8vHeTReMPEh1j8excNE'
tar -xvzf wmt19.de-en.tar.gz
tar -xvzf wmt19_deen_base_dr0.1_1.tar.gz
tar -xvzf wmt19_deen_big_dr0.1_2.tar.gz
cp wmt19.de-en/data-bin/dict.en.txt wmt19_deen_base_dr0.1_1
cp wmt19.de-en/data-bin/dict.en.txt wmt19_deen_big_dr0.1_2
cp wmt19.de-en/data-bin/dict.de.txt wmt19_deen_base_dr0.1_1
cp wmt19.de-en/data-bin/dict.de.txt wmt19_deen_big_dr0.1_2
cd -
# run conversions and uploads
# wmt16-en-de set
PYTHONPATH="src" python src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py --fsmt_checkpoint_path data/trans_ende-dist_12-1_0.2 --pytorch_dump_folder_path data/fsmt-wmt16-en-de-dist-12-1
PYTHONPATH="src" python src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py --fsmt_checkpoint_path data/trans_ende-dist_6-1_0.2 --pytorch_dump_folder_path data/fsmt-wmt16-en-de-dist-6-1
PYTHONPATH="src" python src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py --fsmt_checkpoint_path data/trans_ende_12-1_0.2 --pytorch_dump_folder_path data/fsmt-wmt16-en-de-12-1
# wmt19-de-en set
PYTHONPATH="src" python src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py --fsmt_checkpoint_path data/wmt19_deen_base_dr0.1_1 --pytorch_dump_folder_path data/fsmt-wmt19-de-en-6-6-base
PYTHONPATH="src" python src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py --fsmt_checkpoint_path data/wmt19_deen_big_dr0.1_2 --pytorch_dump_folder_path data/fsmt-wmt19-de-en-6-6-big
```
Eval:
```
git clone https://github.com/huggingface/transformers
cd transformers
export PAIR=en-de
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=64
export NUM_BEAMS=5
mkdir -p $DATA_DIR
sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target
MODEL_PATH=/code/huggingface/transformers-fair-wmt/data/fsmt-wmt16-en-de-dist-12-1
echo $PAIR $MODEL_PATH
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py $MODEL_PATH $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
MODEL_PATH=/code/huggingface/transformers-fair-wmt/data/fsmt-wmt16-en-de-dist-6-1
echo $PAIR $MODEL_PATH
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py $MODEL_PATH $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
MODEL_PATH=/code/huggingface/transformers-fair-wmt/data/fsmt-wmt16-en-de-12-1
echo $PAIR $MODEL_PATH
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py $MODEL_PATH $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
# wmt19-de-en set
export PAIR=de-en
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=64
export NUM_BEAMS=5
mkdir -p $DATA_DIR
sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target
MODEL_PATH=/code/huggingface/transformers-fair-wmt/data/fsmt-wmt19-de-en-6-6-base
echo $PAIR $MODEL_PATH
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py $MODEL_PATH $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
MODEL_PATH=/code/huggingface/transformers-fair-wmt/data/fsmt-wmt19-de-en-6-6-big
echo $PAIR $MODEL_PATH
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py $MODEL_PATH $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
```
So perhaps:
```
model_scripts/
arch/
model1-build.sh
model1-eval.sh
model2-build.sh
model2-eval.sh
[...]
```
So in the case of the above scripts, they could be:
```
model_scripts/fsmt/fairseq-build.sh
model_scripts/fsmt/fairseq-eval.sh
model_scripts/fsmt/allennlp-build.sh
model_scripts/fsmt/allennlp-eval.sh
```
Thoughts?
Of course, I could just start with this proposal as a PR and we can adjust from there.
Thank you.
([fsmt](https://github.com/huggingface/transformers/pull/6940) is not yet merged, in case you wonder about an unfamiliar name) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7092/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7091 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7091/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7091/comments | https://api.github.com/repos/huggingface/transformers/issues/7091/events | https://github.com/huggingface/transformers/issues/7091 | 700,302,508 | MDU6SXNzdWU3MDAzMDI1MDg= | 7,091 | is the config argument necessary for the XXModel.from_pretrained method? And when is it needed? | {
"login": "xixiaoyao",
"id": 24541791,
"node_id": "MDQ6VXNlcjI0NTQxNzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xixiaoyao",
"html_url": "https://github.com/xixiaoyao",
"followers_url": "https://api.github.com/users/xixiaoyao/followers",
"following_url": "https://api.github.com/users/xixiaoyao/following{/other_user}",
"gists_url": "https://api.github.com/users/xixiaoyao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xixiaoyao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xixiaoyao/subscriptions",
"organizations_url": "https://api.github.com/users/xixiaoyao/orgs",
"repos_url": "https://api.github.com/users/xixiaoyao/repos",
"events_url": "https://api.github.com/users/xixiaoyao/events{/privacy}",
"received_events_url": "https://api.github.com/users/xixiaoyao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @xixiaoyao \r\nThe `config` argument is not required for `from_pretrained`, when `config` is not passed it loads the saved config from the model dir. However, it's necessary to pass `config` when you wan't to override some config value, like setting `gradient_checkpointing` to `True`. Here's how you can override the `config` and pass it to `from_pretrained`.\r\n\r\n```python\r\nconfig = XXConfig.from_pretrained(path, gradient_checkpointing=True)\r\nmodel = XXModel.from_pretrained(path, config=config)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,599 | 1,606 | 1,606 | NONE | null | # ❓ Questions & Help
I find that the model can be loaded with the `XXModel.from_pretrained` function even if no `config` argument is given. But if a `config` is given, the `gradient_checkpointing` argument cannot be enabled. I wonder why passing the `config` argument prevents `gradient_checkpointing` from being enabled?
I tested this with the Longformer model. Thanks!
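For reference, a short sketch of the two loading paths being compared (written against transformers 3.1; whether `gradient_checkpointing` actually takes effect in each path is exactly the question here):

```python
from transformers import LongformerConfig, LongformerModel

# Path 1: no explicit config -- extra kwargs are folded into the loaded config.
model_a = LongformerModel.from_pretrained(
    "allenai/longformer-base-4096", gradient_checkpointing=True
)

# Path 2: explicit config -- set the flag on the config itself before passing
# it in, since `config=` freezes whatever values the config was built with.
config = LongformerConfig.from_pretrained(
    "allenai/longformer-base-4096", gradient_checkpointing=True
)
model_b = LongformerModel.from_pretrained(
    "allenai/longformer-base-4096", config=config
)
```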
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7091/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7090 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7090/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7090/comments | https://api.github.com/repos/huggingface/transformers/issues/7090/events | https://github.com/huggingface/transformers/issues/7090 | 700,289,841 | MDU6SXNzdWU3MDAyODk4NDE= | 7,090 | TypeError: __init__() got an unexpected keyword argument 'gradient_checkpointing' | {
"login": "xixiaoyao",
"id": 24541791,
"node_id": "MDQ6VXNlcjI0NTQxNzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xixiaoyao",
"html_url": "https://github.com/xixiaoyao",
"followers_url": "https://api.github.com/users/xixiaoyao/followers",
"following_url": "https://api.github.com/users/xixiaoyao/following{/other_user}",
"gists_url": "https://api.github.com/users/xixiaoyao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xixiaoyao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xixiaoyao/subscriptions",
"organizations_url": "https://api.github.com/users/xixiaoyao/orgs",
"repos_url": "https://api.github.com/users/xixiaoyao/repos",
"events_url": "https://api.github.com/users/xixiaoyao/events{/privacy}",
"received_events_url": "https://api.github.com/users/xixiaoyao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @xixiaoyao,\r\n\r\nSorry I cannot reproduce the error. Both when running `run_squad.py` and this snippet (which is essentially the same as in `run_squad.py`):\r\n\r\n```python\r\nfrom transformers import AutoModelForQuestionAnswering\r\nmodel = AutoModelForQuestionAnswering.from_pretrained(\"allenai/longformer-base-4096\", gradient_checkpointing=True)\r\n```\r\n\r\nI do not get any error. Can you post a code snippet (at short as possible) that I could copy paste to reproduce the error? \r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,599 | 1,606 | 1,606 | NONE | null | ## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-3.10.0_3-0-0-17-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
Longformer/Reformer: @patrickvonplaten
## Information
Model I am using (Longformer):
The problem arises when using:
* [x] the official example scripts: (give details below)
The task I am working on is:
* [x] an official GLUE/SQuAD task: SQuAD v1.1
## To reproduce
Steps to reproduce the behavior:
1. add the `gradient_checkpointing` argument to `AutoModelForQuestionAnswering` in examples/question-answering/run_squad.py
2. run with longformer-base-4096
The runtime error is as below:
```
File "run_squad.py", line 821, in <module>
main()
File "run_squad.py", line 739, in main
gradient_checkpointing=True,
File "/opt/conda/lib/python3.7/site-packages/transformers/modeling_utils.py", line 852, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
TypeError: __init__() got an unexpected keyword argument 'gradient_checkpointing'
```
Even if you explicitly replace AutoModel with LongformerModel as follows, the error is the same.

When I run in python/ipython interactive mode, the model loads successfully.

And I have ensured the Python environment is the same across these two runs.
## Expected behavior
Longformer can be loaded with gradient checkpointing
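As a hypothetical debugging step (not part of the original report), one way to check that the failing script resolves the same transformers installation as the interactive session, and that its config class knows the flag:

```python
# Print which transformers installation is picked up and whether its
# Longformer config exposes the gradient_checkpointing attribute.
import transformers

print(transformers.__version__, transformers.__file__)
print(hasattr(transformers.LongformerConfig(), "gradient_checkpointing"))
```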
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7090/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7089 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7089/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7089/comments | https://api.github.com/repos/huggingface/transformers/issues/7089/events | https://github.com/huggingface/transformers/pull/7089 | 700,282,012 | MDExOlB1bGxSZXF1ZXN0NDg1OTQyNjgz | 7,089 | German electra model card v3 update | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,599 | 1,600 | 1,600 | CONTRIBUTOR | null | Model Card Update | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7089/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7089",
"html_url": "https://github.com/huggingface/transformers/pull/7089",
"diff_url": "https://github.com/huggingface/transformers/pull/7089.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7089.patch",
"merged_at": 1600174094000
} |
https://api.github.com/repos/huggingface/transformers/issues/7088 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7088/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7088/comments | https://api.github.com/repos/huggingface/transformers/issues/7088/events | https://github.com/huggingface/transformers/issues/7088 | 700,227,136 | MDU6SXNzdWU3MDAyMjcxMzY= | 7,088 | train/eval step results log not shown in terminal for tf_trainer.py | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You are user the wrong logger I think. Transformers now uses a centralized logger so you should go:\r\n```\r\nimport transformers\r\ntransformers.logging.set_verbosity_info()\r\n```\r\nto set the verbosity to the INFO level. Not sure it there is an additional issue in tf_trainer or not.",
"> \r\n> \r\n> You are user the wrong logger I think. Transformers now uses a centralized logger so you should go:\r\n> \r\n> ```\r\n> import transformers\r\n> transformers.logging.set_verbosity_info()\r\n> ```\r\n> \r\n> to set the verbosity to the INFO level. Not sure it there is an additional issue in tf_trainer or not.\r\n\r\n@sgugger , but I am testing `run_tf_glue.py`, not my own script. I assume it should work directly, no? Other scripts like `run_tf_ner.py` also have the same issue. And it seems there is no logging level command-line argument to specify.",
"As @sgugger said, all the examples uses the Python logging lib instead of the HF wrapper.\n\nI will do a fix early next week.",
"@sgugger \r\n\r\nWhile I worked on #7125, I found that in `trainer.py`, I found that\r\n\r\n logger.info(\" Continuing training from checkpoint, will skip to saved global_step\")\r\n logger.info(\" Continuing training from epoch %d\", epochs_trained)\r\n logger.info(\" Continuing training from global step %d\", self.global_step)\r\n logger.info(\" Continuing training from %d non-embedding floating-point operations\", self.total_flos)\r\n logger.info(\" Will skip the first %d steps in the first epoch\", steps_trained_in_current_epoch)\r\n\r\nare not shown while I launched example scripts like `run_glue.py`.",
"Yes the default log level is `warning`, you have to change it in the script to info if you want, by adding the line:\r\n```\r\nlogging.set_verbosity_info()\r\n```\r\nOr if you always want info, there is an env variable you can set called `TRANSFORMERS_VERBOSITY` (set it to 20 for info level).",
"There is a small issue with the HF logger. I'm currently working on it and checking with @LysandreJik ",
"No problem. If this is the default behavior expected (at least for pytorch trainer), I am fine.",
"@jplu As you might know, I open this issue, but I don't necessary have the whole context. So I leave you to decide the desired behavior for tf_trainer.",
"alternatively you can simply modify the following line in transormers/utils/logging.py to change the default behaviour:\r\n\r\nfrom `_default_log_level = logging.WARNING ` to `_default_log_level = logging.INFO`",
"A fix has been pushed, you just have to write `transformers.logging.set_verbosity_info()`",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,599 | 1,607 | 1,607 | COLLABORATOR | null | ## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-5.4.0-42-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Trainer: @sgugger
tensorflow: @jplu
@LysandreJik
## Information
In the current code, where `logger.setLevel(logging.INFO)` is not set in `trainer_tf.py`, the output is:
09/12/2020 03:42:41 - INFO - absl - Load dataset info from /home/imo/tensorflow_datasets/glue/sst2/1.0.0
09/12/2020 03:42:41 - INFO - absl - Reusing dataset glue (/home/imo/tensorflow_datasets/glue/sst2/1.0.0)
09/12/2020 03:42:41 - INFO - absl - Constructing tf.data.Dataset for split validation, from /home/imo/tensorflow_datasets/glue/sst2/1.0.0
2020-09-12 03:42:57.010229: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:184] Filling up shuffle buffer (this may take a while): 41707 of 67349
2020-09-12 03:43:03.412045: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:233] Shuffle buffer filled.
2020-09-12 03:43:56.636791: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:184] Filling up shuffle buffer (this may take a while): 36279 of 67349
2020-09-12 03:44:04.474751: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:233] Shuffle buffer filled.
09/12/2020 03:44:51 - INFO - __main__ - *** Evaluate ***
09/12/2020 03:45:02 - INFO - __main__ - ***** Eval results *****
09/12/2020 03:45:02 - INFO - __main__ - eval_loss = 0.712074209790711
09/12/2020 03:45:02 - INFO - __main__ - eval_acc = 0.48977272727272725
You can see that the train/eval step logs are not shown.
If I manually set `logger.setLevel(logging.INFO)` in `trainer_tf.py`, the output is:
09/12/2020 06:04:39 - INFO - absl - Load dataset info from /home/imo/tensorflow_datasets/glue/sst2/1.0.0
09/12/2020 06:04:39 - INFO - absl - Reusing dataset glue (/home/imo/tensorflow_datasets/glue/sst2/1.0.0)
09/12/2020 06:04:39 - INFO - absl - Constructing tf.data.Dataset for split validation, from /home/imo/tensorflow_datasets/glue/sst2/1.0.0
You are instantiating a Trainer but W&B is not installed. To use wandb logging, run `pip install wandb; wandb login` see https://docs.wandb.com/huggingface.
To use comet_ml logging, run `pip/conda install comet_ml` see https://www.comet.ml/docs/python-sdk/huggingface/
***** Running training *****
Num examples = 67349
Num Epochs = 1
Instantaneous batch size per device = 4
Total train batch size (w. parallel, distributed & accumulation) = 4
Gradient Accumulation steps = 1
Steps per epoch = 4
Total optimization steps = 4
2020-09-12 06:04:49.637373: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:184] Filling up shuffle buffer (this may take a while): 39626 of 67349
2020-09-12 06:04:56.805687: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:233] Shuffle buffer filled.
{'loss': 0.6994307, 'learning_rate': 3.7499998e-05, 'epoch': 0.5, 'step': 1}
{'loss': 0.6897122, 'learning_rate': 2.5e-05, 'epoch': 0.75, 'step': 2}
Saving checkpoint for step 2 at ./sst-2/checkpoint/ckpt-1
{'loss': 0.683386, 'learning_rate': 1.25e-05, 'epoch': 1.0, 'step': 3}
{'loss': 0.68290234, 'learning_rate': 0.0, 'epoch': 1.25, 'step': 4}
Saving checkpoint for step 4 at ./sst-2/checkpoint/ckpt-2
Training took: 0:00:43.099437
Saving model in ./sst-2/
09/12/2020 06:05:26 - INFO - __main__ - *** Evaluate ***
***** Running Evaluation *****
Num examples = 872
Batch size = 8
{'eval_loss': 0.6990196158032899, 'eval_acc': 0.49204545454545456, 'epoch': 1.25, 'step': 4}
09/12/2020 06:05:35 - INFO - __main__ - ***** Eval results *****
09/12/2020 06:05:35 - INFO - __main__ - eval_loss = 0.6990196158032899
09/12/2020 06:05:35 - INFO - __main__ - eval_acc = 0.49204545454545456
We see more information like
{'loss': 0.6994307, 'learning_rate': 3.7499998e-05, 'epoch': 0.5, 'step': 1}
More importantly, we also see this message
You are instantiating a Trainer but W&B is not installed. To use wandb logging, run `pip install wandb; wandb login` see https://docs.wandb.com/huggingface.
To use comet_ml logging, run `pip/conda install comet_ml` see https://www.comet.ml/docs/python-sdk/huggingface/
This message won't be shown if the logging level is not set to INFO.
## Related
In the PR #6097, @LysandreJik changed `logger.info(output)` to `print(output)` in `trainer.py` in order to show logs on the screen.
Maybe we should do the same thing for `tf_trainer.py`. If not, could we set the logging level to INFO in `tf_trainer.py`? That would, however, differ from `trainer.py`, where the logging level is not set (at least, not in the trainer script).
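For reference, a minimal sketch of raising the verbosity through the centralized `transformers` logger, which the comments below point to as the supported mechanism:
```python
import transformers

# raise the library-wide log level so the train/eval step logs are emitted
transformers.logging.set_verbosity_info()
```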
## To reproduce
python3 run_tf_glue.py \
--task_name sst-2 \
--model_name_or_path distilbert-base-uncased \
--output_dir ./sst-2/ \
--max_seq_length 16 \
--num_train_epochs 2 \
--per_device_train_batch_size 4 \
--gradient_accumulation_steps 1 \
--max_steps 4 \
--logging_steps 1 \
--save_steps 2 \
--seed 1 \
--do_train \
--do_eval \
--do_predict \
--overwrite_output_dir
## Expected behavior
I expect the train/eval step logs to be shown on the screen.
## Remark
I can make a PR once a decision is made by the team. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7088/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7087 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7087/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7087/comments | https://api.github.com/repos/huggingface/transformers/issues/7087/events | https://github.com/huggingface/transformers/pull/7087 | 700,173,107 | MDExOlB1bGxSZXF1ZXN0NDg1ODQ5NTk5 | 7,087 | Transformer-XL: Remove unused parameters | {
"login": "RafaelWO",
"id": 38643099,
"node_id": "MDQ6VXNlcjM4NjQzMDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/38643099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RafaelWO",
"html_url": "https://github.com/RafaelWO",
"followers_url": "https://api.github.com/users/RafaelWO/followers",
"following_url": "https://api.github.com/users/RafaelWO/following{/other_user}",
"gists_url": "https://api.github.com/users/RafaelWO/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RafaelWO/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RafaelWO/subscriptions",
"organizations_url": "https://api.github.com/users/RafaelWO/orgs",
"repos_url": "https://api.github.com/users/RafaelWO/repos",
"events_url": "https://api.github.com/users/RafaelWO/events{/privacy}",
"received_events_url": "https://api.github.com/users/RafaelWO/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As far I can see only the test `tests/test_trainer.py` is failing...",
"Hey! We changed the name of `nlp` to `datasets` from Thursday to Friday and I am pretty sure that's what's causing the CI bug, but it was fixed just afterwards. Could you pull the latest changes from master and rebase on your branch so you get the fix?",
"Sure!\r\n\r\nIs there a way to avoid that all \"new\" commits from master will end up in this PR even if they are already in the master branch? So that after the rebase still only my changes are listed in this PR?",
"I am not sure, as I haven't had this issue before (I remember your previous PR did). But using `git pull --rebase` on your local branch should do the trick.",
"Thanks I will try that!",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7087?src=pr&el=h1) Report\n> Merging [#7087](https://codecov.io/gh/huggingface/transformers/pull/7087?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/85ffda96fcadf70d2558ba0a59c84b9f5a2d6f0f?el=desc) will **increase** coverage by `0.20%`.\n> The diff coverage is `50.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7087?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7087 +/- ##\n==========================================\n+ Coverage 78.44% 78.65% +0.20% \n==========================================\n Files 168 168 \n Lines 32309 32306 -3 \n==========================================\n+ Hits 25346 25411 +65 \n+ Misses 6963 6895 -68 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7087?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/configuration\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3RyYW5zZm9feGwucHk=) | `87.03% <0.00%> (-2.06%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `88.10% <57.14%> (-0.03%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `79.73% <57.14%> (-0.04%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `34.03% <0.00%> (-64.79%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/7087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.85% <0.00%> (-55.15%)` | :arrow_down: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/7087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `34.28% <0.00%> (-48.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/7087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `57.14% <0.00%> (-39.69%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| ... and [22 more](https://codecov.io/gh/huggingface/transformers/pull/7087/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7087?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7087?src=pr&el=footer). Last update [b00cafb...775bc0a](https://codecov.io/gh/huggingface/transformers/pull/7087?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I am unsure whether it's better, in terms of user experience, to add default values to `tgt_len` and `ext_len` in `reset_length` and a deprecation warning or to change the name of the function altogether to `reset_memory_length` as you do. @LysandreJik could you take a look at this?",
"@TevenLeScao Since we remove `tgt_len` and `ext_len` from the model anyway I thought that the name `reset_memory_length` gives a better description of what the function is actually doing.",
"Alright, LGTM!"
] | 1,599 | 1,600 | 1,600 | CONTRIBUTOR | null | Fixes #6943
Since `tgt_len` and `ext_len` are removed, the method `reset_length` was renamed to `reset_memory_length` to be more meaningful.
I'm not sure whether a deprecation warning should be included in `TransfoXLConfig`; maybe someone can make suggestions regarding this.
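For reference, a minimal sketch of what such a deprecation shim could look like (its placement and wording are assumptions on my part, not part of this PR):
```python
import warnings

def reset_length(self, tgt_len, ext_len, mem_len):
    # hypothetical backward-compatibility wrapper around the renamed method
    warnings.warn(
        "`reset_length` is deprecated, use `reset_memory_length` instead.",
        FutureWarning,
    )
    self.reset_memory_length(mem_len)
```
 | {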
"url": "https://api.github.com/repos/huggingface/transformers/issues/7087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7087/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7087",
"html_url": "https://github.com/huggingface/transformers/pull/7087",
"diff_url": "https://github.com/huggingface/transformers/pull/7087.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7087.patch",
"merged_at": 1600337435000
} |
https://api.github.com/repos/huggingface/transformers/issues/7086 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7086/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7086/comments | https://api.github.com/repos/huggingface/transformers/issues/7086/events | https://github.com/huggingface/transformers/issues/7086 | 700,138,934 | MDU6SXNzdWU3MDAxMzg5MzQ= | 7,086 | Longformer run error | {
"login": "Yangxiaojun1230",
"id": 59246446,
"node_id": "MDQ6VXNlcjU5MjQ2NDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/59246446?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yangxiaojun1230",
"html_url": "https://github.com/Yangxiaojun1230",
"followers_url": "https://api.github.com/users/Yangxiaojun1230/followers",
"following_url": "https://api.github.com/users/Yangxiaojun1230/following{/other_user}",
"gists_url": "https://api.github.com/users/Yangxiaojun1230/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yangxiaojun1230/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yangxiaojun1230/subscriptions",
"organizations_url": "https://api.github.com/users/Yangxiaojun1230/orgs",
"repos_url": "https://api.github.com/users/Yangxiaojun1230/repos",
"events_url": "https://api.github.com/users/Yangxiaojun1230/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yangxiaojun1230/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"i fixed finally",
"> i fixed finally\r\n\r\nWhat was the solution?",
"@Yangxiaojun1230 How to fix it? I meet this problem too.",
"For my case, the problem is due to the miss correct of the length padding"
] | 1,599 | 1,607 | 1,600 | NONE | null | # ❓ Questions & Help
## Details
When I train a classification model with Longformer:
```python
def forward(self, input):
    embding = input['enc']
    att_mask = input['mask']
    # give a few positions global attention
    att_mask[:, [100, 300, 500, 800, 1200]] = 2
    labels = input['targets']
    print('jeff:', embding.device, att_mask.device, self.l1.device,
          embding.shape, att_mask.shape, self.maxlen)
    logit = self.l1(inputs_embeds=embding, attention_mask=att_mask)  # [:2]
    return [logit, labels]
```
I get the following error:
pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [205,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed
max_num_extra_indices_per_batch = num_extra_indices_per_batch.max()
RuntimeError: cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/THC/THCReduceAll.cuh:327
I checked that the length of the attention_mask is the same as config.max_len, i.e. it has shape [bs, max_len].
Does anyone meet the same issue?
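Update: this was resolved in my case by fixing the length padding. A minimal sketch of padding the inputs consistently (the tensor names follow the snippet above; the exact fix may differ):
```python
import torch.nn.functional as F

pad = self.maxlen - embding.shape[1]
if pad > 0:
    # zero-pad the embeddings along the sequence dimension
    embding = F.pad(embding, (0, 0, 0, pad))
    # mask out the padded positions (0 = no attention)
    att_mask = F.pad(att_mask, (0, pad), value=0)
```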
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7086/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7085 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7085/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7085/comments | https://api.github.com/repos/huggingface/transformers/issues/7085/events | https://github.com/huggingface/transformers/issues/7085 | 700,092,309 | MDU6SXNzdWU3MDAwOTIzMDk= | 7,085 | Distilbart's summaries start with an empty space? | {
"login": "songwang41",
"id": 6013961,
"node_id": "MDQ6VXNlcjYwMTM5NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6013961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songwang41",
"html_url": "https://github.com/songwang41",
"followers_url": "https://api.github.com/users/songwang41/followers",
"following_url": "https://api.github.com/users/songwang41/following{/other_user}",
"gists_url": "https://api.github.com/users/songwang41/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songwang41/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songwang41/subscriptions",
"organizations_url": "https://api.github.com/users/songwang41/orgs",
"repos_url": "https://api.github.com/users/songwang41/repos",
"events_url": "https://api.github.com/users/songwang41/events{/privacy}",
"received_events_url": "https://api.github.com/users/songwang41/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"I think i figure it out now. in BART, \" The\", \" Republic\" are individual words.",
"I'm having a similar issue as @songwanguw where each line of my output summaries start with a blank space when using Distilbart. This does not happen when I use facebook/bart-large-cnn.",
"Unfortunately, I don't know any way to fix this besides postprocessing or retraining.\r\nYou're using the model correctly. I probably trained on targets with an extra prefixed space. My bad!\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,599 | 1,607 | 1,607 | NONE | null | ## Environment info
- `transformers` version: 2.11.0
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): 1.5.1, GPU
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
@sshleifer
Steps to reproduce the behavior:
```
from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig
model = BartForConditionalGeneration.from_pretrained('sshleifer/distilbart-xsum-12-3')
tokenizer = BartTokenizer.from_pretrained('sshleifer/distilbart-xsum-12-3')
ARTICLE_TO_SUMMARIZE = "\"The accident meant the motorway was closed, making travel to Mourneview Park impossible for the team and fans travelling from Belfast, \" said the Irish Football Association . A new date for the match has yet to be confirmed by Uefa . Northern Ireland have three points from their first two Group Six qualifiers."
inputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], max_length=512, return_tensors='pt')
summary_ids = model.generate(inputs['input_ids'], num_beams=5, max_length=62, min_length=10, early_stopping=True)
print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in summary_ids])
```
[" The Republic of Ireland's Euro 2016 qualifier against Northern Ireland has been postponed because of a motorway crash."]
facebook/bart-large-xsum gives a correct summary. Why does this summary start with an empty space? What should we pay attention to regarding this inconsistency?
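Until the model is retrained, the postprocessing workaround suggested in the comments is simply to strip the decoded strings, e.g.:
```python
summaries = [
    tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True).strip()
    for g in summary_ids
]
```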
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7085/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7084 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7084/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7084/comments | https://api.github.com/repos/huggingface/transformers/issues/7084/events | https://github.com/huggingface/transformers/issues/7084 | 700,089,400 | MDU6SXNzdWU3MDAwODk0MDA= | 7,084 | How to implement LayoutLM for information extraction | {
"login": "SandyRSK",
"id": 49015499,
"node_id": "MDQ6VXNlcjQ5MDE1NDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49015499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SandyRSK",
"html_url": "https://github.com/SandyRSK",
"followers_url": "https://api.github.com/users/SandyRSK/followers",
"following_url": "https://api.github.com/users/SandyRSK/following{/other_user}",
"gists_url": "https://api.github.com/users/SandyRSK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SandyRSK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SandyRSK/subscriptions",
"organizations_url": "https://api.github.com/users/SandyRSK/orgs",
"repos_url": "https://api.github.com/users/SandyRSK/repos",
"events_url": "https://api.github.com/users/SandyRSK/events{/privacy}",
"received_events_url": "https://api.github.com/users/SandyRSK/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"I think the model's integration is still a work-in-progress @SandyRSK, but will let model author @liminghao1630 chime in if necessary",
"Similar help required. I want to use the layoutLM model finetuned on DocBank data. As per my understanding, this will be a token classification task, but any example code will be extremely helpful.\r\nThanks ",
"@SandyRSK the integration is still on-going. You may refer to https://github.com/microsoft/unilm/tree/master/layoutlm if you want to use the model right now.",
"Is this still work-in-progress?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi, is it possible to load the docbank pretrained model in your implementation?",
"It should be possible to load the LayoutLM weights from the official repository in ours. Please feel free to try and upload it on the hub! \r\n\r\nLet us know if you run into trouble by opening a new issue and we'll take a look.",
"Thanks! I'm just worried about the changes you made (all the classes commented with `Copied from transformers.models.bert.modeling_bert.BertEncoder with Bert->LayoutLM`): https://huggingface.co/transformers/_modules/transformers/models/layoutlm/modeling_layoutlm.html",
"Right, this is for maintainability purposes. Instead of refactoring models in shared layers, we instead keep the entire forward pass in a single file. Doing it this way makes it easy to edit a single file or to read a paper side by side with the code, whereas refactored code would be harder to navigate in. These comments allow us to ensure that the files do not diverge: we have tests that check that the contents of `LayoutEncoder` is identical to `BertEncoder`, with the only change being utterances of \"bert\" changed to \"layoutlm\".\r\n\r\nAs you can see, the differences between BERT and LayoutLM are especially the embeddings.",
"Hi, I loaded the pretained DocBank model, which is a LayoutLM from the original unilm repository, using the LayoutLM from HuggingFace. \r\n\r\nI get a warning of the form:\r\n\r\n```\r\nSome weights of the model checkpoint at layoutlm_large_500k_epoch_1/ were not used when initializing LayoutLMForTokenClassification: ['bert.embeddings.word_embeddings.weight' [...]\r\n- This IS expected if you are initializing LayoutLMForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing LayoutLMForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of LayoutLMForTokenClassification were not initialized from the model checkpoint at layoutlm_large_500k_epoch_1/ and are newly initialized: ['embeddings.word_embeddings.weight' [...]\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```\r\n\r\nFor the full warning see https://paste.ofcode.org/WxFiUf8bYL3TEjc9jLAG4J\r\n\r\nHowever, even after running model.eval(), I get random outputs. \r\n\r\nEdit:\r\nMoreover, the outputs don't seem to make any sense.\r\nFinally, >90% tokens always get the predicted label. But this label changes at each inference pass."
] | 1,599 | 1,612 | 1,610 | NONE | null | Hi,
I am new to NLP.
Can someone please guide me on how to implement LayoutLM using transformers for information extraction (from images like receipts)?
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlm-large-uncased")
model = AutoModel.from_pretrained("microsoft/layoutlm-large-uncased")
```
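Since the LayoutLM integration has since landed (see the comments below), a minimal token-classification sketch could look like the following; the words, boxes, and label count here are made-up for illustration:
```python
import torch
from transformers import LayoutLMTokenizer, LayoutLMForTokenClassification

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForTokenClassification.from_pretrained(
    "microsoft/layoutlm-base-uncased", num_labels=5  # hypothetical label set
)

words = ["Total", "$12.00"]                           # OCR words from a receipt
boxes = [[637, 773, 693, 782], [695, 773, 749, 782]]  # boxes normalized to 0-1000

# repeat each word's bounding box for every wordpiece it is split into
token_boxes = []
for word, box in zip(words, boxes):
    token_boxes.extend([box] * len(tokenizer.tokenize(word)))
# add boxes for the [CLS] and [SEP] special tokens
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]

encoding = tokenizer(" ".join(words), return_tensors="pt")
outputs = model(
    input_ids=encoding["input_ids"],
    attention_mask=encoding["attention_mask"],
    bbox=torch.tensor([token_boxes]),
)
predictions = outputs.logits.argmax(-1)  # one predicted label id per token
```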
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7084/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7083 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7083/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7083/comments | https://api.github.com/repos/huggingface/transformers/issues/7083/events | https://github.com/huggingface/transformers/pull/7083 | 699,960,602 | MDExOlB1bGxSZXF1ZXN0NDg1NjU1ODU3 | 7,083 | SqueezeBERT architecture | {
"login": "forresti",
"id": 2020010,
"node_id": "MDQ6VXNlcjIwMjAwMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2020010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forresti",
"html_url": "https://github.com/forresti",
"followers_url": "https://api.github.com/users/forresti/followers",
"following_url": "https://api.github.com/users/forresti/following{/other_user}",
"gists_url": "https://api.github.com/users/forresti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forresti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forresti/subscriptions",
"organizations_url": "https://api.github.com/users/forresti/orgs",
"repos_url": "https://api.github.com/users/forresti/repos",
"events_url": "https://api.github.com/users/forresti/events{/privacy}",
"received_events_url": "https://api.github.com/users/forresti/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7083?src=pr&el=h1) Report\n> Merging [#7083](https://codecov.io/gh/huggingface/transformers/pull/7083?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/de4d7b004a24e4bb087eb46d742ea7939bc74644?el=desc) will **increase** coverage by `0.97%`.\n> The diff coverage is `97.70%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7083?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7083 +/- ##\n==========================================\n+ Coverage 77.00% 77.98% +0.97% \n==========================================\n Files 184 184 \n Lines 36734 36216 -518 \n==========================================\n- Hits 28288 28244 -44 \n+ Misses 8446 7972 -474 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7083?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_squeezebert.py](https://codecov.io/gh/huggingface/transformers/pull/7083/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19zcXVlZXplYmVydC5weQ==) | `97.38% <97.38%> (ΓΈ)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/7083/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.39% <100.00%> (ΓΈ)` | |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7083/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `96.34% <100.00%> (ΓΈ)` | |\n| [src/transformers/configuration\\_squeezebert.py](https://codecov.io/gh/huggingface/transformers/pull/7083/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3NxdWVlemViZXJ0LnB5) | `100.00% <100.00%> (ΓΈ)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7083/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `86.08% <100.00%> (-1.04%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7083/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `92.64% <100.00%> (ΓΈ)` | |\n| [src/transformers/tokenization\\_squeezebert.py](https://codecov.io/gh/huggingface/transformers/pull/7083/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fc3F1ZWV6ZWJlcnQucHk=) | `100.00% <100.00%> (ΓΈ)` | |\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7083/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `17.46% <0.00%> (-81.13%)` | :arrow_down: |\n| [src/transformers/tokenization\\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7083/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `20.34% <0.00%> (-74.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7083/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.03% <0.00%> (-73.03%)` | :arrow_down: |\n| ... and [39 more](https://codecov.io/gh/huggingface/transformers/pull/7083/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7083?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7083?src=pr&el=footer). Last update [de4d7b0...c0521ee](https://codecov.io/gh/huggingface/transformers/pull/7083?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@sgugger @patrickvonplaten - Thanks so much for the feedback! I will make these changes later this week.",
"@sgugger @patrickvonplaten - I made all the changes that you suggested. Ready to merge?"
] | 1,599 | 1,601 | 1,601 | CONTRIBUTOR | null | # Burn-down list:
Here an overview of the general workflow:
- [x] Add model/configuration/tokenization classes.
- [ ] ~~Add conversion scripts.~~ This model was originally developed in PyTorch.
- [x] Add tests and a @slow integration test.
- [x] Document your model.
- [x] Finalize.
Let's detail what should be done at each step.
## Adding model/configuration/tokenization classes
Here is the workflow for adding model/configuration/tokenization classes:
- [x] Copy the python files from the present folder to the main folder and rename them, replacing `xxx` with your model
name.
- [x] Edit the files to replace `XXX` (with various casing) with your model name.
- [x] Copy-paste or create a simple configuration class for your model in the `configuration_...` file.
- [x] Copy-paste or create the code for your model in the `modeling_...` files (PyTorch ~~and TF 2.0~~).
- [x] Copy-paste or create a tokenizer class for your model in the `tokenization_...` file.
### loose ends:
- [ ] support for head_mask, encoder_hidden_states, encoder_attention_mask (Not planning to do for this PR)
- [x] Make sure finetuning works.
## ~~Adding conversion scripts~~
Here is the workflow for the conversion scripts:
- [ ] ~~Copy the conversion script (`convert_...`) from the present folder to the main folder.~~
- [ ] ~~Edit this script to convert your original checkpoint weights to the current pytorch ones.~~
## Adding tests:
Here is the workflow for the adding tests:
- [ ] ~~Copy the python files from the `tests` sub-folder of the present folder to the `tests` subfolder of the main
folder and rename them, replacing `xxx` with your model name.~~
- [ ] ~~Edit the tests files to replace `XXX` (with various casing) with your model name.~~
- [ ] ~~Edit the tests code as needed.~~
- [x] Create tests, using the DistilBERT tests as a starting point
- [x] `test_modeling_squeezebert.py` (based on `test_modeling_distilbert.py`)
- [x] `test_tokeization_squeezebert.py` (based on `test_tokenization_distilbert.py`)
## Documenting your model:
Here is the workflow for documentation:
- [x] Make sure all your arguments are properly documented in your configuration and tokenizer.
- [x] Most of the documentation of the models is automatically generated, you just have to make sure that
`XXX_START_DOCSTRING` contains an introduction to the model you're adding and a link to the original
article and that `XXX_INPUTS_DOCSTRING` contains all the inputs of your model.
- [x] Create a new page `xxx.rst` in the folder `docs/source/model_doc` and add this file in `docs/source/index.rst`.
Make sure to check you have no sphinx warnings when building the documentation locally and follow our
[documentation guide](https://github.com/huggingface/transformers/tree/master/docs#writing-documentation---specification).
## Final steps
You can then finish the addition step by adding imports for your classes in the common files:
- [x] Add import for all the relevant classes in `__init__.py`.
- [x] Add your configuration in `configuration_auto.py`.
- [x] Add your PyTorch and ~~TF 2.0~~ model respectively in `modeling_auto.py` ~~and `modeling_tf_auto.py`~~.
- [x] Add your tokenizer in `tokenization_auto.py`.
- [ ] ~~Add a link to your conversion script in the main conversion utility (in `commands/convert.py`)~~
- [ ] ~~Edit the PyTorch to TF 2.0 conversion script to add your model in the `convert_pytorch_checkpoint_to_tf2.py`
file.~~
- [x] Add a mention of your model in...
- [x] `README.md`
- [x] `docs/source/index.rst`
- [x] `docs/source/pretrained_models.rst`.
- [x] Upload the vocabulary files, configurations, and pretrained weights.
- squeezebert-uncased
- https://s3.amazonaws.com/models.huggingface.co/bert/squeezebert/squeezebert-uncased/vocab.txt
- https://s3.amazonaws.com/models.huggingface.co/bert/squeezebert/squeezebert-uncased/pytorch_model.bin
- https://s3.amazonaws.com/models.huggingface.co/bert/squeezebert/squeezebert-uncased/config.json
- squeezebert-mnli-headless
- https://s3.amazonaws.com/models.huggingface.co/bert/squeezebert/squeezebert-mnli-headless/vocab.txt
- https://s3.amazonaws.com/models.huggingface.co/bert/squeezebert/squeezebert-mnli-headless/pytorch_model.bin
- https://s3.amazonaws.com/models.huggingface.co/bert/squeezebert/squeezebert-mnli-headless/config.json
- squeezebert-mnli
- https://s3.amazonaws.com/models.huggingface.co/bert/squeezebert/squeezebert-mnli/vocab.txt
- https://s3.amazonaws.com/models.huggingface.co/bert/squeezebert/squeezebert-mnli/pytorch_model.bin
- https://s3.amazonaws.com/models.huggingface.co/bert/squeezebert/squeezebert-mnli/config.json
- [x] Create model card(s) for your models on huggingface.co (These go in the repo, not in the file upload).
For those last two steps, check the [model sharing documentation](https://huggingface.co/transformers/model_sharing.html).
- [ ] Delete these files (I uploaded these to the wrong directories)
- https://s3.amazonaws.com/models.huggingface.co/bert/squeezebert/squeezebert/squeezebert-uncased-vocab.txt
- https://s3.amazonaws.com/models.huggingface.co/squeezebert/squeezebert/squeezebert-uncased-config.json
- https://s3.amazonaws.com/models.huggingface.co/squeezebert/squeezebert/squeezebert-uncased.bin
- https://s3.amazonaws.com/models.huggingface.co/squeezebert/squeezebert/squeezebert-mnli.bin
- https://s3.amazonaws.com/models.huggingface.co/squeezebert/squeezebert/squeezebert-mnli-headless.bin
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7083/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7083",
"html_url": "https://github.com/huggingface/transformers/pull/7083",
"diff_url": "https://github.com/huggingface/transformers/pull/7083.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7083.patch",
"merged_at": 1601886344000
} |
https://api.github.com/repos/huggingface/transformers/issues/7082 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7082/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7082/comments | https://api.github.com/repos/huggingface/transformers/issues/7082/events | https://github.com/huggingface/transformers/issues/7082 | 699,858,141 | MDU6SXNzdWU2OTk4NTgxNDE= | 7,082 | Longformer output_hidden_states=True outputs sequence length=512 for all inputs of different lengths | {
"login": "xuewyang",
"id": 32026462,
"node_id": "MDQ6VXNlcjMyMDI2NDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/32026462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xuewyang",
"html_url": "https://github.com/xuewyang",
"followers_url": "https://api.github.com/users/xuewyang/followers",
"following_url": "https://api.github.com/users/xuewyang/following{/other_user}",
"gists_url": "https://api.github.com/users/xuewyang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xuewyang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xuewyang/subscriptions",
"organizations_url": "https://api.github.com/users/xuewyang/orgs",
"repos_url": "https://api.github.com/users/xuewyang/repos",
"events_url": "https://api.github.com/users/xuewyang/events{/privacy}",
"received_events_url": "https://api.github.com/users/xuewyang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"When I use longformer-base-4096, it works well.",
"The reason is that `LongformerModel` pads its input to the `window_size` configuration param, which is 512 for `allenai/longformer-large-4096`. Therefore all `hidden_states` are of length 512. We could think about cutting the hidden states to the actual inputt length...",
"Gotcha. But I also tested with longformer-base. It works fine with hidden states cut. Thank you."
] | 1,599 | 1,600 | 1,600 | NONE | null | Can someone explain why cc[2][0].shape != dd[2][0].shape?
```python
>>> from transformers import LongformerModel, RobertaModel
>>> import torch
>>> from transformers import RobertaTokenizer
>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> aa = RobertaModel.from_pretrained('roberta-base', return_dict=True)
>>> bb = LongformerModel.from_pretrained('allenai/longformer-large-4096', return_dict=True)
Some weights of LongformerModel were not initialized from the model checkpoint at allenai/longformer-large-4096 and are newly initialized: ['longformer.embeddings.position_ids']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> cc = aa(**inputs, output_hidden_states=True)
>>> cc[2][0].shape
torch.Size([1, 8, 768])
>>> dd = bb(**inputs, output_hidden_states=True)
>>> dd[2][0].shape
torch.Size([1, 512, 1024])
>>> dd[0].shape
torch.Size([1, 8, 1024])
```
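As the comments explain, `LongformerModel` pads its input up to the attention window size (512 for this checkpoint), so every entry of `hidden_states` has length 512 while the final output is cut back. A minimal sketch of trimming the hidden states to the real input length:
```python
seq_len = inputs["input_ids"].shape[1]
# drop the padding positions Longformer added internally
trimmed_hidden_states = [h[:, :seq_len, :] for h in dd[2]]
```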
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7082/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7081 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7081/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7081/comments | https://api.github.com/repos/huggingface/transformers/issues/7081/events | https://github.com/huggingface/transformers/pull/7081 | 699,772,799 | MDExOlB1bGxSZXF1ZXN0NDg1NDgwOTM3 | 7,081 | Clean up autoclass doc | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7081?src=pr&el=h1) Report\n> Merging [#7081](https://codecov.io/gh/huggingface/transformers/pull/7081?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ae736163d0d7a3a167ff0df3bf6c824437bbba2a?el=desc) will **increase** coverage by `0.09%`.\n> The diff coverage is `89.47%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7081?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7081 +/- ##\n==========================================\n+ Coverage 79.44% 79.54% +0.09% \n==========================================\n Files 168 168 \n Lines 32260 32285 +25 \n==========================================\n+ Hits 25630 25680 +50 \n+ Misses 6630 6605 -25 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7081?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `96.10% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `72.36% <ΓΈ> (+1.77%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `91.80% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `54.90% <71.42%> (+0.22%)` | :arrow_up: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `82.29% <100.00%> (+1.07%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.54% <0.00%> (+0.35%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+5.76%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7081?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7081?src=pr&el=footer). Last update [4cbd50e...dad4fcd](https://codecov.io/gh/huggingface/transformers/pull/7081?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,599 | 1,600 | 1,600 | COLLABORATOR | null | Clean up and standardize the documentation of the auto classes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7081/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7081",
"html_url": "https://github.com/huggingface/transformers/pull/7081",
"diff_url": "https://github.com/huggingface/transformers/pull/7081.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7081.patch",
"merged_at": 1600090001000
} |
https://api.github.com/repos/huggingface/transformers/issues/7080 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7080/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7080/comments | https://api.github.com/repos/huggingface/transformers/issues/7080/events | https://github.com/huggingface/transformers/issues/7080 | 699,628,017 | MDU6SXNzdWU2OTk2MjgwMTc= | 7,080 | Importing unittests using python unittest framework | {
"login": "nikhil1008",
"id": 7234284,
"node_id": "MDQ6VXNlcjcyMzQyODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7234284?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikhil1008",
"html_url": "https://github.com/nikhil1008",
"followers_url": "https://api.github.com/users/nikhil1008/followers",
"following_url": "https://api.github.com/users/nikhil1008/following{/other_user}",
"gists_url": "https://api.github.com/users/nikhil1008/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikhil1008/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikhil1008/subscriptions",
"organizations_url": "https://api.github.com/users/nikhil1008/orgs",
"repos_url": "https://api.github.com/users/nikhil1008/repos",
"events_url": "https://api.github.com/users/nikhil1008/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikhil1008/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Does the problem go away if you install the latest stable pytorch 1.6.0, see https://pytorch.org/get-started/locally/?\r\n\r\nAlso the test suite uses `pytest` to run the tests - have you tried using it?\r\n\r\n```\r\npytest tests\r\n```\r\nDo you get the same issue running it this way?\r\n\r\nIs it possible that your script ends up running tests in a different order from how they are normally tested on CI and uncovers some bug where several tests are dependent on each other and can only be run in a particular order? If so it might be helpful to find out the minimal sequence that leads to this error. \r\n\r\nI'm asking since the test suite is not 100% perfect, and I saw just recently that if you change the order of tests things start to fail.\r\n",
"I have run your script and didn't get this error - but got 15 other errors, most `RuntimeError: Physical devices cannot be modified after being initialized` in tf tests.\r\nMy env is:\r\n```\r\n- `transformers` version: 3.1.0\r\n- Platform: Linux-4.15.0-112-generic-x86_64-with-glibc2.10\r\n- Python version: 3.8.5\r\n- PyTorch version (GPU?): 1.7.0.dev20200910 (True)\r\n- Tensorflow version (GPU?): 2.3.0 (True)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,599 | 1,606 | 1,606 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-5.3.0-1034-azure-x86_64-with-debian-buster-sid
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0a0+8f84ded (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@sgugger
## Information
I am trying to run unit tests after cloning a fresh repo. I use the python unittest framework and try to load the tests using "unittest.defaultTestLoader". Most of the tests pass, but I get a few errors. All the errors are the same - "**cannot initialize type "_CudaDeviceProperties": an object with that name is already defined**". This problem goes away if I use the python unittest command line.
The problem arises when using:
* [x] my own modified scripts: (give details below)
I use the following script to load the tests.
```
import os
import glob
import unittest

# Collect every file name under ./tests (this assumes they are all .py files)
test_files_path = './tests/*'
test_files = [os.path.basename(x) for x in glob.glob(test_files_path)]
# Strip the ".py" suffix to build importable module names
module_strings = ['tests.' + test_file[:-3] for test_file in test_files]
print(module_strings)
# Load each module's tests and run them all in a single suite
suites = [unittest.defaultTestLoader.loadTestsFromName(name) for name in module_strings]
test_suite = unittest.TestSuite(suites)
test_runner = unittest.TextTestRunner().run(test_suite)
```
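For comparison, the same collection can be done with unittest's built-in discovery (a minimal sketch of an alternative, not the script I actually ran):
```python
import unittest

# Discover and run everything under ./tests in one go, without
# the manual glob/module-name construction above.
suite = unittest.defaultTestLoader.discover("tests", pattern="*.py")
unittest.TextTestRunner().run(suite)
```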
## To reproduce
Steps to reproduce the behavior:
1. Paste the above code in a Python file under the transformers directory.
2. Run the Python file.
## Expected behavior
All tests should pass since I just cloned a fresh repo. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7080/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7080/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7079 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7079/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7079/comments | https://api.github.com/repos/huggingface/transformers/issues/7079/events | https://github.com/huggingface/transformers/pull/7079 | 699,622,252 | MDExOlB1bGxSZXF1ZXN0NDg1MzQ0OTY3 | 7,079 | ignore FutureWarning in tests | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, I agree with this. Thanks @stas00!"
] | 1,599 | 1,600 | 1,600 | CONTRIBUTOR | null | As discussed in https://github.com/huggingface/transformers/pull/7033 we can't deal with transformers' `FutureWarning` in tests, since we have to keep those tests around until they become normal warnings and then the tests will get fixed/adjusted. So currently they just generate noise that can't be acted upon.
The only side-effect I can see is with other libraries' FutureWarnings, which will now be silenced too, but again we can easily fix those as soon as they aren't futuristic any more.
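For illustration, the core idea boils down to one line of Python (a sketch of the intent only, not necessarily this PR's exact mechanism):
```python
import warnings

# Ignore FutureWarning (e.g. transformers' own deprecation notices) for the rest of the run.
warnings.simplefilter("ignore", category=FutureWarning)
```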
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7079/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7079",
"html_url": "https://github.com/huggingface/transformers/pull/7079",
"diff_url": "https://github.com/huggingface/transformers/pull/7079.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7079.patch",
"merged_at": 1600084252000
} |
https://api.github.com/repos/huggingface/transformers/issues/7078 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7078/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7078/comments | https://api.github.com/repos/huggingface/transformers/issues/7078/events | https://github.com/huggingface/transformers/pull/7078 | 699,593,492 | MDExOlB1bGxSZXF1ZXN0NDg1MzE5MTM0 | 7,078 | [T5Tokenizer] remove prefix_tokens | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7078?src=pr&el=h1) Report\n> Merging [#7078](https://codecov.io/gh/huggingface/transformers/pull/7078?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ae736163d0d7a3a167ff0df3bf6c824437bbba2a?el=desc) will **decrease** coverage by `1.09%`.\n> The diff coverage is `77.77%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7078?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7078 +/- ##\n==========================================\n- Coverage 79.44% 78.35% -1.10% \n==========================================\n Files 168 168 \n Lines 32260 32257 -3 \n==========================================\n- Hits 25630 25274 -356 \n- Misses 6630 6983 +353 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7078?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `54.90% <71.42%> (+0.22%)` | :arrow_up: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.04% <100.00%> (-0.19%)` | :arrow_down: |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.94% <0.00%> (-74.32%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.65%)` | :arrow_up: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <0.00%> (+10.24%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `88.13% <0.00%> (+68.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `92.17% <0.00%> (+71.04%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/7078/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7078?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7078?src=pr&el=footer). Last update [4cbd50e...c0dcfc7](https://codecov.io/gh/huggingface/transformers/pull/7078?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks @suraj, great catch!",
"Also very clean implem :)"
] | 1,599 | 1,599 | 1,599 | MEMBER | null |
Fixes #7077
@sshleifer
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7078/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7078",
"html_url": "https://github.com/huggingface/transformers/pull/7078",
"diff_url": "https://github.com/huggingface/transformers/pull/7078.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7078.patch",
"merged_at": 1599848326000
} |
https://api.github.com/repos/huggingface/transformers/issues/7077 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7077/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7077/comments | https://api.github.com/repos/huggingface/transformers/issues/7077/events | https://github.com/huggingface/transformers/issues/7077 | 699,591,347 | MDU6SXNzdWU2OTk1OTEzNDc= | 7,077 | T5Tokenizer shouldn't add pad token as prefix to labels | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,599 | 1,599 | 1,599 | MEMBER | null |
## Information
The `prepare_seq2seq_batch` method in `T5Tokenizer` now prefixes a `pad` token to the `labels`, see [here](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_t5.py#L362).
But in finetune.py [here](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py#L149) we are calling `_shift_right` for T5, which adds another `pad` token at the beginning, so `decoder_input_ids` now contains two `pad` tokens.
## To reproduce
```python
from transformers import T5Tokenizer, T5Model
model = T5Model.from_pretrained("t5-small")
tok = T5Tokenizer.from_pretrained("t5-small")
enc = tok.prepare_seq2seq_batch("src text", "target text", return_tensors="pt")
print(enc["labels"])
# tensor([[ 0, 2387, 1499, 1]])
decoder_input_ids = model._shift_right(enc["labels"]) # call _shift_right
print(decoder_input_ids)
# tensor([[ 0, 0, 2387, 1499]])
```
## Expected behavior
There should be no special prefix token for T5 `labels`.
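Concretely, with the ids from the snippet above, the expected (fixed) values would be (a sketch of the intended behavior, not the current output):
```python
enc["labels"]                      # tensor([[2387, 1499, 1]])    -- no leading pad
model._shift_right(enc["labels"])  # tensor([[   0, 2387, 1499]]) -- pad added exactly once
```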
@sshleifer
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7077/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7076 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7076/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7076/comments | https://api.github.com/repos/huggingface/transformers/issues/7076/events | https://github.com/huggingface/transformers/issues/7076 | 699,504,557 | MDU6SXNzdWU2OTk1MDQ1NTc= | 7,076 | some sshleifer/xsum hub models have bart-large-cnn task_specific_params['summarization'] | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Done. ",
"- [x] pegasus xsum students\r\n- [x] uploaded distilled pegasus\r\n- [x] re-evaluate updated layerdrop model (te: 23.5291: quickly worsens)\r\n- [x] re-evaluate bart-xsum pseudolabel model(s). "
] | 1,599 | 1,600 | 1,600 | CONTRIBUTOR | null | S3-only fix!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7076/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7075 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7075/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7075/comments | https://api.github.com/repos/huggingface/transformers/issues/7075/events | https://github.com/huggingface/transformers/pull/7075 | 699,498,232 | MDExOlB1bGxSZXF1ZXN0NDg1MjMzMTMz | 7,075 | [Benchmarks] Change all args to from `no_...` to their positive form | {
"login": "fmcurti",
"id": 7762516,
"node_id": "MDQ6VXNlcjc3NjI1MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7762516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fmcurti",
"html_url": "https://github.com/fmcurti",
"followers_url": "https://api.github.com/users/fmcurti/followers",
"following_url": "https://api.github.com/users/fmcurti/following{/other_user}",
"gists_url": "https://api.github.com/users/fmcurti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fmcurti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fmcurti/subscriptions",
"organizations_url": "https://api.github.com/users/fmcurti/orgs",
"repos_url": "https://api.github.com/users/fmcurti/repos",
"events_url": "https://api.github.com/users/fmcurti/events{/privacy}",
"received_events_url": "https://api.github.com/users/fmcurti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @fmcurti - this looks great, thanks a lot for your contribution here!\r\n\r\nIn order to pass the `check_code_quality` test it would be great if you can run `make style` as described in https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md . \r\nAnd it would be important to make the benchmarks.py test pass.\r\n\r\nAs said in the comment above, it'd be great if you could add a line stating how to disable `default=True` params from the config.\r\n\r\nLet me know if you are stuck and need help :-) ",
"Hi @patrickvonplaten =) \r\nI think I modified the tests so that they pass, but i think the tests running on circleci are the ones on the master branch and not on my commit maybe? because the error messages I see seem to be from things I already changed",
"@LysandreJik @sgugger - For some reason the git-diff is messed up in this PR. I checked the changes offline using `vimdiff` and the changes look good to me! \r\nThink the breaking changes are not super relevant here because it's only the benchmarking argument class that is affected and not too many people seem to be using the benchmarking tools.",
"Great! My first contribution to an open source library π€\r\nSmall one but it's a start,",
"The first post looks confusing, it looks like someone with admin rights edited it, but didn't indicate so, so it appears that @fmcurti praises himself ;) perhaps a ----- bar and a note that it was edited, so that it's clear to the readers that \"this and below\" wasn't part of the original post?",
"I'm in agreement with @patrickvonplaten - I don't think this API is being used much and it's an important change for the long run, so it's worth the breaking. Good APIs are not easy.\r\n\r\nbut if it's really needed one could keep the old args and code them to set the new args. and have a deprecation cycle. Definitely doable.\r\n\r\n@fmcurti - great work, thank you for your contribution!",
"> @LysandreJik @sgugger - For some reason the git-diff is messed up in this PR. I checked the changes offline using `vimdiff` and the changes look good to me!\r\n\r\nYou need to tell github to ignore whitespaces and then it works:\r\nhttps://github.com/huggingface/transformers/pull/7075/files?diff=split&w=1\r\nGot it via this:\r\n\r\n\r\n",
"> I'm in agreement with @patrickvonplaten - I don't think this API is being used much and it's an important change for the long run, so it's worth the breaking. Good APIs are not easy.\r\n>\r\n> but if it's really needed one could keep the old args and code them to set the new args. and have a deprecation cycle. Definitely doable.\r\n\r\nI agree that this change is welcome, and the proposed API here is indeed much better than the previous one. However, I think this can be done better by having, as you mentioned, some deprecation warnings while still maintaining backwards compatibility for some time.",
"@LysandreJik - I'm fine with keeping the old arguments with deprecation warnings - will add this to the PR.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7075?src=pr&el=h1) Report\n> Merging [#7075](https://codecov.io/gh/huggingface/transformers/pull/7075?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28cf873036d078b47fb9dd38ac3421a7c874da44?el=desc) will **increase** coverage by `1.81%`.\n> The diff coverage is `71.45%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7075?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7075 +/- ##\n==========================================\n+ Coverage 76.58% 78.39% +1.81% \n==========================================\n Files 181 181 \n Lines 34828 34851 +23 \n==========================================\n+ Hits 26674 27323 +649 \n+ Misses 8154 7528 -626 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7075?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/7075/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `82.17% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/benchmark/benchmark\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7075/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `65.51% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7075/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.00% <69.00%> (-0.50%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/7075/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3MucHk=) | `83.33% <76.92%> (-2.72%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7075/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdGYucHk=) | `85.29% <78.57%> (-2.21%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_args\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7075/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `89.13% <89.13%> (ΓΈ)` | |\n| [src/transformers/modeling\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7075/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |\n| [src/transformers/tokenization\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7075/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmFnLnB5) | `32.55% <0.00%> (-37.21%)` | :arrow_down: |\n| [src/transformers/tokenization\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7075/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-34.89%)` | :arrow_down: |\n| [src/transformers/configuration\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7075/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xheW91dGxtLnB5) | `80.00% <0.00%> (-20.00%)` | :arrow_down: |\n| ... 
and [16 more](https://codecov.io/gh/huggingface/transformers/pull/7075/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7075?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7075?src=pr&el=footer). Last update [28cf873...9019232](https://codecov.io/gh/huggingface/transformers/pull/7075?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,599 | 1,600 | 1,600 | CONTRIBUTOR | null | Fixes #7072
As @stas00 pointed out a while back, there were a bunch of negatively formulated args in `benchmarks.py` which are hard to understand and make the code unnecessarily complex.
This PR fixes this problem by
a) Formulating all args in their positive form
b) Improving their docstring
c) Making the code more readable
**!!! This has breaking changes for the command line args of `examples/run_benchmark.py `and `examples/run_benchmark_tf.py` !!!**
People who were previously using `--no_...` on the command line should now use `--no-...`. *E.g.* `--no_inference` is changed to `--no-inference` on the command line.
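For anyone migrating, here is a minimal argparse sketch of the new scheme (illustrative only, not the PR's actual implementation):
```python
import argparse

parser = argparse.ArgumentParser()
# Positive default: inference runs unless explicitly disabled with --no-inference.
parser.add_argument("--no-inference", dest="inference", action="store_false",
                    help="Disable inference benchmarking (enabled by default).")
parser.set_defaults(inference=True)

assert parser.parse_args([]).inference is True
assert parser.parse_args(["--no-inference"]).inference is False
```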
Thanks a lot @fmcurti for leading this PR! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7075/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7075/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7075",
"html_url": "https://github.com/huggingface/transformers/pull/7075",
"diff_url": "https://github.com/huggingface/transformers/pull/7075.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7075.patch",
"merged_at": 1600881925000
} |
https://api.github.com/repos/huggingface/transformers/issues/7074 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7074/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7074/comments | https://api.github.com/repos/huggingface/transformers/issues/7074/events | https://github.com/huggingface/transformers/pull/7074 | 699,482,225 | MDExOlB1bGxSZXF1ZXN0NDg1MjE4OTIx | 7,074 | Compute loss method | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7074?src=pr&el=h1) Report\n> Merging [#7074](https://codecov.io/gh/huggingface/transformers/pull/7074?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e841b75decf5ec9c8829dc1a3c43426ffa9f6907?el=desc) will **increase** coverage by `1.15%`.\n> The diff coverage is `71.42%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7074?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7074 +/- ##\n==========================================\n+ Coverage 78.27% 79.43% +1.15% \n==========================================\n Files 168 168 \n Lines 32251 32252 +1 \n==========================================\n+ Hits 25246 25618 +372 \n+ Misses 7005 6634 -371 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7074?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7074/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `54.90% <71.42%> (+0.22%)` | :arrow_up: |\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7074/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7074/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7074/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7074/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7074/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7074/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7074/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.19% <0.00%> (-4.27%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7074/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (-0.66%)` | :arrow_down: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/7074/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `92.94% <0.00%> (-0.59%)` | :arrow_down: |\n| ... 
and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7074/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7074?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7074?src=pr&el=footer). Last update [e841b75...a3cf42f](https://codecov.io/gh/huggingface/transformers/pull/7074?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This is very nice. Many thanks for this feature. :-)"
] | 1,599 | 1,599 | 1,599 | COLLABORATOR | null | Make it easier to write a custom loss by isolating the loss computation in a separate method.
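A minimal sketch of the resulting pattern (`my_custom_loss` is an illustrative placeholder, and the exact signature may differ slightly from the merged code):
```python
from transformers import Trainer

class MyTrainer(Trainer):
    def compute_loss(self, model, inputs):
        # Pop the labels so the model doesn't compute its built-in loss.
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs[0]
        return my_custom_loss(logits, labels)  # my_custom_loss: your own criterion (placeholder)
```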
Also document how to write a Trainer using such a custom loss. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7074/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7074",
"html_url": "https://github.com/huggingface/transformers/pull/7074",
"diff_url": "https://github.com/huggingface/transformers/pull/7074.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7074.patch",
"merged_at": 1599840392000
} |
https://api.github.com/repos/huggingface/transformers/issues/7073 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7073/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7073/comments | https://api.github.com/repos/huggingface/transformers/issues/7073/events | https://github.com/huggingface/transformers/pull/7073 | 699,460,899 | MDExOlB1bGxSZXF1ZXN0NDg1MTk5ODk0 | 7,073 | Add tests and fix various bugs in ModelOutput | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7073?src=pr&el=h1) Report\n> Merging [#7073](https://codecov.io/gh/huggingface/transformers/pull/7073?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0054a48cdd64e7309184a64b399ab2c58d75d4e5?el=desc) will **decrease** coverage by `1.76%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7073?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7073 +/- ##\n==========================================\n- Coverage 80.53% 78.76% -1.77% \n==========================================\n Files 168 168 \n Lines 32179 32188 +9 \n==========================================\n- Hits 25915 25354 -561 \n- Misses 6264 6834 +570 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7073?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7073/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.80% <100.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7073/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `34.03% <0.00%> (-64.79%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/7073/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.85% <0.00%> (-55.15%)` | :arrow_down: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/7073/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `34.28% <0.00%> (-48.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7073/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `48.80% <0.00%> (-46.43%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/7073/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `57.14% <0.00%> (-39.69%)` | :arrow_down: |\n| [src/transformers/tokenization\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7073/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-34.89%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7073/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7073/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `49.10% <0.00%> (-17.97%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/7073/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/7073/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7073?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7073?src=pr&el=footer). Last update [0054a48...18e2891](https://codecov.io/gh/huggingface/transformers/pull/7073?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,599 | 1,599 | 1,599 | COLLABORATOR | null | Fix a few bugs in `ModelOutput` and add tests, mainly:
- if there is only one attribute that is not `None`, it wasn't accessible by key (nor recognized inside the dictionary)
- if an attribute was set with `model_output.att_name = ...`, the corresponding key in the dictionary was not updated
- if an attribute was set with `model_output["key"] = ...`, the corresponding attribute was not updated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7073/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7073",
"html_url": "https://github.com/huggingface/transformers/pull/7073",
"diff_url": "https://github.com/huggingface/transformers/pull/7073.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7073.patch",
"merged_at": 1599840094000
} |
https://api.github.com/repos/huggingface/transformers/issues/7072 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7072/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7072/comments | https://api.github.com/repos/huggingface/transformers/issues/7072/events | https://github.com/huggingface/transformers/issues/7072 | 699,394,141 | MDU6SXNzdWU2OTkzOTQxNDE= | 7,072 | Clean up `benchmark_args_utils.py` "no_..." arguments | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hmm I seem to have forgotten to change the references in the tests"
] | 1,599 | 1,600 | 1,600 | MEMBER | null | # 🚀 Feature request
Currently we have a mixture of negatively and positively formulated arguments, *e.g.* `no_cuda` and `training` here: https://github.com/huggingface/transformers/blob/0054a48cdd64e7309184a64b399ab2c58d75d4e5/src/transformers/benchmark/benchmark_args_utils.py#L61.
We should change all arguments to be positively formulated, *e.g.* from `no_cuda` to `cuda`. These arguments should then change their default value from `False` to `True`.
Also, the help text should be updated to something better formulated: a help text starting with "Don't ..." is not easy to understand.
The motivation is clear: it's better to be consistent in a library and keep the code easy and intuitive to understand.
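To make the requested change concrete, here is a sketch of one field before and after (the field name comes from the example above; the metadata text is illustrative):
```python
from dataclasses import dataclass, field

@dataclass
class BenchmarkArguments:
    # before: no_cuda: bool = field(default=False, metadata={"help": "Don't run on cuda"})
    cuda: bool = field(default=True, metadata={"help": "Whether to run benchmarks on available cuda devices"})
```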
## Your contribution
This is a "good first issue", so I'm happy to help anybody who wants to take a shot at this :-)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7072/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7072/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7071 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7071/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7071/comments | https://api.github.com/repos/huggingface/transformers/issues/7071/events | https://github.com/huggingface/transformers/pull/7071 | 699,318,460 | MDExOlB1bGxSZXF1ZXN0NDg1MDczMDI2 | 7,071 | added bangla-bert-base model card and also modified other model cards | {
"login": "sagorbrur",
"id": 10723655,
"node_id": "MDQ6VXNlcjEwNzIzNjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/10723655?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sagorbrur",
"html_url": "https://github.com/sagorbrur",
"followers_url": "https://api.github.com/users/sagorbrur/followers",
"following_url": "https://api.github.com/users/sagorbrur/following{/other_user}",
"gists_url": "https://api.github.com/users/sagorbrur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sagorbrur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sagorbrur/subscriptions",
"organizations_url": "https://api.github.com/users/sagorbrur/orgs",
"repos_url": "https://api.github.com/users/sagorbrur/repos",
"events_url": "https://api.github.com/users/sagorbrur/events{/privacy}",
"received_events_url": "https://api.github.com/users/sagorbrur/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"This is great, thanks for sharing! \r\n\r\nIn case you have time and want to contribute default sample inputs for Bengali, you can just open a pull request against https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts\r\n\r\nYour model page's inference widgets will then display relevant sample inputs.",
"PS/ your models are now correctly linked from https://huggingface.co/datasets/lince",
"Hi,\r\nThank you so much. \r\nI will create a pull request for DefaultWidget.ts\r\n\r\nregards\r\nSagor"
] | 1,599 | 1,599 | 1,599 | CONTRIBUTOR | null | Hi,
I added a model card for the bangla-bert-base model and also fixed some typos in other model cards.
Please check and, if possible, merge.
thanks and regards
Sagor
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7071/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7071",
"html_url": "https://github.com/huggingface/transformers/pull/7071",
"diff_url": "https://github.com/huggingface/transformers/pull/7071.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7071.patch",
"merged_at": 1599851846000
} |
https://api.github.com/repos/huggingface/transformers/issues/7070 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7070/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7070/comments | https://api.github.com/repos/huggingface/transformers/issues/7070/events | https://github.com/huggingface/transformers/issues/7070 | 699,133,368 | MDU6SXNzdWU2OTkxMzMzNjg= | 7,070 | MobileBERT inconsistent output (padded / not padded text) | {
"login": "sweco",
"id": 11132999,
"node_id": "MDQ6VXNlcjExMTMyOTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/11132999?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sweco",
"html_url": "https://github.com/sweco",
"followers_url": "https://api.github.com/users/sweco/followers",
"following_url": "https://api.github.com/users/sweco/following{/other_user}",
"gists_url": "https://api.github.com/users/sweco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sweco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sweco/subscriptions",
"organizations_url": "https://api.github.com/users/sweco/orgs",
"repos_url": "https://api.github.com/users/sweco/repos",
"events_url": "https://api.github.com/users/sweco/events{/privacy}",
"received_events_url": "https://api.github.com/users/sweco/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Just confirming that the issue happens also for the latest `transformers`, PyTorch and Python 3.8.\r\n\r\n- `transformers` version: 3.1.0\r\n- Platform: Linux-5.4.0-47-generic-x86_64-with-glibc2.10\r\n- Python version: 3.8.5\r\n- PyTorch version (GPU?): 1.6.0 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: No\r\n- Using distributed or parallel set-up in script?: No",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,599 | 1,606 | 1,606 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-5.4.0-47-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.3
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik, @mfuntowicz
## Information
When using the MobileBERT model `google/mobilebert-uncased`, I get different model outputs for the same text depending on whether it contains padding or not (i.e. whether it is the longest text in the batch). The same code produces correct results for other models such as `bert-base-uncased`, so the issue is probably specific to MobileBERT. The problem affects both the `CLS` output and all features.
Model I am using (Bert, XLNet ...): MobileBERT
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```python
from transformers import AutoModel, AutoTokenizer
model_name = 'google/mobilebert-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
text = 'Hey, how are you?'
i1 = tokenizer.batch_encode_plus([text], padding=True, return_tensors='pt')
emb1 = model(**i1)[1][0] # Only one in batch (not padded): [-2.5088e+07, 7.3279e+04, 1.7544e+05, ...]
i2 = tokenizer.batch_encode_plus([text, text + ' hey'], padding=True, return_tensors='pt')
emb2 = model(**i2)[1][0] # Not longest (padded): [-2.4871e+07, 8.1873e+04, 1.6693e+05, ...]
i3 = tokenizer.batch_encode_plus([text, text[:-10]], padding=True, return_tensors='pt')
emb3 = model(**i3)[1][0] # Longest in batch (not padded): [-2.5088e+07, 7.3281e+04, 1.7544e+05, ...]
i4 = tokenizer.encode(text, return_tensors='pt')
emb4 = model(i4)[1][0] # Without attention masks (not padded): [-2.5088e+07, 7.3279e+04, 1.7544e+05, ...]
i5 = tokenizer.encode_plus(text, return_tensors='pt')
emb5 = model(**i5)[1][0] # With attention masks (not padded): [-2.5088e+07, 7.3279e+04, 1.7544e+05, ...]
```
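As a quick sanity check (a sketch meant to be run right after the snippet above), the discrepancy can be made explicit:
```python
print((emb1 - emb2).abs().max())  # noticeably non-zero: padded vs. unpadded disagree
print((emb1 - emb3).abs().max())  # small: longest-in-batch matches the unpadded case
```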
## Expected behavior
I would expect the model to always return the same embeddings, but I don't really know which of the two variants is correct. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7070/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7069 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7069/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7069/comments | https://api.github.com/repos/huggingface/transformers/issues/7069/events | https://github.com/huggingface/transformers/issues/7069 | 699,095,927 | MDU6SXNzdWU2OTkwOTU5Mjc= | 7,069 | How to use Hugging Face on my own embedding | {
"login": "Yangxiaojun1230",
"id": 59246446,
"node_id": "MDQ6VXNlcjU5MjQ2NDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/59246446?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yangxiaojun1230",
"html_url": "https://github.com/Yangxiaojun1230",
"followers_url": "https://api.github.com/users/Yangxiaojun1230/followers",
"following_url": "https://api.github.com/users/Yangxiaojun1230/following{/other_user}",
"gists_url": "https://api.github.com/users/Yangxiaojun1230/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yangxiaojun1230/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yangxiaojun1230/subscriptions",
"organizations_url": "https://api.github.com/users/Yangxiaojun1230/orgs",
"repos_url": "https://api.github.com/users/Yangxiaojun1230/repos",
"events_url": "https://api.github.com/users/Yangxiaojun1230/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yangxiaojun1230/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello @Yangxiaojun1230 ,\r\n\r\nFrom your message I assume you are trying to train a Longformer model using embeddings as the input of the model.\r\n\r\nYou can either use input_ids or inputs_embeds in the forward function of the models. I think your idea will work, please let us know if it does.\r\n\r\nYou can look at more information about your model in the documentation: https://huggingface.co/transformers/model_doc/longformer.html\r\n",
"@na\r\n\r\n> Hello @Yangxiaojun1230 ,\r\n> \r\n> From your message I assume you are trying to train a Longformer model using embeddings as the input of the model.\r\n> \r\n> You can either use input_ids or inputs_embeds in the forward function of the models. I think your idea will work, please let us know if it does.\r\n> \r\n> You can look at more information about your model in the documentation: https://huggingface.co/transformers/model_doc/longformer.html\r\n\r\nThank you for your answer, actually I try to train the model and meet an error.\r\nMy code:\r\natt_mask[:,[100,300,500,800,1200,]]=2\r\nlogit=LongformerForSequenceClassification(inputs_embeds=embding,attention_mask=att_mask)\r\n\r\nbut error occurs:\r\npytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [205,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nTHCudaCheck FAIL file=/pytorch/aten/src/THC/THCReduceAll.cuh line=327 error=710 : device-side assert triggered\r\n\r\n\"/home/yangjeff/anaconda3/lib/python3.6/site-packages/transformers/modeling_longformer.py\", line 272, in forward\r\n max_num_extra_indices_per_batch = num_extra_indices_per_batch.max()\r\nRuntimeError: cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/THC/THCReduceAll.cuh:327\r\n\r\nIn the modeling_longformer.py:\r\nafter:extra_attention_mask = attention_mask > 0\r\nI could not access to extra_attention_maskοΌ\r\nBut in the example:\r\nattention_mask = torch.ones(input_ids.shape, dtype=torch.long, device=input_ids.device) # initialize to local attention\r\n attention_mask[:, [1, 4, 21,]] = 2 # Set global attention based on the task. For example,\r\n # classification: the <s> token\r\n # QA: question tokens\r\n # LM: potentially on the beginning of sentences and paragraphs\r\n sequence_output, pooled_output = model(input_ids, attention_mask=attention_mask)\r\nThe only different in my code is I input inputs_embeds instead input_ids",
"I fixed finally"
] | 1,599 | 1,600 | 1,600 | NONE | null | I already have an embedding, i.e. a matrix, as input. I want to train a transformer on it. For example, using Longformer, I set input_ids to None and pass the input via the inputs_embeds parameter. Would that be OK? Is there anything else I need to take care of? I also pass the attention mask. (A minimal sketch of this setup follows this record.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7069/timeline | completed | null | null |
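A minimal sketch of what the record above attempts: passing a precomputed matrix to Longformer through `inputs_embeds` together with a local/global attention mask. The checkpoint name, shapes and random tensor are placeholders rather than the original poster's code, and the matrix is assumed to already have the model's hidden size:

```python
import torch
from transformers import LongformerForSequenceClassification

model = LongformerForSequenceClassification.from_pretrained("allenai/longformer-base-4096")
model.eval()

batch_size, seq_len = 1, 2048
hidden_size = model.config.hidden_size  # 768 for the base checkpoint
# Placeholder for the precomputed embedding matrix; in practice this would be
# the poster's own features, projected to hidden_size.
inputs_embeds = torch.randn(batch_size, seq_len, hidden_size)

# 1 = local attention, 2 = global attention. The global-attention indices must
# stay below the actual sequence length, otherwise CUDA index asserts like the
# one quoted in the comments can fire.
attention_mask = torch.ones(batch_size, seq_len, dtype=torch.long)
attention_mask[:, [100, 300, 500, 800, 1200]] = 2

with torch.no_grad():
    outputs = model(inputs_embeds=inputs_embeds, attention_mask=attention_mask)
logits = outputs[0]
```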
https://api.github.com/repos/huggingface/transformers/issues/7068 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7068/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7068/comments | https://api.github.com/repos/huggingface/transformers/issues/7068/events | https://github.com/huggingface/transformers/pull/7068 | 698,920,656 | MDExOlB1bGxSZXF1ZXN0NDg0NzE3MjAw | 7,068 | [BertGeneration] Clean naming | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7068?src=pr&el=h1) Report\n> Merging [#7068](https://codecov.io/gh/huggingface/transformers/pull/7068?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8fcbe486e1592321e868f872545c8fd9d359a515?el=desc) will **decrease** coverage by `2.39%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7068?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7068 +/- ##\n==========================================\n- Coverage 80.93% 78.53% -2.40% \n==========================================\n Files 168 168 \n Lines 32179 32179 \n==========================================\n- Hits 26044 25273 -771 \n- Misses 6135 6906 +771 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7068?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.61% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/modeling\\_bert\\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/7068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0X2dlbmVyYXRpb24ucHk=) | `69.19% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/configuration\\_bert\\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/7068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnRfZ2VuZXJhdGlvbi5weQ==) | `100.00% <100.00%> (ΓΈ)` | |\n| [src/transformers/tokenization\\_bert\\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/7068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9nZW5lcmF0aW9uLnB5) | `94.64% <100.00%> (ΓΈ)` | |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.49% <0.00%> (-71.63%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `67.79% <0.00%> (-31.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `75.91% <0.00%> (-21.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `71.60% <0.00%> (-20.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7068/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7068?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7068?src=pr&el=footer). Last update [8fcbe48...351c75d](https://codecov.io/gh/huggingface/transformers/pull/7068?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks for catching that @sgugger !"
] | 1,599 | 1,599 | 1,599 | MEMBER | null | This PR removes all of the "old" naming, i.e. "bert-for-seq-generation" and "BertForSeqGeneration" respectively.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7068/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7068",
"html_url": "https://github.com/huggingface/transformers/pull/7068",
"diff_url": "https://github.com/huggingface/transformers/pull/7068.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7068.patch",
"merged_at": 1599811074000
} |
https://api.github.com/repos/huggingface/transformers/issues/7067 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7067/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7067/comments | https://api.github.com/repos/huggingface/transformers/issues/7067/events | https://github.com/huggingface/transformers/pull/7067 | 698,914,497 | MDExOlB1bGxSZXF1ZXN0NDg0NzExNjcx | 7,067 | Create model card | {
"login": "liminghao1630",
"id": 23090329,
"node_id": "MDQ6VXNlcjIzMDkwMzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/23090329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liminghao1630",
"html_url": "https://github.com/liminghao1630",
"followers_url": "https://api.github.com/users/liminghao1630/followers",
"following_url": "https://api.github.com/users/liminghao1630/following{/other_user}",
"gists_url": "https://api.github.com/users/liminghao1630/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liminghao1630/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liminghao1630/subscriptions",
"organizations_url": "https://api.github.com/users/liminghao1630/orgs",
"repos_url": "https://api.github.com/users/liminghao1630/repos",
"events_url": "https://api.github.com/users/liminghao1630/events{/privacy}",
"received_events_url": "https://api.github.com/users/liminghao1630/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7067?src=pr&el=h1) Report\n> Merging [#7067](https://codecov.io/gh/huggingface/transformers/pull/7067?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8fcbe486e1592321e868f872545c8fd9d359a515?el=desc) will **decrease** coverage by `2.41%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7067?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7067 +/- ##\n==========================================\n- Coverage 80.93% 78.52% -2.42% \n==========================================\n Files 168 168 \n Lines 32179 32179 \n==========================================\n- Hits 26044 25267 -777 \n- Misses 6135 6912 +777 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7067?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7067/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `34.03% <0.00%> (-64.79%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/7067/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.85% <0.00%> (-55.15%)` | :arrow_down: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/7067/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `34.28% <0.00%> (-48.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7067/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-34.89%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7067/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `67.79% <0.00%> (-31.36%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7067/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7067/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `20.53% <0.00%> (-21.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7067/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7067/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `49.10% <0.00%> (-17.97%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/7067/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| ... and [19 more](https://codecov.io/gh/huggingface/transformers/pull/7067/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7067?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7067?src=pr&el=footer). Last update [8fcbe48...a4dd71e](https://codecov.io/gh/huggingface/transformers/pull/7067?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,599 | 1,599 | 1,599 | CONTRIBUTOR | null | Create model card for microsoft/layoutlm-large-uncased | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7067/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7067",
"html_url": "https://github.com/huggingface/transformers/pull/7067",
"diff_url": "https://github.com/huggingface/transformers/pull/7067.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7067.patch",
"merged_at": 1599852055000
} |
https://api.github.com/repos/huggingface/transformers/issues/7066 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7066/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7066/comments | https://api.github.com/repos/huggingface/transformers/issues/7066/events | https://github.com/huggingface/transformers/pull/7066 | 698,912,443 | MDExOlB1bGxSZXF1ZXN0NDg0NzA5ODA1 | 7,066 | Create model card | {
"login": "liminghao1630",
"id": 23090329,
"node_id": "MDQ6VXNlcjIzMDkwMzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/23090329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liminghao1630",
"html_url": "https://github.com/liminghao1630",
"followers_url": "https://api.github.com/users/liminghao1630/followers",
"following_url": "https://api.github.com/users/liminghao1630/following{/other_user}",
"gists_url": "https://api.github.com/users/liminghao1630/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liminghao1630/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liminghao1630/subscriptions",
"organizations_url": "https://api.github.com/users/liminghao1630/orgs",
"repos_url": "https://api.github.com/users/liminghao1630/repos",
"events_url": "https://api.github.com/users/liminghao1630/events{/privacy}",
"received_events_url": "https://api.github.com/users/liminghao1630/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7066?src=pr&el=h1) Report\n> Merging [#7066](https://codecov.io/gh/huggingface/transformers/pull/7066?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8fcbe486e1592321e868f872545c8fd9d359a515?el=desc) will **decrease** coverage by `1.46%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7066?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7066 +/- ##\n==========================================\n- Coverage 80.93% 79.47% -1.47% \n==========================================\n Files 168 168 \n Lines 32179 32179 \n==========================================\n- Hits 26044 25573 -471 \n- Misses 6135 6606 +471 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7066?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7066/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7066/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7066/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7066/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7066/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7066/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7066/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (-0.66%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7066/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7066/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7066/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |\n| ... 
and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7066/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7066?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7066?src=pr&el=footer). Last update [8fcbe48...71e0391](https://codecov.io/gh/huggingface/transformers/pull/7066?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thank you!"
] | 1,599 | 1,599 | 1,599 | CONTRIBUTOR | null | Create model card for microsoft/layoutlm-base-uncased | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7066/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7066",
"html_url": "https://github.com/huggingface/transformers/pull/7066",
"diff_url": "https://github.com/huggingface/transformers/pull/7066.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7066.patch",
"merged_at": 1599852066000
} |
https://api.github.com/repos/huggingface/transformers/issues/7065 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7065/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7065/comments | https://api.github.com/repos/huggingface/transformers/issues/7065/events | https://github.com/huggingface/transformers/issues/7065 | 698,879,739 | MDU6SXNzdWU2OTg4Nzk3Mzk= | 7,065 | Further Pretraining of Longformer RAM Consumption | {
"login": "baradl",
"id": 42384404,
"node_id": "MDQ6VXNlcjQyMzg0NDA0",
"avatar_url": "https://avatars.githubusercontent.com/u/42384404?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baradl",
"html_url": "https://github.com/baradl",
"followers_url": "https://api.github.com/users/baradl/followers",
"following_url": "https://api.github.com/users/baradl/following{/other_user}",
"gists_url": "https://api.github.com/users/baradl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baradl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baradl/subscriptions",
"organizations_url": "https://api.github.com/users/baradl/orgs",
"repos_url": "https://api.github.com/users/baradl/repos",
"events_url": "https://api.github.com/users/baradl/events{/privacy}",
"received_events_url": "https://api.github.com/users/baradl/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, there's been some issues with memory in our recent script versions. It is possible that on kaggle you have the very latest version of the scripts, or a way older version of the scripts where the memory issue didn't exist.\r\n\r\nI recommend you use the latest version of the script, as the memory issue is fixed and it is the most feature-complete version.",
"Thanks for info and help! However, the issue persists in the current version of the script. \r\nIs it an issue specific to the longformer or do other models experience the same behaviour?\r\nDo you know when the memory issues began so I can check whether older versions might work? ",
"So I tested older versions of the script, specifically from the commits that resulted in a new version (3.0.0, 3.0.1, 3.0.2, 3.1.0). The issue persists with all of them *but* only on google colab. RAM consumption begins at around 1.88gb and after around 5 minutes of training increases continuously.\r\nI tried the same script from the commits on kaggle and observed a very different behaviour. RAM consumption at training is constant around 5.5gb. \r\nWhen running the script on roberta the same behaviour occurs.\r\n\r\nI'd be happy about any help or suggestions!",
"So the docker image that kaggle uses seemed to be a work-around. If more ressources are necessary than those provided by kaggle then one can use the image easily on GCP. Though kaggle publishes the image as well in their repo.\r\n\r\nHowever by now the issues seems resolved as I switched back to colab and no problems with RAM consumption occured. I'm not sure what caused this effect but no issues for now - thus I'm closing this thread."
] | 1,599 | 1,601 | 1,601 | NONE | null | Hello everybody,
I'm interested in continued pretraining of the Longformer on in-domain text. I'm using the `run_language_modeling.py` script from the examples with the following options:
!python run_language_modeling.py \
--output_dir=$output_dir$ \
--model_type=longformer \
--model_name_or_path=allenai/longformer-base-4096 \
--do_train \
--train_data_file=$train_file$ \
--save_steps=5000 \
--save_total_limit=1 \
--overwrite_output_dir \
--num_train_epochs=1 \
--fp16 \
--fp16_opt_level="O1" \
--gradient_accumulation_steps=8 \
--per_device_train_batch_size=1 \
--logging_steps=100 \
--line_by_line \
--mlm
I tried running the script on Google Colab, spell.run and Kaggle notebooks and saw different behaviour: on Colab and spell.run the RAM consumption increases continuously during training, while it stays constant on Kaggle. On Colab and spell.run the script terminates whenever the limit is reached (around 26 GB on spell.run). Since AllenAI also provides some code for pretraining, I tried their implementation too, with the same behaviour, except that it reaches the RAM limit faster. The text file I'm using is rather small, a few hundred MB in size.
Can someone explain what's going on here and how to solve this issue? I'm tagging @patrickvonplaten as suggested by the repo. Thanks a lot for any help! (A lazy-loading dataset sketch follows this record.)
Cheers
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7065/timeline | completed | null | null |
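One commonly suggested workaround when `--line_by_line` training eats RAM is to avoid materializing the whole tokenized corpus up front. The sketch below is a hypothetical lazy dataset, not part of `run_language_modeling.py`, and it assumes the memory growth comes from up-front tokenization and caching, which may or may not be what happened in this thread:

```python
import torch
from torch.utils.data import Dataset


class LazyLineByLineDataset(Dataset):
    """Tokenizes one line at a time instead of caching the whole corpus in RAM."""

    def __init__(self, tokenizer, file_path, block_size=4096):
        self.tokenizer = tokenizer
        self.file_path = file_path
        self.block_size = block_size
        # Index the byte offset of every line once; only the offsets stay in memory.
        self.offsets = []
        with open(file_path, "rb") as f:
            offset = 0
            for line in f:
                self.offsets.append(offset)
                offset += len(line)

    def __len__(self):
        return len(self.offsets)

    def __getitem__(self, idx):
        # Re-open and seek on every access so nothing but one line is held in RAM.
        with open(self.file_path, "rb") as f:
            f.seek(self.offsets[idx])
            line = f.readline().decode("utf-8").strip()
        ids = self.tokenizer(line, truncation=True, max_length=self.block_size)["input_ids"]
        return torch.tensor(ids, dtype=torch.long)
```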
https://api.github.com/repos/huggingface/transformers/issues/7064 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7064/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7064/comments | https://api.github.com/repos/huggingface/transformers/issues/7064/events | https://github.com/huggingface/transformers/pull/7064 | 698,788,254 | MDExOlB1bGxSZXF1ZXN0NDg0NTk5MjE0 | 7,064 | Add LayoutLM Model | {
"login": "liminghao1630",
"id": 23090329,
"node_id": "MDQ6VXNlcjIzMDkwMzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/23090329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liminghao1630",
"html_url": "https://github.com/liminghao1630",
"followers_url": "https://api.github.com/users/liminghao1630/followers",
"following_url": "https://api.github.com/users/liminghao1630/following{/other_user}",
"gists_url": "https://api.github.com/users/liminghao1630/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liminghao1630/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liminghao1630/subscriptions",
"organizations_url": "https://api.github.com/users/liminghao1630/orgs",
"repos_url": "https://api.github.com/users/liminghao1630/repos",
"events_url": "https://api.github.com/users/liminghao1630/events{/privacy}",
"received_events_url": "https://api.github.com/users/liminghao1630/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7064?src=pr&el=h1) Report\n> Merging [#7064](https://codecov.io/gh/huggingface/transformers/pull/7064?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7cbf0f722d23440f3342aafc27697b50ead5996b?el=desc) will **decrease** coverage by `0.09%`.\n> The diff coverage is `31.25%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7064?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7064 +/- ##\n==========================================\n- Coverage 80.32% 80.23% -0.10% \n==========================================\n Files 174 177 +3 \n Lines 33446 33878 +432 \n==========================================\n+ Hits 26867 27181 +314 \n- Misses 6579 6697 +118 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7064?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.56% <25.56%> (ΓΈ)` | |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `86.36% <50.00%> (-3.64%)` | :arrow_down: |\n| [src/transformers/configuration\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xheW91dGxtLnB5) | `80.00% <80.00%> (ΓΈ)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/7064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.37% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `96.20% <100.00%> (+0.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `82.46% <100.00%> (+0.08%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `92.30% <100.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/tokenization\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbGF5b3V0bG0ucHk=) | `100.00% <100.00%> (ΓΈ)` | |\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/7064/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7064?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7064?src=pr&el=footer). Last update [7cbf0f7...4e95220](https://codecov.io/gh/huggingface/transformers/pull/7064?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi, I have moved and renamed the classes from modeling_bert to modeling_layoutlm and the code is ready for review and merge. @JetRunner @sgugger ",
"@JetRunner @sgugger @LysandreJik The new suggestions have been adopted. "
] | 1,599 | 1,600 | 1,600 | CONTRIBUTOR | null | # Introduction
This pull request implements the LayoutLM model, as defined in [the paper](https://arxiv.org/abs/1912.13318):
```
LayoutLM: Pre-training of Text and Layout for Document Image Understanding
Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, KDD 2020
```
LayoutLM is a simple but effective pre-training method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM achieves SOTA results on multiple datasets (a usage sketch follows this record).
# Typical workflow for including a model
Here is an overview of the general workflow:
- [x] Add model/configuration/tokenization classes.
- [ ] Add conversion scripts.
- [x] Add tests and a @slow integration test.
- [ ] Document your model.
- [ ] Finalize.
Let's detail what should be done at each step.
## Adding model/configuration/tokenization classes
Here is the workflow for adding model/configuration/tokenization classes:
- [x] Copy the python files from the present folder to the main folder and rename them, replacing `xxx` with your model
name.
- [x] Edit the files to replace `XXX` (with various casing) with your model name.
- [x] Copy-paste or create a simple configuration class for your model in the `configuration_...` file.
- [x] Copy-paste or create the code for your model in the `modeling_...` files (PyTorch and TF 2.0).
- [x] Copy-paste or create a tokenizer class for your model in the `tokenization_...` file.
## Adding conversion scripts
Here is the workflow for the conversion scripts:
- [ ] Copy the conversion script (`convert_...`) from the present folder to the main folder.
- [ ] Edit this script to convert your original checkpoint weights to the current pytorch ones.
## Adding tests:
Here is the workflow for adding tests:
- [x] Copy the python files from the `tests` sub-folder of the present folder to the `tests` subfolder of the main
folder and rename them, replacing `xxx` with your model name.
- [x] Edit the tests files to replace `XXX` (with various casing) with your model name.
- [x] Edit the tests code as needed.
## Documenting your model:
Here is the workflow for documentation:
- [ ] Make sure all your arguments are properly documented in your configuration and tokenizer.
- [x] Most of the documentation of the models is automatically generated; you just have to make sure that
`XXX_START_DOCSTRING` contains an introduction to the model you're adding and a link to the original
article and that `XXX_INPUTS_DOCSTRING` contains all the inputs of your model.
- [x] Create a new page `xxx.rst` in the folder `docs/source/model_doc` and add this file in `docs/source/index.rst`.
Make sure to check you have no sphinx warnings when building the documentation locally and follow our
  [documentation guide](https://github.com/huggingface/transformers/tree/master/docs#writing-documentation---specification).
## Final steps
You can then finish the addition step by adding imports for your classes in the common files:
- [x] Add import for all the relevant classes in `__init__.py`.
- [x] Add your configuration in `configuration_auto.py`.
- [x] Add your PyTorch and TF 2.0 model respectively in `modeling_auto.py` and `modeling_tf_auto.py`.
- [x] Add your tokenizer in `tokenization_auto.py`.
- [ ] Add a link to your conversion script in the main conversion utility (in `commands/convert.py`)
- [ ] Edit the PyTorch to TF 2.0 conversion script to add your model in the `convert_pytorch_checkpoint_to_tf2.py`
file.
- [x] Add a mention of your model in the doc: `README.md` and the documentation itself
in `docs/source/index.rst` and `docs/source/pretrained_models.rst`.
- [x] Upload the pretrained weights, configurations and vocabulary files.
- [ ] Create model card(s) for your models on huggingface.co. For those last two steps, check the
[model sharing documentation](https://huggingface.co/transformers/model_sharing.html). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7064/reactions",
"total_count": 14,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 6,
"confused": 0,
"heart": 5,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7064/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7064",
"html_url": "https://github.com/huggingface/transformers/pull/7064",
"diff_url": "https://github.com/huggingface/transformers/pull/7064.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7064.patch",
"merged_at": 1600781282000
} |
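To make the checklist above concrete, a minimal usage sketch for the merged model could look like the following. The words and bounding boxes are made-up placeholders, LayoutLM expects box coordinates normalized to a 0-1000 scale, and the subword boxes are aligned naively (one box repeated per word piece), so treat this as an illustration rather than a full preprocessing pipeline:

```python
import torch
from transformers import LayoutLMModel, LayoutLMTokenizer

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")

words = ["Hello", "world"]  # placeholder OCR output
word_boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]  # 0-1000 normalized

token_ids, token_boxes = [], []
for word, box in zip(words, word_boxes):
    ids = tokenizer.encode(word, add_special_tokens=False)
    token_ids.extend(ids)
    token_boxes.extend([box] * len(ids))  # repeat the word box for each subword

# Conventional placeholder boxes for the special tokens.
input_ids = [tokenizer.cls_token_id] + token_ids + [tokenizer.sep_token_id]
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]

input_ids = torch.tensor([input_ids])
bbox = torch.tensor([token_boxes])
attention_mask = torch.ones_like(input_ids)

outputs = model(input_ids=input_ids, bbox=bbox, attention_mask=attention_mask)
sequence_output = outputs[0]
```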
https://api.github.com/repos/huggingface/transformers/issues/7063 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7063/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7063/comments | https://api.github.com/repos/huggingface/transformers/issues/7063/events | https://github.com/huggingface/transformers/issues/7063 | 698,733,995 | MDU6SXNzdWU2OTg3MzM5OTU= | 7,063 | EncoderDecoderModel generate function | {
"login": "lonelydancer",
"id": 548443,
"node_id": "MDQ6VXNlcjU0ODQ0Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/548443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lonelydancer",
"html_url": "https://github.com/lonelydancer",
"followers_url": "https://api.github.com/users/lonelydancer/followers",
"following_url": "https://api.github.com/users/lonelydancer/following{/other_user}",
"gists_url": "https://api.github.com/users/lonelydancer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lonelydancer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lonelydancer/subscriptions",
"organizations_url": "https://api.github.com/users/lonelydancer/orgs",
"repos_url": "https://api.github.com/users/lonelydancer/repos",
"events_url": "https://api.github.com/users/lonelydancer/events{/privacy}",
"received_events_url": "https://api.github.com/users/lonelydancer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @lonelydancer ,\r\n\r\nyeah I think I know why you get this error message. I very recently fixed the error message you get. \r\n1) Could you try to pull from master and run your code again? If I am correct you should now get an error message saying: \r\n\r\n```\r\n\"decoder_start_token_id or bos_token_id has to be defined for encoder-decoder generation\"\r\n```\r\n\r\n2) You have to provide either a `bos_token_id` or `decoder_token_id` for generation, otherwise the decoder part for `EncoderDecoderModel` does not know with which token it should start the generation.\r\n\r\nPlease let me know if this helps. ",
"@patrickvonplaten It works! Thanks~"
] | 1,599 | 1,600 | 1,600 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
I followed the EncoderDecoder tutorial (https://huggingface.co/transformers/model_doc/encoderdecoder.html), but it seems that when I use the generate function,
generated = model.generate(input_ids, decoder_start_token_id=tokenizer.pad_token_id)
I have to use pad_token_id. If I omit that parameter, there is an error message and I can't figure out why. (A working sketch follows this record.)
device=next(self.parameters()).device,
TypeError: full() received an invalid combination of arguments - got (tuple, NoneType, device=torch.device, dtype=torch.dtype), but expected one of:
* (tuple of ints size, Number fill_value, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
* (tuple of ints size, Number fill_value, *, tuple of names names, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
Tutorial code:
>>> from transformers import EncoderDecoderModel, BertTokenizer
>>> import torch
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
>>> model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased') # initialize Bert2Bert from pre-trained checkpoints
>>> # forward
>>> input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
>>> outputs = model(input_ids=input_ids, decoder_input_ids=input_ids)
>>> # training
>>> outputs = model(input_ids=input_ids, decoder_input_ids=input_ids, labels=input_ids, return_dict=True)
>>> loss, logits = outputs.loss, outputs.logits
>>> # save and load from pretrained
>>> model.save_pretrained("bert2bert")
>>> model = EncoderDecoderModel.from_pretrained("bert2bert")
>>> # generation
>>> generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id)
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to the original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7063/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7063/timeline | completed | null | null |
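The resolution in the record above, providing a `decoder_start_token_id` (or `bos_token_id`) so `generate()` knows how to start decoding, can be sketched as follows. For a BERT2BERT model the `[CLS]` token id is a common choice of start token, though which token works best depends on how the decoder was trained:

```python
import torch
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

input_ids = torch.tensor(
    tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)
).unsqueeze(0)

# Without decoder_start_token_id (or bos_token_id), generate() cannot build the
# initial decoder input, which is what the TypeError above boiled down to.
generated = model.generate(
    input_ids,
    decoder_start_token_id=tokenizer.cls_token_id,
    max_length=20,
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```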
https://api.github.com/repos/huggingface/transformers/issues/7062 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7062/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7062/comments | https://api.github.com/repos/huggingface/transformers/issues/7062/events | https://github.com/huggingface/transformers/issues/7062 | 698,540,127 | MDU6SXNzdWU2OTg1NDAxMjc= | 7,062 | circleci testing issue | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This has been fixed in master already, it's due to the change of name of the nlp library to datasets. You will need to install datasets to have the full test suite working.",
"Thanks, @sgugger . So in order to make my pull request pass the test online, I have to merge remote master intomy local branch and repush? Is this an acceptable approach of doing PR?",
"You can either rebase master on your PR branch, or not worry if this is the only test not passing (merging will make the CI green)."
] | 1,599 | 1,599 | 1,599 | COLLABORATOR | null | Sorry, this is not a bug report about the `transformers` library, but I don't know the best place to ask.
While working on pull request #6998, after pushing the code, I got:
==================================== ERRORS ====================================
____________________ ERROR collecting tests/test_trainer.py ____________________
ImportError while importing test module '/home/circleci/transformers/tests/test_trainer.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/local/lib/python3.7/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_trainer.py:3: in <module>
import nlp
E ModuleNotFoundError: No module named 'nlp'
The PR #6998 has already been tested several times without problems, and it seems that the error `No module named 'nlp'` is not on my side. Could you help me resolve this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7062/timeline | completed | null | null |
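The practical fix was `pip install datasets` (the `nlp` library was renamed) or rebasing the branch on master. Purely as an illustration, a hypothetical module-level guard like the one below would let the rest of a local test run proceed when the dependency is missing; it is not how `tests/test_trainer.py` is actually written:

```python
import importlib.util

import pytest

# "nlp" was renamed to "datasets"; skip the whole module cleanly if it is missing.
if importlib.util.find_spec("datasets") is None:
    pytest.skip("`datasets` (formerly `nlp`) is not installed", allow_module_level=True)
```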
https://api.github.com/repos/huggingface/transformers/issues/7061 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7061/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7061/comments | https://api.github.com/repos/huggingface/transformers/issues/7061/events | https://github.com/huggingface/transformers/pull/7061 | 698,517,969 | MDExOlB1bGxSZXF1ZXN0NDg0MzUyNDk1 | 7,061 | Automate the lists in auto-xxx docs | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7061?src=pr&el=h1) Report\n> Merging [#7061](https://codecov.io/gh/huggingface/transformers/pull/7061?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0054a48cdd64e7309184a64b399ab2c58d75d4e5?el=desc) will **increase** coverage by `1.21%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7061?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7061 +/- ##\n==========================================\n+ Coverage 80.53% 81.74% +1.21% \n==========================================\n Files 168 168 \n Lines 32179 32251 +72 \n==========================================\n+ Hits 25915 26365 +450 \n+ Misses 6264 5886 -378 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7061?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7061/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `96.10% <100.00%> (+2.48%)` | :arrow_up: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7061/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `81.21% <100.00%> (+2.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7061/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `70.58% <100.00%> (+3.52%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7061/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `91.80% <100.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7061/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7061/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7061/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7061/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7061/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7061/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.77% <0.00%> (-0.68%)` | :arrow_down: |\n| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/7061/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7061?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7061?src=pr&el=footer). Last update [0054a48...9eceac7](https://codecov.io/gh/huggingface/transformers/pull/7061?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,599 | 1,599 | 1,599 | COLLABORATOR | null | This PR adds a decorator to automate the creation of the lists in the documentation of `AutoConfig`, `AutoTokenizer`, `AutoModel` (a rough sketch of the idea follows this record).
Apart from adding the docstrings of `AutoModelForMultipleChoice` and removing some parts of some class docstrings (because three lists are a bit too much for one class!), this PR does not fix the docstrings themselves (that's for the next one). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7061/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7061",
"html_url": "https://github.com/huggingface/transformers/pull/7061",
"diff_url": "https://github.com/huggingface/transformers/pull/7061.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7061.patch",
"merged_at": 1599835329000
} |
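As a rough sketch of the idea (not the actual implementation merged in this PR), a docstring-filling decorator could look like this. The `mapping` argument is assumed to be a config-class-to-model-class dict like the `MODEL_MAPPING` dictionaries in `modeling_auto.py`, and the exact formatting in the real code differs:

```python
def add_model_list(mapping):
    """Hypothetical decorator: appends a generated class list to a docstring."""

    def decorator(fn):
        # Build one documentation line per (config, model) pair in the mapping.
        lines = [
            "    - :class:`~transformers.{}` ({} configuration)".format(
                model.__name__, cfg.__name__
            )
            for cfg, model in mapping.items()
        ]
        fn.__doc__ = (fn.__doc__ or "") + "\n\nThe options are:\n" + "\n".join(lines)
        return fn

    return decorator
```

Applied to e.g. `AutoModel.from_pretrained`, such a decorator keeps the documented list in sync with the mapping instead of maintaining it by hand.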
https://api.github.com/repos/huggingface/transformers/issues/7060 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7060/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7060/comments | https://api.github.com/repos/huggingface/transformers/issues/7060/events | https://github.com/huggingface/transformers/issues/7060 | 698,429,512 | MDU6SXNzdWU2OTg0Mjk1MTI= | 7,060 | mBART 50 | {
"login": "Bachstelze",
"id": 19904888,
"node_id": "MDQ6VXNlcjE5OTA0ODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/19904888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bachstelze",
"html_url": "https://github.com/Bachstelze",
"followers_url": "https://api.github.com/users/Bachstelze/followers",
"following_url": "https://api.github.com/users/Bachstelze/following{/other_user}",
"gists_url": "https://api.github.com/users/Bachstelze/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bachstelze/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bachstelze/subscriptions",
"organizations_url": "https://api.github.com/users/Bachstelze/orgs",
"repos_url": "https://api.github.com/users/Bachstelze/repos",
"events_url": "https://api.github.com/users/Bachstelze/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bachstelze/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Thank you for your interest. We are doing a few clean-up and will release them shortly. ",
"Waiting for the model too :)\r\n\r\nThanks",
"@tangyuq Is the benchmark included in [Beyond English-Centric Multilingual Machine Translation](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100)?",
"We'll be adding a full evaluation on the ML50 benchmark in an updated version. The main reason why I haven't added it yet is that we want to update ML50's evaluation pairs done on WMT20 to the WMT20 test sets. When ML50 was originally released, the WMT20 test sets were not released yet. So we'll update both works together. \r\n\r\nA portion of ML50 evaluation, from the original mBART paper, is available in Beyond English Centric MMT work already. ",
"Hi, thanks for the great work and releasing the models to public.\r\nDo we have any update on mBART50 model release? ",
"The [mBART50 model](https://github.com/pytorch/fairseq/tree/master/examples/multilingual#mbart50-models) is now available on the fairseq repository.",
"@jaspock Has someone been able to load these mBART50 models with huggingface mBART version?\r\nI was able to convert it (and the weight appear to be right). However, when generating translations, it just repeats the same word over and over again.\r\n",
"Maybe @patil-suraj has some input?",
"Hi @jaspock \r\n\r\nI'm also facing the same issue, I've been able to convert the weights, but as you said the generation is not working, I'm investigating this currently.",
"mBART-50 is now available on master!\r\n\r\nhttps://huggingface.co/transformers/master/model_doc/mbart.html#overview-of-mbart-50\r\n\r\nhttps://huggingface.co/models?filter=mbart-50",
"Hi, I want to finetune the multilingual model for some languages.\r\nI cannot find the exact script , number of updates used to finetune mbart50.\r\nThe paper suggested 500k updates for training the model that was trained from scratch. Paper [Link](https://arxiv.org/pdf/2008.00401.pdf)\r\nThe multilingual fine-tuning script shows 40k updates in the training [script](https://github.com/pytorch/fairseq/tree/master/examples/multilingual#mbart50-models) (Which was also used for Bilingual model in mBart paper)\r\n\r\nBut I cannot find specifications for MT-FT (Multilingual Fine Tuning) for 50 languages.\r\nI suspect 40k won't be enough if combined data for all languages is high.\r\nCan anyone guide me if I am missing something\r\n",
"> Hi, I want to finetune the multilingual model for some languages.\r\n> I cannot find the exact script , number of updates used to finetune mbart50.\r\n> The paper suggested 500k updates for training the model that was trained from scratch. Paper [Link](https://arxiv.org/pdf/2008.00401.pdf)\r\n> The multilingual fine-tuning script shows 40k updates in the training [script](https://github.com/pytorch/fairseq/tree/master/examples/multilingual#mbart50-models) (Which was also used for Bilingual model in mBart paper)\r\n> \r\n> But I cannot find specifications for MT-FT (Multilingual Fine Tuning) for 50 languages.\r\n> I suspect 40k won't be enough if combined data for all languages is high.\r\n> Can anyone guide me if I am missing something\r\n\r\nHi @kaivu1999 Did you figure out how to fine-tune? I am also exploring for finetuning the multilingual model but haven't found a proper specification yet.",
"Hi @kaivu1999, @pdakwal \r\n\r\n> I cannot find the exact script \r\n\r\nThere is a `run_translation.py` script [here](https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation), which can be used for fine-tuning mBART-50 but it does not support multi-lingual fine-tuning. But it should be easy to adapt to do that.\r\n\r\nFor multi-lingual fine-tuning, what we need to do here is, say we are fine-tuning on two language pairs, in that case, we need to concatenate the two datasets or in case the two language pairs don't have the same number of examples then add some sort of sampler which will sample the example from the datasets depending on the number of examples in each one. And when processing each language pair, set the appropriate `src_lang` and `tgt_lang` tokens. The processing part is explained in the [docs](https://huggingface.co/transformers/model_doc/mbart.html#overview-of-mbart-50).",
"Hi @kaivu1999, @pdakwal were u able to finetune mbart50 on custom data? Please help."
] | 1,599 | 1,639 | 1,613 | NONE | null | # mBART 50
## Additional languages to the mBART 25 model
The model is described in [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/pdf/2008.00401.pdf)
## Open source status
* [x] the model implementation is available: [mBART](https://github.com/pytorch/fairseq/tree/master/examples/mbart)
* [ ] the model weights are available: They should be released with the benchmark dataset, but I can't find them.
* [ ] who are the authors: @tangyuq @huihuifan @xianxl ...
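
As the comments above note, mBART-50 checkpoints were later released and documented on master. Below is a minimal translation sketch following those linked docs — the checkpoint name, tokenizer class, and language codes are taken from that documentation and should be treated as illustrative rather than guaranteed for the `transformers` version pinned elsewhere in this thread:

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

# Checkpoint name from the mBART-50 docs linked in the comments above.
ckpt = "facebook/mbart-large-50-many-to-many-mmt"
model = MBartForConditionalGeneration.from_pretrained(ckpt)
tokenizer = MBart50TokenizerFast.from_pretrained(ckpt)

# Set the source language, then force the target language as the first
# generated token via its language code.
tokenizer.src_lang = "en_XX"
inputs = tokenizer("The head of the UN says there is no military solution.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["ro_RO"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```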
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7060/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7059 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7059/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7059/comments | https://api.github.com/repos/huggingface/transformers/issues/7059/events | https://github.com/huggingface/transformers/pull/7059 | 698,423,878 | MDExOlB1bGxSZXF1ZXN0NDg0MjY4MjYy | 7,059 | these tests require non-multigpu env | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7059?src=pr&el=h1) Report\n> Merging [#7059](https://codecov.io/gh/huggingface/transformers/pull/7059?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/77950c485a6f6bcb4db6501bffbbd0cd96c0cd1a?el=desc) will **increase** coverage by `0.79%`.\n> The diff coverage is `71.42%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7059?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7059 +/- ##\n==========================================\n+ Coverage 80.25% 81.05% +0.79% \n==========================================\n Files 168 168 \n Lines 32172 32179 +7 \n==========================================\n+ Hits 25821 26082 +261 \n+ Misses 6351 6097 -254 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7059?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/testing\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7059/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `69.48% <71.42%> (+0.09%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7059/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7059/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7059/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.73% <0.00%> (-19.35%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7059/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.83% <0.00%> (-0.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7059/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.74% <0.00%> (+0.35%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7059/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.77% <0.00%> (+3.51%)` | :arrow_up: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7059/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <0.00%> (+10.24%)` | :arrow_up: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7059/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `79.77% <0.00%> (+12.66%)` | :arrow_up: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7059/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `66.41% <0.00%> (+13.43%)` | :arrow_up: |\n| ... 
and [4 more](https://codecov.io/gh/huggingface/transformers/pull/7059/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7059?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7059?src=pr&el=footer). Last update [77950c4...22a45b8](https://codecov.io/gh/huggingface/transformers/pull/7059?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Let's merge this for now, I'll work on some of those tests to remove the decorator"
] | 1,599 | 1,599 | 1,599 | CONTRIBUTOR | null | fix for https://github.com/huggingface/transformers/issues/7055#issuecomment-690648438
Fixes: #7055
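
The test output in #7055 shows the new `@require_non_multigpu` marker in use, and the diff touches `src/transformers/testing_utils.py`. A minimal sketch of what such a skip decorator could look like — the exact implementation is in the diff, so treat this version as an assumption:

```python
import unittest

import torch


def require_non_multigpu(test_case):
    """Skip a test when more than one CUDA device is visible (assumed sketch)."""
    if torch.cuda.device_count() > 1:
        return unittest.skip("test requires 0 or 1 GPU")(test_case)
    return test_case
```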
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7059/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7059",
"html_url": "https://github.com/huggingface/transformers/pull/7059",
"diff_url": "https://github.com/huggingface/transformers/pull/7059.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7059.patch",
"merged_at": 1599778376000
} |
https://api.github.com/repos/huggingface/transformers/issues/7058 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7058/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7058/comments | https://api.github.com/repos/huggingface/transformers/issues/7058/events | https://github.com/huggingface/transformers/pull/7058 | 698,394,540 | MDExOlB1bGxSZXF1ZXN0NDg0MjQyMTg5 | 7,058 | Document the dependency on datasets | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7058?src=pr&el=h1) Report\n> Merging [#7058](https://codecov.io/gh/huggingface/transformers/pull/7058?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/514486739cc732ad05549d81bd48c0aa9e03a0f3?el=desc) will **increase** coverage by `0.14%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7058?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7058 +/- ##\n==========================================\n+ Coverage 79.32% 79.47% +0.14% \n==========================================\n Files 168 168 \n Lines 32172 32172 \n==========================================\n+ Hits 25522 25568 +46 \n+ Misses 6650 6604 -46 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7058?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7058/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7058/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.91% <0.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7058/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.44% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7058/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `94.00% <0.00%> (+4.00%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7058/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.54% <0.00%> (+5.37%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7058/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+7.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7058?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7058?src=pr&el=footer). Last update [5144867...d360cf8](https://codecov.io/gh/huggingface/transformers/pull/7058?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,599 | 1,599 | 1,599 | COLLABORATOR | null | We need the latest version of datasets to run the tests (since Trainer uses it). When datasets is more settled and we don't necessarily need to quickly update it, we can change this to a regular dep in "dev" with a minimal version pinned. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7058/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7058/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7058",
"html_url": "https://github.com/huggingface/transformers/pull/7058",
"diff_url": "https://github.com/huggingface/transformers/pull/7058.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7058.patch",
"merged_at": 1599813800000
} |
https://api.github.com/repos/huggingface/transformers/issues/7057 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7057/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7057/comments | https://api.github.com/repos/huggingface/transformers/issues/7057/events | https://github.com/huggingface/transformers/pull/7057 | 698,344,128 | MDExOlB1bGxSZXF1ZXN0NDg0MTk3MDIx | 7,057 | Add option to pass parameters to the loss function. | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closed by this answer: https://github.com/huggingface/transformers/issues/7024#issuecomment-691481436"
] | 1,599 | 1,599 | 1,599 | CONTRIBUTOR | null | This PR is far from finished. In its current state it is just a demo of how I would implement it. See #7024
- `loss_function_params` is a dict that gets passed to the `CrossEntropyLoss` constructor
- that way you can set class weights for the `CrossEntropyLoss`, for example
- this is a very important option when working with imbalanced data and can improve your metric by a large amount
- it should not be a breaking change, since `loss_function_params` is always initialized with `None`, which preserves the current behavior
Example:
```python
import torch
from transformers import AutoConfig, AutoModelForSequenceClassification

model_name = 'bert-base-german-dbmdz-uncased'
config = AutoConfig.from_pretrained(
    model_name,
    num_labels=3,
)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    config=config,
    # `CrossEntropyLoss` expects `weight` to be a tensor, not a plain list
    loss_function_params={"weight": torch.tensor([0.8, 1.2, 0.97])},
)
```
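
A hypothetical sketch of how a model head could consume these params — this is not the actual `transformers` implementation, just an illustration of the mechanism proposed above, with the dict flowing into the loss constructor:

```python
import torch
from torch.nn import CrossEntropyLoss


class ToyClassificationHead(torch.nn.Module):
    """Illustrative head: `loss_function_params` flows into the loss constructor."""

    def __init__(self, hidden_size, num_labels, loss_function_params=None):
        super().__init__()
        self.num_labels = num_labels
        self.classifier = torch.nn.Linear(hidden_size, num_labels)
        # Defaulting to an empty dict keeps the current behavior when unset.
        self.loss_function_params = loss_function_params or {}

    def forward(self, hidden_states, labels=None):
        logits = self.classifier(hidden_states)
        loss = None
        if labels is not None:
            loss_fct = CrossEntropyLoss(**self.loss_function_params)
            loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
        return loss, logits
```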
## ToDo
- [ ] discuss with @LysandreJik @sgugger @nvs-abhilash
- [ ] implement for other models
- [ ] add docstrings
- [ ] write tests
- [ ] maybe implement for TF also | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7057/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7057",
"html_url": "https://github.com/huggingface/transformers/pull/7057",
"diff_url": "https://github.com/huggingface/transformers/pull/7057.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7057.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7056 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7056/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7056/comments | https://api.github.com/repos/huggingface/transformers/issues/7056/events | https://github.com/huggingface/transformers/pull/7056 | 698,342,187 | MDExOlB1bGxSZXF1ZXN0NDg0MTk1MzE3 | 7,056 | [wip/s2s] DistributedSortishSampler | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Not necessarily the base Trainer, but definitely the `Seq2SeqTrainer` in preparation if you use this sampler often.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7056?src=pr&el=h1) Report\n> Merging [#7056](https://codecov.io/gh/huggingface/transformers/pull/7056?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/514486739cc732ad05549d81bd48c0aa9e03a0f3?el=desc) will **increase** coverage by `0.25%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7056?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7056 +/- ##\n==========================================\n+ Coverage 79.32% 79.58% +0.25% \n==========================================\n Files 168 168 \n Lines 32172 32172 \n==========================================\n+ Hits 25522 25605 +83 \n+ Misses 6650 6567 -83 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7056?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `19.71% <0.00%> (-72.34%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert\\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/7056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9nZW5lcmF0aW9uLnB5) | `39.28% <0.00%> (-55.36%)` | :arrow_down: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/7056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `34.28% <0.00%> (-48.00%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `75.91% <0.00%> (-21.17%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.87% <0.00%> (-7.18%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/7056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |\n| ... 
and [16 more](https://codecov.io/gh/huggingface/transformers/pull/7056/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7056?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7056?src=pr&el=footer). Last update [5144867...41dca6e](https://codecov.io/gh/huggingface/transformers/pull/7056?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,599 | 1,599 | 1,599 | CONTRIBUTOR | null | This allows the sortish sampler logic to work on multiple GPUs.
The strategy, sketched below, is:
1) find the indices that the current rank's data loader should be using (like `DistributedSampler`)
2) reorder those using the `SortishSampler` logic.
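
A minimal, self-contained sketch of those two steps — not the exact implementation in this PR; the `src_lens` input (per-example source lengths) and the megabatch factor of 50 are assumptions made for illustration:

```python
import math

import torch
from torch.utils.data import Sampler


def sortish_indices(lengths, batch_size):
    # "Noisy" sort: shuffle, chunk into megabatches, then sort each chunk by
    # length so batches contain similar-length examples without fully
    # de-randomizing the epoch.
    idx = torch.randperm(len(lengths)).tolist()
    mega = [idx[i : i + batch_size * 50] for i in range(0, len(idx), batch_size * 50)]
    return [i for chunk in mega for i in sorted(chunk, key=lambda j: -lengths[j])]


class DistributedSortishSampler(Sampler):
    def __init__(self, src_lens, batch_size, num_replicas, rank, seed=0):
        self.src_lens, self.batch_size = src_lens, batch_size
        self.num_replicas, self.rank, self.seed = num_replicas, rank, seed
        self.num_samples = int(math.ceil(len(src_lens) / num_replicas))
        self.total_size = self.num_samples * num_replicas  # pad so shards divide evenly

    def __iter__(self):
        g = torch.Generator()
        g.manual_seed(self.seed)
        indices = torch.randperm(len(self.src_lens), generator=g).tolist()
        indices += indices[: self.total_size - len(indices)]  # pad with repeats
        # Step 1: shard like DistributedSampler -- each rank gets a disjoint slice.
        shard = indices[self.rank : self.total_size : self.num_replicas]
        # Step 2: reorder this rank's shard with the SortishSampler logic.
        shard_lens = [self.src_lens[i] for i in shard]
        return iter(shard[i] for i in sortish_indices(shard_lens, self.batch_size))

    def __len__(self):
        return self.num_samples
```

Each rank then iterates over its own length-grouped shard, which keeps per-batch padding low without breaking the even split across GPUs.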
### Results
The results on a small MT task are similar to the 1 GPU setting.
+ 2 GPU, random sampler: 13 mins/epoch, BLEU 8.6
+ 2 GPU, `DistributedSortishSampler`: 10 mins/epoch, BLEU 8.6
In the chart below, you can see that the sortish sampler reaches a higher BLEU score in the same number of minutes (x-axis), because it has finished a full epoch rather than 70% of one.

@sgugger let me know if you want this in Trainer! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7056/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7056",
"html_url": "https://github.com/huggingface/transformers/pull/7056",
"diff_url": "https://github.com/huggingface/transformers/pull/7056.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7056.patch",
"merged_at": 1599765825000
} |
https://api.github.com/repos/huggingface/transformers/issues/7055 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7055/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7055/comments | https://api.github.com/repos/huggingface/transformers/issues/7055/events | https://github.com/huggingface/transformers/issues/7055 | 698,338,101 | MDU6SXNzdWU2OTgzMzgxMDE= | 7,055 | [testing] test_trainer.py is failing | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"So with more look, the issue has to do with multiple GPUs. The problem goes away if I do:\r\n\r\n`CUDA_VISIBLE_DEVICES=\"0\" pytest tests/test_trainer.py`",
"> Looks like those tests need a decorator to run on the CPU only.\r\n\r\nThey work with 1 gpu too, but we don't have such a setup to force 1 gpu?\r\n\r\nThough isn't this exposing a problem and we will just hide it by setting CPU-only?",
"Yes, 2 GPUs end up doing half the updates, hence the error when counting the number of steps or the wrong value for a/b. This not a problem per se: it's logical to do half the steps since the batch size is multiplied by 2 and it's also logical that the seeded training end up to a different value with a different batch size.\r\n\r\nBut I wasn't expecting those to run on 2 GPUs, so we should add something to not use the 2 GPUs, which I don't think is possible since Trainer is there to automatically use all of them...",
"The best I can think of is to skip when detecting multiple GPUs. At least the tests that test the seeded training. For the number of steps, we can adapt to use the proper batch size.",
"ok, let me try - will post PR when working.",
"Odd, now it's failing with 1 or 0 gpus too, but only 4 tests. \r\n```\r\ncollecting ... \r\n tests/test_trainer.py β 9% β \r\n\r\nββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ TrainerIntegrationTest.test_custom_optimizer ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\r\n\r\nself = <tests.test_trainer.TrainerIntegrationTest testMethod=test_custom_optimizer>\r\n\r\n @require_non_multigpu\r\n def test_custom_optimizer(self):\r\n train_dataset = RegressionDataset()\r\n args = TrainingArguments(\"./regression\")\r\n model = RegressionModel()\r\n optimizer = torch.optim.SGD(model.parameters(), lr=1.0)\r\n lr_scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda x: 1.0)\r\n trainer = Trainer(model, args, train_dataset=train_dataset, optimizers=(optimizer, lr_scheduler))\r\n trainer.train()\r\n \r\n> self.assertTrue(torch.abs(trainer.model.a - 1.8950) < 1e-4)\r\nE AssertionError: tensor(False, device='cuda:0') is not true\r\n\r\ntests/test_trainer.py:245: AssertionError\r\n---------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------\r\nIteration: 100%|ββββββββββ| 8/8 [00:00<00:00, 1064.58it/s]\r\nIteration: 100%|ββββββββββ| 8/8 [00:00<00:00, 1292.20it/s]\r\nIteration: 100%|ββββββββββ| 8/8 [00:00<00:00, 1446.00it/s]\r\nEpoch: 100%|ββββββββββ| 3/3 [00:00<00:00, 152.11it/s]\r\n\r\n tests/test_trainer.py β¨― 18% ββ \r\n\r\nβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ TrainerIntegrationTest.test_model_init βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\r\n\r\nself = <tests.test_trainer.TrainerIntegrationTest testMethod=test_model_init>\r\n\r\n @require_non_multigpu\r\n def test_model_init(self):\r\n train_dataset = RegressionDataset()\r\n args = TrainingArguments(\"./regression\", learning_rate=0.1)\r\n trainer = Trainer(args=args, train_dataset=train_dataset, model_init=lambda: RegressionModel())\r\n trainer.train()\r\n> self.check_trained_model(trainer.model)\r\n\r\ntests/test_trainer.py:255: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_trainer.py:105: in check_trained_model\r\n self.assertTrue(torch.abs(model.a - 0.6975) < 1e-4)\r\nE AssertionError: tensor(False, device='cuda:0') is not true\r\n---------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------\r\nIteration: 100%|ββββββββββ| 8/8 [00:00<00:00, 1122.63it/s]\r\nIteration: 100%|ββββββββββ| 8/8 [00:00<00:00, 1000.25it/s]\r\nIteration: 100%|ββββββββββ| 8/8 [00:00<00:00, 1124.70it/s]\r\nEpoch: 100%|ββββββββββ| 3/3 [00:00<00:00, 131.09it/s]\r\n\r\n tests/test_trainer.py β¨― 27% βββ \r\n\r\nβββββββββββββββββββββββββββββββββββββββββββββββββββββββ TrainerIntegrationTest.test_trainer_with_datasets ββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\r\n\r\nself = <tests.test_trainer.TrainerIntegrationTest testMethod=test_trainer_with_datasets>\r\n\r\n @require_non_multigpu\r\n def test_trainer_with_datasets(self):\r\n np.random.seed(42)\r\n x = np.random.normal(size=(64,)).astype(np.float32)\r\n y = 2.0 * x + 3.0 + np.random.normal(scale=0.1, size=(64,))\r\n train_dataset = datasets.Dataset.from_dict({\"input_x\": x, \"label\": y})\r\n \r\n # Base training. 
Should have the same results as test_reproducible_training\r\n model = RegressionModel()\r\n args = TrainingArguments(\"./regression\", learning_rate=0.1)\r\n trainer = Trainer(model, args, train_dataset=train_dataset)\r\n trainer.train()\r\n> self.check_trained_model(trainer.model)\r\n\r\ntests/test_trainer.py:218: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_trainer.py:105: in check_trained_model\r\n self.assertTrue(torch.abs(model.a - 0.6975) < 1e-4)\r\nE AssertionError: tensor(False, device='cuda:0') is not true\r\n---------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------\r\nSet __getitem__(key) output type to python objects for ['input_x', 'label'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nIteration: 100%|ββββββββββ| 8/8 [00:00<00:00, 941.46it/s]\r\nIteration: 100%|ββββββββββ| 8/8 [00:00<00:00, 1036.33it/s]\r\nIteration: 100%|ββββββββββ| 8/8 [00:00<00:00, 1004.92it/s]\r\nEpoch: 100%|ββββββββββ| 3/3 [00:00<00:00, 121.20it/s]\r\n\r\n tests/test_trainer.py β¨― 36% ββββ \r\n\r\nβββββββββββββββββββββββββββββββββββββββββββββββββββββββ TrainerIntegrationTest.test_reproducible_training ββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\r\n\r\nself = <tests.test_trainer.TrainerIntegrationTest testMethod=test_reproducible_training>\r\n\r\n @require_non_multigpu\r\n def test_reproducible_training(self):\r\n # Checks that training worked, model trained and seed made a reproducible training.\r\n trainer = get_regression_trainer(learning_rate=0.1)\r\n trainer.train()\r\n> self.check_trained_model(trainer.model)\r\n\r\ntests/test_trainer.py:119:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\ntests/test_trainer.py:105: in check_trained_model\r\n self.assertTrue(torch.abs(model.a - 0.6975) < 1e-4)\r\nE AssertionError: tensor(False, device='cuda:0') is not true\r\n---------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------\r\nIteration: 100%|ββββββββββ| 8/8 [00:00<00:00, 1168.66it/s]\r\nIteration: 100%|ββββββββββ| 8/8 [00:00<00:00, 1169.27it/s]\r\nIteration: 100%|ββββββββββ| 8/8 [00:00<00:00, 1208.34it/s]\r\nEpoch: 100%|ββββββββββ| 3/3 [00:00<00:00, 143.97it/s]\r\n\r\n tests/test_trainer.py β¨―ββββββ 100% ββββββββββ\r\n======================================================================== warnings summary ========================================================================\r\n/home/stas/anaconda3/envs/main/lib/python3.8/site-packages/tensorflow/python/pywrap_tensorflow_internal.py:15\r\n /home/stas/anaconda3/envs/main/lib/python3.8/site-packages/tensorflow/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/warnings.html\r\n==================================================================== short test summary info =====================================================================\r\nFAILED tests/test_trainer.py::TrainerIntegrationTest::test_custom_optimizer - AssertionError: tensor(False, device='cuda:0') is not 
true\r\nFAILED tests/test_trainer.py::TrainerIntegrationTest::test_model_init - AssertionError: tensor(False, device='cuda:0') is not true\r\nFAILED tests/test_trainer.py::TrainerIntegrationTest::test_trainer_with_datasets - AssertionError: tensor(False, device='cuda:0') is not true\r\nFAILED tests/test_trainer.py::TrainerIntegrationTest::test_reproducible_training - AssertionError: tensor(False, device='cuda:0') is not true\r\n\r\nResults (9.28s):\r\n 7 passed\r\n 4 failed\r\n - tests/test_trainer.py:235 TrainerIntegrationTest.test_custom_optimizer\r\n - tests/test_trainer.py:249 TrainerIntegrationTest.test_model_init\r\n - tests/test_trainer.py:206 TrainerIntegrationTest.test_trainer_with_datasets\r\n - tests/test_trainer.py:114 TrainerIntegrationTest.test_reproducible_training\r\n```",
"OK, so I found at least one difference - fails on py38 env, but works on py37 - I will check if it's python or some libs that are different and report back.\r\n\r\n**edit**: I rebuilt a fresh py38 env and the problem is gone, so it must be some other packages. I will post more if I find the culprit.",
"Hmm, I was trying to match the environments and accidentally updated the environment the test breaks in and it no longer breaks. So it must have been some package. The main suspect was pytorch as I had a nightly version, but I reverted back to that old version and it still works. \r\n\r\nSo this issue is resolved once this is merged https://github.com/huggingface/transformers/pull/7059"
] | 1,599 | 1,599 | 1,599 | CONTRIBUTOR | null | ```
pytest tests/test_trainer.py
```
```
platform linux -- Python 3.7.9, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
rootdir: /mnt/nvme1/code/huggingface/transformers-master
plugins: xdist-2.1.0, forked-1.3.0
collected 11 items
tests/test_trainer.py F.FF.FF...F [100%]
====================================================================== FAILURES ======================================================================
____________________________________________________ TrainerIntegrationTest.test_custom_optimizer ____________________________________________________
self = <tests.test_trainer.TrainerIntegrationTest testMethod=test_custom_optimizer>
def test_custom_optimizer(self):
train_dataset = RegressionDataset()
args = TrainingArguments("./regression")
model = RegressionModel()
optimizer = torch.optim.SGD(model.parameters(), lr=1.0)
lr_scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda x: 1.0)
trainer = Trainer(model, args, train_dataset=train_dataset, optimizers=(optimizer, lr_scheduler))
trainer.train()
> self.assertTrue(torch.abs(trainer.model.a - 1.8950) < 1e-4)
E AssertionError: tensor(False, device='cuda:0') is not true
tests/test_trainer.py:240: AssertionError
---------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------
Iteration: 100%|ββββββββββ| 4/4 [00:00<00:00, 4.15it/s]
Iteration: 100%|ββββββββββ| 4/4 [00:00<00:00, 584.41it/s]
Iteration: 100%|ββββββββββ| 4/4 [00:00<00:00, 570.73it/s]
Epoch: 100%|ββββββββββ| 3/3 [00:00<00:00, 3.06it/s]
_______________________________________________________ TrainerIntegrationTest.test_model_init _______________________________________________________
self = <tests.test_trainer.TrainerIntegrationTest testMethod=test_model_init>
def test_model_init(self):
train_dataset = RegressionDataset()
args = TrainingArguments("./regression", learning_rate=0.1)
trainer = Trainer(args=args, train_dataset=train_dataset, model_init=lambda: RegressionModel())
trainer.train()
> self.check_trained_model(trainer.model)
tests/test_trainer.py:249:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_trainer.py:105: in check_trained_model
self.assertTrue(torch.abs(model.a - 0.6975) < 1e-4)
E AssertionError: tensor(False, device='cuda:0') is not true
---------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------
Iteration: 100%|ββββββββββ| 4/4 [00:00<00:00, 540.05it/s]
Iteration: 100%|ββββββββββ| 4/4 [00:00<00:00, 510.99it/s]
Iteration: 100%|ββββββββββ| 4/4 [00:00<00:00, 553.37it/s]
Epoch: 100%|ββββββββββ| 3/3 [00:00<00:00, 130.01it/s]
______________________________________________ TrainerIntegrationTest.test_number_of_steps_in_training _______________________________________________
self = <tests.test_trainer.TrainerIntegrationTest testMethod=test_number_of_steps_in_training>
def test_number_of_steps_in_training(self):
# Regular training has n_epochs * len(train_dl) steps
trainer = get_regression_trainer(learning_rate=0.1)
train_output = trainer.train()
> self.assertEqual(train_output.global_step, self.n_epochs * 64 / self.batch_size)
E AssertionError: 12 != 24.0
tests/test_trainer.py:129: AssertionError
---------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------
Iteration: 100%|ββββββββββ| 4/4 [00:00<00:00, 547.43it/s]
Iteration: 100%|ββββββββββ| 4/4 [00:00<00:00, 573.03it/s]
Iteration: 100%|ββββββββββ| 4/4 [00:00<00:00, 557.12it/s]
Epoch: 100%|ββββββββββ| 3/3 [00:00<00:00, 136.18it/s]
_________________________________________________ TrainerIntegrationTest.test_reproducible_training __________________________________________________
self = <tests.test_trainer.TrainerIntegrationTest testMethod=test_reproducible_training>
def test_reproducible_training(self):
# Checks that training worked, model trained and seed made a reproducible training.
trainer = get_regression_trainer(learning_rate=0.1)
trainer.train()
> self.check_trained_model(trainer.model)
tests/test_trainer.py:118:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_trainer.py:105: in check_trained_model
self.assertTrue(torch.abs(model.a - 0.6975) < 1e-4)
E AssertionError: tensor(False, device='cuda:0') is not true
---------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------
Iteration: 100%|ββββββββββ| 4/4 [00:00<00:00, 388.21it/s]
Iteration: 100%|ββββββββββ| 4/4 [00:00<00:00, 556.31it/s]
Iteration: 100%|ββββββββββ| 4/4 [00:00<00:00, 544.31it/s]
Epoch: 100%|ββββββββββ| 3/3 [00:00<00:00, 117.50it/s]
_______________________________________________ TrainerIntegrationTest.test_train_and_eval_dataloaders _______________________________________________
self = <tests.test_trainer.TrainerIntegrationTest testMethod=test_train_and_eval_dataloaders>
def test_train_and_eval_dataloaders(self):
trainer = get_regression_trainer(learning_rate=0.1, per_device_train_batch_size=16)
> self.assertEqual(trainer.get_train_dataloader().batch_size, 16)
E AssertionError: 32 != 16
tests/test_trainer.py:143: AssertionError
____________________________________________________ TrainerIntegrationTest.test_trainer_with_nlp ____________________________________________________
self = <tests.test_trainer.TrainerIntegrationTest testMethod=test_trainer_with_nlp>
def test_trainer_with_nlp(self):
np.random.seed(42)
x = np.random.normal(size=(64,)).astype(np.float32)
y = 2.0 * x + 3.0 + np.random.normal(scale=0.1, size=(64,))
train_dataset = nlp.Dataset.from_dict({"input_x": x, "label": y})
# Base training. Should have the same results as test_reproducible_training
model = RegressionModel()
args = TrainingArguments("./regression", learning_rate=0.1)
> trainer = Trainer(model, args, train_dataset=train_dataset)
tests/test_trainer.py:212:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/trainer.py:285: in __init__
self._remove_unused_columns(self.train_dataset, description="training")
src/transformers/trainer.py:311: in _remove_unused_columns
dataset.set_format(type=dataset.format["type"], columns=columns)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Dataset(features: {'input_x': Value(dtype='float32', id=None), 'label': Value(dtype='float64', id=None)}, num_rows: 64), type = 'python'
columns = ['input_x', 'label'], output_all_columns = False, format_kwargs = {}
def set_format(
self,
type: Optional[str] = None,
columns: Optional[List] = None,
output_all_columns: bool = False,
**format_kwargs,
):
""" Set __getitem__ return format (type and columns)
Args:
type (Optional ``str``): output type selected in [None, 'numpy', 'torch', 'tensorflow', 'pandas']
None means __getitem__ returns python objects (default)
columns (Optional ``List[str]``): columns to format in the output
None means __getitem__ returns all columns (default)
output_all_columns (``bool`` default to False): keep un-formated columns as well in the output (as python objects)
format_kwargs: keywords arguments passed to the convert function like `np.array`, `torch.tensor` or `tensorflow.ragged.constant`.
"""
# Check return type
if type == "torch":
try:
import torch # noqa: F401
except ImportError:
logger.error("PyTorch needs to be installed to be able to return PyTorch tensors.")
elif type == "tensorflow":
try:
import tensorflow # noqa: F401
except ImportError:
logger.error("Tensorflow needs to be installed to be able to return Tensorflow tensors.")
else:
assert not (
type == "pandas" and (output_all_columns or format_kwargs)
), "Format type 'pandas' doesn't allow the use of `output_all_columns` or `**format_kwargs`."
assert (
type is None or type == "numpy" or type == "pandas"
> ), "Return type should be None or selected in ['numpy', 'torch', 'tensorflow', 'pandas']."
E AssertionError: Return type should be None or selected in ['numpy', 'torch', 'tensorflow', 'pandas'].
/home/stas/anaconda3/envs/main-37/lib/python3.7/site-packages/nlp/arrow_dataset.py:542: AssertionError
================================================================== warnings summary ==================================================================
/home/stas/anaconda3/envs/main-37/lib/python3.7/site-packages/tensorflow/python/data/ops/iterator_ops.py:546
/home/stas/anaconda3/envs/main-37/lib/python3.7/site-packages/tensorflow/python/data/ops/iterator_ops.py:546: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working
class IteratorBase(collections.Iterator, trackable.Trackable,
/home/stas/anaconda3/envs/main-37/lib/python3.7/site-packages/tensorflow/python/data/ops/dataset_ops.py:106
/home/stas/anaconda3/envs/main-37/lib/python3.7/site-packages/tensorflow/python/data/ops/dataset_ops.py:106: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working
class DatasetV2(collections.Iterable, tracking_base.Trackable,
/home/stas/anaconda3/envs/main-37/lib/python3.7/site-packages/tensorflow/python/autograph/utils/testing.py:21
/home/stas/anaconda3/envs/main-37/lib/python3.7/site-packages/tensorflow/python/autograph/utils/testing.py:21: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
src/transformers/modeling_tf_utils.py:718
/mnt/nvme1/code/huggingface/transformers-master/src/transformers/modeling_tf_utils.py:718: DeprecationWarning: invalid escape sequence \s
"""
src/transformers/modeling_funnel.py:130
/mnt/nvme1/code/huggingface/transformers-master/src/transformers/modeling_funnel.py:130: DeprecationWarning: invalid escape sequence \d
layer_index = int(re.search("layer_(\d+)", m_name).groups()[0])
tests/test_trainer.py::TrainerIntegrationTest::test_custom_optimizer
tests/test_trainer.py::TrainerIntegrationTest::test_evaluate
tests/test_trainer.py::TrainerIntegrationTest::test_model_init
tests/test_trainer.py::TrainerIntegrationTest::test_number_of_steps_in_training
tests/test_trainer.py::TrainerIntegrationTest::test_predict
tests/test_trainer.py::TrainerIntegrationTest::test_reproducible_training
/home/stas/anaconda3/envs/main-37/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
-- Docs: https://docs.pytest.org/en/stable/warnings.html
============================================================== short test summary info ===============================================================
FAILED tests/test_trainer.py::TrainerIntegrationTest::test_custom_optimizer - AssertionError: tensor(False, device='cuda:0') is not true
FAILED tests/test_trainer.py::TrainerIntegrationTest::test_model_init - AssertionError: tensor(False, device='cuda:0') is not true
FAILED tests/test_trainer.py::TrainerIntegrationTest::test_number_of_steps_in_training - AssertionError: 12 != 24.0
FAILED tests/test_trainer.py::TrainerIntegrationTest::test_reproducible_training - AssertionError: tensor(False, device='cuda:0') is not true
FAILED tests/test_trainer.py::TrainerIntegrationTest::test_train_and_eval_dataloaders - AssertionError: 32 != 16
FAILED tests/test_trainer.py::TrainerIntegrationTest::test_trainer_with_nlp - AssertionError: Return type should be None or selected in ['numpy', '...
===================================================== 6 failed, 5 passed, 11 warnings in 10.28s ==================================================
```
Env:
```
- `transformers` version: 3.1.0
- Platform: Linux-4.15.0-112-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: yes
```
Thanks.
@sgugger?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7055/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7054 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7054/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7054/comments | https://api.github.com/repos/huggingface/transformers/issues/7054/events | https://github.com/huggingface/transformers/pull/7054 | 698,322,868 | MDExOlB1bGxSZXF1ZXN0NDg0MTc3NzU0 | 7,054 | Fix CI with change of name of nlp | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7054?src=pr&el=h1) Report\n> Merging [#7054](https://codecov.io/gh/huggingface/transformers/pull/7054?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d6c08b07a087e83915b4b3156bbf464cebc7b9b5?el=desc) will **increase** coverage by `2.11%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7054?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7054 +/- ##\n==========================================\n+ Coverage 78.74% 80.85% +2.11% \n==========================================\n Files 168 168 \n Lines 32172 32172 \n==========================================\n+ Hits 25335 26014 +679 \n+ Misses 6837 6158 -679 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7054?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/7054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.33% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <100.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `54.68% <100.00%> (ΓΈ)` | |\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.77% <0.00%> (-0.68%)` | :arrow_down: |\n| ... and [26 more](https://codecov.io/gh/huggingface/transformers/pull/7054/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7054?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7054?src=pr&el=footer). 
Last update [df4594a...f8b9682](https://codecov.io/gh/huggingface/transformers/pull/7054?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Merging to make the CI green but happy to address any comment in a follow-up PR.",
"```\r\n_______________________________________________________ ERROR collecting tests/test_trainer.py _______________________________________________________\r\nImportError while importing test module '/mnt/nvme1/code/huggingface/transformers-master/tests/test_trainer.py'.\r\nHint: make sure your test modules/packages have valid Python names.\r\nTraceback:\r\n/home/stas/anaconda3/envs/main-37/lib/python3.7/importlib/__init__.py:127: in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\ntests/test_trainer.py:3: in <module>\r\n import datasets\r\nE ModuleNotFoundError: No module named 'datasets'\r\n```",
"and after `pip install datasets` (needed in `setup.py`), the failure is still the same as in https://github.com/huggingface/transformers/issues/7055",
"I think you need an install from source.\r\n",
"not sure what you mean? install from source `datasets`?\r\n\r\nI did:\r\n\r\n```\r\ngit pull\r\npip install -e .[dev]\r\n```\r\nin transformers",
"It's not in the depencies of transformers and requires a separate install from source for now. It does work on the CI and my machine:\r\n```\r\ngit clone https://github.com/huggingface/datasets\r\ncd datasets\r\npip install -e .\r\n```",
"I did what you suggested, same failures.",
"Are you sure you are in the same env?\r\n\r\nnlp was never in `setup.py`. It is an additional dep required for the full test suite as a source install for now, will become a dep when it's stable enough. I'll add that to the CONTRIBUTING but trying to understand why it fails for you before.",
"I have 2 gpus, you probably don't?\r\n\r\nIndeed, if I run:\r\n\r\n```CUDA_VISIBLE_DEVICES=\"\" pytest tests/test_trainer.py```\r\n\r\nit works.",
"Yup, it's multi-gpu that is the problem. It works if I do `CUDA_VISIBLE_DEVICES=\"0\" pytest tests/test_trainer.py`",
"Mmmm, why would the multi-gpu not see a new module. That's weird.",
"I'm not sure you have looked at the errors https://github.com/huggingface/transformers/issues/7055 - they are of numeric mismatch nature. Have a look?\r\n\r\n`12 != 24.0` looks like 1 vs 2 gpu issue.\r\n\r\nlet's move back into #7055 and continue there.",
"Oh this is a different error, not a missing model. Looks like those tests need a decorator to run on the CPU only.",
"> nlp was never in setup.py. It is an additional dep required for the full test suite as a source install for now, will become a dep when it's stable enough. I'll add that to the CONTRIBUTING but trying to understand why it fails for you before.\r\n\r\n`datasets` needs to be in requirements for `dev` - otherwise test suite fails.",
"Yes, like I said you need a separate source install of it. You can't have a source install from dev that is properly up to date AFAIK.\r\n\r\nDocumented this in #7058"
] | 1,599 | 1,599 | 1,599 | COLLABORATOR | null | Fixes #7055 (because yes, I can see the future) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7054/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7054",
"html_url": "https://github.com/huggingface/transformers/pull/7054",
"diff_url": "https://github.com/huggingface/transformers/pull/7054.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7054.patch",
"merged_at": 1599763869000
} |
https://api.github.com/repos/huggingface/transformers/issues/7053 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7053/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7053/comments | https://api.github.com/repos/huggingface/transformers/issues/7053/events | https://github.com/huggingface/transformers/pull/7053 | 698,303,017 | MDExOlB1bGxSZXF1ZXN0NDg0MTU5OTg4 | 7,053 | [examples] bump pl=0.9.0 | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @patil-suraj @stas00 "
] | 1,599 | 1,602 | 1,602 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7053/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7053",
"html_url": "https://github.com/huggingface/transformers/pull/7053",
"diff_url": "https://github.com/huggingface/transformers/pull/7053.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7053.patch",
"merged_at": 1602448779000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/7052 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7052/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7052/comments | https://api.github.com/repos/huggingface/transformers/issues/7052/events | https://github.com/huggingface/transformers/issues/7052 | 698,300,464 | MDU6SXNzdWU2OTgzMDA0NjQ= | 7,052 | TFBert activation layer will be casted into float32 under mixed precision policy | {
"login": "jlei2",
"id": 70337521,
"node_id": "MDQ6VXNlcjcwMzM3NTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/70337521?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlei2",
"html_url": "https://github.com/jlei2",
"followers_url": "https://api.github.com/users/jlei2/followers",
"following_url": "https://api.github.com/users/jlei2/following{/other_user}",
"gists_url": "https://api.github.com/users/jlei2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlei2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlei2/subscriptions",
"organizations_url": "https://api.github.com/users/jlei2/orgs",
"repos_url": "https://api.github.com/users/jlei2/repos",
"events_url": "https://api.github.com/users/jlei2/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlei2/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello @jlei2!\r\n\r\nThis issue should be fixed in https://github.com/huggingface/transformers/pull/7022 ",
"Gotcha! Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi @jplu , you mentioned that this issue should be fixed in #7022. But I noticed that that PR has been closed. Are you still planning to look at this? Thanks!",
"Did you try with the master version?",
"Yes. I got same warning indicating that the gelu activation layer will be executed in fp32.",
"Thanks! I will rework this.",
"Hey @jlei2 ! Can you let me know if your issue is fixed in this PR please https://github.com/huggingface/transformers/pull/9163",
"Hi @jplu,\r\n\r\nThanks for creating the PR! Unfortunately the PR doesn't fix the issue. It resolves the 2nd causes I mentioned in this PR description but not the 1st one. It means after applying your PR, the activation layer will always has a compute dtype of fp32, no matter we are in mixed_fp16 mode or pure fp16 mode. As I said for the 1st cause, I think it is because the activation layer is defined outside of the BertIntermediate layer and the computation policy is not broadcasted into it.\r\n\r\nFor example, under your branch, with the code below:\r\n```\r\nimport tensorflow as tf\r\nfrom transformers import TFBertForQuestionAnswering\r\ntf.keras.mixed_precision.experimental.set_policy('float16')\r\nmodel = TFBertForQuestionAnswering.from_pretrained('bert-base-uncased')\r\n```\r\n\r\nWe still got\r\n```\r\nWARNING:tensorflow:Layer activation is casting an input tensor from dtype float16 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because its dtype defaults to f\r\nloatx.\r\n\r\nIf you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.\r\n\r\nTo change all layers to have dtype float16 by default, call `tf.keras.backend.set_floatx('float16')`. To change just this layer, pass dtype='float16' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.\r\n```\r\nindicating that the activation layer is casted into float32.\r\n\r\nThe behavior is different from Google's official Bert codebase where the activation layer will be executed as float16. And the two cast ops converting float16 to float32 and then converting it back can slow down the forward computation but provide higher precision.",
"Did you pull the Last version? I don't get the same behavior than you.",
"I think so. I am on your branch of fix-activations. Does it require any specific version of TF? I am using tf==2.3.0. Do you mean you don't get the warning info?",
"I'm using 2.4 and no, I don't have the warning. Try to git pull once again to check.",
"Just tried with 2.3 as well and I still don't get the warning.",
"With TF2.4, I don't get the warning because in your PR after tf2.4 we start to use tf.keras.activations.gelu which is introduced in tf2.4. But I still think you should be able to get the warning with tf 2.3, because in that case we are using tf.keras.layers.Activation to create the activation layer. This is weird. Could you double-check you are using tf 2.3 and the execution flow does go through here(https://github.com/huggingface/transformers/blob/4ba0b792767c2dfff61afbf4fd36b8bc521518a1/src/transformers/activations_tf.py#L21)?",
"I confirm, including with gelu_new",
"Ok, I know why, I was setting the mixed precision before to import the transformers lib, now if I switch the two I get the warning with 2.3. I will remove the usage of the Activation layer and use only the function then.",
"I just did the update, now it should works!",
"Oh good catch! I didn't know the order would make a difference. It now works well on my side!",
"Awesome! Happy to know that the issue is fixed on your side as well!! The PR should be merged this week :)",
"Great! I really appreciate your effort on this issue!"
] | 1,599 | 1,611 | 1,611 | NONE | null | ### To reproduce
```
import tensorflow as tf
from transformers import TFBertForQuestionAnswering
tf.keras.mixed_precision.experimental.set_policy('mixed_float16')
model = TFBertForQuestionAnswering.from_pretrained('bert-base-uncased')
```
Users will receive a warning saying:
```
WARNING:tensorflow:Layer activation is casting an input tensor from dtype float16 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because its dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float16 by default, call `tf.keras.backend.set_floatx('float16')`. To change just this layer, pass dtype='float16' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
```
This log means that under the mixed-precision policy, the gelu activation layer in TFBert is executed in float32. So the input x ([link](https://github.com/huggingface/transformers/blob/d6c08b07a087e83915b4b3156bbf464cebc7b9b5/src/transformers/modeling_tf_bert.py#L91)) is cast to float32 before being sent into the activation layer, and cast back to float16 after the computation finishes.
### Causes
This issue has two main causes:
- The [ACT2FN](https://github.com/huggingface/transformers/blob/d6c08b07a087e83915b4b3156bbf464cebc7b9b5/src/transformers/modeling_tf_bert.py#L121) dict is defined outside of the [TFBertIntermediate class](https://github.com/huggingface/transformers/blob/d6c08b07a087e83915b4b3156bbf464cebc7b9b5/src/transformers/modeling_tf_bert.py#L347), and the mixed-precision policy is not broadcast into an activation layer defined this way. This can be verified by adding `print(self.intermediate_act_fn._compute_dtype)` below this [line](https://github.com/huggingface/transformers/blob/d6c08b07a087e83915b4b3156bbf464cebc7b9b5/src/transformers/modeling_tf_bert.py#L360), which prints float32.
- tf.math.sqrt(2.0) used in [gelu](https://github.com/huggingface/transformers/blob/d6c08b07a087e83915b4b3156bbf464cebc7b9b5/src/transformers/modeling_tf_bert.py#L98) always returns a float32 tensor, even under the mixed-precision policy, so a dtype-incompatibility error appears once ACT2FN is moved into the TFBertIntermediate class. This can be solved by giving tf.math.sqrt(2.0) a dtype consistent with `tf.keras.mixed_precision.experimental.global_policy().compute_dtype` (see the sketch below).
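For illustration, a minimal sketch of that second fix, assuming the `gelu` helper from `modeling_tf_bert.py` (a sketch only, not the exact diff in the PR linked below):
```python
import tensorflow as tf

def gelu(x):
    # Build the constant in the global policy's compute dtype so the divide/erf
    # run in float16 under 'mixed_float16' instead of forcing float32 casts.
    compute_dtype = tf.keras.mixed_precision.experimental.global_policy().compute_dtype
    sqrt2 = tf.math.sqrt(tf.constant(2.0, dtype=compute_dtype))
    cdf = 0.5 * (1.0 + tf.math.erf(x / sqrt2))
    return x * cdf
```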
### Pros & Cons
Pros: the activation layer has higher precision, since it is computed in float32.
Cons: the two additional Cast operations increase latency.
### PR
The PR that gets rid of the warning log and makes the activation layer execute in float16 under the mixed-precision policy: https://github.com/jlei2/transformers/pull/3. This PR is not a complete fix, because other models also import this ACT2FN dict, so it only solves the issue in TFBert-related models.
### Benchmark Results
As Hugging Face's official benchmark tests don't support mixed precision, the comparison was made with the TensorFlow profiling tool (https://www.tensorflow.org/tfx/serving/tensorboard).
Hardware: 1 V100 GPU.
Model: Hugging Face FP16 BERT-Base with batch_size=128 and seq_len=128.
<img width="350" alt="image (5)" src="https://user-images.githubusercontent.com/70337521/92769405-cae10f80-f34d-11ea-87c0-df4ec9ecdae2.png">
<img width="350" alt="image (5)" src="https://user-images.githubusercontent.com/70337521/92769851-39be6880-f34e-11ea-9926-2e8014c17e4e.png">
After applying the PR, the two Cast ops disappear and the time spent on the activation layer drops from 3.745 ms to 1.75 ms.
This issue is not a bug per se, and the pros & cons are listed above. Would you consider making the corresponding changes? @patrickvonplaten Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7052/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7052/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7051 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7051/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7051/comments | https://api.github.com/repos/huggingface/transformers/issues/7051/events | https://github.com/huggingface/transformers/issues/7051 | 698,291,655 | MDU6SXNzdWU2OTgyOTE2NTU= | 7,051 | How to pass tokenized hypotheses to TFRobertaForSequenceClassification model directly for faster inference? | {
"login": "akshatapatel",
"id": 2765148,
"node_id": "MDQ6VXNlcjI3NjUxNDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2765148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akshatapatel",
"html_url": "https://github.com/akshatapatel",
"followers_url": "https://api.github.com/users/akshatapatel/followers",
"following_url": "https://api.github.com/users/akshatapatel/following{/other_user}",
"gists_url": "https://api.github.com/users/akshatapatel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akshatapatel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akshatapatel/subscriptions",
"organizations_url": "https://api.github.com/users/akshatapatel/orgs",
"repos_url": "https://api.github.com/users/akshatapatel/repos",
"events_url": "https://api.github.com/users/akshatapatel/events{/privacy}",
"received_events_url": "https://api.github.com/users/akshatapatel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello @akshatapatel!\r\n\r\nPinging @joeddav as he is the person who takes care of the zero-shot classification in the lib.\r\n\r\nCan you share with us a piece of code on how you are trying to do your sequence classification?",
"Question moved to [the forums](https://discuss.huggingface.co/t/new-pipeline-for-zero-shot-text-classification/681/30?u=joeddav). Other discussions on speeding up zero shot classification [here](https://discuss.huggingface.co/t/speeding-up-zero-shot-classification-solved/692) and [here](https://discuss.huggingface.co/t/way-to-make-inference-zero-shot-pipeline-faster/1384)"
] | 1,599 | 1,603 | 1,603 | NONE | null | Hi,
I have been running the zero-shot classification pipeline for my use case by passing each text and its corresponding list of hypothesis labels; however, it takes around 3 hours on a 32GB GPU to classify ~22000 sentences (each sentence can have a varying number of labels, ranging from 70 to 140).
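What I have in mind is roughly the following sketch (assuming the default `roberta-large-mnli` checkpoint; the entailment class index may differ for other NLI models): score all hypotheses for one text in a single padded batch instead of one pipeline call per label.
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = TFAutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

text = "Who are you voting for in 2020?"
labels = ["politics", "economics", "sports"]  # 70-140 labels per sentence in my case
hypotheses = [f"This example is {label}." for label in labels]

# Encode all premise/hypothesis pairs at once and run a single forward pass.
inputs = tokenizer([text] * len(hypotheses), hypotheses, padding=True, truncation=True, return_tensors="tf")
logits = model(inputs)[0]
entailment_probs = tf.nn.softmax(logits, axis=-1)[:, -1]  # entailment is the last class for this model
```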
Is there a way to reduce the computation time by passing the embeddings to the sequence classification model directly rather than the raw list of labels? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7051/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7050 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7050/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7050/comments | https://api.github.com/repos/huggingface/transformers/issues/7050/events | https://github.com/huggingface/transformers/pull/7050 | 698,104,941 | MDExOlB1bGxSZXF1ZXN0NDgzOTgxMzg4 | 7,050 | [BertGeneration, Docs] Fix another old name in docs | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,599 | 1,599 | 1,599 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7050/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7050",
"html_url": "https://github.com/huggingface/transformers/pull/7050",
"diff_url": "https://github.com/huggingface/transformers/pull/7050.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7050.patch",
"merged_at": 1599750754000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/7049 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7049/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7049/comments | https://api.github.com/repos/huggingface/transformers/issues/7049/events | https://github.com/huggingface/transformers/issues/7049 | 698,101,387 | MDU6SXNzdWU2OTgxMDEzODc= | 7,049 | Convert 12-1 and 6-1 en-de models from AllenNLP | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"I can work on these\r\n\r\nBut first I have a question: who would be a user of these wmt16-based models when there are several wmt19 en-de models (marian+fairseq wmt) which are significantly better with scores at ~41 and 43 respectively, vs 28 in this one. \r\n\r\nwmt19 is a vastly bigger dataset, so it makes sense that it'd beat wmt16-based pre-trained models.\r\n",
"+ since different dataset, different val set, which implies that 28 and 41 are not comparable BLEU scores.\r\n+ These models should be significantly faster than the Marian models at similar performance levels.\r\n+ We can finetune them on the new data if we think that will help.\r\n+ FYI `Helinki-NLP/opus-mt-en-de` trained on way more data than the fairseq model I think, not totally sure.",
"Seems like embeddings will be shared in these.",
"Converted and did the initial eval with the same wmt19 val set, with beam=15\r\n\r\n```\r\ncheckpoint_best.pt:\r\n\r\nallen_nlp-wmt16-en-de-dist_12-1\r\n{'bleu': 30.1995, 'n_obs': 1997, 'runtime': 233, 'seconds_per_sample': 0.1167}\r\n\r\nallen_nlp-wmt16-en-de-dist_6-1\r\n{'bleu': 29.3692, 'n_obs': 1997, 'runtime': 236, 'seconds_per_sample': 0.1182}\r\n\r\nallen_nlp-wmt16-en-de-12-1\r\n{'bleu': 24.3901, 'n_obs': 1997, 'runtime': 280, 'seconds_per_sample': 0.1402}\r\n\r\ncheckpoint_top5_average.pt:\r\n\r\nallen_nlp-wmt16-en-de-dist_12-1\r\n{'bleu': 30.1078, 'n_obs': 1997, 'runtime': 239, 'seconds_per_sample': 0.1197}\r\n\r\n```\r\nwhich is more or less about the same BLEU scores reported on the model's page.\r\n\r\nI will re-check tomorrow that I haven't made any mistakes, but this is far from wmt19 model's scores, which is not surprising given the difference in the amount of training data.\r\n\r\nUsing wmt16 dataset was probably sufficient to support the ideas in the paper, but it doesn't appear to be very practical for the end user, unless it's re-trained with a much bigger dataset (and wmt20 is imminent and probably will be even bigger).\r\n\r\nI guess I should also re-eval against wmt16 eval set, so that we also compare their bleu scores to the ported model - to ensure it works as good as the original. Will post results tomorrow.",
"That is shockingly low/suggests a bug somewhere. To big of a discrepancy to explain with training data. I can't find your en-de BLEU from the en-de but I remember 40.8 for Marian, which shouldn't be worse.\r\nCan you check your work/ look at translations a bit, then upload the model to `stas/` so that I can take a look from your PR branch?\r\n\r\nBug sources I've seen before:\r\n\r\n- special tokens (`eos_token_id`, `decoder_start_token_id`)\r\n- assumption that tokenizer is identical\r\n- Redundant generations: suggests caching issue.\r\n",
"So as promised here is the bleu scores with the wmt16 test set (3K items), beam=50:\r\n```\r\ndist_12-1\r\n{'bleu': 26.1883, 'n_obs': 2999, 'runtime': 1151, 'seconds_per_sample': 0.3838}\r\ndist_6-1\r\n{'bleu': 26.685, 'n_obs': 2999, 'runtime': 1054, 'seconds_per_sample': 0.3515}\r\n12-1\r\n{'bleu': 24.7299, 'n_obs': 2999, 'runtime': 1134, 'seconds_per_sample': 0.3781}\r\n```\r\nwhich are about 2 points behind if we assume they have run their eval on the wmt16 test data set.\r\n\r\n**edit**: but I also don't get the advertised scores with their instructions: https://github.com/jungokasai/deep-shallow/issues/3\r\n\r\nI will explore the model more today and see if I find anything, then yes, will upload the model. \r\n\r\nThank you for the pointers, @sshleifer \r\n\r\n> I can't find your en-de BLEU from the en-de but I remember 40.8 for Marian, which shouldn't be worse.\r\n\r\ngithub hides older comments, here is the one you're after:\r\nhttps://github.com/huggingface/transformers/pull/6940#issuecomment-687709700\r\n\r\n\r\npair | fairseq | transformers\r\n-------|----|----------\r\n\"en-ru\"|36.4| 33.29\r\n\"ru-en\"|41.3| 38.93\r\n\"de-en\"|42.3| 41.18\r\n\"en-de\"|43.1| 42.79\r\n\r\n\r\n",
"New model's config should be `{'num_beams':5}` according to https://github.com/jungokasai/deep-shallow#evaluation",
"> New model's config should be `{'num_beams':5}` according to https://github.com/jungokasai/deep-shallow#evaluation\r\n\r\nI'm not 100% sure what to do with this. \r\n\r\nfariseq uses `num_beams=50` in their eval, but that would be an overkill as the default for normal use. So may I propose we set a reasonable beam size for FSMT (currently 8) and leave it at it.\r\n\r\nBut when comparing bleu scores we match what the researchers advertised.\r\n\r\nIs this a reasonable approach?\r\n\r\n",
"+ `FSMT` defaulting to 8 is fine. 5 would also be fine.\r\n+ Each model's config should either have `num_beams` used by the author or a lower value that is close to as good (up to you).\r\n\r\n",
"So do you suggest we use `num_beams=50` for fairseq? that'd be imposing a significant slowdown on a user, when 5-10 scores about the same.\r\n\r\nThe competitors try to win and thus try to squeeze all they can, at a compute/time cost, so I'm not sure the number reported in the paper is always a good one to use for that model.\r\n\r\nBut if you think the model's default should match the paper as a rule, then a rule is a rule.",
"+ lets do 5 for fairseq\r\n+ Possible source of discrepancy for the allen-nlp models is the tokenizer.\r\nThey are using whatever tokenizer `transformer.wmt16.en-de` uses\r\n[here](https://github.com/pytorch/fairseq/blob/master/examples/translation/README.md#pre-trained-models)\r\n\r\nmy convo with the 1st author:\r\n\r\nhttps://github.com/jungokasai/deep-shallow/issues/1#issuecomment-678549967",
"> * Possible source of discrepancy for the allen-nlp models is the tokenizer.\r\n> They are using whatever tokenizer `transformer.wmt16.en-de` uses\r\n> [here](https://github.com/pytorch/fairseq/blob/master/examples/translation/README.md#pre-trained-models)\r\n\r\nYou're very likely correct:\r\n\r\n```\r\n # fairseq/models/transformer.py\r\n 'transformer.wmt14.en-fr': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-fr.joined-dict.transformer.tar.bz2'),\r\n 'transformer.wmt16.en-de': 'https://dl.fbaipublicfiles.com/fairseq/models/wmt16.en-de.joined-dict.transformer.tar.bz2',\r\n 'transformer.wmt18.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/wmt18.en-de.ensemble.tar.gz'),\r\n 'transformer.wmt19.en-de': moses_fastbpe('https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-de.joined-dict.ensemble.tar.gz'),\r\n 'transformer.wmt19.en-ru': moses_fastbpe('https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-ru.ensemble.tar.gz'),\r\n```\r\non it.",
"I verified your suggestion and they didn't do it like `transformer.wmt16.en-de` does, as they have in model args:\r\n```\r\n'bpe': 'fastbpe',\r\n'tokenizer': 'moses',\r\n```\r\nbut it's a good point that I need to ensure inside the convert script that this is so, otherwise there will be bugs in porting future models that may use a different tokenizer/bpe.\r\n\r\n `transformer.wmt16.en-de` uses a basic split to words tokenizer and no sub-words:\r\n```\r\ndef tokenize_line(line):\r\n line = SPACE_NORMALIZER.sub(\" \", line)\r\n line = line.strip()\r\n return line.split()\r\n```\r\n",
"It was [suggested](https://github.com/jungokasai/deep-shallow/issues/3#issuecomment-691392148) `checkpoint_top5_average.pt` should have a better score than `checkpoint_best.pt`, but I get a slightly better result with the latter on `dist-12-1`. Here is the full table at the moment using the ported to FSMT weights.\r\n\r\n`num_beams=5` on `wmt19` test set:\r\n\r\nchkpt file| top5_average | best\r\n----------|--------------|-----\r\ndist-12-1 | 29.9134 | 30.2591\r\ndist-6-1 | 29.9837 | 29.3349\r\n12-1 | 26.4008 | 24.1803",
"I rerun eval after adding `length_penalty = 0.6` and getting better scores:\r\n\r\nchkpt file| top5_average \r\n----------|--------------\r\ndist-12-1 | 30.1637\r\ndist-6-1 | 30.2214\r\n12-1 | 26.9763\r\n",
"For en-de datasets, I think they used moses+joint fastbpe. The model just assumes input data are already preprocessed with these tools, so that's why they just split with space. ",
"for wmt16 en/de, as you said fairseq transformer does only whitespace-splitting, and no moses/fastbpe. \r\n\r\nBut your model appears to do moses/fastbpe according to the args stored in the checkpoint, so our code copies your settings.\r\n",
"OK, I did search the hparam space and came up with:\r\n\r\n```\r\n# based on the results of a search on a range of `num_beams`, `length_penalty` and `early_stopping`\r\n# values against wmt19 test data to obtain the best BLEU scores, we will use the following defaults:\r\n#\r\n# * `num_beams`: 5 (higher scores better, but requires more memory/is slower, can be adjusted by users)\r\n# * `early_stopping`: `False` consistently scored better\r\n# * `length_penalty` varied, so will assign the best one depending on the model\r\nbest_score_hparams = {\r\n # fairseq:\r\n \"wmt19-ru-en\": {\"length_penalty\": 1.1},\r\n \"wmt19-en-ru\": {\"length_penalty\": 1.15},\r\n \"wmt19-en-de\": {\"length_penalty\": 1.0},\r\n \"wmt19-de-en\": {\"length_penalty\": 1.1},\r\n # allen-nlp : \r\n \"wmt16-en-de-dist-12-1\": {\"length_penalty\": 0.6},\r\n \"wmt16-en-de-dist-6-1\": {\"length_penalty\": 0.6},\r\n \"wmt16-en-de-12-1\": {\"length_penalty\": 0.8},\r\n \"wmt19-de-en-6-6-base\": {\"length_penalty\": 0.6 },\r\n \"wmt19-de-en-6-6-big\": {\"length_penalty\": 0.6 },\r\n }\r\n}\r\n```\r\n\r\nHere are the full results for allen-nlp:\r\n\r\n\r\n* wmt16-en-de-dist-12-1\r\n\r\nbleu | num_beams | length_penalty\r\n----- | --------- | --------------\r\n30.36 | 15 | 0.6\r\n30.35 | 15 | 0.7\r\n30.29 | 10 | 0.6\r\n30.27 | 15 | 0.8\r\n30.23 | 10 | 0.7\r\n30.21 | 15 | 0.9\r\n30.16 | 5 | 0.6\r\n30.16 | 10 | 0.8\r\n30.11 | 10 | 0.9\r\n30.11 | 15 | 1.0\r\n30.10 | 5 | 0.7\r\n30.03 | 5 | 0.8\r\n30.03 | 5 | 0.9\r\n30.02 | 10 | 1.0\r\n29.99 | 15 | 1.1\r\n29.94 | 10 | 1.1\r\n29.91 | 5 | 1.0\r\n29.88 | 5 | 1.1\r\n\r\n\r\n\r\n* wmt16-en-de-dist-6-1\r\n\r\n\r\nbleu | num_beams | length_penalty\r\n----- | --------- | --------------\r\n30.22 | 5 | 0.6\r\n30.17 | 10 | 0.7\r\n30.17 | 15 | 0.7\r\n30.16 | 5 | 0.7\r\n30.11 | 15 | 0.8\r\n30.10 | 10 | 0.6\r\n30.07 | 10 | 0.8\r\n30.05 | 5 | 0.8\r\n30.05 | 15 | 0.9\r\n30.04 | 5 | 0.9\r\n30.03 | 15 | 0.6\r\n30.00 | 10 | 0.9\r\n29.98 | 5 | 1.0\r\n29.95 | 15 | 1.0\r\n29.92 | 5 | 1.1\r\n29.91 | 10 | 1.0\r\n29.82 | 15 | 1.1\r\n29.80 | 10 | 1.1\r\n\r\n\r\n* wmt16-en-de-12-1\r\n\r\n\r\nbleu | num_beams | length_penalty\r\n----- | --------- | --------------\r\n27.71 | 15 | 0.8\r\n27.60 | 15 | 0.9\r\n27.35 | 15 | 0.7\r\n27.33 | 10 | 0.7\r\n27.19 | 10 | 0.8\r\n27.17 | 10 | 0.6\r\n27.13 | 5 | 0.8\r\n27.07 | 5 | 0.7\r\n27.07 | 15 | 0.6\r\n27.02 | 15 | 1.0\r\n26.98 | 5 | 0.6\r\n26.97 | 10 | 0.9\r\n26.69 | 5 | 0.9\r\n26.48 | 10 | 1.0\r\n26.40 | 5 | 1.0\r\n26.18 | 15 | 1.1\r\n26.04 | 10 | 1.1\r\n25.65 | 5 | 1.1\r\n\r\n\r\n* wmt19-de-en-6-6-base\r\n\r\n\r\nbleu | num_beams | length_penalty\r\n----- | --------- | --------------\r\n38.37 | 5 | 0.6\r\n38.31 | 5 | 0.7\r\n38.29 | 15 | 0.7\r\n38.25 | 10 | 0.7\r\n38.25 | 15 | 0.6\r\n38.24 | 10 | 0.6\r\n38.23 | 15 | 0.8\r\n38.17 | 5 | 0.8\r\n38.11 | 10 | 0.8\r\n38.11 | 15 | 0.9\r\n38.03 | 5 | 0.9\r\n38.02 | 5 | 1.0\r\n38.02 | 10 | 0.9\r\n38.02 | 15 | 1.0\r\n38.00 | 10 | 1.0\r\n37.86 | 5 | 1.1\r\n37.77 | 10 | 1.1\r\n37.74 | 15 | 1.1\r\n\r\n* wmt19-de-en-6-6-big\r\n\r\n\r\nbleu | num_beams | length_penalty\r\n----- | --------- | --------------\r\n40.12 | 15 | 0.6\r\n40.01 | 10 | 0.6\r\n39.96 | 15 | 0.7\r\n39.90 | 5 | 0.6\r\n39.90 | 10 | 0.7\r\n39.76 | 5 | 0.7\r\n39.74 | 10 | 0.8\r\n39.74 | 15 | 0.8\r\n39.65 | 5 | 0.8\r\n39.56 | 10 | 0.9\r\n39.48 | 5 | 0.9\r\n39.46 | 15 | 0.9\r\n39.42 | 10 | 1.0\r\n39.32 | 5 | 1.0\r\n39.29 | 15 | 1.0\r\n39.21 | 10 | 1.1\r\n39.16 | 5 | 1.1\r\n38.83 | 15 | 1.1",
"https://github.com/huggingface/transformers/pull/7153 once merged should close this issue."
] | 1,599 | 1,600 | 1,600 | CONTRIBUTOR | null | https://github.com/jungokasai/deep-shallow#download-trained-deep-shallow-models
+ These should be FSMT models, so they can be part of #6940 or done after.
+ They should be uploaded to the AllenNLP namespace. If stas takes this, they can start in stas/ and I will move them.
+ model card(s) should link to the original repo and paper.
+ I hope the same en-de tokenizer has already been ported.
+ Would be interesting to compare BLEU to the initial models in that PR. There is no ensemble, so we should be able to match the reported scores pretty well.
+ **Ideally** this requires 0 lines of checked-in Python code, besides maybe an integration test.
Desired Signature:
```python
model = FSMT.from_pretrained('allen_nlp/en-de-12-1')
```
Weights can be downloaded with gdown https://pypi.org/project/gdown/
```bash
pip install gdown
gdown https://drive.google.com/uc?id=1x_G2cjvM1nW5hjAB8-vWxRqtQTlmIaQU
```
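A hedged sketch of the intended usage once converted (the model id `allen_nlp/wmt16-en-de-dist-12-1` and the `FSMT*` class names from #6940 are assumptions here, not final names):
```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

mname = "allen_nlp/wmt16-en-de-dist-12-1"  # hypothetical id, final name TBD
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

inputs = tokenizer("Machine learning is great, isn't it?", return_tensors="pt")
outputs = model.generate(inputs["input_ids"], num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```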
@stas00 if you are blocked in the late stages of #6940 and have extra cycles, you could give this a whirl. We could also wait for that to be finalized and then either of us can take this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7049/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7048 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7048/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7048/comments | https://api.github.com/repos/huggingface/transformers/issues/7048/events | https://github.com/huggingface/transformers/pull/7048 | 698,100,485 | MDExOlB1bGxSZXF1ZXN0NDgzOTc3NDA4 | 7,048 | [BertGeneration] Correct Doc Title | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,599 | 1,599 | 1,599 | MEMBER | null | All is in the title. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7048/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7048",
"html_url": "https://github.com/huggingface/transformers/pull/7048",
"diff_url": "https://github.com/huggingface/transformers/pull/7048.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7048.patch",
"merged_at": 1599750521000
} |
https://api.github.com/repos/huggingface/transformers/issues/7047 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7047/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7047/comments | https://api.github.com/repos/huggingface/transformers/issues/7047/events | https://github.com/huggingface/transformers/issues/7047 | 698,090,038 | MDU6SXNzdWU2OTgwOTAwMzg= | 7,047 | T5-11b model parallelism | {
"login": "exelents",
"id": 12846582,
"node_id": "MDQ6VXNlcjEyODQ2NTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/12846582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/exelents",
"html_url": "https://github.com/exelents",
"followers_url": "https://api.github.com/users/exelents/followers",
"following_url": "https://api.github.com/users/exelents/following{/other_user}",
"gists_url": "https://api.github.com/users/exelents/gists{/gist_id}",
"starred_url": "https://api.github.com/users/exelents/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/exelents/subscriptions",
"organizations_url": "https://api.github.com/users/exelents/orgs",
"repos_url": "https://api.github.com/users/exelents/repos",
"events_url": "https://api.github.com/users/exelents/events{/privacy}",
"received_events_url": "https://api.github.com/users/exelents/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @exelents, \r\n\r\nyes we are still looking into a good way of doing model parallelism. Could you post the error message you received when using #3578? ",
"Here is it\r\n\r\n> -input-22-5591bd8e45c0> in main()\r\n> 143 cache_dir=model_args.cache_dir,\r\n> 144 )\r\n> --> 145 model = model.spread_on_devices(['cpu', 'cpu'])\r\n> 146\r\n> 147 # Get datasets\r\n> \r\n> /usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py in spread_on_devices(self, devices)\r\n> 936 return\r\n> 937\r\n> --> 938 modules_to_move = set(self.modules)\r\n> 939\r\n> 940 # Evenly spread the blocks on devices\r\n> \r\n> TypeError: 'method' object is not iterable\r\n\r\nAs I don't have several GPU at the moment, I tried to run it on CPU (see line 145 in error stack)",
"patrickvonplaten, \r\n\r\nThe following should be interesting. \r\n\r\nhttps://www.microsoft.com/en-us/research/publication/training-large-neural-networks-with-constant-memory-using-a-new-execution-algorithm/ \r\n\r\nI have engaged them and they are planning to release the open source several months back but faces some issues with Microsoft internals. Heard the author is planning to release open source themselves.\r\n\r\nCan anyone work with them?\r\n\r\nCheers,\r\nDr. Patrick\r\n\r\n",
"That does look interesting. Thanks for sharing! I'm not sure if we are planning on working with the author - but feel free to reach out to him and maybe this can help resolve the T5 model parallelism.",
"Hello, guys.\r\nAs I still need to train t5-11b, and Google doesn't want to give me access to his TPU's despite I can pay for it... So I have made some changes to T5 model to make it live on several GPU simultaneously.\r\nmy fork: https://github.com/huggingface/transformers/compare/master...exelents:model_parallelism_t5\r\n\r\nThe point is: transformer blocks (T5Block) is most large parts of network. First step is to evenly spread them aross all GPUs. In the second step we spread across GPUs all other blocks of our transformer, that are incomparably smaller than main blocks. Also there are some modification of original model code to make tensors move to nesessary GPU when incoming tensor and a layer are on the different devices.\r\nUnfortunately testing this code on 8-gpu server I found that first GPU is going to spend memory resource faster than others:\r\n\r\n> +-----------------------------------------------------------------------------+\r\n> | NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 |\r\n> |-------------------------------+----------------------+----------------------+\r\n> | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n> | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n> | | | MIG M. |\r\n> |===============================+======================+======================|\r\n> | 0 Tesla V100-SXM2... On | 00000000:00:17.0 Off | 0 |\r\n> | N/A 53C P0 65W / 300W | 16108MiB / 16160MiB | 0% Default |\r\n> | | | N/A |\r\n> +-------------------------------+----------------------+----------------------+\r\n> | 1 Tesla V100-SXM2... On | 00000000:00:18.0 Off | 0 |\r\n> | N/A 53C P0 64W / 300W | 10224MiB / 16160MiB | 0% Default |\r\n> | | | N/A |\r\n> +-------------------------------+----------------------+----------------------+\r\n> | 2 Tesla V100-SXM2... On | 00000000:00:19.0 Off | 0 |\r\n> | N/A 57C P0 63W / 300W | 10224MiB / 16160MiB | 0% Default |\r\n> | | | N/A |\r\n> +-------------------------------+----------------------+----------------------+\r\n> | 3 Tesla V100-SXM2... On | 00000000:00:1A.0 Off | 0 |\r\n> | N/A 51C P0 64W / 300W | 10224MiB / 16160MiB | 0% Default |\r\n> | | | N/A |\r\n> +-------------------------------+----------------------+----------------------+\r\n> | 4 Tesla V100-SXM2... On | 00000000:00:1B.0 Off | 0 |\r\n> | N/A 51C P0 63W / 300W | 13296MiB / 16160MiB | 0% Default |\r\n> | | | N/A |\r\n> +-------------------------------+----------------------+----------------------+\r\n> | 5 Tesla V100-SXM2... On | 00000000:00:1C.0 Off | 0 |\r\n> | N/A 56C P0 65W / 300W | 13296MiB / 16160MiB | 0% Default |\r\n> | | | N/A |\r\n> +-------------------------------+----------------------+----------------------+\r\n> | 6 Tesla V100-SXM2... On | 00000000:00:1D.0 Off | 0 |\r\n> | N/A 52C P0 62W / 300W | 13296MiB / 16160MiB | 0% Default |\r\n> | | | N/A |\r\n> +-------------------------------+----------------------+----------------------+\r\n> | 7 Tesla V100-SXM2... On | 00000000:00:1E.0 Off | 0 |\r\n> | N/A 51C P0 64W / 300W | 13548MiB / 16160MiB | 0% Default |\r\n> | | | N/A |\r\n> +-------------------------------+----------------------+----------------------+\r\n\r\nIt seems in the beginning of our graph we have a large block which have a size comparable to T5Block size. The smarter way would be to split layers according to these memory usage, but I don't know a simple way to know how much memory every module use. 
\r\nMaybe a simple workaround would be to find which layer can use so much memory and provide it's memory in first step, with T5Block's.\r\n\r\nWhat do you think about this?",
"I tested this script on a machine with 8x32GB GPUs and have seen the same symptoms - first gpu's memoru gets fully loaded while other GPUs consume around 5 gigabytes:\r\nhttps://pastebin.com/cV3CAQMk\r\nLooking on output of device assignation array I see that all layers get spreaded evenly, so I can't imagine why it consumes memory of only one GPU....\r\nIf somebody could help with this code - please tell me, I can prepare running script for you. Also, you can use my code with only one line of code:\r\n\r\n rc = model.split_across_gpus(devices=['cuda:0', 'cuda:1','cuda:2','cuda:3', 'cuda:4', 'cuda:5', 'cuda:6', 'cuda:7',])\r\n print(rc)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi @exelents,\r\n\r\nI also need model parallelism for T5 and your code should be very helpful. However, the link to your code seems invalid. Could you please share the code with me?\r\n\r\nBest,\r\nJingxuan",
"Hello, @LostBenjamin.\r\nUnfortunately, this my code didn't worked when I tested 11B model on 8 V100 GPU, so I didn't fixed it.\r\n@alexorona did some work for model parallelism, here https://github.com/huggingface/transformers/pull/9384 you can find a discussion about already existing MP in transformers library. It's about Bart, but the same functions exists in T5 model class too. There is a code to spread model on several GPUs:\r\n`model.parallelize() # autogenerated`\r\n`inputs = inputs.to(\"cuda:0\")`\r\n\r\nAlso, you can try DeepSpeed:\r\nhttps://github.com/exelents/try_t5_qa\r\nI haven't used this code for model parallelism, but in DeepSpeed community people say MP is exists in this library. So maybe this repo would be helpful.",
"Hi @exelents,\r\n\r\nThanks for your help! I will try the MP in transformers library."
] | 1,599 | 1,610 | 1,607 | NONE | null | # 🚀 Feature request
I would like to fine-tune the t5-11b model on my dataset, but found that it doesn't fit in TPU or GPU memory: the Colab notebook just crashes when I run it.
I tried to find a ready model parallelism solution. First I found this PR:
https://github.com/huggingface/transformers/pull/3578
but it seems it hasn't been released. I tried merging it into the master branch locally and using it, but it crashed.
I also found the Eisen library, which promises "model parallelism with one line of code", but it works only for models with a single input (T5 has two inputs: tokens and mask).
I need to distribute the model across several GPUs, and I see somebody has already tried to do this. If this development (pull request #3578) is still in progress, can you tell me whether there are any plans to release it? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7047/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7046 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7046/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7046/comments | https://api.github.com/repos/huggingface/transformers/issues/7046/events | https://github.com/huggingface/transformers/issues/7046 | 698,084,660 | MDU6SXNzdWU2OTgwODQ2NjA= | 7,046 | Usage of targets argument in fill-mask pipeline (Pipeline cannot handle mixed args and kwargs) | {
"login": "mattjhill",
"id": 5299353,
"node_id": "MDQ6VXNlcjUyOTkzNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5299353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mattjhill",
"html_url": "https://github.com/mattjhill",
"followers_url": "https://api.github.com/users/mattjhill/followers",
"following_url": "https://api.github.com/users/mattjhill/following{/other_user}",
"gists_url": "https://api.github.com/users/mattjhill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mattjhill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mattjhill/subscriptions",
"organizations_url": "https://api.github.com/users/mattjhill/orgs",
"repos_url": "https://api.github.com/users/mattjhill/repos",
"events_url": "https://api.github.com/users/mattjhill/events{/privacy}",
"received_events_url": "https://api.github.com/users/mattjhill/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looks like I was using v3.0.2 and this was introduced in v3.1.0"
] | 1,599 | 1,599 | 1,599 | NONE | null | I'm trying to use the `targets` keyword in the fill mask pipeline as described in #6239 but I'm getting a `ValueError: Pipeline cannot handle mixed args and kwargs`
Full example:

```python
nlp = pipeline('fill-mask', topk=2)
nlp("The acting was believable and the action was outstanding. The sentiment of this review is <mask>.", targets=[' positive', ' negative'])
```

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-44-2f6d875aee9b> in <module>
----> 1 nlp("The acting was believable and the action was outstanding. The sentiment of this review is <mask>.",
      2     targets=[' positive', ' negative'])

~/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
   1054
   1055     def __call__(self, *args, **kwargs):
-> 1056         inputs = self._parse_and_tokenize(*args, **kwargs)
   1057         outputs = self._forward(inputs, return_tensors=True)
   1058

~/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/pipelines.py in _parse_and_tokenize(self, padding, add_special_tokens, *args, **kwargs)
    503         """
    504         # Parse arguments
--> 505         inputs = self._args_parser(*args, **kwargs)
    506         inputs = self.tokenizer(
    507             inputs, add_special_tokens=add_special_tokens, return_tensors=self.framework, padding=padding,

~/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
    167     def __call__(self, *args, **kwargs):
    168         if len(kwargs) > 0 and len(args) > 0:
--> 169             raise ValueError("Pipeline cannot handle mixed args and kwargs")
    170
    171         if len(kwargs) > 0:

ValueError: Pipeline cannot handle mixed args and kwargs
```
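For anyone hitting the same error: as noted in the comments above, `targets` was only introduced in v3.1.0, so a quick version check is worth doing first.
```python
import transformers
print(transformers.__version__)  # fill-mask `targets` appears to require >= 3.1.0
```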
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7046/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7045 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7045/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7045/comments | https://api.github.com/repos/huggingface/transformers/issues/7045/events | https://github.com/huggingface/transformers/issues/7045 | 698,075,503 | MDU6SXNzdWU2OTgwNzU1MDM= | 7,045 | max_length does not seem to work | {
"login": "MarijnQ",
"id": 61018583,
"node_id": "MDQ6VXNlcjYxMDE4NTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/61018583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarijnQ",
"html_url": "https://github.com/MarijnQ",
"followers_url": "https://api.github.com/users/MarijnQ/followers",
"following_url": "https://api.github.com/users/MarijnQ/following{/other_user}",
"gists_url": "https://api.github.com/users/MarijnQ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MarijnQ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarijnQ/subscriptions",
"organizations_url": "https://api.github.com/users/MarijnQ/orgs",
"repos_url": "https://api.github.com/users/MarijnQ/repos",
"events_url": "https://api.github.com/users/MarijnQ/events{/privacy}",
"received_events_url": "https://api.github.com/users/MarijnQ/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @MarijnQ,\r\n\r\n`max_length` defines the number of maximum output *tokens* (which is usually a bit less than number of words). You can get the number of tokens of a text by doing:\r\n\r\n```python \r\nfrom transformers import T5Tokenizer\r\n\r\ntokenizer = T5Tokenizer.from_pretrained(\"t5-base\")\r\ninput_ids = tokenizer('Liana Barrientos pleaded not guilty to two counts of \"offering a false instrument for filing in the first degree\" She has been married to 10 men, nine of them between 1999 and 2002 . At one time, she was married to eight men at once, prosecutors say .')\r\n\r\nprint(f\"Num tokens {len(input_ids.input_ids)}\") \r\n```"
] | 1,599 | 1,599 | 1,599 | NONE | null | So I may have understood the definition wrong, but my understanding is that max_length defines the number of characters you want your summary to be, i.e. with max_length=50 the summary is not over 50 characters.
I have installed transformers and I'm working with the summarization pipeline, which I assume is the BART summarizer.
I'm trying to summarize the standard text from the tutorial about the woman who married a lot of men.
I run the following:

```python
summarizer = pipeline('summarization')
summarizer(TEXT_TO_SUMMARIZE, min_length=50, max_length=100)
```
The summary is this:
` Liana Barrientos pleaded not guilty to two counts of "offering a false instrument for filing in the first degree" She has been married to 10 men, nine of them between 1999 and 2002 . At one time, she was married to eight men at once, prosecutors say .`
That is 249 characters, and it has weird errors like the `.` being placed after a space.
It is probably a rookie question, but I can't seem to figure it out.
PS: I run it on Google Colab, if that makes a difference. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7045/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7044 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7044/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7044/comments | https://api.github.com/repos/huggingface/transformers/issues/7044/events | https://github.com/huggingface/transformers/pull/7044 | 698,054,518 | MDExOlB1bGxSZXF1ZXN0NDgzOTM2NjAw | 7,044 | Small fixes in tf template | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,599 | 1,599 | 1,599 | COLLABORATOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7044/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7044",
"html_url": "https://github.com/huggingface/transformers/pull/7044",
"diff_url": "https://github.com/huggingface/transformers/pull/7044.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7044.patch",
"merged_at": 1599748563000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/7043 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7043/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7043/comments | https://api.github.com/repos/huggingface/transformers/issues/7043/events | https://github.com/huggingface/transformers/issues/7043 | 697,974,578 | MDU6SXNzdWU2OTc5NzQ1Nzg= | 7,043 | Batch_encode_plus with is_pretokenized=True outputs incomplete input_ids | {
"login": "ShriyaA",
"id": 16235088,
"node_id": "MDQ6VXNlcjE2MjM1MDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/16235088?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShriyaA",
"html_url": "https://github.com/ShriyaA",
"followers_url": "https://api.github.com/users/ShriyaA/followers",
"following_url": "https://api.github.com/users/ShriyaA/following{/other_user}",
"gists_url": "https://api.github.com/users/ShriyaA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShriyaA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShriyaA/subscriptions",
"organizations_url": "https://api.github.com/users/ShriyaA/orgs",
"repos_url": "https://api.github.com/users/ShriyaA/repos",
"events_url": "https://api.github.com/users/ShriyaA/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShriyaA/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I ran this example using the v3.1.0 and it seems the issue was resolved in the latest versions:\r\n\r\n```py\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('distilbert-base-cased', use_fast=True)\r\ninput_text = ['Roman Atwood is a content creator.', 'The Boston Celtics play their home games at TD Garden.']\r\nsample_batch = [x.split(' ') for x in input_text]\r\nprint(sample_batch)\r\n\r\nencoded_dict_batch = tokenizer.batch_encode_plus(sample_batch, is_pretokenized=True, padding=True, return_tensors='pt', truncation=True, max_length=125)\r\nprint(encoded_dict_batch)\r\n```\r\noutputs\r\n\r\n```\r\n[['Roman', 'Atwood', 'is', 'a', 'content', 'creator.'], ['The', 'Boston', 'Celtics', 'play', 'their', 'home', 'games', 'at', 'TD', 'Garden.']]\r\n{'input_ids': tensor([[ 101, 2264, 1335, 2615, 1110, 170, 3438, 9264, 119, 102,\r\n 0, 0, 0],\r\n [ 101, 1109, 2859, 25931, 1505, 1147, 1313, 1638, 1120, 15439,\r\n 5217, 119, 102]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0],\r\n [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}\r\n```\r\n\r\nWould it be possible for you to upgrade to version v3.1.0?",
"I was able to upgrade to 3.0.2 and it was working with that. Thanks!"
] | 1,599 | 1,601 | 1,601 | NONE | null | ## Environment info
- `transformers` version: 2.11.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
tokenizers: @mfuntowicz
## Information
Model I am using (Bert, XLNet ...): Distilbert, Roberta
I am trying to use batch_encode_plus with pretokenized inputs, but the input encodings are different from what I get if the same text is run individually through encode_plus, or if the same text is batch encoded without pretokenization.
## To reproduce
Code and outputs:
```python
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-cased', use_fast=True)
input_text = ['Roman Atwood is a content creator.', 'The Boston Celtics play their home games at TD Garden.']
sample_batch = [x.split(' ') for x in input_text]
sample_batch
```
Output: [['Roman', 'Atwood', 'is', 'a', 'content', 'creator.'],
['The', 'Boston', 'Celtics', 'play', 'their', 'home', 'games', 'at', 'TD', 'Garden.']]
```python
encoded_dict_batch = tokenizer.batch_encode_plus(sample_batch, is_pretokenized=True, padding=True, return_tensors='pt', truncation=True, max_length=125)
print(encoded_dict_batch)
```
The output is this, which has far fewer non-zero tokens than I expected:
{'input_ids': tensor([[ 101, 2264, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 102],
[ 101, 1109, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 102]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1]])}
The attention mask is also a problem, with all the padded indices being 1s.
Next, run only the first sentence through encoding (not in a batch). All other parameters remain the same.
```python
encoded_dict_single = tokenizer.encode_plus(sample_batch[0], is_pretokenized=True, padding=True, return_tensors='pt', truncation=True, max_length=125)
print(encoded_dict_single)
```
This produces a much more sane output:
{'input_ids': tensor([[ 101, 2264, 1335, 2615, 1110, 170, 3438, 9264, 119, 102]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
The input ids here from index 2 onwards are replaced by 0 in the previous output even though it's well below max_length.
The words are truncated as well:
```python
print(encoded_dict_single.words())
print(encoded_dict_batch.words())
```
[None, 0, 1, 1, 2, 3, 4, 5, 6, None]
[None, 0, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None]
Strangely enough, when calling batch_encode_plus with a pretokenized batch that consists of only 1 item in the list, it works fine.
```python
batch_of_one = [sample_batch[0]]
encoded_dict_batch_of_one = tokenizer.batch_encode_plus(batch_of_one, is_pretokenized=True, padding=True, return_tensors='pt', truncation=True, max_length=125)
print(encoded_dict_batch_of_one)
```
Output:
{'input_ids': tensor([[ 101, 2264, 1335, 2615, 1110, 170, 3438, 9264, 119, 102]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
I've tried with RoBERTa as well and had the same results. Also, the same sentences in a batch but without pretokenization produce the correct outputs. It seems to be only the combination of pretokenized and batched sentences that is a problem.
## Expected behavior
The input_ids should not be replaced by 0s when tokenizing batched and pre-tokenized inputs.
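(A workaround sketch, assuming transformers 2.11.0 where the batched pretokenized path is broken — encode each pretokenized example individually, which produces correct ids as shown above; upgrading to >= 3.0.2 fixes `batch_encode_plus` itself:)
```python
# Encode one pretokenized example at a time instead of as a batch,
# mirroring the single-example call that works above.
encodings = [
    tokenizer.encode_plus(example, is_pretokenized=True, padding=True,
                          return_tensors='pt', truncation=True, max_length=125)
    for example in sample_batch
]
print([e['input_ids'] for e in encodings])
```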
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7043/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7042 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7042/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7042/comments | https://api.github.com/repos/huggingface/transformers/issues/7042/events | https://github.com/huggingface/transformers/issues/7042 | 697,960,359 | MDU6SXNzdWU2OTc5NjAzNTk= | 7,042 | the time of loading different models to GPU is nearly the same? | {
"login": "cmdllx",
"id": 50104519,
"node_id": "MDQ6VXNlcjUwMTA0NTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/50104519?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cmdllx",
"html_url": "https://github.com/cmdllx",
"followers_url": "https://api.github.com/users/cmdllx/followers",
"following_url": "https://api.github.com/users/cmdllx/following{/other_user}",
"gists_url": "https://api.github.com/users/cmdllx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cmdllx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmdllx/subscriptions",
"organizations_url": "https://api.github.com/users/cmdllx/orgs",
"repos_url": "https://api.github.com/users/cmdllx/repos",
"events_url": "https://api.github.com/users/cmdllx/events{/privacy}",
"received_events_url": "https://api.github.com/users/cmdllx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,599 | 1,605 | 1,605 | NONE | null | I once used the old script for text classification, and the time to load different models to the GPU (model.to(device)) was different.
However, when I run the latest version of the script, I find that loading different models to the GPU takes nearly the same time (this happens while initializing the Trainer).
I want to know the reason.
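(A minimal sketch for timing the transfer properly — CUDA calls are asynchronous, so without a synchronize the measured `model.to("cuda")` time can be misleading; the checkpoint names below are just examples:)
```python
import time
import torch
from transformers import AutoModel

for name in ["distilbert-base-uncased", "bert-large-uncased"]:  # example checkpoints
    model = AutoModel.from_pretrained(name)
    torch.cuda.synchronize()
    start = time.time()
    model.to("cuda")
    torch.cuda.synchronize()  # wait for the asynchronous copy to finish
    # Note: the first transfer also pays one-time CUDA context initialization.
    print(name, time.time() - start)
```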
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7042/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7041 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7041/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7041/comments | https://api.github.com/repos/huggingface/transformers/issues/7041/events | https://github.com/huggingface/transformers/pull/7041 | 697,946,045 | MDExOlB1bGxSZXF1ZXN0NDgzODM4Njgx | 7,041 | [wip/token classification] Introduce datasets and metrics in token classification examples | {
"login": "vblagoje",
"id": 458335,
"node_id": "MDQ6VXNlcjQ1ODMzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vblagoje",
"html_url": "https://github.com/vblagoje",
"followers_url": "https://api.github.com/users/vblagoje/followers",
"following_url": "https://api.github.com/users/vblagoje/following{/other_user}",
"gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions",
"organizations_url": "https://api.github.com/users/vblagoje/orgs",
"repos_url": "https://api.github.com/users/vblagoje/repos",
"events_url": "https://api.github.com/users/vblagoje/events{/privacy}",
"received_events_url": "https://api.github.com/users/vblagoje/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"> Thanks a lot for this PR!\r\n> \r\n> The files `conll2003.py` and `ud_english_ewt.py` should be in the `nlp` package and not here. Can you open a PR there to propose these new datasets please.\r\n\r\nOk no problem. That makes more sense. \r\n> \r\n> In overrall I really like it! Nevertheless, it brings an important breaking change, because it is not possible anymore to use its own files without making an `nlp` script. Instead of modifying the existing script can you add a new one? For example `run_ner_with_nlp.py` or something like that?\r\n\r\nCan you please clarify what do you mean by 'it is not possible anymore to use its own files without making an nlp script.' Do you mean that it would be preferred to use raw datasets just like before (as it is now)?\r\n",
"> Can you please clarify what do you mean by 'it is not possible anymore to use its own files without making an nlp script.' Do you mean that it would be preferred to use raw datasets just like before (as it is now)?\r\n\r\nThis is exaclty what I mean, and what you have done would be better suited as a new script.",
"Well, the idea of `datasets` is that you can also use your own data files or python/pandas objects @jplu.\r\nSee here: https://huggingface.co/docs/datasets/loading_datasets.html#from-local-files\r\nand here: https://huggingface.co/docs/datasets/loading_datasets.html#from-in-memory-data\r\nSo I'm not sure we need several scripts actually.",
"True, I know, but CoNLL2003 is the most used NER dataset, it is the must have evaluation when developing a new NER approach. Same thing with UDEnglishEWT for POS. They are so popular that I think it would be a nice to have in `datasets`.",
"@jplu I looked at the changes needed to accommodate both: dataset use and raw dataset download + use (without dataset API). It makes things a bit more complicated, clutters the token classification example, and defeats the purpose of simplifying things for new HF users.\r\n\r\nExperienced users know how to download raw datasets and load them into the models. The proposed use of the new datasets package lowers the barrier to entry, makes everything simple and easy to understand for the new users. \r\n\r\nIn summary, we can add new datasets to the datasets project, use them in examples, that's given. However, having both the old approach (raw datasets download, script preprocessing, and use) and the new approach (just datasets) defeats the purpose of having clean, easy to understand and simple examples for the new users. On top of it, maintaining and testing these examples will be additional headache for us. But you guys let me know wdyt @sshleifer @thomwolf @stefan-it and we'll proceed forward. ",
"@vblagoje Sorry, I'm not saying that you have to modify the existing script, but to create a new one. Basically you don't touch the current `run_tf_ner.py` and you create a new `run_tf_ner_with_datasets.py` where you do what you want here.\r\n\r\nThe advantage here is that you are not introducing breaking changes in the scripts people already uses. Think of those who are already using this script with their own data and don't want to move their dataset to datasets (for some reason). How do they do?",
"Or a better compromise IMO, as @thomwolf proposed, to add the possiblity to use a raw dataset, something that you can do with `datasets` pretty easily. Because here the problem is that you force people to use a specific dataset, which is quite odd for me, we should be able to let people use the dataset they want (from `datasets` or their own) in a smart way.",
"@jplu Yes, I am all for compromise solution, people can use any token level dataset they want (canonical, community or local dataset)- easily. Have a look at https://github.com/vblagoje/transformers/blob/refactor_token_examples/examples/token-classification/tasks.py All the users have to do is to supply the reference to dataset, source/target columns, labels and off they go with token level classification. That's all. ",
"That's the point, they should not be forced to build their own `datasets` script or modify the existing one. We should provide a simple way to use the built-in local dataset (https://huggingface.co/docs/datasets/loading_datasets.html#from-local-files) + having the possibility to select a dataset from the `datasets` list. These two things can be done by adding some arguments to the script.",
"@jplu ok understood, good idea! Let's do that!\r\n",
"@jplu I looked into how users could use the local dataset without providing a dataset script. Although local datasets give us the convenience of loading datasets easily they can't parse input out-of-the-box without, at least some, parsing customization. For example, GermEval has a specific [format](https://github.com/huggingface/transformers/blob/master/examples/token-classification/tasks.py#L18) and so do other token level datasets. They are almost never in json or csv format so that we can parse them with some generic [load_dataset](https://huggingface.co/docs/datasets/loading_datasets.html#from-local-files) approach. \r\n\r\nWhat we can do is keep the level of abstraction of InputExample so that a new classification task returns a list of InputExamples just like [here](https://github.com/huggingface/transformers/blob/master/examples/token-classification/tasks.py#L18) That way users don't have to create a dataset script but can read examples from a local file, parse and return them as InputExample list. For existing datasets, we convert them to the InputExample list. If you have other ideas on how this could be done lmk but I don't see how we can achieve this just by adding some arguments to the script. ",
"> I looked into how users could use the local dataset without providing a dataset script. Although local datasets give us the convenience of loading datasets easily they can't parse input out-of-the-box without, at least some, parsing customization. For example, GermEval has a specific format and so do other token level datasets. They are almost never in json or csv format so that we can parse them with some generic load_dataset approach.\r\n\r\nIndeed, CoNLL files have a specific format with the `-docstart-` header for separating each document. The `csv` script cannot handle this.\r\n\r\n> What we can do is keep the level of abstraction of InputExample so that a new classification task returns a list of InputExamples just like here That way users don't have to create a dataset script but can read examples from a local file, parse and return them as InputExample list. For existing datasets, we convert them to the InputExample list. If you have other ideas on how this could be done lmk but I don't see how we can achieve this just by adding some arguments to the script.\r\n\r\nMight be another solution yes. Let's try this one and see how it goes.\r\n",
"@jplu The commit change is [here](https://github.com/huggingface/transformers/pull/7041/commits/f381a0b17ed3046a96e5adc96c6a71d592c58cdf) So instead of relying on `get_dataset` API contract users can simply implement `get_input_examples`. We implement both approaches: using datasets and if needed loading from raw datasets. I am now checking that all PyTorch examples indeed work. ",
"There are still several changes to do but it looks much better. Among what I could see:\r\n\r\n- the `--labels` much be kept otherwise we always get the same list of labels.\r\n- `conll2003.py` and `ud_english_ewt.py` should be moved. And a `--dataset_name` parameter should be added to the `run_*` scripts.\r\n- `CoNLL2003` is not for chunking, then it should not belong to the chunking task, it is counter intuitive. Use CoNLL2000 here https://github.com/teropa/nlp/tree/master/resources/corpora/conll2000\r\n- the method `read_examples_from_file` should not be removed.\r\n\r\nYou should also update the README by giving an example for at least these cases:\r\n1. I want to train a NER model over CoNLL2003\r\n2. I have a `train.txt`, `test.txt` and `dev.txt` with my own labels and want to train a NER model over it\r\n3. I want to train a NER model over GERMEVAL2004",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,599 | 1,606 | 1,606 | CONTRIBUTOR | null | This PR further simplifies token classification examples by introducing the use of HF nlp datasets and metrics.
In summary, we introduce:
- use of GermEval canonical dataset
- two additional local datasets: [Conll2003](https://github.com/vblagoje/transformers/blob/b58c6186fd589ea6f4c86e10df5a54aa63516d10/examples/token-classification/conll2003.py) and [UDEnglishEWT](https://github.com/vblagoje/transformers/blob/b58c6186fd589ea6f4c86e10df5a54aa63516d10/examples/token-classification/ud_english_ewt.py) (demonstrate custom nlp dataset use in training)
- nlp metrics, splits, removal of seqeval
- removal of all the clutter related to direct download and preprocessing of raw datasets
- further minor simplifications
I have verified that training works for all the PyTorch and PL examples, but I was not able to verify the TensorFlow portion due to a faulty GPU setup on my cloud machine. Perhaps @jplu could help me there? Also @stefan-it, could you please have a look as well. Therefore, I leave the wip tag on, although the PR should be 99% ready.
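(For reviewers, a minimal sketch of the intended usage, assuming the proposed `conll2003` script lands in the `nlp` package — later renamed `datasets`:)
```python
from nlp import load_dataset, load_metric

dataset = load_dataset("conll2003")  # assumes the script proposed in this PR is merged upstream
metric = load_metric("seqeval")      # replaces the standalone seqeval dependency
print(dataset["train"][0])
```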
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7041/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7041/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7041",
"html_url": "https://github.com/huggingface/transformers/pull/7041",
"diff_url": "https://github.com/huggingface/transformers/pull/7041.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7041.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7040 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7040/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7040/comments | https://api.github.com/repos/huggingface/transformers/issues/7040/events | https://github.com/huggingface/transformers/pull/7040 | 697,899,496 | MDExOlB1bGxSZXF1ZXN0NDgzNzk2MjQ2 | 7,040 | Fix template | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,599 | 1,599 | 1,599 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7040/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7040",
"html_url": "https://github.com/huggingface/transformers/pull/7040",
"diff_url": "https://github.com/huggingface/transformers/pull/7040.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7040.patch",
"merged_at": 1599741953000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/7039 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7039/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7039/comments | https://api.github.com/repos/huggingface/transformers/issues/7039/events | https://github.com/huggingface/transformers/pull/7039 | 697,870,118 | MDExOlB1bGxSZXF1ZXN0NDgzNzY5NTUw | 7,039 | fix to ensure that returned tensors after the tokenization is Long | {
"login": "GeetDsa",
"id": 13940397,
"node_id": "MDQ6VXNlcjEzOTQwMzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/13940397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GeetDsa",
"html_url": "https://github.com/GeetDsa",
"followers_url": "https://api.github.com/users/GeetDsa/followers",
"following_url": "https://api.github.com/users/GeetDsa/following{/other_user}",
"gists_url": "https://api.github.com/users/GeetDsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GeetDsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GeetDsa/subscriptions",
"organizations_url": "https://api.github.com/users/GeetDsa/orgs",
"repos_url": "https://api.github.com/users/GeetDsa/repos",
"events_url": "https://api.github.com/users/GeetDsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/GeetDsa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7039?src=pr&el=h1) Report\n> Merging [#7039](https://codecov.io/gh/huggingface/transformers/pull/7039?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/762cba3bdaf70104dc17cc7ff0f8ce13ba23d558?el=desc) will **not change** coverage.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7039?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7039 +/- ##\n=======================================\n Coverage 79.13% 79.13% \n=======================================\n Files 164 164 \n Lines 31143 31143 \n=======================================\n Hits 24646 24646 \n Misses 6497 6497 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7039?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7039/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.18% <0.00%> (+0.35%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7039/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7039?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7039?src=pr&el=footer). Last update [762cba3...db8ea89](https://codecov.io/gh/huggingface/transformers/pull/7039?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Wouldn't it be better to do `torch.tensor(e, dtype=torch.long)` when creating the tensor, instead of casting the batch once the tensors are created?",
"@LysandreJik, made the suggested changes. :+1: ",
"Thanks for the fix!"
] | 1,599 | 1,599 | 1,599 | CONTRIBUTOR | null | Fixes #7026
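(A sketch of the approach settled on in the review comments above — create the tensors as long at construction time rather than casting the batch afterwards:)
```python
import torch

examples = [[101, 2264, 1335, 102]]  # example token ids
batch = torch.tensor(examples, dtype=torch.long)  # guarantee a LongTensor up front
print(batch.dtype)  # torch.int64
```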
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7039/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7039",
"html_url": "https://github.com/huggingface/transformers/pull/7039",
"diff_url": "https://github.com/huggingface/transformers/pull/7039.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7039.patch",
"merged_at": 1599750244000
} |
https://api.github.com/repos/huggingface/transformers/issues/7038 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7038/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7038/comments | https://api.github.com/repos/huggingface/transformers/issues/7038/events | https://github.com/huggingface/transformers/issues/7038 | 697,832,549 | MDU6SXNzdWU2OTc4MzI1NDk= | 7,038 | Question about the test results of my own test sets with a fine-tuned BERT | {
"login": "Deep1994",
"id": 24366782,
"node_id": "MDQ6VXNlcjI0MzY2Nzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24366782?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Deep1994",
"html_url": "https://github.com/Deep1994",
"followers_url": "https://api.github.com/users/Deep1994/followers",
"following_url": "https://api.github.com/users/Deep1994/following{/other_user}",
"gists_url": "https://api.github.com/users/Deep1994/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Deep1994/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Deep1994/subscriptions",
"organizations_url": "https://api.github.com/users/Deep1994/orgs",
"repos_url": "https://api.github.com/users/Deep1994/repos",
"events_url": "https://api.github.com/users/Deep1994/events{/privacy}",
"received_events_url": "https://api.github.com/users/Deep1994/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I found the cause of the problem. If you want to predict on your own test set, you need to delete the previous \"cached_test_BertTokenizer_128_qqp.lock\" file and \"cached_test_BertTokenizer_128_qqp\" file."
] | 1,599 | 1,599 | 1,599 | NONE | null | # β Questions & Help
## Details
Hi, I have a fine-tuned BERT model and I want to use it to predict on my own test set, which has 40k sentence pairs. I use the script like this:
```bash
export GLUE_DIR=./glue_data
export TASK_NAME=QQP

CUDA_VISIBLE_DEVICES=3 python ./examples/text-classification/run_glue.py \
  --model_name_or_path ./pretrained_weights/bert_base_uncased \
  --task_name $TASK_NAME \
  --do_predict \
  --data_dir $GLUE_DIR/$TASK_NAME \
  --max_seq_length 128 \
  --per_device_eval_batch_size=32 \
  --per_device_train_batch_size=32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3.0 \
  --overwrite_output_dir \
  --output_dir ./tmp/filter_nouns_pairs
```
It stands to reason that I should get a prediction file containing 40k rows, but in fact the output result is still only 10,000 rows. Is the result truncated during prediction? Thanks!
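(Resolution sketch, per the comment above — stale cached test features were being reused. The trainer caches features next to the data, so delete the old cache before re-running; the filenames match the ones named in that comment:)
```bash
rm $GLUE_DIR/$TASK_NAME/cached_test_BertTokenizer_128_qqp
rm $GLUE_DIR/$TASK_NAME/cached_test_BertTokenizer_128_qqp.lock
```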
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7038/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7037 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7037/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7037/comments | https://api.github.com/repos/huggingface/transformers/issues/7037/events | https://github.com/huggingface/transformers/pull/7037 | 697,657,804 | MDExOlB1bGxSZXF1ZXN0NDgzNTc1OTQ4 | 7,037 | Update eval dataset to pick start_position at first index | {
"login": "kay-wong",
"id": 25245693,
"node_id": "MDQ6VXNlcjI1MjQ1Njkz",
"avatar_url": "https://avatars.githubusercontent.com/u/25245693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kay-wong",
"html_url": "https://github.com/kay-wong",
"followers_url": "https://api.github.com/users/kay-wong/followers",
"following_url": "https://api.github.com/users/kay-wong/following{/other_user}",
"gists_url": "https://api.github.com/users/kay-wong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kay-wong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kay-wong/subscriptions",
"organizations_url": "https://api.github.com/users/kay-wong/orgs",
"repos_url": "https://api.github.com/users/kay-wong/repos",
"events_url": "https://api.github.com/users/kay-wong/events{/privacy}",
"received_events_url": "https://api.github.com/users/kay-wong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7037?src=pr&el=h1) Report\n> Merging [#7037](https://codecov.io/gh/huggingface/transformers/pull/7037?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/76818cc4c6a1275a23ba261ca337b9f9070c397e?el=desc) will **increase** coverage by `0.20%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7037?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7037 +/- ##\n==========================================\n+ Coverage 79.43% 79.63% +0.20% \n==========================================\n Files 164 164 \n Lines 31026 31029 +3 \n==========================================\n+ Hits 24645 24710 +65 \n+ Misses 6381 6319 -62 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7037?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/7037/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `27.87% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7037/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7037/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `19.71% <0.00%> (-72.34%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7037/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7037/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7037/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7037/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.66% <0.00%> (-0.55%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7037/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.28% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7037/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7037/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+1.95%)` | :arrow_up: |\n| ... 
and [5 more](https://codecov.io/gh/huggingface/transformers/pull/7037/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7037?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7037?src=pr&el=footer). Last update [76818cc...657fab8](https://codecov.io/gh/huggingface/transformers/pull/7037?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,599 | 1,599 | 1,599 | NONE | null | Fixes #7032
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7037/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7037",
"html_url": "https://github.com/huggingface/transformers/pull/7037",
"diff_url": "https://github.com/huggingface/transformers/pull/7037.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7037.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7036 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7036/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7036/comments | https://api.github.com/repos/huggingface/transformers/issues/7036/events | https://github.com/huggingface/transformers/issues/7036 | 697,462,951 | MDU6SXNzdWU2OTc0NjI5NTE= | 7,036 | "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference" when I am fine-tuning a DistilBERT pretrained model. After printing this it takes a lot of time and uses only one CPU; how can we parallelize it to all the cores in the system (I even have 8 GPUs, but it is not using them)? | {
"login": "trkece",
"id": 36273175,
"node_id": "MDQ6VXNlcjM2MjczMTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/36273175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trkece",
"html_url": "https://github.com/trkece",
"followers_url": "https://api.github.com/users/trkece/followers",
"following_url": "https://api.github.com/users/trkece/following{/other_user}",
"gists_url": "https://api.github.com/users/trkece/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trkece/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trkece/subscriptions",
"organizations_url": "https://api.github.com/users/trkece/orgs",
"repos_url": "https://api.github.com/users/trkece/repos",
"events_url": "https://api.github.com/users/trkece/events{/privacy}",
"received_events_url": "https://api.github.com/users/trkece/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, could you fill in the issue template rather than the issue title?",
"python ./examples/text-classification/run_glue.py \\\r\n --model_name_or_path ./examples/language-modeling/output/SIA_Ontopof_RISK/DistilBERT/Trained_on_iRisk/ \\\r\n --do_train \\\r\n --do_predict \\\r\n --task_name=mrpc \\\r\n --data_dir=/mnt/share/app/proj/ph/com/gbl/irisk/project_ph_com_irisk/Udita/Code_evaluation/Data//Train_Test_Data_BERTformat_Biz13Aug/BusinessRules_more_0s/ \\\r\n --output_dir=./proc_data/mrpc/BusinessRules_more_0s \\\r\n --max_seq_length=512 \\\r\n --per_device_train_batch_size=24 \\\r\n --per_device_eval_batch_size=32 \\\r\n --gradient_accumulation_steps=1 \\\r\n --max_steps=16323 \\\r\n --save_steps=10000 \\\r\n\r\n\r\nThis is the command i am executing (finetuning a distilbert on a customer pre trained on a specific dataset)\r\n\r\nTrain.tsv file is of 1GB.\r\n\r\nFeature creation is taking longer time(more than 24 hours and it is still running)\r\n\r\n\r\n\r\n ",
"\r\n",
"This warning means that the weights of `pre_classifier` and `classifier` have not been initialized by your checkpoint. This is normal since you seem to be loading a checkpoint right after a language modeling training. These layers will now be trained for sequence classification using the command you've shown.",
"Hi, Can we use multi core CPU's here, if yes, where to change and in which function?\r\n\r\nIf i look at all the cores are not utilized. ",
"\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,599 | 1,605 | 1,605 | NONE | null | # β Questions & Help
## Details
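(A sketch addressing the multi-GPU part of the question in the title — the example scripts run one process per GPU via PyTorch's distributed launcher; `<your_checkpoint>` and the trailing arguments are placeholders:)
```bash
# Use all 8 GPUs; note the initial feature-creation step still runs in a
# single process and is not parallelized by this.
python -m torch.distributed.launch --nproc_per_node=8 \
  ./examples/text-classification/run_glue.py \
  --model_name_or_path <your_checkpoint> \
  --task_name=mrpc --do_train ...
```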
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7036/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7035 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7035/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7035/comments | https://api.github.com/repos/huggingface/transformers/issues/7035/events | https://github.com/huggingface/transformers/pull/7035 | 697,458,607 | MDExOlB1bGxSZXF1ZXN0NDgzNDAxMTYw | 7,035 | add -y to bypass prompt for transformers-cli upload | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7035?src=pr&el=h1) Report\n> Merging [#7035](https://codecov.io/gh/huggingface/transformers/pull/7035?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/76818cc4c6a1275a23ba261ca337b9f9070c397e?el=desc) will **decrease** coverage by `0.36%`.\n> The diff coverage is `16.66%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7035?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7035 +/- ##\n==========================================\n- Coverage 79.43% 79.06% -0.37% \n==========================================\n Files 164 164 \n Lines 31026 31028 +2 \n==========================================\n- Hits 24645 24532 -113 \n- Misses 6381 6496 +115 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7035?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/commands/user.py](https://codecov.io/gh/huggingface/transformers/pull/7035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy91c2VyLnB5) | `36.73% <16.66%> (+0.18%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `19.71% <0.00%> (-72.34%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.66% <0.00%> (-0.55%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.28% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (+1.62%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+7.51%)` | :arrow_up: |\n| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/7035/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7035?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7035?src=pr&el=footer). Last update [76818cc...8feb8ec](https://codecov.io/gh/huggingface/transformers/pull/7035?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LGTM (this could lead to more invalidly-named checkpoints being uploaded, but we'll soon have more server-side validation)"
] | 1,599 | 1,599 | 1,599 | CONTRIBUTOR | null | As discussed here: https://github.com/huggingface/transformers/issues/6934#issuecomment-687365960
this PR adds `-y`/`--yes` to bypass the confirmation prompt, for use in scripting.
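(A minimal sketch of the flag itself — the names match the PR description, but the exact wiring inside `commands/user.py` may differ:)
```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-y", "--yes", action="store_true",
                    help="Skip the confirmation prompt (useful for scripting)")
args = parser.parse_args(["-y"])

if not args.yes:
    # Only ask for confirmation when the flag is absent.
    input("Press enter to confirm the upload, Ctrl-C to abort: ")
```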
Fixes #6934
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7035/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7035",
"html_url": "https://github.com/huggingface/transformers/pull/7035",
"diff_url": "https://github.com/huggingface/transformers/pull/7035.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7035.patch",
"merged_at": 1599728310000
} |
https://api.github.com/repos/huggingface/transformers/issues/7034 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7034/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7034/comments | https://api.github.com/repos/huggingface/transformers/issues/7034/events | https://github.com/huggingface/transformers/pull/7034 | 697,379,690 | MDExOlB1bGxSZXF1ZXN0NDgzMzI5NTU0 | 7,034 | [xlm tok] config dict: fix str into int to match definition | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,599 | 1,599 | 1,599 | CONTRIBUTOR | null | Match the argument doc definition:
```
id2lang (:obj:`Dict[int, str]`, `optional`):
```
so I am replacing the str keys with int to match.
The inverse mapping (lang2id) is already correct.
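(Concretely — the language codes below are illustrative — the two config entries should look like:)
```python
# id2lang maps int -> str; lang2id is the (already correct) inverse, str -> int.
id2lang = {0: "de", 1: "en"}
lang2id = {"de": 0, "en": 1}
```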
Fixes #6734
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7034/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7034",
"html_url": "https://github.com/huggingface/transformers/pull/7034",
"diff_url": "https://github.com/huggingface/transformers/pull/7034.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7034.patch",
"merged_at": 1599759062000
} |
https://api.github.com/repos/huggingface/transformers/issues/7033 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7033/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7033/comments | https://api.github.com/repos/huggingface/transformers/issues/7033/events | https://github.com/huggingface/transformers/pull/7033 | 697,375,816 | MDExOlB1bGxSZXF1ZXN0NDgzMzI2MDk1 | 7,033 | fix deprecation warnings | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7033?src=pr&el=h1) Report\n> Merging [#7033](https://codecov.io/gh/huggingface/transformers/pull/7033?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/76818cc4c6a1275a23ba261ca337b9f9070c397e?el=desc) will **increase** coverage by `1.13%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7033?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7033 +/- ##\n==========================================\n+ Coverage 79.43% 80.57% +1.13% \n==========================================\n Files 164 164 \n Lines 31026 31026 \n==========================================\n+ Hits 24645 24998 +353 \n+ Misses 6381 6028 -353 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7033?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mdW5uZWwucHk=) | `86.76% <0.00%> (ΓΈ)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <ΓΈ> (+1.95%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.81% <0.00%> (-22.62%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.10% <0.00%> (-12.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/7033/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7033?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7033?src=pr&el=footer). Last update [76818cc...1e3136b](https://codecov.io/gh/huggingface/transformers/pull/7033?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I hear you, @LysandreJik - it makes sense and I reverted that removal.\r\n\r\nIt's just that warnings serve a special purpose - they tell users that something could be not right and they are suggested to act on it. So when I was debugging a specific test this showed up - and I had to look at it to see whether this warning is telling me that something is wrong and may affect what I'm debugging. \r\n\r\nYet, the logic you shared is absolutely valid. \r\n\r\nSo perhaps in the case of the test suite `FutureWarning`s shouldn't be warnings. Does it make sense? If you agree, then perhaps it can be changed to be so.\r\n\r\nHere is a potential fix: https://github.com/huggingface/transformers/pull/7079\r\n\r\np.s. CI failure is unrelated."
] | 1,599 | 1,600 | 1,600 | CONTRIBUTOR | null | Fixing:
```
pytest tests/test_tokenization_xlm.py
src/transformers/modeling_tf_utils.py:702
/mnt/nvme1/code/huggingface/transformers-xlm/src/transformers/modeling_tf_utils.py:702: DeprecationWarning:
invalid escape sequence \s
"""
src/transformers/modeling_funnel.py:130
/mnt/nvme1/code/huggingface/transformers-xlm/src/transformers/modeling_funnel.py:130: DeprecationWarning:
invalid escape sequence \d
layer_index = int(re.search("layer_(\d+)", m_name).groups()[0])
tests/test_tokenization_xlm.py::XLMTokenizationTest::test_padding_to_max_length
/mnt/nvme1/code/huggingface/transformers-xlm/src/transformers/tokenization_utils_base.py:1764: FutureWarning:
The `pad_to_max_length` argument is deprecated and will be removed in a future version, use `padding=True` or `padding='longest'` to pad to the longest sequence in the batch, or use `padding='max_length'` to pad to a max length. In this case, you can give a specific length with `max_length` (e.g. `max_length=45`) or leave max_length to None to pad to the maximal input size of the model (e.g. 512 for Bert).
warnings.warn(
tests/test_tokenization_xlm.py::XLMTokenizationTest::test_save_and_load_tokenizer
/mnt/nvme1/code/huggingface/transformers-xlm/src/transformers/tokenization_utils_base.py:1319: FutureWarning:
The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead.
warnings.warn(
```
ok, removing `tests/test_tokenization_common.py`'s `test_padding_to_max_length` as suggested there:
```
def test_padding_to_max_length(self):
"""We keep this test for backward compatibility but it should be remove when `pad_to_max_length` will e deprecated"""
```
these 2 test suites fail on that test:
```
FAILED tests/test_tokenization_marian.py::MarianTokenizationTest::test_padding_to_max_length
FAILED tests/test_tokenization_pegasus.py::PegasusTokenizationTest::test_padding_to_max_length
```
if I try to fix it:
```
- padded_sequence_right = tokenizer.encode(sequence, pad_to_max_length=True)
+ padded_sequence_right = tokenizer.encode(sequence, padding="max_length")
```
So there is no salvaging it, right?
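For reference, here is a minimal, runnable sketch of the non-deprecated padding/truncation API (the checkpoint name is just an illustration, not tied to the failing tests):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # illustrative checkpoint

# explicit padding/truncation instead of the deprecated pad_to_max_length=True
encoded = tokenizer("This goes to the maximal length.", padding="max_length", truncation=True, max_length=16)
print(len(encoded["input_ids"]))  # 16
```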
Oh and I realized that this one was a `FutureWarning` for end users, but the test suite is under our control, so this is the right action, correct? If I'm wrong, please let me know and I will revert this part. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7033/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7033",
"html_url": "https://github.com/huggingface/transformers/pull/7033",
"diff_url": "https://github.com/huggingface/transformers/pull/7033.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7033.patch",
"merged_at": 1600084279000
} |
https://api.github.com/repos/huggingface/transformers/issues/7032 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7032/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7032/comments | https://api.github.com/repos/huggingface/transformers/issues/7032/events | https://github.com/huggingface/transformers/issues/7032 | 697,192,530 | MDU6SXNzdWU2OTcxOTI1MzA= | 7,032 | SQuAD: Implement eval in Trainer-backed run_squad_trainer | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | {
"login": "ovbondarenko",
"id": 43346781,
"node_id": "MDQ6VXNlcjQzMzQ2Nzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/43346781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ovbondarenko",
"html_url": "https://github.com/ovbondarenko",
"followers_url": "https://api.github.com/users/ovbondarenko/followers",
"following_url": "https://api.github.com/users/ovbondarenko/following{/other_user}",
"gists_url": "https://api.github.com/users/ovbondarenko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ovbondarenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ovbondarenko/subscriptions",
"organizations_url": "https://api.github.com/users/ovbondarenko/orgs",
"repos_url": "https://api.github.com/users/ovbondarenko/repos",
"events_url": "https://api.github.com/users/ovbondarenko/events{/privacy}",
"received_events_url": "https://api.github.com/users/ovbondarenko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "ovbondarenko",
"id": 43346781,
"node_id": "MDQ6VXNlcjQzMzQ2Nzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/43346781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ovbondarenko",
"html_url": "https://github.com/ovbondarenko",
"followers_url": "https://api.github.com/users/ovbondarenko/followers",
"following_url": "https://api.github.com/users/ovbondarenko/following{/other_user}",
"gists_url": "https://api.github.com/users/ovbondarenko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ovbondarenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ovbondarenko/subscriptions",
"organizations_url": "https://api.github.com/users/ovbondarenko/orgs",
"repos_url": "https://api.github.com/users/ovbondarenko/repos",
"events_url": "https://api.github.com/users/ovbondarenko/events{/privacy}",
"received_events_url": "https://api.github.com/users/ovbondarenko/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi Julien, I would like to contribute. Could you assign me to the issue?",
"Yes, go for it! @sgugger and I (and others) can help if needed",
"Hi there! I missed that there was an already an assignee, but I took a crack at it too.\r\n\r\nI initially tried to pick the most frequent `start_position` βhowever the same `start_position` may have `answer_text`s of different length (i.e one answer could be a substring of another answer). I wasnβt sure how this should be best handled in `transformers/data/processors/squad.py(737)` where the length of the `answer_text` is used to calculate the start and end positions (Maybe I could try picking the start_pos of the most frequent *unique* answer_texts?). Given that, I went with picking the answer in the first index as a first attempt. ",
"How were we doing it in `run_squad.py` again, @LysandreJik?",
"In `run_squad.py` we were leveraging the [`squad_metrics.py` evaluation script](https://github.com/huggingface/transformers/blob/master/src/transformers/data/metrics/squad_metrics.py).\r\n\r\nIf I recall correctly, it gets the sequence predicted by the model from the start and end positions. It then compares that sequence to all the possible answers.\r\n\r\nFor the evaluation I think it is imperative to compare the model prediction against all possible answers, so I don't think the evaluation can simply be solved by adding a single start/end position.",
"Thanks for the pointer, that makes sense. I'm interested in looking into it further, but I'll jump out and return the reigns to @ovbondarenko ",
"see also the `squad` metrics in the huggingface/nlp library: https://github.com/huggingface/nlp/blob/master/metrics/squad/evaluate.py",
"Thanks for the pointers, everyone! .. and I am sorry for hijacking the issue, @kay-wong. @LysandreJik, I agree that we might want to check if predicted answer matches any one for the possible answers. I am going to give this approach a go.",
"@julien-c, @LysandreJik I've spent some time trying to decide on the best way to implement the evaluation, and I've realized that I need some help. I think the fastest way would be to basically reuse `load_and_cache_examples()` and `evaluate()` functions from [run_squad.py](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py) with adapted args. But it may not be the best way.\r\n\r\n`evaluate()` function from run_squad.py is also handling predictions using `compute_predictions_logits` from [squad.py](https://github.com/huggingface/transformers/blob/master/src/transformers/data/metrics/squad_metrics.py). I would prefer to use the output from `trainer.predict` for a squad evaluator downstream. By design [trainer](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py) allows to substitute `compute_metrics` method by a custom evaluation method, so I am assuming this would ultimately be the preferred implementation of a custom squad evaluator for [run_squad_trainer.py](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad_trainer.py). Problem is, I am struggling to figure out what exactly trainer.predict is returning. \r\n\r\nWhen I call trainer.predict on dev-v2.0 it returns predictions in a form of ndarray, which is clear enough:\r\n\r\n```\r\n# from trainer_utils.py:\r\nclass PredictionOutput(NamedTuple): \r\n predictions: np.ndarray \r\n label_ids: Optional[np.ndarray] \r\n metrics: Optional[Dict[str, float]] \r\n```\r\n\r\nI thought that the predictions ndarray would contain computed probabilities for each question in the dev-v2.0.json dataset, but the length of this array (21454) does not match the number of examples in dev-v2.0 (11873), or the number of examples of train-v2.0.json (130319), or it is a multiple of any of the two. Could you help me understand what the trainer.predict method is returning?\r\n\r\nThanks a lot for any feedback!",
"Just realized while reading this that there is a big limitation inside Trainer prediction_loop: it expects just one output (maybe two if there is the loss) when here we have two (or even five in some cases). Will try to fix that on Monday to unlock you on this stage. Will add tests about the number of predictions outputed as well, but it should be the length of the dataset passed.",
"@sgugger Awesome, thank you!",
"Hi. Are there any updates on this?",
"Any progress on this issue?\r\nTrying to define and pass in my own metric for QuestionAnswering tasks, but not quite sure how to calculate the exact loss and F1 when the `compute_metric' only seem to take in the predictions and label_ids.\r\n\r\nOr is it possible to override say `EvalPrediction` and `PredictionOutput` to pass in the full input_ids and calculate the SQuAD metrics that way?\r\n",
"I would like to take up this issue?",
"This has been fixed in #8992 already. I forgot to close this issue as a result, but this is implemented now."
] | 1,599 | 1,609 | 1,609 | MEMBER | null | A very good first issue IMO!
See https://github.com/huggingface/transformers/pull/4829#issuecomment-645994130
> we should update the eval dataset to pick one start_position (or the most frequent one)
Optionally, use the `huggingface/nlp` library to get the eval dataset, and hook it into the Trainer.
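A minimal sketch of the "most frequent `start_position`" idea (the helper name and the SQuAD-style `answers` layout are assumptions for illustration):
```python
from collections import Counter

def most_frequent_start(answers):
    # answers: SQuAD-style list of {"text": ..., "answer_start": ...} dicts
    starts = [a["answer_start"] for a in answers]
    return Counter(starts).most_common(1)[0][0]

print(most_frequent_start([
    {"text": "Denver Broncos", "answer_start": 177},
    {"text": "Denver Broncos", "answer_start": 177},
    {"text": "The Broncos", "answer_start": 170},
]))  # 177
```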
Also referenced in https://github.com/huggingface/transformers/issues/6997#issuecomment-688747680 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7032/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7031 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7031/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7031/comments | https://api.github.com/repos/huggingface/transformers/issues/7031/events | https://github.com/huggingface/transformers/issues/7031 | 697,173,785 | MDU6SXNzdWU2OTcxNzM3ODU= | 7,031 | Unable to recreate onnx speedups demonstrated in 04-onnx-export.ipynb on mac or linux | {
"login": "erees1",
"id": 53308037,
"node_id": "MDQ6VXNlcjUzMzA4MDM3",
"avatar_url": "https://avatars.githubusercontent.com/u/53308037?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erees1",
"html_url": "https://github.com/erees1",
"followers_url": "https://api.github.com/users/erees1/followers",
"following_url": "https://api.github.com/users/erees1/following{/other_user}",
"gists_url": "https://api.github.com/users/erees1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erees1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erees1/subscriptions",
"organizations_url": "https://api.github.com/users/erees1/orgs",
"repos_url": "https://api.github.com/users/erees1/repos",
"events_url": "https://api.github.com/users/erees1/events{/privacy}",
"received_events_url": "https://api.github.com/users/erees1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@erees1,\r\n\r\ntorch.set_num_threads(1) will set OpenMP thread number to 1. That will disable all multiple threading optimizations in ONNX Runtime. \r\n\r\nIt is recommended to set number of threads to align with CPU cores for best performance. If you use ONNX Runtime version >= 1.3, you can try the following in the notebook:\r\n\r\n```\r\ntorch.set_num_threads(psutil.cpu_count(logical=False))\r\n```\r\n\r\nAfter torch.set_num_threads is called, environ[\"OMP_NUM_THREADS\"] will not have effect. You may set environ[\"OMP_NUM_THREADS\"] if ONNX Runtime does not run with pytorch in the same process.",
"Thanks for the reply, just for clarity running the notebook exactly as it is we are unable to reproduce the results.\r\n```python\r\n# These are the settings in the notebook (I have made no changes) \r\nenviron[\"OMP_NUM_THREADS\"] = str(cpu_count(logical=True))\r\noptions.intra_op_num_threads = 1\r\n```\r\n\r\n\r\nI have since discovered that changing _only_ the `options.intra_op_num_threads` us to get closer to the results shown in the notebook - but it's odd that we are unable to reproduce the results without changing this setting. \r\n```python\r\n# Have changed intra_op_num_threads from what was in the notebook\r\nenviron[\"OMP_NUM_THREADS\"] = str(cpu_count(logical=True))\r\noptions.intra_op_num_threads = cpu_count(logical=True)\r\n```\r\n\r\n",
"Tagging @mfuntowicz :)",
"@erees1, your observation is correct. \r\n\r\nIt is recommended to use default setting (do not set the option intra_op_num_threads) for general usage.\r\n\r\nonnxruntime-gpu package is not built with OpenMP, so OMP_NUM_THREADS does not have effect. If cpu cores >= 16, user might try intra_op_num_threads =16 explicitly.\r\n\r\nFor onnxruntime package, `options.intra_op_num_threads = 1` was advised for version = 1.2.0 at the time that notebook created. User could set OMP_NUM_THREADS etc environment variable before importing onnxruntime to control the intra op thread number. For version >= 1.3.0, it is recommended to use default intra_op_num_threads.\r\n\r\n@mfuntowicz, could you help update the setting in the notebook like the following?\r\n\r\nBefore:\r\n```\r\n # Few properties that might have an impact on performances (provided by MS)\r\n options = SessionOptions()\r\n options.intra_op_num_threads = 1\r\n options.graph_optimization_level = GraphOptimizationLevel.ORT_ENABLE_ALL\r\n```\r\n\r\nAfter:\r\n```\r\n options = SessionOptions()\r\n # It is recommended to use default settings.\r\n # onnxruntime package uses environment variable OMP_NUM_THREADS to control the intra op threads.\r\n # For onnxruntime 1.2.0 package, you need set intra_op_num_threads = 1 to enable OpenMP. It is not needed for newer versions.\r\n # For onnxruntime-gpu package, try the following when your cpu has many cores:\r\n # options.intra_op_num_threads = min(16, cpu_count(logical=True))\r\n```",
"Thanks for the help, I think that clears things up! - closing the issue."
] | 1,599 | 1,600 | 1,600 | NONE | null | ## Environment info
- `transformers` version: 3.1.0
- Platform: Mac OS Mojave + Ubuntu 18.04.4
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0
- Tensorflow version (GPU?): na
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
## Information
Model I am using (Bert, XLNet ...): bert-base-uncased
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
I am running the /notebooks/04-onnx-export.ipynb example
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
I am using the example data in the notebook
## To reproduce
Steps to reproduce the behavior:
1. Within the notebook add `torch.set_num_threads(1)`
2. Replace `environ["OMP_NUM_THREADS"] = str(cpu_count(logical=True))` with `environ["OMP_NUM_THREADS"] = "1"`
3. Run the 04-onnx-export.ipynb example notebook
I am trying to recreate the speedups shown in this example notebook.
Note that without step 1 above I found PyTorch to be considerably faster than ONNX, presumably because it was using more threads than ONNX. Step 2 doesn't seem to impact the results, but I set it for completeness (ensuring everything runs on the same number of threads).
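For clarity, the full thread-pinning setup from steps 1 and 2 looks like this (a sketch; note that `OMP_NUM_THREADS` has to be set before `onnxruntime` is imported):
```python
from os import environ

environ["OMP_NUM_THREADS"] = "1"  # step 2: must happen before importing onnxruntime

import torch
from onnxruntime import GraphOptimizationLevel, SessionOptions

torch.set_num_threads(1)  # step 1: pin PyTorch to a single thread

options = SessionOptions()
options.intra_op_num_threads = 1  # unchanged from the notebook
options.graph_optimization_level = GraphOptimizationLevel.ORT_ENABLE_ALL
```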
Actual results on a Macbook Pro:

with hardware:
```
machdep.cpu.max_basic: 22
machdep.cpu.max_ext: 2147483656
machdep.cpu.vendor: GenuineIntel
machdep.cpu.brand_string: Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
machdep.cpu.family: 6
machdep.cpu.model: 94
machdep.cpu.extmodel: 5
machdep.cpu.extfamily: 0
machdep.cpu.stepping: 3
machdep.cpu.feature_bits: 9221959987971750911
machdep.cpu.leaf7_feature_bits: 43806655 0
machdep.cpu.leaf7_feature_bits_edx: 2617255424
machdep.cpu.extfeature_bits: 1241984796928
machdep.cpu.signature: 329443
machdep.cpu.brand: 0
machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ DTES64 MON DSCPL VMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC MOVBE POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C
machdep.cpu.leaf7_features: RDWRFSGS TSC_THREAD_OFFSET SGX BMI1 HLE AVX2 SMEP BMI2 ERMS INVPCID RTM FPU_CSDS MPX RDSEED ADX SMAP CLFSOPT IPT MDCLEAR TSXFA IBRS STIBP L1DF SSBD
machdep.cpu.extfeatures: SYSCALL XD 1GBPAGE EM64T LAHF LZCNT PREFETCHW RDTSCP TSCI
machdep.cpu.logical_per_package: 16
machdep.cpu.cores_per_package: 8
machdep.cpu.microcode_version: 220
machdep.cpu.processor_flag: 5
machdep.cpu.mwait.linesize_min: 64
machdep.cpu.mwait.linesize_max: 64
machdep.cpu.mwait.extensions: 3
machdep.cpu.mwait.sub_Cstates: 286531872
machdep.cpu.thermal.sensor: 1
machdep.cpu.thermal.dynamic_acceleration: 1
machdep.cpu.thermal.invariant_APIC_timer: 1
machdep.cpu.thermal.thresholds: 2
machdep.cpu.thermal.ACNT_MCNT: 1
machdep.cpu.thermal.core_power_limits: 1
machdep.cpu.thermal.fine_grain_clock_mod: 1
machdep.cpu.thermal.package_thermal_intr: 1
machdep.cpu.thermal.hardware_feedback: 0
machdep.cpu.thermal.energy_policy: 1
machdep.cpu.xsave.extended_state: 31 832 1088 0
machdep.cpu.xsave.extended_state1: 15 832 256 0
machdep.cpu.arch_perf.version: 4
machdep.cpu.arch_perf.number: 4
machdep.cpu.arch_perf.width: 48
machdep.cpu.arch_perf.events_number: 7
machdep.cpu.arch_perf.events: 0
machdep.cpu.arch_perf.fixed_number: 3
machdep.cpu.arch_perf.fixed_width: 48
machdep.cpu.cache.linesize: 64
machdep.cpu.cache.L2_associativity: 4
machdep.cpu.cache.size: 256
machdep.cpu.tlb.inst.large: 8
machdep.cpu.tlb.data.small: 64
machdep.cpu.tlb.data.small_level1: 64
machdep.cpu.address_bits.physical: 39
machdep.cpu.address_bits.virtual: 48
machdep.cpu.core_count: 4
machdep.cpu.thread_count: 8
machdep.cpu.tsc_ccc.numerator: 216
machdep.cpu.tsc_ccc.denominator: 2
```
I obtained even worse results on a linux machine:

with hardware:
```
processor : 11
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Core(TM) i7-5820K CPU @ 3.30GHz
stepping : 2
microcode : 0x43
cpu MHz : 1199.433
cache size : 15360 KB
physical id : 0
siblings : 12
core id : 5
cpu cores : 6
apicid : 11
initial apicid : 11
fpu : yes
fpu_exception : yes
cpuid level : 15
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit
bogomips : 6596.76
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
```
## Expected behavior
Expected to see a speedup from using ONNX, as in the example:

I know this is hardware-specific, but having tested it on two machines I wonder if there is some config not included in the example that I am missing, or some other issue? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7031/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7030 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7030/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7030/comments | https://api.github.com/repos/huggingface/transformers/issues/7030/events | https://github.com/huggingface/transformers/pull/7030 | 697,147,638 | MDExOlB1bGxSZXF1ZXN0NDgzMTI4MDQ4 | 7,030 | [s2s] dynamic batch size with --max_tokens_per_batch | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7030?src=pr&el=h1) Report\n> Merging [#7030](https://codecov.io/gh/huggingface/transformers/pull/7030?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/efeab6a3f1eeaffc2cec350ffce797f209ba38f8?el=desc) will **decrease** coverage by `2.69%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7030?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7030 +/- ##\n==========================================\n- Coverage 81.50% 78.81% -2.70% \n==========================================\n Files 172 172 \n Lines 33077 33077 \n==========================================\n- Hits 26959 26068 -891 \n- Misses 6118 7009 +891 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7030?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `19.02% <0.00%> (-69.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.33% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.54% <0.00%> (+0.35%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.23% <0.00%> (+0.53%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `82.76% <0.00%> (+6.06%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.62%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7030?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7030?src=pr&el=footer). Last update [efeab6a...f23dd11](https://codecov.io/gh/huggingface/transformers/pull/7030?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,599 | 1,600 | 1,600 | CONTRIBUTOR | null | + adds test coverage for `SortishSampler`+`DistributedSortishSampler`.
+ use fancy samplers for validation if specified.
+ dynamic batch sampler is still experimental. Distributed doesn't work, only slightly faster at the moment, and requires a preprocessing step. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7030/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7030",
"html_url": "https://github.com/huggingface/transformers/pull/7030",
"diff_url": "https://github.com/huggingface/transformers/pull/7030.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7030.patch",
"merged_at": 1600370375000
} |
https://api.github.com/repos/huggingface/transformers/issues/7029 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7029/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7029/comments | https://api.github.com/repos/huggingface/transformers/issues/7029/events | https://github.com/huggingface/transformers/pull/7029 | 697,129,267 | MDExOlB1bGxSZXF1ZXN0NDgzMTEyNjcw | 7,029 | Add TF Funnel Transformer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7029?src=pr&el=h1) Report\n> Merging [#7029](https://codecov.io/gh/huggingface/transformers/pull/7029?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/15478c1287a4e7b52c01730ffb0718243d153600?el=desc) will **increase** coverage by `2.52%`.\n> The diff coverage is `19.30%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7029?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7029 +/- ##\n==========================================\n+ Coverage 78.37% 80.90% +2.52% \n==========================================\n Files 164 165 +1 \n Lines 31026 31767 +741 \n==========================================\n+ Hits 24318 25702 +1384 \n+ Misses 6708 6065 -643 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7029?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7029/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <18.53%> (ΓΈ)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/7029/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.32% <100.00%> (+<0.01%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7029/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <100.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7029/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mdW5uZWwucHk=) | `86.76% <100.00%> (ΓΈ)` | |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7029/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `67.06% <100.00%> (+0.19%)` | :arrow_up: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7029/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7029/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `88.77% <0.00%> (-2.55%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7029/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7029/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (-0.14%)` | :arrow_down: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/7029/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7029?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7029?src=pr&el=footer). Last update [15478c1...2da2d1e](https://codecov.io/gh/huggingface/transformers/pull/7029?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,599 | 1,599 | 1,599 | COLLABORATOR | null | This adds the TF implementation of the model. Will upload the TF checkpoints as the PR goes under review.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7029/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7029",
"html_url": "https://github.com/huggingface/transformers/pull/7029",
"diff_url": "https://github.com/huggingface/transformers/pull/7029.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7029.patch",
"merged_at": 1599748917000
} |
https://api.github.com/repos/huggingface/transformers/issues/7028 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7028/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7028/comments | https://api.github.com/repos/huggingface/transformers/issues/7028/events | https://github.com/huggingface/transformers/issues/7028 | 696,995,904 | MDU6SXNzdWU2OTY5OTU5MDQ= | 7,028 | No way around "Truncation was not explicitely activated..." error when using SingleSentenceClassificationProcessor. | {
"login": "codygunton",
"id": 26756572,
"node_id": "MDQ6VXNlcjI2NzU2NTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/26756572?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codygunton",
"html_url": "https://github.com/codygunton",
"followers_url": "https://api.github.com/users/codygunton/followers",
"following_url": "https://api.github.com/users/codygunton/following{/other_user}",
"gists_url": "https://api.github.com/users/codygunton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codygunton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codygunton/subscriptions",
"organizations_url": "https://api.github.com/users/codygunton/orgs",
"repos_url": "https://api.github.com/users/codygunton/repos",
"events_url": "https://api.github.com/users/codygunton/events{/privacy}",
"received_events_url": "https://api.github.com/users/codygunton/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,599 | 1,605 | 1,605 | NONE | null | ## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?:
### Who can help
@LysandreJik, @thomwolf
## Information
Model I am using: BERT.
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```
from transformers import AutoTokenizer, SingleSentenceClassificationProcessor
processor = SingleSentenceClassificationProcessor()
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
processor.add_examples(["Thanks for cool stuff!"])
processor.get_features(tokenizer, max_length=3)
```
```
Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
[InputFeatures(input_ids=[101, 4283, 102], attention_mask=[1, 1, 1], token_type_ids=None, label=0)]
```
## Expected behavior
This is expected, but the problem is that there is no way to suppress that warning, because there is no way to pass `truncation=True` when `tokenizer.encode` is called within `processor.get_features`. Probably one should make `truncation` an argument to `processor.get_features`.
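Until then, the only clean workaround I see is to bypass the processor and call the tokenizer directly, where `truncation` can be passed (a sketch of the same toy example; note this loses the processor's label handling):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# passing truncation=True here silences the warning
features = tokenizer(["Thanks for cool stuff!"], max_length=3, truncation=True)
print(features["input_ids"])  # [[101, 4283, 102]]
```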
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7028/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7027 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7027/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7027/comments | https://api.github.com/repos/huggingface/transformers/issues/7027/events | https://github.com/huggingface/transformers/issues/7027 | 696,951,053 | MDU6SXNzdWU2OTY5NTEwNTM= | 7,027 | Getting underling S3 URL | {
"login": "sgummidipundi",
"id": 24970664,
"node_id": "MDQ6VXNlcjI0OTcwNjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/24970664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgummidipundi",
"html_url": "https://github.com/sgummidipundi",
"followers_url": "https://api.github.com/users/sgummidipundi/followers",
"following_url": "https://api.github.com/users/sgummidipundi/following{/other_user}",
"gists_url": "https://api.github.com/users/sgummidipundi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgummidipundi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgummidipundi/subscriptions",
"organizations_url": "https://api.github.com/users/sgummidipundi/orgs",
"repos_url": "https://api.github.com/users/sgummidipundi/repos",
"events_url": "https://api.github.com/users/sgummidipundi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgummidipundi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"probably @julien-c can help here",
"By default the library actually downloads model weights from `cdn.huggingface.co` so you might want to whitelist that domain.\r\n\r\nFor config files, we do download from S3. You can check the urls in the model pages' file lists e.g. https://huggingface.co/bert-base-multilingual-cased#list-files\r\n\r\nand then replace the prefix:\r\n`https://s3.amazonaws.com/models.huggingface.co/`\r\nby\r\n`http://models.huggingface.co.s3.amazonaws.com/`\r\n",
"I experienced this problem and have found at least one cause to it. I added a print to the library to print what URL it was actually fetching. curl-ing that URL was no problem, but from within the package, I got a different response. I then discovered that I was able to run as root with sudo. I then straced with both my regular user and my root user and diffed the results. Then i figured out that my regular user had a .netrc-file in the home directory. Renaming that fixed the issue for me. Only took 7 hours :-\\",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,599 | 1,607 | 1,607 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
The high-performance cluster environment at my work has been great for working with transformers. The downside, however, is that most access to the internet has been locked down. This means that I have been manually bringing in the models, which is somewhat problematic since it is a laborious process, and I have fallen behind on being able to update to the newest models.
The IT team has, however, said that they would be able to whitelist a specific S3 bucket. When I look at the links to the models, I see that they are of the form https://s3.amazonaws.com/models.huggingface.co rather than the usual https://bucket-name.s3.Region.amazonaws.com.
Is there any chance I could obtain that second format for the bucket?
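For reference, the rewrite I am after would look something like this (the object key in the URL is a made-up example, not a real file):
```python
# path-style URL as listed on the model pages
path_style = "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json"

# virtual-hosted-style equivalent, i.e. https://bucket-name.s3.amazonaws.com/...
virtual_hosted = path_style.replace(
    "https://s3.amazonaws.com/models.huggingface.co/",
    "https://models.huggingface.co.s3.amazonaws.com/",
)
print(virtual_hosted)
```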
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7027/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7026 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7026/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7026/comments | https://api.github.com/repos/huggingface/transformers/issues/7026/events | https://github.com/huggingface/transformers/issues/7026 | 696,943,434 | MDU6SXNzdWU2OTY5NDM0MzQ= | 7,026 | RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.FloatTensor instead (while checking arguments for embedding) | {
"login": "GeetDsa",
"id": 13940397,
"node_id": "MDQ6VXNlcjEzOTQwMzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/13940397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GeetDsa",
"html_url": "https://github.com/GeetDsa",
"followers_url": "https://api.github.com/users/GeetDsa/followers",
"following_url": "https://api.github.com/users/GeetDsa/following{/other_user}",
"gists_url": "https://api.github.com/users/GeetDsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GeetDsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GeetDsa/subscriptions",
"organizations_url": "https://api.github.com/users/GeetDsa/orgs",
"repos_url": "https://api.github.com/users/GeetDsa/repos",
"events_url": "https://api.github.com/users/GeetDsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/GeetDsa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"@patrickvonplaten , I did find out that the problem arises from `data collator` (`DataCollatorForLanguageModeling`). The returned tensors (tensor of indices to vocab) are not Long, which is creating the problem."
] | 1,599 | 1,599 | 1,599 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-4.19.0-10-amd64-x86_64-with-debian-10.5
- Python version: 3.6.10
- PyTorch version (GPU?): 1.3.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
## Information
Model I am using (GPT2-large) for fine-tuning on custom data:
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
Trace:
> File "gpt_language_generation.py", line 209, in <module>
> main()
> File "gpt_language_generation.py", line 136, in main
> trainer.train(model_path=None)
> File "<conda_env>/lib/python3.6/site-packages/transformers/trainer.py", line 708, in train
> tr_loss += self.training_step(model, inputs)
> File "<conda_env>/lib/python3.6/site-packages/transformers/trainer.py", line 995, in training_step
> outputs = model(**inputs)
> File "<conda_env>/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
> result = self.forward(*input, **kwargs)
> File "<conda_env>/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 731, in forward
> return_dict=return_dict,
> File "<conda_env>/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
> result = self.forward(*input, **kwargs)
> File "<conda_env>/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 593, in forward
> inputs_embeds = self.wte(input_ids)
> File "<conda_env>/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
> result = self.forward(*input, **kwargs)
> File "<conda_env>/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 114, in forward
> self.norm_type, self.scale_grad_by_freq, self.sparse)
> File "<conda_env>/lib/python3.6/site-packages/torch/nn/functional.py", line 1484, in embedding
> return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2Tokenizer,
    Trainer,
    TrainingArguments,
)

model_class, tokenizer_class = GPT2LMHeadModel, GPT2Tokenizer
model = model_class.from_pretrained("gpt2-large")
tokenizer = tokenizer_class.from_pretrained("gpt2-large")
special_tokens_dict = {'bos_token': '<BOS>', 'eos_token': '<EOS>', 'pad_token': '<PAD>'}
num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
model.resize_token_embeddings(len(tokenizer))
input_text = ["a cat is sitting on the mat"]*100
train_dataset = tokenizer(input_text,add_special_tokens=True, truncation=True, max_length=64)
train_dataset = train_dataset["input_ids"]
eval_dataset = tokenizer(input_text,add_special_tokens=True, truncation=True, max_length=64)
eval_dataset = eval_dataset["input_ids"]
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,mlm=False)
training_args = TrainingArguments(output_dir='./gpt_model/')
training_args.do_train = True
training_args.do_eval = True
training_args.per_device_train_batch_size = 32
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
prediction_loss_only=True)
trainer.train(model_path=None)  # the error occurs here
trainer.save_model()
```
## Expected behavior
Expect the training to continue without an error.
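In case it helps others hitting this, a minimal workaround sketch is to force the tokenized examples to `LongTensor` before they reach the data collator (this is a guess at a fix, not a confirmed one):
```python
import torch

# cast each tokenized example to int64 so the embedding lookup gets Long indices
train_dataset = [torch.tensor(ids, dtype=torch.long) for ids in train_dataset]
eval_dataset = [torch.tensor(ids, dtype=torch.long) for ids in eval_dataset]
```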
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7026/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7025 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7025/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7025/comments | https://api.github.com/repos/huggingface/transformers/issues/7025/events | https://github.com/huggingface/transformers/issues/7025 | 696,903,717 | MDU6SXNzdWU2OTY5MDM3MTc= | 7,025 | Python | {
"login": "smithjadhav",
"id": 40405066,
"node_id": "MDQ6VXNlcjQwNDA1MDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/40405066?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smithjadhav",
"html_url": "https://github.com/smithjadhav",
"followers_url": "https://api.github.com/users/smithjadhav/followers",
"following_url": "https://api.github.com/users/smithjadhav/following{/other_user}",
"gists_url": "https://api.github.com/users/smithjadhav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/smithjadhav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smithjadhav/subscriptions",
"organizations_url": "https://api.github.com/users/smithjadhav/orgs",
"repos_url": "https://api.github.com/users/smithjadhav/repos",
"events_url": "https://api.github.com/users/smithjadhav/events{/privacy}",
"received_events_url": "https://api.github.com/users/smithjadhav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @smithjadhav, \r\n\r\nThanks for your issue. Given that the whole library is built on Python, I think it would require too much work for us to change to a different programming language and it would probably be better to just start a new github repo."
] | 1,599 | 1,599 | 1,599 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7025/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/7024 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7024/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7024/comments | https://api.github.com/repos/huggingface/transformers/issues/7024/events | https://github.com/huggingface/transformers/issues/7024 | 696,684,075 | MDU6SXNzdWU2OTY2ODQwNzU= | 7,024 | Adding `class_weights` argument for the loss function of transformers model | {
"login": "nvs-abhilash",
"id": 15072945,
"node_id": "MDQ6VXNlcjE1MDcyOTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/15072945?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nvs-abhilash",
"html_url": "https://github.com/nvs-abhilash",
"followers_url": "https://api.github.com/users/nvs-abhilash/followers",
"following_url": "https://api.github.com/users/nvs-abhilash/following{/other_user}",
"gists_url": "https://api.github.com/users/nvs-abhilash/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nvs-abhilash/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nvs-abhilash/subscriptions",
"organizations_url": "https://api.github.com/users/nvs-abhilash/orgs",
"repos_url": "https://api.github.com/users/nvs-abhilash/repos",
"events_url": "https://api.github.com/users/nvs-abhilash/events{/privacy}",
"received_events_url": "https://api.github.com/users/nvs-abhilash/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This would be a cool addition! This would need to be an attribute added to the configuration, similarly to what is done for `num_labels` or other configuration attributes. You can see the implementation in `PretrainedConfig`, [here](https://github.com/huggingface/transformers/blob/master/src/transformers/configuration_utils.py#L31). Feel free to open a PR and ping me on it!",
"Thanks @LysandreJik . I'll work on the initial draft and create a PR.",
"After having thought about it with @sgugger, it's probably not such a great idea to allow changing this parameter directly in the model configuration. If we enable this change, then we'll eventually have to support more arguments to pass to the loss functions, while the goal of the config and model loss computation isn't to be a feature-complete loss computation system, but to provide the very basic and simple traditional loss for the use-case (classification in this case).\r\n\r\nIn specific cases like this one where you would like to tweak the loss parameters, it would be better to get the logits back and compute the loss yourself with the logits and labels (no need to modify the `transformers` code when doing so).",
"Ok. Understood. Thanks, @LysandreJik and @sgugger for your points. π \r\n\r\nI'll work on a custom solution for the same.",
"@LysandreJik @sgugger \r\n\r\n> After having thought about it with @sgugger, it's probably not such a great idea to allow changing this parameter directly in the model configuration. If we enable this change, then we'll eventually have to support more arguments to pass to the loss functions, while the goal of the config and model loss computation isn't to be a feature-complete loss computation system, but to provide the very basic and simple traditional loss for the use-case (classification in this case).\r\n\r\nWell, what about allowing a dict to be passed to the prediction heads and then passed to the `CrossEntropyLoss` function.\r\n\r\nLike this:\r\n```python\r\ncross_entropy_loss_params = {\"weight\": [0.8, 1.2, 0.97]}\r\n\r\nloss_fct = CrossEntropyLoss(**cross_entropy_loss_params )\r\n```\r\n\r\nHere for example: https://github.com/huggingface/transformers/blob/76818cc4c6a1275a23ba261ca337b9f9070c397e/src/transformers/modeling_bert.py#L943\r\n\r\nThis way you would:\r\n- implement no breaking change\r\n- open up everything for all parameters the API user wants to set\r\n\r\n@LysandreJik @sgugger @nvs-abhilash what do you think?",
"I'm not sure where you would pass that dict. Could you precise that part?",
"> I'm not sure where you would pass that dict. Could you precise that part?\r\n\r\n@sgugger I just did start a RP (which is just a demo in the current state) that explains how I would implement it.\r\nSee here: #7057 \r\n\r\nThe code would look like this:\r\n```python\r\nmodel_name = 'bert-base-german-dbmdz-uncased'\r\n\r\nconfig = AutoConfig.from_pretrained(\r\n model_name,\r\n num_labels=3,\r\n)\r\n\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\r\n model_name,\r\n config=config,\r\n loss_function_params={\"weight\": [0.8, 1.2, 0.97]}\r\n)\r\n```\r\n\r\nI would be happy about feedback and to finish this PR.\r\n\r\n@LysandreJik @nvs-abhilash what do you think?",
"Reopening the issue, since the discussion is going on.",
">@LysandreJik @nvs-abhilash what do you think?\r\n\r\nIt looks good to me and I am happy to contribute but I guess it's up to @sgugger and @LysandreJik to provide more explanation on the feasibility and potential implications on the project.\r\n\r\n",
"Supporting the last comment made, we don't intend for `PreTrainedModel`s to provide a feature-complete loss computation system. We expect them to provide the simplest loss that's traditionally used in most cases.\r\n\r\nWe would rather encourage users to retrieve the logits from the models and compute the loss themselves when having different use-cases than the very basic approach, like it is usually done with `nn.Module`s, like so:\r\n\r\n```py\r\nlogits = model(**input_dict)\r\nloss = CrossEntropyLoss(weight=[0.8, 1.2, 0.97])\r\n\r\noutput = loss(logits, labels)\r\n```",
"Hi @LysandreJik ok...\r\nCould you please briefly explain how those 3 lines of code are used from users (API) perspective?\r\n\r\nAs a Hugging Face Transformers user: when I want to train a new Text classifier with unbalanced classes and do `model = AutoModelForSequenceClassification.from_pretrained(model_name, config=config)` how do I get `CrossEntropyLoss(weight=[0.8, 1.2, 0.97])` into that?\r\n\r\nI could just subclass `BertForSequenceClassification` for example and write the complete `forward` function from scratch again. But this would be 99% cut and paste and IMO not the way a good and open API like _HF Transformers_ should be designed. IMO this is not good from usability point of view.\r\n\r\nIf I understand you right @LysandreJik you do not want to force new model type developers to support the API that I suggested in my PR #7057 because you think that would be too much work to do. But IMO you do not consider the needs of the API user.",
"It's a bit hard to know how to guide you when you don't explain to use how you train your model. Are you using `Trainer`? Then you should subclass it and override the brand new `compute_loss` method that I just added to make this use case super easy. There is an example in the [docs](https://huggingface.co/transformers/master/main_classes/trainer.html) (note that you will need an install from source for this).",
"Ok. Super easy. Thanks @sgugger ! Thats it! :-))",
"@nvs-abhilash I think the answer closes this issue - right?",
">Then you should subclass it and override the brand new compute_loss method that I just added to make this use case super easy\r\n\r\nThanks, @sgugger , this will definitely solve my problem as well!",
"@nvs-abhilash @PhilipMay could you please share your answer here?",
"@sgugger Just stumbled upon this - If we are not using Trainer but [HF's Accelerate lib](https://github.com/huggingface/accelerate), is there an easy way to achieve for this? Rather than replicating the entire model which HF already has as-is, all of this just for a custom loss function? Is there an easier alternative?",
"@ashutoshsaboo, the model output the logits, so if you don't want to leverage the loss output by the transformer model when passing it the labels, simply compute the loss outside of the model with the retrieved logits and your labels."
] | 1,599 | 1,648 | 1,599 | NONE | null | # 🚀 Feature request
Provide a parameter called `class_weights` when initializing a sequence classification model. The attribute will be used to calculate a weighted loss, which is useful for classification on imbalanced datasets.
```python
from transformers import DistilBertForSequenceClassification
# Note the additional class_weights attribute
model = DistilBertForSequenceClassification.from_pretrained(
"distilbert-base-uncased",
num_labels=5,
class_weights=[5, 3, 2, 1, 1])
```
`class_weights` will provide the same functionality as the `weight` parameter of PyTorch losses like [torch.nn.CrossEntropyLoss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html).
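For context, here is a minimal, self-contained example of what that `weight` parameter does (the values mirror the illustrative `class_weights` above):
```python
import torch
import torch.nn as nn

# Illustrative weights for a 5-label task: rarer classes get larger weights
class_weights = torch.tensor([5.0, 3.0, 2.0, 1.0, 1.0])
loss_fct = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(4, 5)           # (batch_size, num_labels)
labels = torch.tensor([0, 4, 2, 1])  # (batch_size,)
loss = loss_fct(logits, labels)      # misclassifying rare classes costs more
```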
## Motivation
Similar issues on how to provide class weights for an imbalanced classification dataset have been raised before; see [#297](https://github.com/huggingface/transformers/issues/297#issuecomment-534185049) and [#1755](https://github.com/huggingface/transformers/issues/1755).
I ended up modifying the transformers code to get the class weights (shown below), and it looks like an easy addition which can benefit many.
## Your contribution
This should be possible because the loss for sequence classification is currently initialized in the forward method as shown below:
```python
loss_fct = nn.CrossEntropyLoss() # <- Defined without the weight parameter
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
```
We can then pass the `class_weights` recorded during model initialization to PyTorch's `weight` argument:
```python
loss_fct = nn.CrossEntropyLoss(weight=self.class_weights)
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
```
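As an aside, the discussion on this issue converges on overriding `Trainer.compute_loss` instead of changing the model itself; below is a minimal sketch of that route (the subclass name and the weight values are illustrative, and the signature matches the `compute_loss` method referenced in the comments):
```python
import torch
from torch.nn import CrossEntropyLoss
from transformers import Trainer

class WeightedLossTrainer(Trainer):
    def compute_loss(self, model, inputs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs[0]
        # Keep the weight tensor on the same device as the logits
        weight = torch.tensor([5.0, 3.0, 2.0, 1.0, 1.0], device=logits.device)
        loss_fct = CrossEntropyLoss(weight=weight)
        return loss_fct(logits.view(-1, model.config.num_labels), labels.view(-1))
```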
I am happy to implement this and provide a PR, although I am new to the transformers package and may require some iterative code reviews from the senior contributors/members. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7024/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7024/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7023 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7023/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7023/comments | https://api.github.com/repos/huggingface/transformers/issues/7023/events | https://github.com/huggingface/transformers/issues/7023 | 696,589,395 | MDU6SXNzdWU2OTY1ODkzOTU= | 7,023 | PreTrained (custom) model not correctly initializing when using AutoModel methods | {
"login": "mar-wel",
"id": 42975940,
"node_id": "MDQ6VXNlcjQyOTc1OTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/42975940?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mar-wel",
"html_url": "https://github.com/mar-wel",
"followers_url": "https://api.github.com/users/mar-wel/followers",
"following_url": "https://api.github.com/users/mar-wel/following{/other_user}",
"gists_url": "https://api.github.com/users/mar-wel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mar-wel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mar-wel/subscriptions",
"organizations_url": "https://api.github.com/users/mar-wel/orgs",
"repos_url": "https://api.github.com/users/mar-wel/repos",
"events_url": "https://api.github.com/users/mar-wel/events{/privacy}",
"received_events_url": "https://api.github.com/users/mar-wel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you set your logging level to `INFO` and show us the results?\r\n\r\nYou can put the following lines above your script:\r\n```py\r\nfrom transformers import logging\r\n\r\nlogging.set_verbosity_info()\r\n```\r\n\r\nIt would really help us debug if you could also put a small script that reproduces the issue, so that we can easily reproduce and help you.",
"While trying putting together a minimal sample as you requested, I finally realized that the issue is on my side: as a novice in pytorch, I didn't expect that model (object) names in the code have to match (sub)models (e.g. base_model_prefix).\r\n\r\nWith a BERT configuration in the above AutoModel example, I replaced\r\n\r\nself.transformer = AutoModel.from_config(model_config)\r\n\r\nby\r\n\r\nself.bert = AutoModel.from_config(model_config)\r\n\r\nand it basically worked as expected. \r\n\r\nHowever, this constraint somehow limits my flexibility in \"just\" replacing the transformer part in my custom model.",
"Glad you got it to work!"
] | 1,599 | 1,600 | 1,599 | NONE | null | ## Environment info
- `transformers` version: 3.1.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: ???
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): bert-base-uncased, roberta-base, albert-large-v2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
(Conceptual) steps to reproduce the behavior:
1) Derive a custom model from a "PreTrainedModel" like:
```python
import torch.nn as nn
from torch.nn import LSTM
from transformers import AutoConfig, AutoModel, PreTrainedModel

class TransformerLSTMDocClassifier(PreTrainedModel):
    def __init__(self, model_config: AutoConfig):
        super(TransformerLSTMDocClassifier, self).__init__(model_config)
        self.transformer = AutoModel.from_config(model_config)
        self.dropout = nn.Dropout(p=model_config.hidden_dropout_prob)
        self.lstm = LSTM(model_config.hidden_size, model_config.hidden_size)
        self.classifier = nn.Sequential(
            nn.Dropout(p=model_config.hidden_dropout_prob),
            nn.Linear(model_config.hidden_size, model_config.num_labels),
            nn.Tanh()
        )
```
2) Train (fine-tune) the custom model starting from a pre-trained (standard) model such as bert-base-uncased, roberta-base, or albert-large-v2 (using AutoModel features). My objective is to easily exchange the 'transformer' part in the above model.
3) Initializing the model class 'TransformerLSTMDocClassifier' via the 'from_pretrained' method with the fine-tuned model (output of step 2) results in the message:
Some weights of TransformerLSTMDocClassifier were not initialized from the model checkpoint at <MODEL_CHECKPOINT> and are newly initialized: [...]
The list of weights apparently includes ALL weights of the TransformerLSTMDocClassifier (200+ in the case of bert-base-uncased).
## Expected behavior
Properly initialized weights
## Addendum
Disclaimer: I am a novice in the "transformers" framework. The above described (unexpected) behavior might result from an incorrect use of 'transformers' features/methods at my end.
However, I was digging a bit in the code and made the following observations:
(1) When deriving the custom model from a specific class (e.g. BertPreTrainedModel in the case of bert-base-uncased) instead of from the generic class 'PreTrainedModel', the custom model is correctly initialized.
(2) The 'cls.base_model_prefix' (module: modeling_utils; method: from_pretrained) is "" (empty string) when the custom model is derived from "PreTrainedModel", which results in 'has_prefix_module' being set to 'True' in line 925 and finally in an inappropriate 'start_prefix' (apparently preventing the weights from being matched/loaded).
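For reference, a minimal sketch of the fix that came out of the discussion, assuming a BERT configuration (so the relevant prefix is "bert", as declared by BertPreTrainedModel):
```python
from transformers import AutoConfig, AutoModel, PreTrainedModel

class TransformerLSTMDocClassifier(PreTrainedModel):
    # Mirrors what BertPreTrainedModel declares, so checkpoint keys line up
    base_model_prefix = "bert"

    def __init__(self, model_config: AutoConfig):
        super().__init__(model_config)
        # Renamed from self.transformer: the attribute name has to match
        # the backbone prefix for from_pretrained to match the weights
        self.bert = AutoModel.from_config(model_config)
```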
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7023/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7022 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7022/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7022/comments | https://api.github.com/repos/huggingface/transformers/issues/7022/events | https://github.com/huggingface/transformers/pull/7022 | 696,585,735 | MDExOlB1bGxSZXF1ZXN0NDgyNjQ4NjUw | 7,022 | New TF output proposal | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This proposal works for me. It's better to allow graph mode for the \"standard\" way to do things while keeping backwards compatibility imo. Users that want to leverage hidden states/attentions with graph mode should have no problems in doing a custom architecture that does exactly that. \r\n\r\nHowever, as a user I would feel like the last example is slightly misleading; in eager mode providing the input `return_dict` changes things, while in graph mode it doesn't, with no information whatsoever. I would expect the model to at least throw a warning telling me that this isn't supported in graph mode rather than silently doing nothing.",
"Having a warning message for these cases is a great idea! I take care of that to have this added in the next commit!!",
"Now everytime `return_dict`, `output_attentions` or `output_hidden_states` will be different of None, the following message will be displayed:\r\n\r\n```\r\nWarning: The parameters return_dict, output_attentions and output_hidden_states are disabled in graph mode.\r\n```\r\n\r\n@LysandreJik Does it looks ok for you now?\r\n\r\nJust to let you know, usual logging is disabled in graph mode, so only way you have to display a message is to use the internal `tf.print()` function.",
"@sgugger Just pushed a new commit, let me know if I forgot to address one of your comments.",
"I only commented on the first two models, but the same is applicable for all the other ones. Other than that, it looks good yes, thanks for addressing!",
"Thanks for taking care of this! I like the approach",
"@sshleifer @thomwolf @julien-c Do you have any comments?\r\n\r\n@sgugger @patrickvonplaten and @LysandreJik looks to be agree on that.",
"I will continue to work on this by applying it to the other models waiting @sshleifer @thomwolf and @julien-c comments.",
"Fine with me! In general, no need to wait for my input on tf decisions :)",
"I prefer to have the opinion of everyone as it is also a bit a design update :)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,599 | 1,686 | 1,606 | CONTRIBUTOR | null | Hello!
Currently the TensorFlow models have several issues working properly in graph mode when `output_attentions`, `output_hidden_states` and `return_dict` take a boolean tensor (`tf.constant(True/False)`). This is because graph mode doesn't allow an output of different sizes in its conditional branches.
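For illustration (a standalone sketch, not code from this PR), here is that constraint in isolation:
```python
import tensorflow as tf

@tf.function
def toy_model(x, return_pair):
    # With a tensor condition, AutoGraph lowers this `if` to tf.cond, and
    # tf.cond requires both branches to return the same output structure
    if return_pair:
        return x, x
    return (x,)

# toy_model(tf.constant(1.0), tf.constant(True))
# -> fails to trace: the two branches return outputs of different sizes
```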
To fix this while keeping the breaking change with the current behavior of these models as small as possible, I propose to keep the current behavior in eager mode only, and to disable these three features (force them to False) in graph mode. This is, for me, the best compromise between having something that works in graph mode and not having a big breaking change.
Graph mode is most of the time used for fast training/inference rather than for experiments, and I don't see the point of deploying a production model that returns all the attention/hidden-state values, or of possibly getting an OOM during training/inference, most notably with the new TFLongformer model.
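In code, the idea could look roughly like the following helper called from each model (a simplified sketch with a hypothetical function name, not the PR's actual implementation; the warning text matches the one discussed in the comments):
```python
import tensorflow as tf

def _resolve_flags(return_dict, output_attentions, output_hidden_states):
    # Hypothetical helper: keep the requested values in eager mode,
    # force everything to False in graph mode
    if tf.executing_eagerly():
        return return_dict, output_attentions, output_hidden_states
    if any(flag is not None for flag in (return_dict, output_attentions, output_hidden_states)):
        # Regular logging is disabled during tracing, so tf.print() is the
        # only way to surface a message in graph mode
        tf.print("Warning: The parameters return_dict, output_attentions "
                 "and output_hidden_states are disabled in graph mode.")
    return False, False, False
```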
Here are some examples.
Example 1: run in eager mode with default parameter values
```python
from transformers import TFBertForMaskedLM
import tensorflow as tf
model = TFBertForMaskedLM.from_pretrained("bert-base-cased")
model(tf.constant([[10,11,12]]))
# Result
(<tf.Tensor: shape=(1, 3, 28996), dtype=float32, numpy=
array([[[ -9.900146 , -10.345711 , -9.721893 , ..., -7.7057724,
-8.035212 , -8.271875 ],
[ -7.866758 , -8.147887 , -7.864182 , ..., -6.7170696,
-6.2896423, -6.79779 ],
[ -9.790001 , -10.2271185, -9.584716 , ..., -7.539982 ,
-7.807548 , -8.135937 ]]], dtype=float32)>,)
```
Example 2: run in eager mode by updating a boolean parameter value
```python
from transformers import TFBertForMaskedLM
import tensorflow as tf
model = TFBertForMaskedLM.from_pretrained("bert-base-cased")
model(tf.constant([[10,11,12]]), return_dict=True)
# Result
TFMaskedLMOutput(loss=None, logits=<tf.Tensor: shape=(1, 3, 28996), dtype=float32, numpy=
array([[[ -9.900146 , -10.345711 , -9.721893 , ..., -7.7057724,
-8.035212 , -8.271875 ],
[ -7.866758 , -8.147887 , -7.864182 , ..., -6.7170696,
-6.2896423, -6.79779 ],
[ -9.790001 , -10.2271185, -9.584716 , ..., -7.539982 ,
-7.807548 , -8.135937 ]]], dtype=float32)>, hidden_states=None, attentions=None)
```
Example 3: run in graph mode with default parameter values
```python
from transformers import TFBertForMaskedLM
import tensorflow as tf
model = tf.function(TFBertForMaskedLM.from_pretrained("bert-base-cased"))
model(tf.constant([[10,11,12]]))
# Result
(<tf.Tensor: shape=(1, 3, 28996), dtype=float32, numpy=
array([[[ -9.900146 , -10.345711 , -9.721893 , ..., -7.7057724,
-8.035212 , -8.271875 ],
[ -7.866758 , -8.147887 , -7.864182 , ..., -6.7170696,
-6.2896423, -6.79779 ],
[ -9.790001 , -10.2271185, -9.584716 , ..., -7.539982 ,
-7.807548 , -8.135937 ]]], dtype=float32)>,)
```
Example 4: run in graph mode by updating a boolean parameter value
```python
from transformers import TFBertForMaskedLM
import tensorflow as tf
model = tf.function(TFBertForMaskedLM.from_pretrained("bert-base-cased"))
model(tf.constant([[10,11,12]]), return_dict=tf.constant(True))
# Result
(<tf.Tensor: shape=(1, 3, 28996), dtype=float32, numpy=
array([[[ -9.900146 , -10.345711 , -9.721893 , ..., -7.7057724,
-8.035212 , -8.271875 ],
[ -7.866758 , -8.147887 , -7.864182 , ..., -6.7170696,
-6.2896423, -6.79779 ],
[ -9.790001 , -10.2271185, -9.584716 , ..., -7.539982 ,
-7.807548 , -8.135937 ]]], dtype=float32)>,)
```
As you can see in the last example, there is no longer a graph compilation error due to the `tf.constant(True)` value.
For my examples I used only `return_dict`, but of course the same behavior applies to `output_attentions` and `output_hidden_states`. Moreover, this new behavior will be detailed in the documentation of the impacted parameters.
@LysandreJik @sgugger @patrickvonplaten @sshleifer @thomwolf @julien-c What do you think of this solution? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7022/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7022",
"html_url": "https://github.com/huggingface/transformers/pull/7022",
"diff_url": "https://github.com/huggingface/transformers/pull/7022.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7022.patch",
"merged_at": null
} |