url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/6221 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6221/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6221/comments | https://api.github.com/repos/huggingface/transformers/issues/6221/events | https://github.com/huggingface/transformers/pull/6221 | 672,269,708 | MDExOlB1bGxSZXF1ZXN0NDYyMzM2ODI0 | 6,221 | run_hans label fix | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,596 | 1,596 | 1,596 | MEMBER | null | Addresses issue https://github.com/huggingface/transformers/issues/6179
correct label extraction + add note on discrepancies between trained MNLI models and HANS
Feel free to merge if it looks good! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6221/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6221",
"html_url": "https://github.com/huggingface/transformers/pull/6221",
"diff_url": "https://github.com/huggingface/transformers/pull/6221.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6221.patch",
"merged_at": 1596481372000
} |
https://api.github.com/repos/huggingface/transformers/issues/6220 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6220/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6220/comments | https://api.github.com/repos/huggingface/transformers/issues/6220/events | https://github.com/huggingface/transformers/pull/6220 | 672,212,510 | MDExOlB1bGxSZXF1ZXN0NDYyMjkwMTcy | 6,220 | Improve type annotations in many places | {
"login": "dnaaun",
"id": 52462475,
"node_id": "MDQ6VXNlcjUyNDYyNDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/52462475?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaaun",
"html_url": "https://github.com/dnaaun",
"followers_url": "https://api.github.com/users/dnaaun/followers",
"following_url": "https://api.github.com/users/dnaaun/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaaun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaaun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaaun/subscriptions",
"organizations_url": "https://api.github.com/users/dnaaun/orgs",
"repos_url": "https://api.github.com/users/dnaaun/repos",
"events_url": "https://api.github.com/users/dnaaun/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaaun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"From what I can see, test failures are because of the usage of `typing.Literal`, which is available starting on Python3.8.\r\n\r\nThe `typing_extensions` (from mypy devs) package backports such features into older versions of Python. If it's ok with the devs to add it to `setup.py`, I can do so. Otherwise, we can make the annotations less specific as to avoid `Literal` types.",
"I'd avoid adding a new dependency or pinning us on python 3.8. I'd also avoid using `overload` type annotations to keep the code readable. I think we can have less specific type annotations, and detail what the kwargs are if needed to be able to type-annotate them (which will be useful for tab-completion in an IDE anyways).\r\n\r\nFor instance, the first:\r\n```\r\n@overload\r\n @classmethod\r\n def from_pretrained(\r\n cls,\r\n pretrained_model_name_or_path: str,\r\n *,\r\n return_unused_kwargs: Literal[True],\r\n cache_dir: Optional[str] = None,\r\n force_download: bool = False,\r\n resume_download: bool = False,\r\n proxies: Optional[Dict[str, str]] = None,\r\n **kwargs: Any\r\n ) -> Tuple[PretrainedConfig, Dict[str, str]]:\r\n ...\r\n\r\n @overload\r\n @classmethod\r\n def from_pretrained(\r\n cls,\r\n pretrained_model_name_or_path: str,\r\n *,\r\n return_unused_kwargs: Literal[False] = False,\r\n cache_dir: Optional[str] = None,\r\n force_download: bool = False,\r\n resume_download: bool = False,\r\n proxies: Optional[Dict[str, str]] = None,\r\n **kwargs: Any\r\n ) -> PretrainedConfig:\r\n ...\r\n```\r\ncould be written like this in the main definition:\r\n```\r\ndef from_pretrained(\r\n cls,\r\n pretrained_model_name_or_path: str,\r\n *,\r\n return_unused_kwargs: bool = False,\r\n cache_dir: Optional[str] = None,\r\n force_download: bool = False,\r\n resume_download: bool = False,\r\n proxies: Optional[Dict[str, str]] = None,\r\n **kwargs: Any\r\n ) -> Union[PretrainedConfig, Tuple[PretrainedConfig, Dict[str, str]]]:\r\n```\r\nand then the body of the function can be updated.",
"Sounds good. I'll get around to removing usages of `Literal` and `overload` tomorrow.",
"Thinking a bit more about this, if you really need some types in `typing_extensions`, we can add this as a dep.",
"The types in this PR that need `typing_extensions` in py<3.8 are `Literal` and `Protocol`.\r\n1. There's only one usage of `Literal` in the PR that is not in conjunction with `@overload`. The other usages of `Literal` are to specify overloading signatures. If we avoid `@overload`, the use of `Literal` is much less justified.\r\n2. `Protocol` was used to replace\r\n```py\r\nDataClass = NewType(\"DataClass\", Any)\r\nDataClassType = NewType(\"DataClassType\", Any)\r\n```\r\nwith\r\n```py\r\nclass DataClassProtocol(Protocol):\r\n def __init__(self, *args: Any, **kwargs: Any) -> None: pass\r\n# And DataClassType would be replaced with Type[DataClassProtocol] in the rest of the code\r\n```\r\nThe benefit of this is that `mypy` complains that dataclasses are in fact not compatible with `DataClass` when using the first definition. \r\n\r\nI really like static type checking, so I'm for adding `typing_extensions`, but do the devs think the above usages are worth it, is the question ...",
"Personally I'd love to use `Literal`s and I'd be in favor of adding the dependency just for this, but I'll let others chime in.\r\n\r\nFor the `DataClassProtocol`, I'm not convinced, given that this type won't lead to a lot of inference type checking anyways... I feel like the current `DataClass` and `DataClassType` are pretty much just documentation.",
"Slightly off topic, but @davidatbu do you have experience on Pyright vs. mypy? (in particular in the context of vscode)\r\n\r\nI've been wanting to dig in for a long time.",
"@julien-c , \r\n\r\n> For the DataClassProtocol, I'm not convinced, given that this type won't lead to a lot of inference type checking anyways... I feel like the current DataClass and DataClassType are pretty much just documentation.\r\n\r\nYes, this won't lead to inference of the actual dataclass passed in the current mypy implementation(it might if mypy ever supports [variadic generics](https://github.com/python/typing/issues/193), and we make `HfArgumentParser` generic). However, using `DataClass` as previously defined leads mypy to *raise errors* about passing perfectly legal dataclasses to `HfArgumentParser`. Using `DataClassProtocol` doesn't. I use mypy a lot, so I might be the only one that this bothers.\r\n\r\n\r\n> Do you have experience on Pyright vs. mypy? (in particular in the context of vscode)\r\n\r\nI've used only mypy so far. (And I find it so helpful that I'm willing to create PRs for type annotation :) )Also, I use Vim with plugins, so wouldn't be able to comment on the vscode aspect either.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@davidatbu I noticed you referenced [python/typing#193](https://github.com/python/typing/issues/193) on variadic generics in this thread. Heads up that we've been working on a draft of a PEP for this in [PEP 646](https://www.python.org/dev/peps/pep-0646/). If this is something you still care about, take a read and let us know any feedback in [this thread](https://mail.python.org/archives/list/[email protected]/thread/WFV5K2LK3LFQSO63X2KUOCK3VVLAQ374/) in typing-sig. Thanks!"
] | 1,596 | 1,613 | 1,602 | NONE | null | 1. This is purely a type annotation PR (pinging @sgugger), except for number 3 below.
2. Admittedly, this PR treats type annotations as a code-correctness tool rather than a documentation tool, which I haven't seen in the repo before.
3. The only non-type-annotation change is making `DataProcessor` inherit from `abc.ABC`. Along with that, the methods that previously had `raise NotImplementedError()` are now decorated with `@abc.abstractmethod`. This change was motivated by the fact that static analyzers like mypy can correctly identify unimplemented methods in subclasses.
This PR is a result of a question in a [post on the forum](https://discuss.huggingface.co/t/static-type-checking-with-mypy-whats-the-official-position/464).
Closes #6118 too. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6220/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6220/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6220",
"html_url": "https://github.com/huggingface/transformers/pull/6220",
"diff_url": "https://github.com/huggingface/transformers/pull/6220.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6220.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6219 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6219/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6219/comments | https://api.github.com/repos/huggingface/transformers/issues/6219/events | https://github.com/huggingface/transformers/pull/6219 | 672,177,788 | MDExOlB1bGxSZXF1ZXN0NDYyMjYxNTI2 | 6,219 | Add setup for TPU CI to run every hour. | {
"login": "zcain117",
"id": 14796584,
"node_id": "MDQ6VXNlcjE0Nzk2NTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/14796584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zcain117",
"html_url": "https://github.com/zcain117",
"followers_url": "https://api.github.com/users/zcain117/followers",
"following_url": "https://api.github.com/users/zcain117/following{/other_user}",
"gists_url": "https://api.github.com/users/zcain117/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zcain117/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zcain117/subscriptions",
"organizations_url": "https://api.github.com/users/zcain117/orgs",
"repos_url": "https://api.github.com/users/zcain117/repos",
"events_url": "https://api.github.com/users/zcain117/events{/privacy}",
"received_events_url": "https://api.github.com/users/zcain117/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"From previous PR:\r\n\r\n\r\n\r\n**LysandreJik 4 hours ago Member**\r\nShouldn't the bert-based-case and bertBasedCase be bert-base-cased and bertBaseCased?\r\n\r\nBy the way, why is this model specific? Are we using these values somewhere?\r\n\r\n\r\n\r\n**zcain117 19 minutes ago Author**\r\nFixed that typo.\r\n\r\nThe local bertBasedCase is just a name of that jsonnet variable, it just needs to be a unique string but several punctuation marks are not allowed in variable names. We recommend 1 variable like this per test. On our team we have 1 file per \"family\" of tests, e.g. 1 file for pytorch resnet50 where the file contains [short test, long test] X [v2-8 TPU, v3-8 TPU, v100 GPU(s)] for the same model on the same ML framework.\r\n\r\nThe modelName and frameworkPrefix are just used in generating the name of the GKE job. For example, a recent run was named: hf-bert-based-case-example-v3-8-vtgrj. Future runs will look like hf-bert-base-cased-example-v3-8-xxxxx",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6219?src=pr&el=h1) Report\n> Merging [#6219](https://codecov.io/gh/huggingface/transformers/pull/6219?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6695450a23545bc9d5416f39ab39609c7811c653&el=desc) will **decrease** coverage by `0.10%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6219?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6219 +/- ##\n==========================================\n- Coverage 78.54% 78.44% -0.11% \n==========================================\n Files 148 146 -2 \n Lines 27196 26586 -610 \n==========================================\n- Hits 21361 20855 -506 \n+ Misses 5835 5731 -104 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6219?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <0.00%> (-63.51%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `85.71% <0.00%> (-14.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `76.73% <0.00%> (-5.02%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `80.05% <0.00%> (-2.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.58% <0.00%> (-0.97%)` | :arrow_down: |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `81.55% <0.00%> (-0.58%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.36% <0.00%> (-0.43%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `95.49% <0.00%> (-0.20%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `80.19% <0.00%> (-0.20%)` | :arrow_down: |\n| ... and [34 more](https://codecov.io/gh/huggingface/transformers/pull/6219/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6219?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6219?src=pr&el=footer). Last update [6695450...6e4a41b](https://codecov.io/gh/huggingface/transformers/pull/6219?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@LysandreJik let me know if there's anything else you can think of here"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | Use GKE to run TPU CI testing once per hour using latest code in master branch. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6219/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6219",
"html_url": "https://github.com/huggingface/transformers/pull/6219",
"diff_url": "https://github.com/huggingface/transformers/pull/6219.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6219.patch",
"merged_at": 1596813428000
} |
https://api.github.com/repos/huggingface/transformers/issues/6218 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6218/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6218/comments | https://api.github.com/repos/huggingface/transformers/issues/6218/events | https://github.com/huggingface/transformers/issues/6218 | 672,172,720 | MDU6SXNzdWU2NzIxNzI3MjA= | 6,218 | Comparison different methods for benchmarking | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"My main reason for not using PyTorch's `max_memory_resevered` function is that there is some GPU memory that is used, but not accounted for.",
"> One can see that the difference is always 856 MB (besides one exception where it is 868 MB)\r\n\r\nI won't be surprised if the delta of 856MB that you reported - is the size of cudnn kernel (loaded). Could you please run a test to measure `torch.ones((1, 1)).cuda()` - if that's what you get then pytorch's tool should work just fine and w/o any complicated polling or potential conflicts - say 2 tests using the benchmarking framework happen to run at the same time on the same GPU - won't it fail in this scenario?\r\n\r\nI have and old Titan X card so it's around 600MB, the novel Tesla T4 is ~1GB, so your being 856MB fits into the ballpark of it.\r\n",
"Yeah taking your code: \r\n\r\n```python\r\n#!/usr/bin/env python3\r\nfrom transformers import is_torch_available\r\nimport torch\r\n\r\nif is_torch_available():\r\n from transformers import (\r\n PyTorchBenchmarkArguments,\r\n PyTorchBenchmark\r\n )\r\n\r\nMODEL_ID = \"facebook/bart-base\"\r\nss = 8\r\nbs = 1\r\n\r\nbenchmark_args = PyTorchBenchmarkArguments(\r\n models=[MODEL_ID],\r\n training=False,\r\n no_inference=False,\r\n sequence_lengths=[ss],\r\n batch_sizes=[bs],\r\n no_multi_process=False,\r\n no_cuda=False,\r\n no_speed=False,\r\n)\r\nbenchmark = PyTorchBenchmark(benchmark_args)\r\n\r\n# measure cudnn kernel memory consumption\r\n# now we have a baseline that can be subtracted from all the other usages\r\n\r\n\r\ndef run_cuda_kernel_load():\r\n torch.ones((1, 1)).cuda()\r\n\r\n\r\nmem, _ = benchmark._measure_memory(run_cuda_kernel_load)\r\nmem_load = mem.bytes\r\nprint(f\"Mem on load: {mem_load >> 20}MB\")\r\n```\r\n\r\ngives me a result of 796 MB -> so we still have 60MB which is different from CUDA/CUDNN kernel loading + PyTorch's `max_memory_reseverd` vs. `py3nvml` that are not accounted for, but this is negligible IMO .",
"So we have two options here:\r\n\r\n1) Leave the functionality as it is and add: \r\n\r\n```python\r\ndef run_cuda_kernel_load():\r\n torch.ones((1, 1)).cuda()\r\n```\r\nto measure CUDA/CUDNN kernel loading. \r\n\r\nThe advantage is that we can leave the same functionality for TF\r\n\r\n2) Change to `torch.cuda.max_memory_reserved` + `py3nvml` or another tool to measure CUDA/CUDNN kernel loading. \r\nThis option seems a bit safer and this way multiple processes could be run on the same GPU. Because measuring CUDA/CUDNN kernel loading cannot be done with `torch.cuda.max_memory_reserved` and relies on `py3nvml` or similar, I think we would run into the same problem here in that the result will not be correct if other processes run on the GPU. Or do you know how this can be measured without conflicting with another measurement on the same GPU at the same time? \r\n\r\n\r\nI guess 2) is the better option though -> it seems safer and 2 measurements can be done at once. Maybe we should also only return this result as a default and optionally let the user decide if he / she wants to return the MB required to load CUDA/CUDNN. In the graphs on the model pages we would then include the MB required to load the CUDA/CUDNN kernel.\r\n\r\nAfter thinking a bit more about RAM measurements when running on GPU - I think it makes actually more sense to only do this in combination with the new torch profiler: https://pytorch.org/docs/stable/autograd.html#profiler . I tried out the profiler and it gives very in-detail measurements for both CPU and GPU time and memory. For me this profiler is very useful for analysis of the code, e.g. which layer consumes how much memory / time, how much time / gpu is spent on CPU / GPU\r\n\r\nSo overall, IMO it would be nice to have 2 use cases: \r\n\r\na) Run peak memory usage (either on GPU or CPU) -> get one number for CPU, get one number for GPU (default to `torch.cuda.max_memory_reserved` and optionally add GPU CUDA/CUDNN kernel loading mem requirement). Here, I don't think we need to report CPU mem usage when model is run on GPU IMO. This would be very useful for ML engineers that want to use `transformers` in production.\r\n\r\nb) Do in detail analysis -> run the new torch profiler: https://pytorch.org/docs/stable/autograd.html#profiler. For PyTorch this can replace the line-by-line tracing completely IMO and cover the case when the user wants to track CPU *as well as* GPU usage when running on GPU. We would require PyTorch 1.6 for this, but this is ok IMO. This use case would also be more interesting for researcher and \"experts\" and less more ML \"engineers\" with less research background. \r\n\r\nI think the default should be to run only a), where as the user could optionally turn on b) to in-depth analysis.\r\n\r\nSince TF does not (yet) have these tools, we will still have to rely on what we currently have for TF, but for PyTorch I'm happy to switch more to actual \"Pytorch\" tools to track memory since it seems to give very similar/equal results as `py3nvml`.\r\n\r\nAlso looping in @LysandreJik @julien-c and @sshleifer here to hear their opinions on that.\r\n",
"Same thing goes for `speed` I guess:\r\n\r\na) Leave functionality as it is for general overall speed (maybe change 30 averaging to 10 + some warmup) -> return one number\r\nb) Use PyTorch profiler for in-detail profiling of CPU / GPU time. User can optionally turn this on.\r\n\r\n",
"BTW, the new profiler can be run like this: \r\n\r\n```python \r\n!/usr/bin/env python3\r\nfrom transformers import is_torch_available\r\nimport torch\r\n\r\nif is_torch_available():\r\n from transformers import (\r\n BertModel\r\n )\r\n\r\ndef run_model():\r\n model = BertModel.from_pretrained(\"bert-base-cased\")\r\n model.cuda()\r\n outputs = model(torch.tensor(32 * [128 * [0]]).cuda())\r\n return outputs\r\n\r\n\r\nwith torch.autograd.profiler.profile(use_cuda=True, profile_memory=True) as prof:\r\n run_model()\r\n\r\nprint(prof.table())\r\n```",
"> gives me a result of 796 MB -> so we still have 60MB which is different from CUDA/CUDNN kernel loading + PyTorch's `max_memory_reseverd` vs. `py3nvml` that are not accounted for, but this is negligible IMO .\r\n\r\nAs I mentioned earlier, I'm not sure \"reserved\" is the right function, as it involves caching. Try `torch.cuda.memory_allocated` (and for peak `torch.cuda.max_memory_allocated`) instead.",
"I'd say option #2, plus the code from option #1, so a user can still know the overhead of the cudnn kernel load.\r\n\r\nThank you for mentioning the profiler and the sample code - let me study and experiment with it and then I will be able to comment on your suggestions.",
"> Same thing goes for `speed` I guess:\r\n> \r\n> a) Leave functionality as it is for general overall speed (maybe change 30 averaging to 10 + some warmup) -> return one number\r\n\r\nPlus, I'd suggest to make `n_repeats` configurable, with a sensible default. e.g. when developing code I'd want to run `n_repeats`=1 - e.g. currently a large model takes a really long time to `__init__` when it's run 30 times.",
"Wrt returns, as we discussed, my suggestion is to have a full API that returns rich outputs, and then design shortcut wrappers that return just single specific bits, so that it makes test writing much less cluttered. i.e., removing a need for writing code like this as we have to do now:\r\n```\r\nmem, _ = benchmark._measure_memory(func)\r\nmem = mem.bytes\r\n```\r\nit should be possible to do:\r\n```\r\nmem = benchmark._measure_memory_bytes(func)\r\n```\r\nno unpacking, no retrieving.\r\n\r\n`_measure_memory_bytes` is just a possible name - we could think of something else.",
"Ah, one more thing we discussed - we need to return general RAM when the benchmark is run on GPU. Memory leaks mainly happen in general RAM. So the main API should include measuring and returning this data too, and flags/shortcuts to enable/disable the calculation and retrieval of this data.",
"2 more things to consider:\r\n\r\n1. Should we control these:\r\n```\r\ntorch.backends.cudnn.benchmark = True\r\ntorch.backends.cudnn.enabled = True\r\n```\r\nas these settings should impact peformance\r\n\r\n2. allow a fixed seed arg?",
"> BTW, the new profiler can be run like this:\r\n\r\nIt appears that profiler has been around for quite some time. Of course, its table dump is huge and is difficult to work with, and there are almost no examples of the profiler use out there.\r\n\r\nHere is what I came up with so far:\r\n\r\n```\r\nimport torch\r\nfrom transformers import BertModel\r\n\r\ndef run_model():\r\n model = BertModel.from_pretrained(\"bert-base-cased\")\r\n model.cuda()\r\n model(torch.tensor(32 * [128 * [0]]).cuda())\r\n\r\nwith torch.autograd.profiler.profile(use_cuda=True, profile_memory=True) as prof:\r\n _=run_model()\r\n\r\ncpu_time = sum([e.cpu_time_total for e in prof.key_averages()]) / 1000\r\ncuda_time = sum([e.cuda_time_total for e in prof.key_averages()]) / 1000\r\ncpu_mem = sum([e.cpu_memory_usage for e in prof.key_averages()]) >> 20\r\ncuda_mem = sum([e.cuda_memory_usage for e in prof.key_averages()]) >> 20 \r\n\r\nprint(f\"Device | Mem MB | Speed ms\")\r\nprint(f\"CPU | { cpu_mem:8} | {cpu_time:8.2f}\")\r\nprint(f\"GPU | {cuda_mem:8} | {cuda_time:8.2f}\")\r\n```\r\ngives\r\n```\r\nDevice | Mem MB | Speed ms\r\nCPU | 1 | 1835.97\r\nGPU | 13258 | 1846.77\r\n```\r\nI'm not sure yet whether any of this is correct - need to correlate with our profiling functions.\r\n\r\nFigured out the self vs. total (the profiler results has a set of `self_` attributes, in addition to `total_`) :\r\n* Total CPU: calls to the function, and functions called by the function,\r\n* Self CPU: calls to the function in the selected time range, excluding functions called by the function.\r\n\r\nSo we only care about total then.\r\n\r\nIf I add for comparison the measurements from our benchmark tools, I get mostly very different results - I run these on T4, unlike the ones above that were run on TitanX:\r\n\r\n```\r\nDevice: Tesla T4\r\nModel: bert-base-cased\r\n\r\ninit\r\nDevice | Mem MB | Speed ms\r\nCPU | 500 | 3339.74\r\nGPU | 0 | 3339.54\r\nOurs | 914 | 3289.81\r\n\r\nfwd\r\nDevice | Mem MB | Speed ms\r\nCPU | 0 | 97.47\r\nGPU | 27 | 105.76\r\nOurs | 920 | 12.34\r\n\r\nfwd-bwd\r\nDevice | Mem MB | Speed ms\r\nCPU | 0 | 193.97\r\nGPU | 1723 | 211.86\r\nOurs | 1540 | 24.95\r\n```\r\nThe last row labelled as **Ours** is benchmark's gpu `measure_memory` + `measure_speed` results. And the first two rows are from `torch.autograd.profiler.profile` as shown before.\r\n\r\nAs you can see only speed measurements for `init` match, the rest is dramatically different... \r\n\r\nIf anybody wants to continue experimenting, this is a WIP colab nb:\r\nhttps://colab.research.google.com/drive/1i-_lxUCuuTKn5Nhe4ENMJd5RgHVeMSZP?usp=sharing\r\n\r\nIt has a bunch of other experiments, but if you run all - just watch the results of the last cell. Warning: the code is unpolished.\r\n\r\n",
"One additional potential caveat for speed measurements is async code returning too early?\r\ni.e. needing to run: `torch.cuda.synchronize()` before finishing the speed measurements? Most likely since it's a separate process it's of no need.",
"Thanks for posting this! Yes, I think we should maybe stick to our code for now regarding the total time and memory. \r\nA while ago, @LysandreJik made some speed expeirements: https://docs.google.com/spreadsheets/d/1sryqufw2D0XlUH4sq3e9Wnxu5EAQkaohzrJbd5HdQ_w/edit which match the results given my `PyTorchBenchmark` very nicely, so I'm quite positive that the speed measurements are correct.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | MEMBER | null | Currently, the benchmarking tools make use of multiprocessing to be sure that all memory is released after each measurement, and make use of the `py3nvml` library to measure "peak GPU usage".
After some internal discussion, it is questionable whether the current code gives peak GPU memory usage. Thus, I ran a couple of experiments to see how torch benchmarking differs from `py3nvml`. It is known that there are differences in memory benchmarking, as explained here: https://stackoverflow.com/questions/62257967/why-does-a-single-conv2d-with-10x10x3-take-up-850mb-of-gpu#_=_
For a comparison, the following command was run:
```
python run_benchmark.py --models gpt2 bert-base-cased xlnet-base-cased --no_speed --save_to_csv --batch_sizes 8 64
```
The environment information is the following:
|transformers_version |3.0.2 |
|---------------------|---------------|
|framework |PyTorch |
|use_torchscript |False |
|framework_version |1.6.0 |
|python_version |3.6.10 |
|system |Linux |
|cpu |x86_64 |
|architecture |64bit |
|date |2020-08-03 |
|time |14:47:20.956286|
|fp16 |False |
|use_multiprocessing |True |
|only_pretrain_model |False |
|cpu_ram_mb |32088 |
|use_gpu |True |
|num_gpus |1 |
|gpu |TITAN RTX |
|gpu_ram_mb |24217 |
|gpu_power_watts |280.0 |
|gpu_performance_state|0 |
|use_tpu |False |
a) These are the results when running the command with the current code (`py3nvml`):
|model |batch_size |sequence_length|result|
|---------------------|---------------|---------------|------|
|gpt2 |8 |8 |1422 |
|gpt2 |8 |32 |1454 |
|gpt2 |8 |128 |1732 |
|gpt2 |8 |512 |2784 |
|gpt2 |64 |8 |1558 |
|gpt2 |64 |32 |2086 |
|gpt2 |64 |128 |4170 |
|gpt2 |64 |512 |12482 |
|bert-base-cased |8 |8 |1326 |
|bert-base-cased |8 |32 |1360 |
|bert-base-cased |8 |128 |1470 |
|bert-base-cased |8 |512 |2042 |
|bert-base-cased |64 |8 |1382 |
|bert-base-cased |64 |32 |1640 |
|bert-base-cased |64 |128 |2664 |
|bert-base-cased |64 |512 |7158 |
|xlnet-base-cased |8 |8 |1360 |
|xlnet-base-cased |8 |32 |1422 |
|xlnet-base-cased |8 |128 |1610 |
|xlnet-base-cased |8 |512 |2476 |
|xlnet-base-cased |64 |8 |1436 |
|xlnet-base-cased |64 |32 |1830 |
|xlnet-base-cased |64 |128 |3336 |
|xlnet-base-cased |64 |512 |10344 |
b) These are the results when using the function `torch.cuda.max_memory_reserved(torch.cuda.current_device())` instead:
|model |batch_size |sequence_length|result|
|---------------------|---------------|---------------|------|
|gpt2 |8 |8 |566 |
|gpt2 |8 |32 |598 |
|gpt2 |8 |128 |888 |
|gpt2 |8 |512 |1928 |
|gpt2 |64 |8 |702 |
|gpt2 |64 |32 |1230 |
|gpt2 |64 |128 |3314 |
|gpt2 |64 |512 |11626 |
|bert-base-cased |8 |8 |470 |
|bert-base-cased |8 |32 |504 |
|bert-base-cased |8 |128 |614 |
|bert-base-cased |8 |512 |1186 |
|bert-base-cased |64 |8 |526 |
|bert-base-cased |64 |32 |784 |
|bert-base-cased |64 |128 |1808 |
|bert-base-cased |64 |512 |6302 |
|xlnet-base-cased |8 |8 |504 |
|xlnet-base-cased |8 |32 |566 |
|xlnet-base-cased |8 |128 |754 |
|xlnet-base-cased |8 |512 |1620 |
|xlnet-base-cased |64 |8 |580 |
|xlnet-base-cased |64 |32 |974 |
|xlnet-base-cased |64 |128 |2480 |
|xlnet-base-cased |64 |512 |9488 |
One can see that the difference is always 856 MB (besides one exception where it is 868 MB). I ran the `py3nvml` benchmark multiple times and the result is very stable.
The same holds true when benchmarking training.
=> I tend to think that, the way the code is currently implemented, it actually gives the peak memory usage, even though I could not find proof of this in the https://github.com/fbcotter/py3nvml library.
@stas00 - what is your opinion on that? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6218/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6218/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6217 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6217/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6217/comments | https://api.github.com/repos/huggingface/transformers/issues/6217/events | https://github.com/huggingface/transformers/pull/6217 | 672,160,107 | MDExOlB1bGxSZXF1ZXN0NDYyMjQ2OTAx | 6,217 | Remove outdated BERT tips | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6217?src=pr&el=h1) Report\n> Merging [#6217](https://codecov.io/gh/huggingface/transformers/pull/6217?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/06f1692b023a701ab2bb443fa4f0bdd58c6bd234&el=desc) will **decrease** coverage by `0.64%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6217?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6217 +/- ##\n==========================================\n- Coverage 79.53% 78.88% -0.65% \n==========================================\n Files 146 146 \n Lines 26586 26586 \n==========================================\n- Hits 21145 20973 -172 \n- Misses 5441 5613 +172 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6217?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_outputs.py](https://codecov.io/gh/huggingface/transformers/pull/6217/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vdXRwdXRzLnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6217/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6217/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6217/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6217/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `80.30% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6217/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6217/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6217?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6217?src=pr&el=footer). Last update [b6b2f22...8a7c591](https://codecov.io/gh/huggingface/transformers/pull/6217?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I'd personally leave the first two tricks (users may have their custom script for padding and BERT is not good at language generation). For the third, as discussed, I agree.\r\nThe docstrings of `BaseModelOutputWithPooling` should be changed accordingly, but I can take care of that in a separate PR.",
"@sgugger I've restored tips no.1 and updated tips no.2. Also took care of `BaseModelOutputWithPooling`."
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | Why remove the tips:
> - BERT is a model with absolute position embeddings so it's usually advised to pad the inputs on
> the right rather than the left.
Yes, but since we don't provide an option to pad from the left, I think it's not necessary.
> - BERT was trained with a masked language modeling (MLM) objective. It is therefore efficient at predicting masked
> tokens and at NLU in general, but is not optimal for text generation. Models trained with a causal language
> modeling (CLM) objective are better in that regard.
No. T5 & BART proved it wrong.
> - Alongside MLM, BERT was trained using a next sentence prediction (NSP) objective using the [CLS] token as a sequence
> approximate. The user may use this token (the first token in a sequence built with special tokens) to get a sequence
> prediction rather than a token prediction. However, averaging over the sequence may yield better results than using
> the [CLS] token.
No. [CLS] can do learnable self-attention pooling, which is much better than parameter-free average pooling, especially when fine-tuned (w.r.t. SentenceBERT).
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6217/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6217",
"html_url": "https://github.com/huggingface/transformers/pull/6217",
"diff_url": "https://github.com/huggingface/transformers/pull/6217.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6217.patch",
"merged_at": 1596475076000
} |
https://api.github.com/repos/huggingface/transformers/issues/6216 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6216/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6216/comments | https://api.github.com/repos/huggingface/transformers/issues/6216/events | https://github.com/huggingface/transformers/pull/6216 | 672,131,018 | MDExOlB1bGxSZXF1ZXN0NDYyMjIyODIw | 6,216 | Add do_lower_case parameter to GPT2TokenizerFast and RobertaTokenizerFast tokenizers | {
"login": "poudro",
"id": 999699,
"node_id": "MDQ6VXNlcjk5OTY5OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/999699?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/poudro",
"html_url": "https://github.com/poudro",
"followers_url": "https://api.github.com/users/poudro/followers",
"following_url": "https://api.github.com/users/poudro/following{/other_user}",
"gists_url": "https://api.github.com/users/poudro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/poudro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/poudro/subscriptions",
"organizations_url": "https://api.github.com/users/poudro/orgs",
"repos_url": "https://api.github.com/users/poudro/repos",
"events_url": "https://api.github.com/users/poudro/events{/privacy}",
"received_events_url": "https://api.github.com/users/poudro/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6216?src=pr&el=h1) Report\n> Merging [#6216](https://codecov.io/gh/huggingface/transformers/pull/6216?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/06f1692b023a701ab2bb443fa4f0bdd58c6bd234&el=desc) will **decrease** coverage by `0.25%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6216?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6216 +/- ##\n==========================================\n- Coverage 79.53% 79.27% -0.26% \n==========================================\n Files 146 146 \n Lines 26586 26586 \n==========================================\n- Hits 21145 21076 -69 \n- Misses 5441 5510 +69 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6216?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6216/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `97.22% <ø> (ø)` | |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6216/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <ø> (-21.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6216/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.43% <0.00%> (-7.42%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6216/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6216/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6216/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6216/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.86% <0.00%> (-0.46%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6216/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.00% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6216/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6216/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6216?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6216?src=pr&el=footer). 
Last update [b6b2f22...ba68ac7](https://codecov.io/gh/huggingface/transformers/pull/6216?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | NONE | null | Fixes #6215
This adds lowercase handling to the GPT2TokenizerFast and RobertaTokenizerFast tokenizers | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6216/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6216/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6216",
"html_url": "https://github.com/huggingface/transformers/pull/6216",
"diff_url": "https://github.com/huggingface/transformers/pull/6216.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6216.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6215 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6215/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6215/comments | https://api.github.com/repos/huggingface/transformers/issues/6215/events | https://github.com/huggingface/transformers/issues/6215 | 672,130,112 | MDU6SXNzdWU2NzIxMzAxMTI= | 6,215 | Add `do_lower_case` handling to GPT2TokenizerFast and descendant tokenizers | {
"login": "poudro",
"id": 999699,
"node_id": "MDQ6VXNlcjk5OTY5OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/999699?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/poudro",
"html_url": "https://github.com/poudro",
"followers_url": "https://api.github.com/users/poudro/followers",
"following_url": "https://api.github.com/users/poudro/following{/other_user}",
"gists_url": "https://api.github.com/users/poudro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/poudro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/poudro/subscriptions",
"organizations_url": "https://api.github.com/users/poudro/orgs",
"repos_url": "https://api.github.com/users/poudro/repos",
"events_url": "https://api.github.com/users/poudro/events{/privacy}",
"received_events_url": "https://api.github.com/users/poudro/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I've just openned a pull request that handles this for GPT2TokenizerFast and RobertaTokenizerFast",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | NONE | null | I'm currently pretraining a model from scratch based on the RoBERTa architecture and am using a custom `ByteLevelBPETokenizer` trained on my data with `lowercase` set to `True`.
For ease of use, I load it with
```python
from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast(
    'custom-vocab.json',
    'custom-merges.txt',
)
```
The issue is that, unlike `BertTokenizerFast`, `RobertaTokenizerFast` (and its parent class `GPT2TokenizerFast`) doesn't handle the `do_lower_case` parameter.
I prefer using BPE rather than WordPiece for this task, so at the moment I make sure to lowercase text before tokenization, but this can be error-prone in the future.
Looking at the code, it seems `GPT2TokenizerFast` and descendants are the main ones lacking the parameter.
Would it be possible to add this?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6215/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6214 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6214/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6214/comments | https://api.github.com/repos/huggingface/transformers/issues/6214/events | https://github.com/huggingface/transformers/pull/6214 | 672,093,159 | MDExOlB1bGxSZXF1ZXN0NDYyMTkxNzYz | 6,214 | Fix _shift_right function in TFT5PreTrainedModel | {
"login": "maurice-g",
"id": 2892585,
"node_id": "MDQ6VXNlcjI4OTI1ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2892585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maurice-g",
"html_url": "https://github.com/maurice-g",
"followers_url": "https://api.github.com/users/maurice-g/followers",
"following_url": "https://api.github.com/users/maurice-g/following{/other_user}",
"gists_url": "https://api.github.com/users/maurice-g/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maurice-g/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maurice-g/subscriptions",
"organizations_url": "https://api.github.com/users/maurice-g/orgs",
"repos_url": "https://api.github.com/users/maurice-g/repos",
"events_url": "https://api.github.com/users/maurice-g/events{/privacy}",
"received_events_url": "https://api.github.com/users/maurice-g/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6214?src=pr&el=h1) Report\n> Merging [#6214](https://codecov.io/gh/huggingface/transformers/pull/6214?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9996f697e3ed7a0d6fe4348953723ad6b9d51477&el=desc) will **decrease** coverage by `0.17%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6214?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6214 +/- ##\n==========================================\n- Coverage 79.66% 79.48% -0.18% \n==========================================\n Files 146 146 \n Lines 26582 26584 +2 \n==========================================\n- Hits 21176 21131 -45 \n- Misses 5406 5453 +47 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6214?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6214/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `90.95% <100.00%> (+0.03%)` | :arrow_up: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6214/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6214/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6214/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6214/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `80.05% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6214/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6214/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6214?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6214?src=pr&el=footer). Last update [9996f69...30e2c28](https://codecov.io/gh/huggingface/transformers/pull/6214?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | See ticket #5991 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6214/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6214",
"html_url": "https://github.com/huggingface/transformers/pull/6214",
"diff_url": "https://github.com/huggingface/transformers/pull/6214.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6214.patch",
"merged_at": 1596464484000
} |
https://api.github.com/repos/huggingface/transformers/issues/6213 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6213/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6213/comments | https://api.github.com/repos/huggingface/transformers/issues/6213/events | https://github.com/huggingface/transformers/pull/6213 | 672,056,364 | MDExOlB1bGxSZXF1ZXN0NDYyMTYxMjY4 | 6,213 | [DataCollatorForLanguageModeling] fix labels | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6213?src=pr&el=h1) Report\n> Merging [#6213](https://codecov.io/gh/huggingface/transformers/pull/6213?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9996f697e3ed7a0d6fe4348953723ad6b9d51477&el=desc) will **decrease** coverage by `0.38%`.\n> The diff coverage is `50.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6213?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6213 +/- ##\n==========================================\n- Coverage 79.66% 79.27% -0.39% \n==========================================\n Files 146 146 \n Lines 26582 26583 +1 \n==========================================\n- Hits 21176 21075 -101 \n- Misses 5406 5508 +102 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6213?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6213/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `96.58% <50.00%> (-0.84%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6213/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.40% <0.00%> (-34.39%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6213/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6213/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.43% <0.00%> (-7.42%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6213/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6213/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6213/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6213/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.86% <0.00%> (-0.46%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6213/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.00% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6213/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `80.05% <0.00%> (-0.26%)` | :arrow_down: |\n| ... 
and [3 more](https://codecov.io/gh/huggingface/transformers/pull/6213/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6213?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6213?src=pr&el=footer). Last update [9996f69...a0d4291](https://codecov.io/gh/huggingface/transformers/pull/6213?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | MEMBER | null | This PR fixes #6211. Only set the ignore index (-100) when `pad_token_id` is not `None`.
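A simplified sketch of the change (the failing comparison lives in `data_collator.py`; this is not the literal diff):

```python
labels = inputs.clone()
# only mask padding out of the loss when the tokenizer actually has a pad token;
# GPT-2's pad_token_id is None, which previously crashed this comparison
if self.tokenizer.pad_token_id is not None:
    labels[labels == self.tokenizer.pad_token_id] = -100
```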
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6213/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6213",
"html_url": "https://github.com/huggingface/transformers/pull/6213",
"diff_url": "https://github.com/huggingface/transformers/pull/6213.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6213.patch",
"merged_at": 1596464376000
} |
https://api.github.com/repos/huggingface/transformers/issues/6212 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6212/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6212/comments | https://api.github.com/repos/huggingface/transformers/issues/6212/events | https://github.com/huggingface/transformers/pull/6212 | 672,047,415 | MDExOlB1bGxSZXF1ZXN0NDYyMTUzODg2 | 6,212 | [BartTokenizer] add prepare s2s batch | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks @sgugger for theses helpful suggestions!. Will keep these in mind for future PRs.",
"@sgugger , can you help me with the build_doc failure ? Thanks! ",
"Fixed, you needed to have the beginning of the docstrings on a new line for sphinx to understand the indentation.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6212?src=pr&el=h1) Report\n> Merging [#6212](https://codecov.io/gh/huggingface/transformers/pull/6212?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8edfaaa81b9995cedea2f8805e4c18c2b6cb5bfc&el=desc) will **increase** coverage by `0.05%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6212?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6212 +/- ##\n==========================================\n+ Coverage 78.29% 78.35% +0.05% \n==========================================\n Files 146 146 \n Lines 26607 26619 +12 \n==========================================\n+ Hits 20832 20856 +24 \n+ Misses 5775 5763 -12 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6212?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `96.38% <100.00%> (+0.61%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/6212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+1.80%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+2.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+2.73%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `81.00% <0.00%> (+14.00%)` | :arrow_up: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <0.00%> (+17.09%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.32% <0.00%> (+23.67%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.14% <0.00%> (+24.04%)` | :arrow_up: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6212/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6212?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6212?src=pr&el=footer). Last update [8edfaaa...b05ee8f](https://codecov.io/gh/huggingface/transformers/pull/6212?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> Fixed, you needed to have the beginning of the docstrings on a new line for sphinx to understand the indentation.\r\n\r\nThanks @sgugger !",
"Hi @LysandreJik \r\n> and not the fast tokenizer, is there a reason for that?\r\n\r\nNo, just forgot to add that.\r\n\r\nUpstream will be useful but we will need handle few cases differently for each seq2seq model i.e in case of t5 we manually need to add the deocder_start_token_id as T5 don't have a `bos` token. Also `eos' needs to be added manually. In case of mBart, it needs the language code as prefix token etc. And also AFAIK lot of people seem to be unfamiliar with the processors API",
"hi @sshleifer , @LysandreJik any update ?",
"@sshleifer updated the docs.",
"@LysandreJik I will add this for fast tokenizer too once this PR is merged.",
"Sounds good!",
"@LysandreJik , doc error is fixed, not sure if current failure is related to this PR."
] | 1,596 | 1,597 | 1,597 | MEMBER | null | This PR adds a `prepare_seq2seq_batch` method to `BartTokenizer`, as per the proposal in #6080.
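A sketch of the intended usage (illustrative texts; the argument names follow the new method):

```python
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
batch = tokenizer.prepare_seq2seq_batch(
    src_texts=["HuggingFace is based in NYC."],
    tgt_texts=["HuggingFace est basé à NYC."],
    return_tensors="pt",
)
# `batch` is a BatchEncoding holding the encoder inputs and the tokenized targets
```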
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6212/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6212",
"html_url": "https://github.com/huggingface/transformers/pull/6212",
"diff_url": "https://github.com/huggingface/transformers/pull/6212.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6212.patch",
"merged_at": 1597679086000
} |
https://api.github.com/repos/huggingface/transformers/issues/6211 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6211/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6211/comments | https://api.github.com/repos/huggingface/transformers/issues/6211/events | https://github.com/huggingface/transformers/issues/6211 | 672,036,196 | MDU6SXNzdWU2NzIwMzYxOTY= | 6,211 | Error when fine tuning GPT2 on GPU | {
"login": "cppntn",
"id": 26765504,
"node_id": "MDQ6VXNlcjI2NzY1NTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26765504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cppntn",
"html_url": "https://github.com/cppntn",
"followers_url": "https://api.github.com/users/cppntn/followers",
"following_url": "https://api.github.com/users/cppntn/following{/other_user}",
"gists_url": "https://api.github.com/users/cppntn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cppntn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cppntn/subscriptions",
"organizations_url": "https://api.github.com/users/cppntn/orgs",
"repos_url": "https://api.github.com/users/cppntn/repos",
"events_url": "https://api.github.com/users/cppntn/events{/privacy}",
"received_events_url": "https://api.github.com/users/cppntn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @antocapp thank you for reporting this. ignore index (-100) shouldn't be set when `pad_token_id` is `None`, which is the case with GPT-2 ",
"Hi @patil-suraj , thanks for the answer! Where should I set this parameter?\r\n\r\nedit: Never mind, just saw the fix you merged. Thanks!"
] | 1,596 | 1,596 | 1,596 | NONE | null | An error occurs when training with the `run_language_modeling.py` file (Ubuntu 18, PyTorch compiled with CUDA), while the error does not occur on a MacBook without a GPU:
```
File "train.py", line 283, in <module>
main()
File "train.py", line 247, in main
trainer.train(model_path=model_path)
File "/home/antonio/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 518, in train
for step, inputs in enumerate(epoch_iterator):
File "/home/antonio/anaconda3/lib/python3.7/site-packages/tqdm/_tqdm.py", line 1017, in __iter__
for obj in iterable:
File "/home/antonio/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 346, in __next__
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/antonio/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/home/antonio/anaconda3/lib/python3.7/site-packages/transformers/data/data_collator.py", line 90, in __call__
labels[labels == self.tokenizer.pad_token_id] = -100
TypeError: eq() received an invalid combination of arguments - got (NoneType), but expected one of:
* (Tensor other)
didn't match because some of the arguments have invalid types: (NoneType)
* (Number other)
didn't match because some of the arguments have invalid types: (NoneType)
```
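For reference, one way to sidestep the `None` comparison is to give the tokenizer an explicit pad token before building the collator (a sketch only; with GPT-2 this aliases the EOS token, so those positions would also be masked out of the loss):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
```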
Command and arguments used to run:
```bash
python train.py \
--output_dir ckpts/prova \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--do_train \
--train_data_file data.train \
--num_train_epochs 1 \
--per_device_train_batch_size 1 \
--gradient_accumulation_steps 1 \
--save_steps 200
```
Any idea on how to solve this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6211/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6210 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6210/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6210/comments | https://api.github.com/repos/huggingface/transformers/issues/6210/events | https://github.com/huggingface/transformers/issues/6210 | 672,034,178 | MDU6SXNzdWU2NzIwMzQxNzg= | 6,210 | XLM-R has extremely low accuracy after fine-tuning on MNLI | {
"login": "samsontmr",
"id": 15007950,
"node_id": "MDQ6VXNlcjE1MDA3OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samsontmr",
"html_url": "https://github.com/samsontmr",
"followers_url": "https://api.github.com/users/samsontmr/followers",
"following_url": "https://api.github.com/users/samsontmr/following{/other_user}",
"gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions",
"organizations_url": "https://api.github.com/users/samsontmr/orgs",
"repos_url": "https://api.github.com/users/samsontmr/repos",
"events_url": "https://api.github.com/users/samsontmr/events{/privacy}",
"received_events_url": "https://api.github.com/users/samsontmr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"~Someone mentioned that it could be due to the label indices being swapped for Roberta, but it seems there is already some [code](https://github.com/huggingface/transformers/blob/57eb1cb68d1c567b25ac256444e5c1a77b8817a7/src/transformers/data/datasets/glue.py#L100) to deal with that.~ Tested Roberta and it works fine.",
"Tried running `run_xnli.py` with `xlm-roberta-base` and got this error (seems like it could be related?):\r\n```\r\n08/06/2020 06:12:56 - INFO - transformers.data.processors.glue - *** Example ***\r\n08/06/2020 06:12:56 - INFO - transformers.data.processors.glue - guid: train-5\r\n08/06/2020 06:12:56 - INFO - transformers.data.processors.glue - features: InputFeatures(input_ids=[0, 152725, 17, 14192, 398, 2367, 21208, 2174, 398, 738, 27167, 3060, 111, 8382, 69686, 148100, 17, 831, 1957, 15400, 5036, 398, 3714, 1836, 242, 107, 20949, 1257, 23, 70, 75281, 15437, 37457, 2, 2, 581, 69686, 148100, 765, 10, 37457, 111, 112034, 6, 5, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], attention_mask=[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], token_type_ids=None, label=2)\r\n08/06/2020 06:12:56 - INFO - __main__ - Saving features into cached file /export/data/cached_train_xlm-roberta-base_128_xnli_en\r\nTraceback (most recent call last):\r\n File \"transformers/examples/text-classification/run_xnli.py\", line 617, in <module>\r\n main()\r\n File \"transformers/examples/text-classification/run_xnli.py\", line 570, in main\r\n train_dataset = load_and_cache_examples(args, args.task_name, tokenizer, evaluate=False)\r\n File \"transformers/examples/text-classification/run_xnli.py\", line 343, in load_and_cache_examples\r\n all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)\r\nTypeError: an integer is required (got type NoneType)\r\n```\r\n\r\n@LysandreJik any idea what could be going wrong with XLM-R?",
"I don't think XLM-R is handled by the XLNI script. We should update it to the new trainer API and make these few lines model-agnostic.\r\n\r\nhttps://github.com/huggingface/transformers/blob/45e26125de1b9fbae46837856b1f518a4b56eb65/examples/text-classification/run_xnli.py#L269-L273",
"I see, any idea what's going on with the GLUE/MNLI script? It seems to be using the new trainer API.",
"`run_pl_glue.py` has a similar issue in the dataloader.\r\n```\r\n File \"/export/multilingual-morpheus-analysis/transformers/examples/text-classification/lightning_base.py\", line 155, in setup\r\n dataloader = self.get_dataloader(\"train\", train_batch_size)\r\n File \"text-classification/run_pl_glue.py\", line 89, in get_dataloader\r\n all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)\r\nTypeError: an integer is required (got type NoneType)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@samsontmr\r\nHi, i have the same problem with `xlm-roberta-large` on MNLI using the official script. `xlm-roberta-large` has extremely low accuracy. any suggestion?"
] | 1,596 | 1,671 | 1,603 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Linux-4.14.138+-x86_64-with-debian-buster-sid
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0a0+3bbb36e (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
## Information
Model I am using (Bert, XLNet ...): XLM-R
The problem arises when using:
* [x] the official example scripts: (give details below)
The task I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
## To reproduce
Steps to reproduce the behavior:
1. Run the command below:
```
python -m torch.distributed.launch \
--nproc_per_node 8 transformers/examples/text-classification/run_glue.py \
--model_name_or_path xlm-roberta-base \
--task_name mnli \
--do_train \
--do_eval \
--data_dir ../data/MNLI \
--max_seq_length 128 \
--per_device_train_batch_size 8 \
--learning_rate 5e-5 \
--num_train_epochs 2 \
--save_steps 2000 \
--output_dir xlmr_base_mnli \
--overwrite_cache \
--overwrite_output_dir
```
```
eval_mnli/acc = 0.31818644931227713
eval_mnli-mm/acc = 0.318246541903987
```
## Expected behavior
Higher performance on MNLI. Running the script with `bert-base-multilingual-cased` instead yields around 80+ points, which is closer to the monolingual models' results. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6210/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6210/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6209 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6209/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6209/comments | https://api.github.com/repos/huggingface/transformers/issues/6209/events | https://github.com/huggingface/transformers/issues/6209 | 672,033,238 | MDU6SXNzdWU2NzIwMzMyMzg= | 6,209 | Tips for Tensorflow 2.0 NER task by using fit method. | {
"login": "rainmaker712",
"id": 10111029,
"node_id": "MDQ6VXNlcjEwMTExMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/10111029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rainmaker712",
"html_url": "https://github.com/rainmaker712",
"followers_url": "https://api.github.com/users/rainmaker712/followers",
"following_url": "https://api.github.com/users/rainmaker712/following{/other_user}",
"gists_url": "https://api.github.com/users/rainmaker712/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rainmaker712/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rainmaker712/subscriptions",
"organizations_url": "https://api.github.com/users/rainmaker712/orgs",
"repos_url": "https://api.github.com/users/rainmaker712/repos",
"events_url": "https://api.github.com/users/rainmaker712/events{/privacy}",
"received_events_url": "https://api.github.com/users/rainmaker712/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@aiscientist \r\nWhy would you need to use `model.fit()` method? I think it is a more low level way to train your model. There's an easy way to train HuggingFace models with `TFTrainer`. All you need to do, is to initialize pretrained config, tokenizer and model, create your own Dataset class for NER data, and just pass it to the trainer. It will do all the rest.",
"> @aiscientist\r\n> Why would you need to use `model.fit()` method? I think it is a more low level way to train your model. There's an easy way to train HuggingFace models with `TFTrainer`. All you need to do, is to initialize pretrained config, tokenizer and model, create your own Dataset class for NER data, and just pass it to the trainer. It will do all the rest.\r\n\r\nYes, I need to finish this job using model.fit() following from the TensorFlow guideline (Like Keras Style), but for NER task it's kinda hard to manage loss because of model.fit(). Using Pytorch is easier but, I need to use model.fit for this task because of several reasons. I just want to know that it is possible to use TF Token Classifier on model.fit(). If not, I need to discuss with my co-workers and find another way to do it.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
Any tips for implementing TF2 NER (a TF token classifier) using the `model.fit` method?
The official example does not use `model.fit` for the NER task.
Any links or tips will be appreciated; the sketch below shows the kind of pattern I'm after.
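Hypothetical sketch (the model name, loss handling, and dataset are my assumptions, not from any official example):

```python
import tensorflow as tf
from transformers import TFBertForTokenClassification

model = TFBertForTokenClassification.from_pretrained("bert-base-cased", num_labels=9)

# SparseCategoricalCrossentropy has no ignore_index, so this assumes padded
# positions carry a valid label id rather than the usual -100 sentinel
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=tf.keras.optimizers.Adam(3e-5), loss=loss)

# train_dataset is assumed to be a tf.data.Dataset yielding
# ({"input_ids": ..., "attention_mask": ...}, labels) pairs
model.fit(train_dataset, epochs=3)
```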
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6209/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6208 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6208/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6208/comments | https://api.github.com/repos/huggingface/transformers/issues/6208/events | https://github.com/huggingface/transformers/issues/6208 | 671,972,829 | MDU6SXNzdWU2NzE5NzI4Mjk= | 6,208 | The torch model's forward method body is empty; the Electra model cannot be trained. | {
"login": "ChrisChaw",
"id": 41299010,
"node_id": "MDQ6VXNlcjQxMjk5MDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/41299010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChrisChaw",
"html_url": "https://github.com/ChrisChaw",
"followers_url": "https://api.github.com/users/ChrisChaw/followers",
"following_url": "https://api.github.com/users/ChrisChaw/following{/other_user}",
"gists_url": "https://api.github.com/users/ChrisChaw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChrisChaw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChrisChaw/subscriptions",
"organizations_url": "https://api.github.com/users/ChrisChaw/orgs",
"repos_url": "https://api.github.com/users/ChrisChaw/repos",
"events_url": "https://api.github.com/users/ChrisChaw/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChrisChaw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | NONE | null | 

The image above shows the forward method called during model training; the code inside this method is empty. The image below shows the error that is raised. Could you please advise how to solve this problem? Thank you very much!!! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6208/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6207 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6207/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6207/comments | https://api.github.com/repos/huggingface/transformers/issues/6207/events | https://github.com/huggingface/transformers/pull/6207 | 671,961,733 | MDExOlB1bGxSZXF1ZXN0NDYyMDgyMTg5 | 6,207 | Problems with TFT5 | {
"login": "Guillem96",
"id": 21279306,
"node_id": "MDQ6VXNlcjIxMjc5MzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/21279306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Guillem96",
"html_url": "https://github.com/Guillem96",
"followers_url": "https://api.github.com/users/Guillem96/followers",
"following_url": "https://api.github.com/users/Guillem96/following{/other_user}",
"gists_url": "https://api.github.com/users/Guillem96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Guillem96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Guillem96/subscriptions",
"organizations_url": "https://api.github.com/users/Guillem96/orgs",
"repos_url": "https://api.github.com/users/Guillem96/repos",
"events_url": "https://api.github.com/users/Guillem96/events{/privacy}",
"received_events_url": "https://api.github.com/users/Guillem96/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing in favor of #6214 "
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | As far as we can tell, TensorFlow T5 has some problems:
- Shifting the inputs right should prepend a *start-of-sequence token* (the padding token in T5's case) to each sequence, but currently it just generates a useless sequence of zeros. This is due to this line of code:
```python
shifted_input_ids = tf.zeros_like(input_ids, dtype=tf.int32) # No matter what, this generates zeros
# ... and these 0s are then carried through the rest of the function
```
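For comparison, a minimal sketch of the intended behaviour (mirroring the PyTorch `_shift_right`, and assuming the start token defaults to the pad token):

```python
import tensorflow as tf

def shift_right(input_ids, decoder_start_token_id):
    batch_size = tf.shape(input_ids)[0]
    start_tokens = tf.fill([batch_size, 1], decoder_start_token_id)
    # prepend the start token and drop the last position
    return tf.concat([start_tokens, input_ids[:, :-1]], axis=-1)
```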
*Note*
We observe that PyTorch's `T5ForConditionalGeneration` loss is a scalar. In contrast, with `TFT5ForConditionalGeneration` the loss is a tensor where each element corresponds to a *non-ignored* label. We think the loss should be reduced. Is there any reason for keeping the loss *as is* instead of reducing it?
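If reduction is desired, a one-liner like this sketch (where `per_token_loss` stands for the tensor described above) would match the PyTorch default:

```python
loss = tf.reduce_mean(per_token_loss)  # mean over the non-ignored label positions
```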
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6207/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6207/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6207",
"html_url": "https://github.com/huggingface/transformers/pull/6207",
"diff_url": "https://github.com/huggingface/transformers/pull/6207.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6207.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6206 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6206/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6206/comments | https://api.github.com/repos/huggingface/transformers/issues/6206/events | https://github.com/huggingface/transformers/issues/6206 | 671,947,240 | MDU6SXNzdWU2NzE5NDcyNDA= | 6,206 | Error while saving electra model in tensorflow "savedModel" format | {
"login": "nirajkale",
"id": 40765055,
"node_id": "MDQ6VXNlcjQwNzY1MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/40765055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nirajkale",
"html_url": "https://github.com/nirajkale",
"followers_url": "https://api.github.com/users/nirajkale/followers",
"following_url": "https://api.github.com/users/nirajkale/following{/other_user}",
"gists_url": "https://api.github.com/users/nirajkale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nirajkale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nirajkale/subscriptions",
"organizations_url": "https://api.github.com/users/nirajkale/orgs",
"repos_url": "https://api.github.com/users/nirajkale/repos",
"events_url": "https://api.github.com/users/nirajkale/events{/privacy}",
"received_events_url": "https://api.github.com/users/nirajkale/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey!\r\n\r\nSorry for the inconvenience, there are actually some issues with saved model, but we are currently working on several fix in this PR https://github.com/huggingface/transformers/pull/5468",
"Thanks for quick response!\r\nWill wait for this PR to complete :-) ",
"The PR has been merged in master, can you retry and let us know if it is really fixed please? Otherwise I will work on it ^^",
"Just pulled the latest code from master & now the SavedModel export is working fine for Electra 👍. \r\nThanks a lot for your quick help!\r\nClosing the issue now.",
"@nirajkale Hi, i find ur model's: ahotrod/electra_large_discriminator_squad2_512\"\". Is this model a finetune model, i mean you already used electra_large to finetune a model with your task right. thanks for your reply."
] | 1,596 | 1,609 | 1,596 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Linux-5.3.0-1034-azure-x86_64-with-debian-buster-sid
- Python version: 3.6.10
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): 2.0.0 (True)
- Using GPU in script?: Yes (Tesla P40)
- Using distributed or parallel set-up in script?: No
tensorflow: @jplu
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
-->
## Information
Model I am using: ELECTRA, with `TFElectraForQuestionAnswering`.
The task I am working on is:
I'm trying to use [pre-trained electra-large](https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512) on the SQuAD 2.0 dataset.
The model seems to be working fine. The problem arises when I treat it as a Keras model and
try to save it in TensorFlow's binary/SavedModel format.
The reason I'm using the SavedModel format is that I want to host the model with TensorFlow Model Server.
## To reproduce
Steps to reproduce the behavior:
1. Load the model with `TFElectraForQuestionAnswering`
2. Add input layers
3. Save the model
```
from transformers import BertTokenizer, TFElectraForQuestionAnswering
import tensorflow as tf

model_name_or_path = 'ahotrod/electra_large_discriminator_squad2_512'
max_len = None  # dynamic sequence length

tokenizer = BertTokenizer.from_pretrained(model_name_or_path, cache_dir='transformers')
model = TFElectraForQuestionAnswering.from_pretrained(model_name_or_path, cache_dir='transformers')

# wrap the transformer in a plain Keras model so it can be exported as a SavedModel
input_ids = tf.keras.layers.Input(shape=(max_len,), name='input_ids', dtype='int32')
attention_mask = tf.keras.layers.Input(shape=(max_len,), name='attention_mask', dtype='int32')
token_type_ids = tf.keras.layers.Input(shape=(max_len,), name='token_type_ids', dtype='int32')
keras_input = [input_ids, attention_mask, token_type_ids]

qa_output = model(keras_input)
keras_model = tf.keras.Model(inputs=keras_input, outputs=qa_output)
print(keras_model.summary())
keras_model.save(r'exported/electra_large/0011')  # TF SavedModel format
```
When I run the above snippet, it gives the following error:
```
Exception has occurred: TypeError
Expected Operation, Variable, or Tensor, got None
  File "/datadrive/users/niraj/qa/export_electra.py", line 31, in <module>
    keras_model.save(r'exported/electra_large/0011')
```
## Expected behavior
I'm not able to figure out the reason for this, but the above snippet works if I use bert-large or any other model instead of ELECTRA. Probably something wrong with the modeling scripts for ELECTRA?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6206/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6205 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6205/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6205/comments | https://api.github.com/repos/huggingface/transformers/issues/6205/events | https://github.com/huggingface/transformers/pull/6205 | 671,701,558 | MDExOlB1bGxSZXF1ZXN0NDYxODY5MjMx | 6,205 | s2s: fix LR logging, remove some dead code. | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6205?src=pr&el=h1) Report\n> Merging [#6205](https://codecov.io/gh/huggingface/transformers/pull/6205?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/82a0e2b67ec94d28b20e24b3393644002bbd0d4b&el=desc) will **increase** coverage by `0.07%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6205?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6205 +/- ##\n==========================================\n+ Coverage 79.65% 79.73% +0.07% \n==========================================\n Files 146 146 \n Lines 26607 26607 \n==========================================\n+ Hits 21194 21214 +20 \n+ Misses 5413 5393 -20 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6205?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6205/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6205/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `81.70% <0.00%> (+1.00%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6205/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6205?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6205?src=pr&el=footer). Last update [82a0e2b...f39a67f](https://codecov.io/gh/huggingface/transformers/pull/6205?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6205/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6205/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6205",
"html_url": "https://github.com/huggingface/transformers/pull/6205",
"diff_url": "https://github.com/huggingface/transformers/pull/6205.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6205.patch",
"merged_at": 1596465386000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6204 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6204/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6204/comments | https://api.github.com/repos/huggingface/transformers/issues/6204/events | https://github.com/huggingface/transformers/issues/6204 | 671,667,437 | MDU6SXNzdWU2NzE2Njc0Mzc= | 6,204 | QA Loss Cleanup | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 2139563322,
"node_id": "MDU6TGFiZWwyMTM5NTYzMzIy",
"url": "https://api.github.com/repos/huggingface/transformers/labels/cleanup",
"name": "cleanup",
"color": "e7fc49",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"I will experiment. Thank you!\r\n",
"See: https://github.com/huggingface/transformers/pull/6430",
"So I guess we can close this one, since the experiment wasn't accepted.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,605 | 1,605 | CONTRIBUTOR | null | This snippet appears in a lot of places and could be factored out into a `calc_qa_loss(logits)` helper.
This requires some care, because I'm not sure how good the test coverage is, and if it doesn't improve readability we shouldn't do it. One possible shape for the helper is sketched after the snippet.
```python
logits = self.qa_outputs(sequence_output)
start_logits, end_logits = logits.split(1, dim=-1)
start_logits = start_logits.squeeze(-1)
end_logits = end_logits.squeeze(-1)
outputs = (start_logits, end_logits,) + outputs[2:]
if start_positions is not None and end_positions is not None:
# If we are on multi-GPU, split add a dimension
if len(start_positions.size()) > 1:
start_positions = start_positions.squeeze(-1)
if len(end_positions.size()) > 1:
end_positions = end_positions.squeeze(-1)
# sometimes the start/end positions are outside our model inputs, we ignore these terms
ignored_index = start_logits.size(1)
start_positions.clamp_(0, ignored_index)
end_positions.clamp_(0, ignored_index)
loss_fct = CrossEntropyLoss(ignore_index=ignored_index)
start_loss = loss_fct(start_logits, start_positions)
end_loss = loss_fct(end_logits, end_positions)
total_loss = (start_loss + end_loss) / 2
```
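For concreteness, a sketch of the helper (the name and signature are just a proposal, lifted from the snippet above):

```python
from torch.nn import CrossEntropyLoss

def calc_qa_loss(start_logits, end_logits, start_positions, end_positions):
    # if we are on multi-GPU, squeeze the extra dimension
    if start_positions.dim() > 1:
        start_positions = start_positions.squeeze(-1)
    if end_positions.dim() > 1:
        end_positions = end_positions.squeeze(-1)
    # start/end positions outside the model inputs are ignored
    ignored_index = start_logits.size(1)
    start_positions = start_positions.clamp(0, ignored_index)
    end_positions = end_positions.clamp(0, ignored_index)
    loss_fct = CrossEntropyLoss(ignore_index=ignored_index)
    start_loss = loss_fct(start_logits, start_positions)
    end_loss = loss_fct(end_logits, end_positions)
    return (start_loss + end_loss) / 2
```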
@stas00 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6204/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6204/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6203 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6203/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6203/comments | https://api.github.com/repos/huggingface/transformers/issues/6203/events | https://github.com/huggingface/transformers/issues/6203 | 671,653,996 | MDU6SXNzdWU2NzE2NTM5OTY= | 6,203 | Issue with fp16_opt_level default | {
"login": "c-col",
"id": 12224330,
"node_id": "MDQ6VXNlcjEyMjI0MzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/12224330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/c-col",
"html_url": "https://github.com/c-col",
"followers_url": "https://api.github.com/users/c-col/followers",
"following_url": "https://api.github.com/users/c-col/following{/other_user}",
"gists_url": "https://api.github.com/users/c-col/gists{/gist_id}",
"starred_url": "https://api.github.com/users/c-col/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/c-col/subscriptions",
"organizations_url": "https://api.github.com/users/c-col/orgs",
"repos_url": "https://api.github.com/users/c-col/repos",
"events_url": "https://api.github.com/users/c-col/events{/privacy}",
"received_events_url": "https://api.github.com/users/c-col/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"I have this issue too, reported to pl https://github.com/PyTorchLightning/pytorch-lightning/issues/2673 \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Linux-4.15.0-1091-oem-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes, 1 GPU
- Using distributed or parallel set-up in script?: No
### Who can help
## Information
Model I am using (Bert, XLNet ...): BART @sshleifer
The problem arises when using:
* [ ] the official example scripts:
* [x] my own modified scripts: Running a modified version of finetune.py (via finetune.sh) that specifies "bad word tokens" that BART is prohibited from generating
The task I am working on is:
* [ ] an official GLUE/SQUaD task:
* [x] my own task or dataset: Finetuning on a question understanding dataset
## To reproduce
Steps to reproduce the behavior:
1. Run finetune.sh using a GPU with fp16 flag and default settings, instructing the model to perform testing. See set-up below:
```
Defaults for this optimization level are:
enabled                : True
opt_level              : O2
cast_model_type        : torch.float16
patch_torch_functions  : False
keep_batchnorm_fp32    : True
master_weights         : True
loss_scale             : dynamic

Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled                : True
opt_level              : O2
cast_model_type        : torch.float16
patch_torch_functions  : False
keep_batchnorm_fp32    : True
master_weights         : True
loss_scale             : dynamic
```
2. At the start of testing, an error related to amp will be encountered:
```python
Traceback (most recent call last):
File "finetune_prohibition.py", line 444, in <module>
main(args)
File "finetune_prohibition.py", line 433, in main
trainer.test()
File "/home/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1281, in test
results = self.__test_using_best_weights(ckpt_path, test_dataloaders)
File "/home/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1321, in __test_using_best_weights
results = self.fit(model)
File "/home/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1003, in fit
results = self.single_gpu_train(model)
File "/home/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 182, in single_gpu_train
model, optimizers = model.configure_apex(amp, model, self.optimizers, self.amp_level)
File "/home/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py", line 1006, in configure_apex
model, optimizers = amp.initialize(model, optimizers, opt_level=amp_level)
File "/home/lib/python3.6/site-packages/apex/amp/frontend.py", line 358, in initialize
return _initialize(models, optimizers, _amp_state.opt_properties, num_losses, cast_model_outputs)
File "/home/lib/python3.6/site-packages/apex/amp/_initialize.py", line 171, in _initialize
check_params_fp32(models)
File "/home/lib/python3.6/site-packages/apex/amp/_initialize.py", line 87, in check_params_fp32
name, param.type()))
File "/home/lib/python3.6/site-packages/apex/amp/_amp_state.py", line 32, in warn_or_err
raise RuntimeError(msg)
RuntimeError: Found param model.model.shared.weight with type torch.cuda.HalfTensor, expected torch.cuda.FloatTensor.
When using amp.initialize, you do not need to call .half() on your model
before passing it, no matter what optimization level you choose.
```
## Expected behavior
When using the new default of fp16_opt_level=O2, this error is encountered; using fp16_opt_level=O1 solves the issue. I'm unsure whether this problem is specific to my machine, and would be interested to see whether others can recreate it. Although I used modified scripts, the modifications are simple and shouldn't interact with torch.cuda.HalfTensor classes differently. Let me know if you'd like me to try to recreate this issue on an official GLUE/SQUaD task or if you need any other information from me.
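For reference, a minimal sketch of why O1 avoids the check that fails above (hedged: `model` and `optimizer` are stand-ins, not the actual lightning module; this only illustrates the apex-level difference between the two opt levels):

```python
import torch
from apex import amp  # requires NVIDIA apex, as in the stack trace above

model = torch.nn.Linear(4, 4).cuda()  # stand-in for the BART lightning module
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# O1 patches torch functions and keeps parameters in fp32, so apex's
# check_params_fp32 never encounters torch.cuda.HalfTensor weights. O2 casts
# the model to fp16 first, which is what trips the error when amp is
# re-initialized during trainer.test().
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
```
 | {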
"url": "https://api.github.com/repos/huggingface/transformers/issues/6203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6203/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6202 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6202/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6202/comments | https://api.github.com/repos/huggingface/transformers/issues/6202/events | https://github.com/huggingface/transformers/issues/6202 | 671,633,854 | MDU6SXNzdWU2NzE2MzM4NTQ= | 6,202 | Cannot fine tune my distilbart-cnn-12-6 model because of cuda memory | {
"login": "Hildweig",
"id": 34550304,
"node_id": "MDQ6VXNlcjM0NTUwMzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/34550304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hildweig",
"html_url": "https://github.com/Hildweig",
"followers_url": "https://api.github.com/users/Hildweig/followers",
"following_url": "https://api.github.com/users/Hildweig/following{/other_user}",
"gists_url": "https://api.github.com/users/Hildweig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hildweig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hildweig/subscriptions",
"organizations_url": "https://api.github.com/users/Hildweig/orgs",
"repos_url": "https://api.github.com/users/Hildweig/repos",
"events_url": "https://api.github.com/users/Hildweig/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hildweig/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi - it looks like the memory blowup occurs during beam search (presumably during eval).\r\n\r\nIn `_generative_step` in `fine_tune.py`, feel free to play around with the `num_beams` parameter in the `self.model.generate` call. I can confirm that explicitly setting `num_beams=1` works on a V100 16GB. I believe the default is 5. beam search is very memory & computation intensive. Maybe for your final model evaluations, you can find a bigger GPU or just do on CPU. Greedy decoding should be OK during model development.\r\n\r\nAlso, if you look at the available distilled BART models (https://huggingface.co/sshleifer/distilbart-xsum-12-1), you'll see some options with fewer params than distilbart-cnn-12-6 (i.e., distilbart-cnn-12-1)\r\n\r\nplease let me know if this works!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | NONE | null | I'm trying to fine tune my model like this:
```
import os
os.environ['PYTHONPATH'] += ":/content/transformers/examples"
%cd "/content/transformers/examples"
!python /content/transformers/examples/seq2seq/finetune.py \
--learning_rate=3e-5 \
--fp16 \
--gpus 1 \
--do_train \
--do_predict \
--n_val 1000 \
--val_check_interval 0.1 \
--sortish_sampler \
--data_dir '/content/dataset' \
--train_batch_size=1 \
--eval_batch_size=1 \
--output_dir=distilbart_multi_news \
--num_train_epochs 1 \
--model_name_or_path /content/model/best_tfmr
```
But even with a batch size of 1 I get this error:
File "/content/transformers/examples/seq2seq/finetune.py", line 344, in <module>
main(args)
File "/content/transformers/examples/seq2seq/finetune.py", line 322, in main
logger=logger,
File "/content/transformers/examples/lightning_base.py", line 330, in generic_train
trainer.fit(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 918, in fit
self.single_gpu_train(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py", line 176, in single_gpu_train
self.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1076, in run_pretrain_routine
False)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 279, in _evaluate
output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 452, in evaluation_forward
output = model.validation_step(*args)
File "/content/transformers/examples/seq2seq/finetune.py", line 136, in validation_step
return self._generative_step(batch)
File "/content/transformers/examples/seq2seq/finetune.py", line 163, in _generative_step
generated_ids = self.model.generate(input_ids=source_ids, attention_mask=source_mask, use_cache=True,)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py", line 459, in generate
model_specific_kwargs=model_specific_kwargs,
File "/usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py", line 638, in _generate_beam_search
outputs = self(**model_inputs) # (batch_size * num_beams, cur_len, vocab_size)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bart.py", line 1005, in forward
lm_logits = F.linear(outputs[0], self.model.shared.weight, bias=self.final_logits_bias)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1676, in linear
output = input.matmul(weight.t())
RuntimeError: CUDA out of memory. Tried to allocate 100.00 MiB (GPU 0; 11.17 GiB total capacity; 10.59 GiB already allocated; 91.81 MiB free; 10.66 GiB reserved in total by PyTorch)
Any idea what to do?
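A minimal sketch of the `num_beams` change suggested in the comments above (hypothetical edit to `_generative_step`; only `num_beams` is new relative to the call shown in the traceback):

```python
# seq2seq/finetune.py, _generative_step: greedy decoding instead of beam search.
# _generate_beam_search works on batch_size * num_beams sequences at once, which
# is where the allocation above blows up.
generated_ids = self.model.generate(
    input_ids=source_ids,
    attention_mask=source_mask,
    use_cache=True,
    num_beams=1,  # assumption: the default for this model is 5
)
```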
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6202/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6201 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6201/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6201/comments | https://api.github.com/repos/huggingface/transformers/issues/6201/events | https://github.com/huggingface/transformers/pull/6201 | 671,601,961 | MDExOlB1bGxSZXF1ZXN0NDYxNzk1OTkx | 6,201 | Update model card | {
"login": "alisafaya",
"id": 22398153,
"node_id": "MDQ6VXNlcjIyMzk4MTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/22398153?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alisafaya",
"html_url": "https://github.com/alisafaya",
"followers_url": "https://api.github.com/users/alisafaya/followers",
"following_url": "https://api.github.com/users/alisafaya/following{/other_user}",
"gists_url": "https://api.github.com/users/alisafaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alisafaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alisafaya/subscriptions",
"organizations_url": "https://api.github.com/users/alisafaya/orgs",
"repos_url": "https://api.github.com/users/alisafaya/repos",
"events_url": "https://api.github.com/users/alisafaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/alisafaya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6201?src=pr&el=h1) Report\n> Merging [#6201](https://codecov.io/gh/huggingface/transformers/pull/6201?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/82a0e2b67ec94d28b20e24b3393644002bbd0d4b&el=desc) will **decrease** coverage by `0.06%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6201?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6201 +/- ##\n==========================================\n- Coverage 79.65% 79.58% -0.07% \n==========================================\n Files 146 146 \n Lines 26607 26607 \n==========================================\n- Hits 21194 21176 -18 \n- Misses 5413 5431 +18 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6201?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.18% <0.00%> (-34.62%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `87.50% <0.00%> (-9.73%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.55% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+5.76%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.19% <0.00%> (+63.97%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6201?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6201?src=pr&el=footer). Last update [82a0e2b...e752ac3](https://codecov.io/gh/huggingface/transformers/pull/6201?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6201/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6201",
"html_url": "https://github.com/huggingface/transformers/pull/6201",
"diff_url": "https://github.com/huggingface/transformers/pull/6201.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6201.patch",
"merged_at": 1596577455000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6200 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6200/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6200/comments | https://api.github.com/repos/huggingface/transformers/issues/6200/events | https://github.com/huggingface/transformers/pull/6200 | 671,601,921 | MDExOlB1bGxSZXF1ZXN0NDYxNzk1OTYz | 6,200 | Update model card | {
"login": "alisafaya",
"id": 22398153,
"node_id": "MDQ6VXNlcjIyMzk4MTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/22398153?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alisafaya",
"html_url": "https://github.com/alisafaya",
"followers_url": "https://api.github.com/users/alisafaya/followers",
"following_url": "https://api.github.com/users/alisafaya/following{/other_user}",
"gists_url": "https://api.github.com/users/alisafaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alisafaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alisafaya/subscriptions",
"organizations_url": "https://api.github.com/users/alisafaya/orgs",
"repos_url": "https://api.github.com/users/alisafaya/repos",
"events_url": "https://api.github.com/users/alisafaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/alisafaya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6200/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6200",
"html_url": "https://github.com/huggingface/transformers/pull/6200",
"diff_url": "https://github.com/huggingface/transformers/pull/6200.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6200.patch",
"merged_at": 1596577446000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6199 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6199/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6199/comments | https://api.github.com/repos/huggingface/transformers/issues/6199/events | https://github.com/huggingface/transformers/pull/6199 | 671,601,865 | MDExOlB1bGxSZXF1ZXN0NDYxNzk1OTE3 | 6,199 | Update model card | {
"login": "alisafaya",
"id": 22398153,
"node_id": "MDQ6VXNlcjIyMzk4MTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/22398153?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alisafaya",
"html_url": "https://github.com/alisafaya",
"followers_url": "https://api.github.com/users/alisafaya/followers",
"following_url": "https://api.github.com/users/alisafaya/following{/other_user}",
"gists_url": "https://api.github.com/users/alisafaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alisafaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alisafaya/subscriptions",
"organizations_url": "https://api.github.com/users/alisafaya/orgs",
"repos_url": "https://api.github.com/users/alisafaya/repos",
"events_url": "https://api.github.com/users/alisafaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/alisafaya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6199/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6199",
"html_url": "https://github.com/huggingface/transformers/pull/6199",
"diff_url": "https://github.com/huggingface/transformers/pull/6199.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6199.patch",
"merged_at": 1596577429000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6198 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6198/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6198/comments | https://api.github.com/repos/huggingface/transformers/issues/6198/events | https://github.com/huggingface/transformers/pull/6198 | 671,601,836 | MDExOlB1bGxSZXF1ZXN0NDYxNzk1ODk3 | 6,198 | Update model card | {
"login": "alisafaya",
"id": 22398153,
"node_id": "MDQ6VXNlcjIyMzk4MTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/22398153?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alisafaya",
"html_url": "https://github.com/alisafaya",
"followers_url": "https://api.github.com/users/alisafaya/followers",
"following_url": "https://api.github.com/users/alisafaya/following{/other_user}",
"gists_url": "https://api.github.com/users/alisafaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alisafaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alisafaya/subscriptions",
"organizations_url": "https://api.github.com/users/alisafaya/orgs",
"repos_url": "https://api.github.com/users/alisafaya/repos",
"events_url": "https://api.github.com/users/alisafaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/alisafaya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6198?src=pr&el=h1) Report\n> Merging [#6198](https://codecov.io/gh/huggingface/transformers/pull/6198?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/82a0e2b67ec94d28b20e24b3393644002bbd0d4b&el=desc) will **decrease** coverage by `0.80%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6198?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6198 +/- ##\n==========================================\n- Coverage 79.65% 78.85% -0.81% \n==========================================\n Files 146 146 \n Lines 26607 26607 \n==========================================\n- Hits 21194 20981 -213 \n- Misses 5413 5626 +213 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6198?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.18% <0.00%> (-34.62%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.19% <0.00%> (+63.97%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6198?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6198?src=pr&el=footer). Last update [82a0e2b...937eea4](https://codecov.io/gh/huggingface/transformers/pull/6198?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6198/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6198",
"html_url": "https://github.com/huggingface/transformers/pull/6198",
"diff_url": "https://github.com/huggingface/transformers/pull/6198.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6198.patch",
"merged_at": 1596577419000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6196 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6196/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6196/comments | https://api.github.com/repos/huggingface/transformers/issues/6196/events | https://github.com/huggingface/transformers/pull/6196 | 671,560,832 | MDExOlB1bGxSZXF1ZXN0NDYxNzY2NjAw | 6,196 | cleanup torch unittests | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6196?src=pr&el=h1) Report\n> Merging [#6196](https://codecov.io/gh/huggingface/transformers/pull/6196?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/82a0e2b67ec94d28b20e24b3393644002bbd0d4b&el=desc) will **decrease** coverage by `1.15%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6196?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6196 +/- ##\n==========================================\n- Coverage 79.65% 78.49% -1.16% \n==========================================\n Files 146 146 \n Lines 26607 26607 \n==========================================\n- Hits 21194 20886 -308 \n- Misses 5413 5721 +308 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6196?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `80.30% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+5.76%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.32% <0.00%> (+23.67%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6196?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6196?src=pr&el=footer). Last update [82a0e2b...f559e11](https://codecov.io/gh/huggingface/transformers/pull/6196?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I added one more `check_loss_output` that was missing, no changes otherwise.\r\n\r\nCI is randomly failing again..."
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | a large group of tests has been modified according to the request in https://github.com/huggingface/transformers/issues/5973
If this is what's needed then just running this magic perl sequence should take care of most of them:
```
perl -pi -e 's|^\s+self.check_loss_output\(result\)\n||' tests/test_modeling_bert.py
perl -0777 -pi -e 's|^\s+def check_loss_output\(self, result\):[\s\n]+ self.parent.assertListEqual\(list\(result\["loss"\].size\(\)\), \[]\)\s*\n|\n|msg' tests/test_modeling_bert.py
perl -0777 -pi -e 's#self.parent.assertListEqual\(
[\s\n]*
list\((result\w*)\[" ([^"]+) "\].(?:shape|size\(\))\),[\s\n]+\[ ( [^\]]* ) \],?
[\s\n]*
\)
#self.parent.assertEqual($1.$2.shape, ($3))#xmsg' tests/test_modeling_bert.py
```
(edit: adjusted for various inputs)
well, add:
```
make style
```
to fix the style.
Problem: not all results are objects; some are plain `dict`s and can't be accessed with `.key_name`. See my comment below.
@sshleifer
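For readers, a before/after sketch of what the third perl rule rewrites (illustrative snippet with made-up attribute names, not copied from the test files):

```python
# before: materialize the shape as a list and compare against a hand-written list
self.parent.assertListEqual(
    list(result["last_hidden_state"].shape),
    [self.batch_size, self.seq_length, self.hidden_size],
)

# after: compare the shape tuple directly via attribute access
self.parent.assertEqual(
    result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size)
)
```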
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6196/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6196",
"html_url": "https://github.com/huggingface/transformers/pull/6196",
"diff_url": "https://github.com/huggingface/transformers/pull/6196.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6196.patch",
"merged_at": 1596523377000
} |
https://api.github.com/repos/huggingface/transformers/issues/6195 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6195/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6195/comments | https://api.github.com/repos/huggingface/transformers/issues/6195/events | https://github.com/huggingface/transformers/pull/6195 | 671,555,720 | MDExOlB1bGxSZXF1ZXN0NDYxNzYxODEw | 6,195 | Encoder decoder config docs | {
"login": "afcruzs",
"id": 4340932,
"node_id": "MDQ6VXNlcjQzNDA5MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4340932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/afcruzs",
"html_url": "https://github.com/afcruzs",
"followers_url": "https://api.github.com/users/afcruzs/followers",
"following_url": "https://api.github.com/users/afcruzs/following{/other_user}",
"gists_url": "https://api.github.com/users/afcruzs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/afcruzs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/afcruzs/subscriptions",
"organizations_url": "https://api.github.com/users/afcruzs/orgs",
"repos_url": "https://api.github.com/users/afcruzs/repos",
"events_url": "https://api.github.com/users/afcruzs/events{/privacy}",
"received_events_url": "https://api.github.com/users/afcruzs/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6195?src=pr&el=h1) Report\n> Merging [#6195](https://codecov.io/gh/huggingface/transformers/pull/6195?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d8dbf3b75d58667e2ecaf42b4aa076e83d034d26&el=desc) will **increase** coverage by `0.32%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6195?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6195 +/- ##\n==========================================\n+ Coverage 79.47% 79.80% +0.32% \n==========================================\n Files 146 146 \n Lines 26607 26607 \n==========================================\n+ Hits 21146 21233 +87 \n+ Misses 5461 5374 -87 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6195?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <0.00%> (-63.98%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+23.42%)` | :arrow_up: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.80% <0.00%> (+25.66%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+34.61%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6195?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6195?src=pr&el=footer). Last update [d8dbf3b...4a03156](https://codecov.io/gh/huggingface/transformers/pull/6195?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@afcruzs - thanks a lot for your PR. I changed the examples a bit trying to make sure:\r\n1) Examples in `config` only concern the EncoderDecoderConfig\r\n2) Examples in `model` only concern the EncoderDecoderModel\r\n\r\nsorry for meddling in your PR.",
"Thanks for improving the examples! I've fixed the whitespace issue"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | As discussed on #5826, this PR adds more details on how to load encoder/decoder config objects from pretrained folders and how to instantiate encoder_decoder pretrained models given their corresponding configuration objects (useful for loading pre-trained models and modifying some config members for fine-tuning).
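A minimal sketch of the pattern the added docs cover (the checkpoint path and the tweaked field are placeholders; the point is the `from_pretrained(..., config=...)` combination):

```python
from transformers import EncoderDecoderConfig, EncoderDecoderModel

# Load the config from a pretrained folder and adjust members for fine-tuning,
# then instantiate the pretrained encoder-decoder model with that config.
config = EncoderDecoderConfig.from_pretrained("path/to/encoder-decoder-checkpoint")
config.decoder.hidden_dropout_prob = 0.2  # example tweak, assuming a BERT-style decoder config

model = EncoderDecoderModel.from_pretrained("path/to/encoder-decoder-checkpoint", config=config)
```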
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6195/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6195",
"html_url": "https://github.com/huggingface/transformers/pull/6195",
"diff_url": "https://github.com/huggingface/transformers/pull/6195.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6195.patch",
"merged_at": 1596525809000
} |
https://api.github.com/repos/huggingface/transformers/issues/6194 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6194/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6194/comments | https://api.github.com/repos/huggingface/transformers/issues/6194/events | https://github.com/huggingface/transformers/issues/6194 | 671,480,852 | MDU6SXNzdWU2NzE0ODA4NTI= | 6,194 | longformertokenizerFast gives error | {
"login": "manishiitg",
"id": 1370315,
"node_id": "MDQ6VXNlcjEzNzAzMTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1370315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manishiitg",
"html_url": "https://github.com/manishiitg",
"followers_url": "https://api.github.com/users/manishiitg/followers",
"following_url": "https://api.github.com/users/manishiitg/following{/other_user}",
"gists_url": "https://api.github.com/users/manishiitg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manishiitg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manishiitg/subscriptions",
"organizations_url": "https://api.github.com/users/manishiitg/orgs",
"repos_url": "https://api.github.com/users/manishiitg/repos",
"events_url": "https://api.github.com/users/manishiitg/events{/privacy}",
"received_events_url": "https://api.github.com/users/manishiitg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@patrickvonplaten please see. if relevant ",
"Hi @manishiitg , can you post the command/code you used to run this example ? Won't be able to re-produce from the stack-trace.",
"Thanks for answering @patil-suraj . As @patil-suraj said, we need some code and also the environment info (it's empty above) to better answer here :-) ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | NONE | null | ## Environment info
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ X ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ X ] my own task or dataset: (give details below)
## To reproduce
Using `LongformerTokenizerFast` gives the error below, while using `LongformerTokenizer` works without any issues, keeping everything else the same:
```
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-39-263240bbee7e> in <module>
----> 1 main()
<ipython-input-37-2c27a8a4db79> in main()
99 )
100
--> 101 train_dataset = CustomDataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
102 eval_dataset = CustomDataset(data_args, tokenizer=tokenizer, mode="test") if training_args.do_eval else None
103
<ipython-input-36-85278feb74ec> in __init__(self, args, tokenizer, limit_length, mode)
184 max_length=args.max_seq_length,
185 label_list=label_list,
--> 186 output_mode=self.output_mode,
187 )
188 start = time.time()
/opt/conda/lib/python3.7/site-packages/transformers/data/processors/glue.py in glue_convert_examples_to_features(examples, tokenizer, max_length, task, label_list, output_mode)
63 return _tf_glue_convert_examples_to_features(examples, tokenizer, max_length=max_length, task=task)
64 return _glue_convert_examples_to_features(
---> 65 examples, tokenizer, max_length=max_length, task=task, label_list=label_list, output_mode=output_mode
66 )
67
/opt/conda/lib/python3.7/site-packages/transformers/data/processors/glue.py in _glue_convert_examples_to_features(examples, tokenizer, max_length, task, label_list, output_mode)
133 max_length=max_length,
134 padding="max_length",
--> 135 truncation=True,
136 )
137
/opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
1918 return_length=return_length,
1919 verbose=verbose,
-> 1920 **kwargs,
1921 )
1922 else:
/opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding, truncation, max_length, stride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2103 return_length=return_length,
2104 verbose=verbose,
-> 2105 **kwargs,
2106 )
2107
/opt/conda/lib/python3.7/site-packages/transformers/tokenization_gpt2.py in _batch_encode_plus(self, *args, **kwargs)
385 )
386
--> 387 return super()._batch_encode_plus(*args, **kwargs)
388
389 def _encode_plus(self, *args, **kwargs) -> BatchEncoding:
/opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py in _batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
378 else:
379 encodings = self._tokenizer.encode_batch(
--> 380 batch_text_or_text_pairs, add_special_tokens=add_special_tokens, is_pretokenized=is_pretokenized
381 )
382
/opt/conda/lib/python3.7/site-packages/tokenizers/implementations/base_tokenizer.py in encode_batch(self, inputs, is_pretokenized, add_special_tokens)
247 raise ValueError("encode_batch: `inputs` can't be `None`")
248
--> 249 return self._tokenizer.encode_batch(inputs, is_pretokenized, add_special_tokens)
250
251 def decode(self, ids: List[int], skip_special_tokens: Optional[bool] = True) -> str:
Exception: Truncation error: Specified max length is too low to respect the various constraints
```
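A minimal sketch of the failing call pattern (texts and `max_length` are placeholders; the actual run goes through `glue_convert_examples_to_features` with my `max_seq_length`):

```python
from transformers import LongformerTokenizer, LongformerTokenizerFast

texts, pairs = ["first sentence"], ["second sentence"]
kwargs = dict(max_length=128, padding="max_length", truncation=True)

slow = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
fast = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")

# Identical arguments: the slow tokenizer pads/truncates without complaint, while
# the fast one can raise the truncation exception above, depending on max_seq_length.
print(len(slow(texts, pairs, **kwargs)["input_ids"][0]))
print(len(fast(texts, pairs, **kwargs)["input_ids"][0]))
```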
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6194/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6193 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6193/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6193/comments | https://api.github.com/repos/huggingface/transformers/issues/6193/events | https://github.com/huggingface/transformers/issues/6193 | 671,280,841 | MDU6SXNzdWU2NzEyODA4NDE= | 6,193 | Some weights not initialized in pre-trained RobertaForMaskedLM | {
"login": "HarshTrivedi",
"id": 3285313,
"node_id": "MDQ6VXNlcjMyODUzMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3285313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HarshTrivedi",
"html_url": "https://github.com/HarshTrivedi",
"followers_url": "https://api.github.com/users/HarshTrivedi/followers",
"following_url": "https://api.github.com/users/HarshTrivedi/following{/other_user}",
"gists_url": "https://api.github.com/users/HarshTrivedi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HarshTrivedi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HarshTrivedi/subscriptions",
"organizations_url": "https://api.github.com/users/HarshTrivedi/orgs",
"repos_url": "https://api.github.com/users/HarshTrivedi/repos",
"events_url": "https://api.github.com/users/HarshTrivedi/events{/privacy}",
"received_events_url": "https://api.github.com/users/HarshTrivedi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! These warnings are not important, as these weights are not necessary (the position IDs is a buffer that is initialized if not defined, and the lm head decoder bias already exists in the lm head decoder linear layer).\r\n\r\nOn the `master` branch we've updated the warnings to only list those that could have an impact, so running your code on the current master branch results in:\r\n\r\n```py\r\nWARNING:transformers.modeling_utils:Some weights of RobertaForMaskedLM were not initialized from the model checkpoint at roberta-base and are newly initialized: ['lm_head.decoder.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```\r\n\r\nSo nothing wrong here! Did you make sure you were using the exact same implementation of perplexity calculation than Fairseq?",
"Thanks for the clarification! The difference in `fairseq` and `transformers` ppl is coming from different implementation - `e^cross_entropy` ([in transformers](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py#L274)) vs `2^cross_entropy` ([in fairseq](https://github.com/pytorch/fairseq/blob/master/fairseq/utils.py#L418)). Nothing wrong here! : )",
"@LysandreJik \r\nHi, did not understand \"the lm head decoder bias already exists in the lm head decoder linear layer\", but it still warnings that lm_head.decoder.bias is newly initialized? I'm confused, sorry. Could you elaborate more about why this is not a problem?",
"Hi @cloudygoose, I recommend you take a look at the following class: https://github.com/huggingface/transformers/blob/b8462b5b2ac84f63293900ae168dbde039443a22/src/transformers/models/roberta/modeling_roberta.py#L1065-L1087\r\n\r\nThe error tells you that the following weight: `lm_head.decoder.bias` was not initialized from the model checkpoint: the model checkpoint did not contain that weight.\r\n\r\nHowever, the weight `lm_head.bias` isn't in the error because it was correctly initialized. If you take a look at the last line of the initialization of the class above, you'll see:\r\n\r\n```py\r\n # Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings` \r\n self.decoder.bias = self.bias \r\n```\r\n\r\nTherefore, the `lm_head.decoder.bias` weight that was not initialized is now set to the value of `self.bias`, which is correctly initialized.\r\n\r\nLet me know if something isn't clear.",
"Hi @LysandreJik,\r\n\r\nI am trying to make a custom model based on Roberta. I use `RobertaModel` internally.\r\n\r\nThe warning is:\r\n```bash\r\nSome weights of the model checkpoint at roberta-large were not used when initializing RobertaModelForTokenAndSpans: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight']\r\n- This IS expected if you are initializing RobertaModelForTokenAndSpans from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing RobertaModelForTokenAndSpans from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of RobertaModelForTokenAndSpans were not initialized from the model checkpoint at roberta-large and are newly initialized: ['roberta.embeddings.position_ids', 'classifier.weight', 'classifier.bias', 'qa_outputs.weight', 'qa_outputs.bias']\r\n```\r\n\r\n`classifier` and `qa_outputs` are my own layers. I assume from the previous release (4.2.2), this warning has not been removed?\r\nWill having `roberta.embeddings.position_ids` not initialized from roberta affect things in any way?",
"This warning shouldn't be removed, it's telling you what it initializes randomly, and what isn't used. Apparently it's not an issue in your case since you're aware of it, so that's great!\r\n\r\nNot an issue for the position IDs, this warning should have been removed in version v4.3.2, though!",
"I meant the warning about for position IDs.\r\nThanks a lot @LysandreJik :)",
"Happy to help!"
] | 1,596 | 1,613 | 1,596 | CONTRIBUTOR | null | The bug is similar to #2202.
I am trying to evaluate MLM perplexity (without training/finetuning) using Roberta with `run_language_modeling.py` (from the [official example](https://github.com/huggingface/transformers/tree/master/examples/language-modeling)). However, some weights seem to be reinitialized instead of being loaded from the pretrained Roberta checkpoint.
## To Reproduce (~~with master branch~~):
```
import logging
logging.basicConfig(level=logging.INFO)
from transformers import RobertaForMaskedLM
_ = RobertaForMaskedLM.from_pretrained('roberta-base')
```
It gives the following warning message:
```
WARNING:transformers.modeling_utils:Some weights of RobertaForMaskedLM were not initialized from the model checkpoint at roberta-base and are newly initialized: ['roberta.embeddings.position_ids', 'lm_head.decoder.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
The perplexities I get on direct evaluation on the Wikitext-2/103 datasets are also much higher than with the official Roberta implementation from fairseq. I suspect this could be the reason.
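For reference, the two perplexity conventions involved (a sketch mirroring the implementations linked in the comments above; `loss` stands in for the mean token cross-entropy):

```python
import math

loss = 2.0  # placeholder mean cross-entropy
print(math.exp(loss))  # e^CE, as in examples/language-modeling/run_language_modeling.py
print(2 ** loss)       # 2^CE, as in fairseq/utils.py
```
 | {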
"url": "https://api.github.com/repos/huggingface/transformers/issues/6193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6193/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6192 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6192/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6192/comments | https://api.github.com/repos/huggingface/transformers/issues/6192/events | https://github.com/huggingface/transformers/issues/6192 | 671,112,086 | MDU6SXNzdWU2NzExMTIwODY= | 6,192 | GPT2 crashing at loss.backward() | {
"login": "vibhavagarwal5",
"id": 23319631,
"node_id": "MDQ6VXNlcjIzMzE5NjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/23319631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vibhavagarwal5",
"html_url": "https://github.com/vibhavagarwal5",
"followers_url": "https://api.github.com/users/vibhavagarwal5/followers",
"following_url": "https://api.github.com/users/vibhavagarwal5/following{/other_user}",
"gists_url": "https://api.github.com/users/vibhavagarwal5/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vibhavagarwal5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vibhavagarwal5/subscriptions",
"organizations_url": "https://api.github.com/users/vibhavagarwal5/orgs",
"repos_url": "https://api.github.com/users/vibhavagarwal5/repos",
"events_url": "https://api.github.com/users/vibhavagarwal5/events{/privacy}",
"received_events_url": "https://api.github.com/users/vibhavagarwal5/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @vibhavagarwal5 , you can safely ignore this warning, that issue is resolved in this PR #5922 .\r\n\r\nDo you think you can post the stack-trace after the crash, and also the version and memory of the GPU used",
"**GPU DETAILS:** NVIDIA 2080 TI (12GB)\r\n\r\nNVIDIA-SMI 440.95.01\r\nDriver Version: 440.95.01\r\nCUDA Version: 10.2\r\n\r\n**STACK TRACE:**\r\n```\r\nEpoch: 0%| | 0/3 [00:00<?, ?it/sTraceback (most recent call last): | 0/87472 [00:00<?, ?it/s]\r\n File \"finetune_lm.py\", line 553, in <module>\r\n main()\r\n File \"finetune_lm.py\", line 507, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \"finetune_lm.py\", line 157, in train\r\n loss.backward()\r\n File \"/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/tensor.py\", line 198, in backward\r\n torch.autograd.backward(self, gradient, retain_graph, create_graph)\r\n File \"/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/autograd/__init__.py\", line 100, in backward\r\n allow_unreachable=True) # allow_unreachable flag\r\nRuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)` (createCublasHandle at /opt/conda/conda-bld/pytorch_1587428266983/work/aten/src/ATen/cuda/CublasHandlePool.cpp:8)\r\nframe #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x4e (0x7f6cca012b5e in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libc10.so)\r\nframe #1: <unknown function> + 0xdba405 (0x7f6ccaff9405 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)\r\nframe #2: at::cuda::getCurrentCUDABlasHandle() + 0x94c (0x7f6ccaffa1ec in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)\r\nframe #3: <unknown function> + 0xdafb01 (0x7f6ccafeeb01 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)\r\nframe #4: <unknown function> + 0x1263db7 (0x7f6ccb4a2db7 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)\r\nframe #5: THCudaTensor_addmm + 0x5c (0x7f6ccb4a84ac in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)\r\nframe #6: <unknown function> + 0xea5f28 (0x7f6ccb0e4f28 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)\r\nframe #7: <unknown function> + 0xdc92e8 (0x7f6ccb0082e8 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)\r\nframe #8: <unknown function> + 0xe224d0 (0x7f6cf5c264d0 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #9: <unknown function> + 0x29f9d0e (0x7f6cf77fdd0e in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #10: <unknown function> + 0xe224d0 (0x7f6cf5c264d0 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #11: at::Tensor::mm(at::Tensor const&) const + 0xf0 (0x7f6cf57ea180 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #12: <unknown function> + 0x264517c (0x7f6cf744917c in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #13: torch::autograd::generated::MmBackward::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) + 0x151 (0x7f6cf7449f81 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #14: <unknown function> + 0x2ae8215 (0x7f6cf78ec215 in 
/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #15: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&) + 0x16f3 (0x7f6cf78e9513 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #16: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&, bool) + 0x3d2 (0x7f6cf78ea2f2 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #17: torch::autograd::Engine::thread_init(int) + 0x39 (0x7f6cf78e2969 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #18: torch::autograd::python::PythonEngine::thread_init(int) + 0x38 (0x7f6cfac29558 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_python.so)\r\nframe #19: <unknown function> + 0xc819d (0x7f6d1200b19d in /mnt/c7cfa338-89cd-4d15-b0b9-f1befc9a2c68/vibhav/anaconda3/envs/vesnli/bin/../lib/libstdc++.so.6)\r\nframe #20: <unknown function> + 0x76db (0x7f6d269a16db in /lib/x86_64-linux-gnu/libpthread.so.0)\r\nframe #21: clone + 0x3f (0x7f6d266caa3f in /lib/x86_64-linux-gnu/libc.so.6)\r\n```",
"Seems like memory error, you can try running with batch size 1 and see if it still crashes.",
"Nope not a memory error. Still crashing with batch size 1",
"Which GPT-2 model are you using ? ",
"Tried both 'gpt2' and 'gpt2-medium'. Same issue",
"Could you run it in on CPU, erros will be more readable.",
"```\r\nraceback (most recent call last): | 0/174944 [00:00<?, ?it/s]\r\n File \"finetune_lm.py\", line 553, in <module>\r\n main()\r\n File \"finetune_lm.py\", line 507, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \"finetune_lm.py\", line 144, in train\r\n outputs = model(inputs, labels=labels)\r\n File \"/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/transformers/modeling_gpt2.py\", line 601, in forward\r\n output_hidden_states=output_hidden_states,\r\n File \"/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/transformers/modeling_gpt2.py\", line 469, in forward\r\n inputs_embeds = self.wte(input_ids)\r\n File \"/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/nn/modules/sparse.py\", line 114, in forward\r\n self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n File \"/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/nn/functional.py\", line 1724, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nIndexError: index out of range in self\r\n```",
"So looks like error is form the embedding layer, what's the shape of your `inputs`",
"bs x seq_len (4x63 or 4*52 ... anything)",
"could you be more specific, what is your seq_len ? ",
"Input:\r\n```\r\ntorch.Size([4, 47])\r\nPremise: Children smiling and waving at camera Hypothesis: There are children present [EXP] The children must be present to see them smiling and waving. [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS]\r\n```\r\nlabel:\r\n```\r\ntorch.Size([4, 47])\r\n[None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, 'The', 'Ġchildren', 'Ġmust', 'Ġbe', 'Ġpresent', 'Ġto', 'Ġsee', 'Ġthem', 'Ġsmiling', 'Ġand', 'Ġwaving', '.', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]']\r\n```",
"Did you add any new tokens to the tokenizer ?\r\n\r\nget the shape of embeddings using `model.transformer.wte.weight.shape` the first dim of shape and len of tokenizer should match. See if this asserts is True\r\n```python3\r\nassert modle.transformer.wte.weight.shape[0] == len(tokenizer)\r\n```\r\n\r\nif not then that means, your vocab size and embed input size are not matching. If you added new tokens to the vocab, you'll need to resize the token embeddings of the model. You can resize it using\r\n\r\n```python3\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n```\r\n\r\n",
"I'm doing this already..\r\n\r\n```python\r\ntokenizer.add_special_tokens(SPECIAL_TOKENS_DICT)\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n```",
"I was able to reprouce the bug only when embedding size and vocab len didn't match. \r\n\r\n`assert modle.transformer.wte.weight.shape[0] == len(tokenizer)` did this assert result in `False` ?",
"No, its True because I did the model.resize so this should have anyways asserted True.",
"For whatever reason, `transformers v2.3` is working and the latest `3.x`",
"Hi @vibhavagarwal5, could you provide a sample script so that we may reproduce on our side? Something with a sample text that makes it crash would be wonderful, if you have one.",
"I figured it out, it was due to the change in ignore_index=-100 instead of -1 in the cross entropy loss which was causing the issue. I'll close this. ",
"Glad you could find the source of the issue!"
] | 1,596 | 1,596 | 1,596 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Ubuntu
- Python version: 3.6
- PyTorch version (GPU?): 1.5.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
@LysandreJik
## Information
Trying to fine-tune a GPT2 model, but the run crashes on GPU after `loss.backward()`. I thought it might just be my code, but I ran some different GPT2 fine-tuning code and it crashed in the same manner.
Getting this warning as well.
```
WARNING - transformers.modeling_utils - Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at gpt2 and are newly initialized: ['h.0.attn.masked_bias', 'h.1.attn.masked_bias', 'h.2.attn.masked_bias', 'h.3.attn.masked_bias', 'h.4.attn.masked_bias', 'h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.7.attn.masked_bias', 'h.8.attn.masked_bias', 'h.9.attn.masked_bias', 'h.10.attn.masked_bias', 'h.11.attn.masked_bias', 'lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
A week or two back, everything was working fine, but now the same code crashes on `loss.backward()`.
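
As the comments above conclude, the crash traces back to the cross-entropy loss's `ignore_index` changing from -1 to -100 in recent versions. A minimal sketch of the corresponding fix, assuming padded label positions were previously set to -1 (variable names are illustrative):

```python
# mask padded positions with -100, the ignore_index used by transformers 3.x
labels = input_ids.clone()
labels[labels == tokenizer.pad_token_id] = -100
outputs = model(input_ids, labels=labels)
loss = outputs[0]
```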
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6192/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6191 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6191/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6191/comments | https://api.github.com/repos/huggingface/transformers/issues/6191/events | https://github.com/huggingface/transformers/issues/6191 | 671,086,430 | MDU6SXNzdWU2NzEwODY0MzA= | 6,191 | How to integrate the Pyro module with HuggingFace Transformers? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"please let me know how this progresses I am also interested in doing this",
"Hello,\r\nWould it be possible for your team to also work on GPT2LMHeadModel on this same issue?\r\n\r\nThank you,",
"Hey @h56cho - did you find a solution here? Would be great if you can post code :-) ",
"Likewise, would be very interested in what came of this."
] | 1,596 | 1,617 | 1,596 | NONE | null | Hello,
I am trying to convert a Hugging Face Transformer into a Bayesian neural network using the `Pyro` module.
My code is below. Everything works until the line `svi_loss = svi_delta.step(input_ids=input_ids, attention_mask=attention_mask, labels=label)`. That line raises an error because, after conversion to a Pyro model, the model no longer has any fixed parameters (being Bayesian, its weights are sampled from a distribution rather than stored). Is there any way to get around this issue? I have also posted a similar question on the Pyro forum. Thank you,
CODE:
```python
import math

import torch
from torch import distributions
from transformers import RobertaTokenizer, RobertaForMultipleChoice, AdamW, get_constant_schedule
import pyro
import pyro.infer
import pyro.optim
import pyro.distributions as dist
import pyro.nn.module as module
import pyro.infer.autoguide.guides as guides
from torch import nn
from pyro.optim import Adam
from pyro.infer import SVI
from pyro.infer import Trace_ELBO
from pyro.infer import Predictive
# get the pre-trained HuggingFace RobertaForMultipleChoice and resize the token embeddings
# after adding the special token
model_RobertaForMultipleChoice = RobertaForMultipleChoice.from_pretrained('roberta-base')  # model ids are lowercase
# convert the HuggingFace model into a pyro model
module.to_pyro_module_(model_RobertaForMultipleChoice)
for m in model_RobertaForMultipleChoice.modules():
for name, value in list(m.named_parameters(recurse=False)):
setattr(m, name, module.PyroSample(prior=dist.Normal(0, 1)
.expand(value.shape)
.to_event(value.dim())))
# define parameters for training
guide_delta = guides.AutoDelta(model_RobertaForMultipleChoice)
optimizer_2 = Adam({"lr": 0.000000055})
scheduler_2 = pyro.optim.StepLR({'optimizer': optimizer_2, 'optim_args': {'lr': 0.000000055}})
svi_delta = SVI(model_RobertaForMultipleChoice, guide_delta, optimizer_2, loss=Trace_ELBO())
# training loop (num_iter, log_interval, epoch, num_lines_train and the batch
# tensors input_ids / attention_mask / label are defined elsewhere)
total_svi_loss = 0
for m in range(num_iter):
    # calculate the loss and take a gradient step for SVI
    # ERRORS OCCUR HERE (`svi` was undefined in the original snippet;
    # `svi_delta` is the SVI object created above)
    svi_loss = svi_delta.step(input_ids=input_ids,
                              attention_mask=attention_mask,
                              labels=label)
    # accumulate the calculated loss
    total_svi_loss = total_svi_loss + svi_loss
    if m % log_interval == 0 and m > 0:
        cur_svi_loss = total_svi_loss / log_interval
        print('| epoch {:3d} | {:5d}/{:5d} batches | lr {:02.9f} | '
              'loss {:5.4f} | ppl {:8.4f}'.format(
                  epoch, m, int(num_lines_train / 4), scheduler_2.get_lr()[0],
                  cur_svi_loss, math.exp(cur_svi_loss)))
        total_svi_loss = 0
```
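One common workaround for the missing-parameters error is to give `Trace_ELBO` an explicit observed sample site instead of relying on the model's built-in loss head. A minimal sketch under that assumption (the wrapper below is illustrative and not part of either library):
```python
def bayesian_model(input_ids, attention_mask, labels):
    # without labels, the first output of RobertaForMultipleChoice is the
    # (batch, num_choices) logits tensor
    logits = model_RobertaForMultipleChoice(input_ids=input_ids,
                                            attention_mask=attention_mask)[0]
    # observe the gold choice so the ELBO has a likelihood term
    with pyro.plate("data", logits.shape[0]):
        pyro.sample("obs", dist.Categorical(logits=logits), obs=labels)

guide = guides.AutoDelta(bayesian_model)
svi = SVI(bayesian_model, guide, optimizer_2, loss=Trace_ELBO())
svi_loss = svi.step(input_ids, attention_mask, label)
``` | {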
"url": "https://api.github.com/repos/huggingface/transformers/issues/6191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6191/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6190 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6190/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6190/comments | https://api.github.com/repos/huggingface/transformers/issues/6190/events | https://github.com/huggingface/transformers/issues/6190 | 670,987,339 | MDU6SXNzdWU2NzA5ODczMzk= | 6,190 | Add support for truncation argument when calling a Pipeline | {
"login": "mkaze",
"id": 8656825,
"node_id": "MDQ6VXNlcjg2NTY4MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8656825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mkaze",
"html_url": "https://github.com/mkaze",
"followers_url": "https://api.github.com/users/mkaze/followers",
"following_url": "https://api.github.com/users/mkaze/following{/other_user}",
"gists_url": "https://api.github.com/users/mkaze/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mkaze/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mkaze/subscriptions",
"organizations_url": "https://api.github.com/users/mkaze/orgs",
"repos_url": "https://api.github.com/users/mkaze/repos",
"events_url": "https://api.github.com/users/mkaze/events{/privacy}",
"received_events_url": "https://api.github.com/users/mkaze/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"is there any workaround for this? i'm seeing the `Token indices sequence length is longer than the specified maximum sequence length for this model (... > 512). Running this sequence through the model will result in indexing errors` when using TextClassificationPipeline.\r\n\r\n(This is preventing me from upgrading to 3.x.)",
"routing this to @mfuntowicz @LysandreJik ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"+1 on this",
"Hi, even though this has been closed as stale, without comment or supposed fix, it seems that in recent versions you can in fact pass both `truncation` and `padding` arguments to the pipeline's `__call__` method, and it will correctly use them when tokenizing. I've tested it with long texts that fail without the truncation argument, and it seems to work as expected. "
] | 1,596 | 1,627 | 1,606 | NONE | null | # 🚀 Feature request
Currently, only the `padding` argument [is supported](https://github.com/huggingface/transformers/blob/a39dfe4fb122c11be98a563fb8ca43b322e01036/src/transformers/pipelines.py#L500) when calling a pipeline; it's not possible to pass a `truncation` argument. For example, running the following code sample raises an error:
```python
import transformers as trf
model = trf.pipeline(task='feature-extraction', model='bert-base-cased')
output = model('a sample text', padding=False, truncation=True)
```
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
If toggling padding is supported, why shouldn't truncation be?
## Your contribution
I think that, as with `padding`, a `truncation` argument only needs to be added to the `_parse_and_tokenize` method and passed along when calling the tokenizer (a sketch is below). If that's the case, I would be willing to work on a PR.
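For illustration, a minimal sketch of what the change might look like (the exact signature of `_parse_and_tokenize` is an assumption based on the linked code, not the real implementation):
```python
# hypothetical sketch of Pipeline._parse_and_tokenize with truncation support
def _parse_and_tokenize(self, inputs, padding=True, truncation=False, **kwargs):
    return self.tokenizer(
        inputs,
        add_special_tokens=True,
        return_tensors=self.framework,
        padding=padding,
        truncation=truncation,  # newly threaded through from __call__
    )
``` | {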
"url": "https://api.github.com/repos/huggingface/transformers/issues/6190/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6190/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6189 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6189/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6189/comments | https://api.github.com/repos/huggingface/transformers/issues/6189/events | https://github.com/huggingface/transformers/pull/6189 | 670,981,529 | MDExOlB1bGxSZXF1ZXN0NDYxMjE2NjIy | 6,189 | Support new tokenizers in distillation example | {
"login": "mapmeld",
"id": 643918,
"node_id": "MDQ6VXNlcjY0MzkxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/643918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mapmeld",
"html_url": "https://github.com/mapmeld",
"followers_url": "https://api.github.com/users/mapmeld/followers",
"following_url": "https://api.github.com/users/mapmeld/following{/other_user}",
"gists_url": "https://api.github.com/users/mapmeld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mapmeld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mapmeld/subscriptions",
"organizations_url": "https://api.github.com/users/mapmeld/orgs",
"repos_url": "https://api.github.com/users/mapmeld/repos",
"events_url": "https://api.github.com/users/mapmeld/events{/privacy}",
"received_events_url": "https://api.github.com/users/mapmeld/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6189?src=pr&el=h1) Report\n> Merging [#6189](https://codecov.io/gh/huggingface/transformers/pull/6189?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8edfaaa81b9995cedea2f8805e4c18c2b6cb5bfc&el=desc) will **increase** coverage by `1.42%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6189?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6189 +/- ##\n==========================================\n+ Coverage 78.29% 79.71% +1.42% \n==========================================\n Files 146 146 \n Lines 26607 26607 \n==========================================\n+ Hits 20832 21210 +378 \n+ Misses 5775 5397 -378 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6189?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <0.00%> (-63.98%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-3.51%)` | :arrow_down: |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+1.80%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+2.73%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `81.00% <0.00%> (+14.00%)` | :arrow_up: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <0.00%> (+17.09%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.14% <0.00%> (+24.04%)` | :arrow_up: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.80% <0.00%> (+25.66%)` | :arrow_up: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6189?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6189?src=pr&el=footer). Last update [8edfaaa...6a2d21e](https://codecov.io/gh/huggingface/transformers/pull/6189?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | CONTRIBUTOR | null | I'm creating a distilled model based on a new transformers model, and needed these two lines from examples changed to make that process easier.
- Change the filename of the output binarized text vectors to replace '/' with '-'; for example, tokenizer 'monsoon-nlp/hindi-bert' will then write to a file instead of creating a new directory
- Load max_model_input_size / max_position_embeddings from teacher_config_class rather than from a hardcoded list of common tokenizers in the tokenizer class (a sketch of both changes is below)
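A rough sketch of the two changes (argument and variable names are assumptions, not the exact script code):
```python
# 1) make the binarized-dump filename filesystem-safe for namespaced tokenizers
safe_name = args.tokenizer_name.replace("/", "-")
dump_file = f"{args.dump_file}.{safe_name}.pickle"

# 2) read the max input length from the teacher config instead of a
#    hardcoded per-tokenizer table
teacher_config = teacher_config_class.from_pretrained(args.teacher_name)
max_model_input_size = teacher_config.max_position_embeddings
``` | {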
"url": "https://api.github.com/repos/huggingface/transformers/issues/6189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6189/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6189",
"html_url": "https://github.com/huggingface/transformers/pull/6189",
"diff_url": "https://github.com/huggingface/transformers/pull/6189.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6189.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6188 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6188/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6188/comments | https://api.github.com/repos/huggingface/transformers/issues/6188/events | https://github.com/huggingface/transformers/issues/6188 | 670,965,926 | MDU6SXNzdWU2NzA5NjU5MjY= | 6,188 | taeminlee/kogpt2 not working | {
"login": "ksjae",
"id": 17930170,
"node_id": "MDQ6VXNlcjE3OTMwMTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ksjae",
"html_url": "https://github.com/ksjae",
"followers_url": "https://api.github.com/users/ksjae/followers",
"following_url": "https://api.github.com/users/ksjae/following{/other_user}",
"gists_url": "https://api.github.com/users/ksjae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ksjae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksjae/subscriptions",
"organizations_url": "https://api.github.com/users/ksjae/orgs",
"repos_url": "https://api.github.com/users/ksjae/repos",
"events_url": "https://api.github.com/users/ksjae/events{/privacy}",
"received_events_url": "https://api.github.com/users/ksjae/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"~It seems to work? What's happening on your end?~ Indeed, the generated text seems to be truncated.",
"@ksjae, I can't seem to make that model work in my environment, without relying on the inference API. \r\n\r\n```py\r\nfrom transformers import AutoModelWithLMHead, pipeline\r\nfrom transformers import GPT2Tokenizer\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"taeminlee/kogpt2\", do_lower_case=False)\r\n\r\nprint(tokenizer.tokenize(\"제 이름은 홍길동\"))\r\n# ['ì', 'ł', 'ľ', 'Ġ', 'ì', 'Ŀ', '´', 'ë', '¦', 'Ħ', 'ì', 'Ŀ', 'Ģ', 'Ġ', 'í', 'Ļ', 'į', 'ê', '¸', '¸', 'ë', 'ı', 'Ļ'] # probably not what we're looking for\r\n\r\nprint(tokenizer.decode(tokenizer.encode(\"제 이름은 홍길동\")))\r\n# <unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk>�<unk><unk><unk><unk><unk><unk><unk><unk>\r\n# Definitely not what we're looking for\r\n```\r\n\r\nDo you know the author?",
"No, I don't.\r\nI'll try to add a new one(in training right now) though.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- None: hosted inference API
### Who can help
@LysandreJik
@julien-c
@TevenLeScao
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [O] the official example scripts: Hosted Inference API testing page
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [O] my own task or dataset: Text generation, or anything
## To reproduce
Steps to reproduce the behavior:
1. https://huggingface.co/taeminlee/kogpt2?text=제+이름은+홍길동
2. Or type any text
3. Model returns just the text
## Expected behavior
Generated text should be returned, but isn't.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6188/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6187 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6187/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6187/comments | https://api.github.com/repos/huggingface/transformers/issues/6187/events | https://github.com/huggingface/transformers/pull/6187 | 670,839,671 | MDExOlB1bGxSZXF1ZXN0NDYxMDgzMTc1 | 6,187 | add new model prophetnet | {
"login": "qiweizhen",
"id": 23720856,
"node_id": "MDQ6VXNlcjIzNzIwODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/23720856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qiweizhen",
"html_url": "https://github.com/qiweizhen",
"followers_url": "https://api.github.com/users/qiweizhen/followers",
"following_url": "https://api.github.com/users/qiweizhen/following{/other_user}",
"gists_url": "https://api.github.com/users/qiweizhen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qiweizhen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qiweizhen/subscriptions",
"organizations_url": "https://api.github.com/users/qiweizhen/orgs",
"repos_url": "https://api.github.com/users/qiweizhen/repos",
"events_url": "https://api.github.com/users/qiweizhen/events{/privacy}",
"received_events_url": "https://api.github.com/users/qiweizhen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
}
] | [
"> @sshleifer Is there anything else needed to do in order to make ProphetNet work with your seq2seq example?\r\n> \r\n> Also @mfuntowicz for pipelines\r\n\r\nI tried examples/seq2seq/finetune.py and it works with python finetune.py --do_train and --do_predict.",
"I will try to complete document and unit test by this week",
"@patrickvonplaten Thanks for your review! I learned a lot, too.\r\n\r\n@qiweizhen Please be free to contact me for discussion via WeChat if you have trouble understanding Patrick's comments or you want to have another person to double-check! Thanks for your great work!"
] | 1,596 | 1,600 | 1,600 | CONTRIBUTOR | null | # Add new model structure [ProphetNet](https://arxiv.org/abs/2001.04063).
## Description:
ProphetNet is a new pre-trained language model for sequence-to-sequence learning with a novel self-supervised objective called future n-gram prediction. ProphetNet is able to predict more future tokens with an n-stream decoder. The original implementation is the Fairseq version at the [github repo](https://github.com/microsoft/ProphetNet).
xProphetNet has the same model structure but is pretrained on a Wikipedia dataset covering 100 languages, as described in [xGLUE](https://arxiv.org/abs/2004.01401). xGLUE is a benchmark for cross-lingual NLU and NLG tasks. xProphetNet also serves as a baseline model for the cross-lingual generation tasks in xGLUE, NTG and QG.
## Usage:
Take the xGLUE NTG task as an example:
The cross-lingual pretrained model is fine-tuned on English news title generation data, but runs inference on both English and other, zero-shot, languages.
A quick usage example:
```
from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration, ProphetNetConfig
model = ProphetNetForConditionalGeneration.from_pretrained('microsoft/xprophetnet-large-wiki100-cased-xglue-ntg')
tokenizer = ProphetNetTokenizer.from_pretrained('microsoft/xprophetnet-large-wiki100-cased-xglue-ntg')
EN_SENTENCE_TO_QUESTION = "Microsoft Corporation intends to officially end free support for the Windows 7 operating system after January 14, 2020, according to the official portal of the organization. From that day, users of this system will not be able to receive security updates, which could make their computers vulnerable to cyber attacks."
RU_SENTENCE_TO_QUESTION = "Корпорация Microsoft намерена официально прекратить бесплатную поддержку операционной системы Windows 7 после 14 января 2020 года, сообщается на официальном портале организации. С указанного дня пользователи этой системы не смогут получать обновления безопасности, из-за чего их компьютеры могут стать уязвимыми к кибератакам."
ZH_SENTENCE_TO_QUESTION = "根据该组织的官方门户网站,微软公司打算在2020年1月14日之后正式终止对Windows 7操作系统的免费支持。从那时起,该系统的用户将无法接收安全更新,这可能会使他们的计算机容易受到网络攻击。"
inputs = tokenizer([EN_SENTENCE_TO_QUESTION, RU_SENTENCE_TO_QUESTION, ZH_SENTENCE_TO_QUESTION], padding=True, max_length=256, return_tensors='pt')
summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=100, early_stopping=True)
print([tokenizer.decode(g) for g in summary_ids])
```
The model will generate news titles like:
```
['[SEP] Microsoft to end Windows 7 free support after January 14, 2020[SEP][PAD][PAD][PAD][PAD]',
'[SEP] Microsoft намерена прекратить бесплатную поддержку Windows 7 после 14 января 2020 года[SEP]',
'[SEP]微软打算终止对Windows 7操作系统的免费支持[SEP][PAD][PAD][PAD][PAD][PAD][PAD]']
```
## Released checkpoints:
pretrained:
```
microsoft/prophetnet-large-uncased
microsoft/xprophetnet-large-wiki100-cased
```
fine-tuned:
```
microsoft/prophetnet-large-uncased-cnndm
microsoft/xprophetnet-large-wiki100-cased-xglue-ntg
microsoft/xprophetnet-large-wiki100-cased-xglue-qg
```
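Loading the fine-tuned checkpoints follows the same pattern as the NTG example above; for instance, a sketch for CNN/DailyMail summarization (the article string is illustrative):
```python
from transformers import ProphetNetForConditionalGeneration, ProphetNetTokenizer

model = ProphetNetForConditionalGeneration.from_pretrained('microsoft/prophetnet-large-uncased-cnndm')
tokenizer = ProphetNetTokenizer.from_pretrained('microsoft/prophetnet-large-uncased-cnndm')

ARTICLE_TO_SUMMARIZE = "Microsoft Corporation intends to officially end free support for the Windows 7 operating system after January 14, 2020, according to the official portal of the organization."
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], padding=True, max_length=256, return_tensors='pt')
summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=100, early_stopping=True)
print(tokenizer.decode(summary_ids[0]))
```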
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6187/reactions",
"total_count": 12,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 4,
"confused": 0,
"heart": 4,
"rocket": 4,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6187/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6187",
"html_url": "https://github.com/huggingface/transformers/pull/6187",
"diff_url": "https://github.com/huggingface/transformers/pull/6187.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6187.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6186 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6186/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6186/comments | https://api.github.com/repos/huggingface/transformers/issues/6186/events | https://github.com/huggingface/transformers/issues/6186 | 670,739,315 | MDU6SXNzdWU2NzA3MzkzMTU= | 6,186 | Remove inconsistency between BertTokenizer and BertTokenizerFast | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Fixed with #6280"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | # 🚀 Feature request
`BertTokenizerFast` has the option to specify `strip_accents=False`. The `BertTokenizer` does not have this option. This inconsistency should be removed by adding the `strip_accents` parameter to `BertTokenizer`.
## Motivation
Without this, `BertTokenizer` cannot be used for language models that are lowercased but keep accents.
For a lowercased model that keeps accents, you are forced to load the tokenizer like this:
```python
tokenizer = AutoTokenizer.from_pretrained("<model_name_or_path>", use_fast=True, strip_accents=False)
```
This will NOT work: `tokenizer = AutoTokenizer.from_pretrained("<model_name_or_path>")`
And even this would not work: `tokenizer = AutoTokenizer.from_pretrained("<model_name_or_path>", strip_accents=False)`
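For reference, the accent stripping in question boils down to NFD normalization plus dropping combining marks (this is the logic of `BasicTokenizer._run_strip_accents`); the request is to make that step optional in `BertTokenizer`, as it already is in the fast tokenizer. A standalone sketch:
```python
import unicodedata

def strip_accents(text: str) -> str:
    # same logic as BasicTokenizer._run_strip_accents in tokenization_bert.py
    return "".join(
        ch for ch in unicodedata.normalize("NFD", text)
        if unicodedata.category(ch) != "Mn"  # drop combining marks
    )

print(strip_accents("déjà vu"))  # -> "deja vu"
```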
## Your contribution
With some hints I am willing to contribute. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6186/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6185 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6185/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6185/comments | https://api.github.com/repos/huggingface/transformers/issues/6185/events | https://github.com/huggingface/transformers/pull/6185 | 670,726,123 | MDExOlB1bGxSZXF1ZXN0NDYwOTc1MDgz | 6,185 | Fix docstring for `BertTokenizerFast`. | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6185?src=pr&el=h1) Report\n> Merging [#6185](https://codecov.io/gh/huggingface/transformers/pull/6185?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a39dfe4fb122c11be98a563fb8ca43b322e01036&el=desc) will **increase** coverage by `1.25%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6185?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6185 +/- ##\n==========================================\n+ Coverage 78.34% 79.59% +1.25% \n==========================================\n Files 146 146 \n Lines 26607 26607 \n==========================================\n+ Hits 20844 21178 +334 \n+ Misses 5763 5429 -334 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6185?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.32% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <0.00%> (-63.98%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `70.32% <0.00%> (-26.66%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `80.30% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+34.61%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6185?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6185?src=pr&el=footer). Last update [a39dfe4...f44324f](https://codecov.io/gh/huggingface/transformers/pull/6185?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | - remove duplicate doc-entry for `tokenize_chinese_chars`
- add doc for `strip_accents` and `wordpieces_prefix` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6185/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6185",
"html_url": "https://github.com/huggingface/transformers/pull/6185",
"diff_url": "https://github.com/huggingface/transformers/pull/6185.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6185.patch",
"merged_at": 1596355106000
} |
https://api.github.com/repos/huggingface/transformers/issues/6184 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6184/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6184/comments | https://api.github.com/repos/huggingface/transformers/issues/6184/events | https://github.com/huggingface/transformers/pull/6184 | 670,555,152 | MDExOlB1bGxSZXF1ZXN0NDYwODE0MzYy | 6,184 | [s2s] clean up + doc | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6184?src=pr&el=h1) Report\n> Merging [#6184](https://codecov.io/gh/huggingface/transformers/pull/6184?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8edfaaa81b9995cedea2f8805e4c18c2b6cb5bfc&el=desc) will **increase** coverage by `0.20%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6184?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6184 +/- ##\n==========================================\n+ Coverage 78.29% 78.50% +0.20% \n==========================================\n Files 146 146 \n Lines 26607 26607 \n==========================================\n+ Hits 20832 20887 +55 \n+ Misses 5775 5720 -55 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6184?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <0.00%> (-63.98%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `80.30% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+1.80%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+2.50%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+2.73%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `81.00% <0.00%> (+14.00%)` | :arrow_up: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <0.00%> (+17.09%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.32% <0.00%> (+23.67%)` | :arrow_up: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6184?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6184?src=pr&el=footer). Last update [8edfaaa...566b357](https://codecov.io/gh/huggingface/transformers/pull/6184?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | this is a follow up to https://github.com/huggingface/transformers/pull/6149
- there was no need to add newly added options to finetune.sh - reverted that change
- added a hint to users how to get all the options (--help)
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6184/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6184",
"html_url": "https://github.com/huggingface/transformers/pull/6184",
"diff_url": "https://github.com/huggingface/transformers/pull/6184.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6184.patch",
"merged_at": 1596307868000
} |
https://api.github.com/repos/huggingface/transformers/issues/6183 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6183/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6183/comments | https://api.github.com/repos/huggingface/transformers/issues/6183/events | https://github.com/huggingface/transformers/pull/6183 | 670,335,570 | MDExOlB1bGxSZXF1ZXN0NDYwNjA5MTI0 | 6,183 | Fix tokenizer saving/loading with custom token objects | {
"login": "mozharovsky",
"id": 6762769,
"node_id": "MDQ6VXNlcjY3NjI3Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6762769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mozharovsky",
"html_url": "https://github.com/mozharovsky",
"followers_url": "https://api.github.com/users/mozharovsky/followers",
"following_url": "https://api.github.com/users/mozharovsky/following{/other_user}",
"gists_url": "https://api.github.com/users/mozharovsky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mozharovsky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mozharovsky/subscriptions",
"organizations_url": "https://api.github.com/users/mozharovsky/orgs",
"repos_url": "https://api.github.com/users/mozharovsky/repos",
"events_url": "https://api.github.com/users/mozharovsky/events{/privacy}",
"received_events_url": "https://api.github.com/users/mozharovsky/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Investigating the issues 🤓 ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6183?src=pr&el=h1) Report\n> Merging [#6183](https://codecov.io/gh/huggingface/transformers/pull/6183?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b25cec13c57656941aac3b920eeb488c1915df18&el=desc) will **increase** coverage by `0.57%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6183?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6183 +/- ##\n==========================================\n+ Coverage 79.08% 79.66% +0.57% \n==========================================\n Files 149 147 -2 \n Lines 27685 26603 -1082 \n==========================================\n- Hits 21894 21192 -702 \n+ Misses 5791 5411 -380 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6183?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_io.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX2lvLnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.71% <100.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <0.00%> (-63.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.65% <0.00%> (-23.08%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `85.71% <0.00%> (-14.29%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `76.47% <0.00%> (-11.88%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `80.05% <0.00%> (-2.14%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.51%)` | :arrow_down: |\n| ... and [54 more](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6183?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6183?src=pr&el=footer). Last update [cdf1f7e...d159584](https://codecov.io/gh/huggingface/transformers/pull/6183?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Looks much better now 🎉\n\n@n1t0 @thomwolf could you please take a glance? :)",
"I did a bit of refactoring to have load/dump methods reusable across the library. \r\n\r\n\r\nThese changes introduce a new scope for the common File IO methods (like json pickle load/dump). Although there is an already existing `file_utils` scope, I believe it's not supposed to contain the common IO operations as the docs say (`Utilities for working with the local dataset cache.`).\r\n\r\n@LysandreJik, could you please review the changes? :)",
"It looks like #6026 fixed the issue. But nevertheless this PR is still relevant since it guarantees tokenizers saving/loading with objects of arbitrary types. ",
"Hi, thanks a lot for your contribution! Unfortunately we're very strict about adding new dependencies, and this doesn't really change the existing behavior. I'm not sure I see the pros of introducing this new scope vs the cons of integrating a new dependencies + updating existing code.",
"Hi, thanks for replying! I respect your policy regarding the third-party dependencies, this makes sense. Anyway, the current approaches compromise tokenizers (de)serialization with introducing new objects to include (besides `AddedToken` instances). This might be a subject of future improvements once new types are introduced into the tokenizer configs, but I think it makes sense to consider generalizing the behavior.\r\n\r\nIf it makes sense, I could wrap saving/loading configs into the new scope without a third-party library, though it's far from generalizing, at least adding new cases to handle will be easier. What do you think? :)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,604 | 1,604 | NONE | null | ## Summary
This PR fixes issue #5571.
Pre-trained tokenizers might wrap tokens in custom types (e.g. `AddedToken` from **🤗/tokenizers**), which makes (de)serialization difficult without additional meta information. This PR uses the `jsonpickle` library, which solves exactly this object-(de)serialization problem.
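For illustration, here is a minimal sketch of the mechanism, using a plain Python class as a stand-in (whether `jsonpickle` can introspect the Rust-backed `AddedToken` directly is an assumption, so the class below is hypothetical):

```python
import jsonpickle

class AddedTokenLike:  # hypothetical stand-in for a custom config value type
    def __init__(self, content, lstrip=False):
        self.content, self.lstrip = content, lstrip

encoded = jsonpickle.encode(AddedTokenLike("<extra>", lstrip=True))
# -> a JSON string carrying "py/object" type metadata alongside the attributes
restored = jsonpickle.decode(encoded)  # an AddedTokenLike instance again
```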
A small drawback of this approach is that type information is itself subject to change, and such changes would break backward compatibility. But this just amounts to an agreement to make any such changes carefully and in a backward-compatible way. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6183/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6183",
"html_url": "https://github.com/huggingface/transformers/pull/6183",
"diff_url": "https://github.com/huggingface/transformers/pull/6183.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6183.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6182 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6182/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6182/comments | https://api.github.com/repos/huggingface/transformers/issues/6182/events | https://github.com/huggingface/transformers/issues/6182 | 670,244,665 | MDU6SXNzdWU2NzAyNDQ2NjU= | 6,182 | Failing XLMModelTest | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | https://github.com/huggingface/transformers/runs/929962952?check_suite_focus=true
FAILED tests/test_modeling_xlm.py::XLMModelTest::test_inputs_embeds - RuntimeError
Failure introduced somewhere in here:
```
* d951c14a Sylvain Gugger: Model output test (#6155) - (8 hours ago)
* 86caab1e Sylvain Gugger: Harmonize both Trainers API (#6157) - (8 hours ago)
* 603cd81a Mehrdad Farahani: readme m3hrdadfi/albert-fa-base-v2 (#6153) - (11 hours ago)
* 838dc06f Suraj Patil: parse arguments from dict (#4869) - (13 hours ago)
* cf3cf304 Paul O'Leary McCann: Replace mecab-python3 with fugashi for Japanese tokenization (#6086) - (13 hours ago)
* f250beb8 Stas Bekman: enable easy checkout switch (#5645) - (13 hours ago)
* 7d50af4b kolk: Create README.md (#6169) - (13 hours ago)
* 0034a1d2 Prajjwal Bhargava: Add Pytorch Native AMP support in Trainer (#6151) - (13 hours ago)
* 7231f7b5 Funtowicz Morgan: Enable ONNX/ONNXRuntime optimizations through converter script (#6131) - (14 hours ago)
* c0b93a1c Stas Bekman: correct the correction (#6163) - (23 hours ago)
* a2f6d521 Stas Bekman: typos (#6162) - (24 hours ago)
* f3065abd Sylvain Gugger: Doc tokenizer (#6110) - (26 hours ago)
* e642c789 guillaume-be: Addition of a DialoguePipeline (#5516) - (27 hours ago)
* ec026747 Lysandre Debut: Fix FlauBERT GPU test (#6142) - (30 hours ago)
* 91cb9546 Sylvain Gugger: Switch from return_tuple to return_dict (#6138) - (32 hours ago)
* 562b6369 Sylvain Gugger: Tf trainer cleanup (#6143) - (32 hours ago)
* c127d055 Oren Amsalem: add another e.g. to avoid confusion (#6055) - (32 hours ago)
* d24ea708 Oren Amsalem: Actually the extra_id are from 0-99 and not from 1-100 (#5967) - (35 hours ago)
* 3212b885 Stas Bekman: [s2s] add support for overriding config params (#6149) - (2 days ago)
* 54f9fbef Julien Plu: Rework TF trainer (#6038) - (2 days ago)
* 3f94170a Lysandre Debut: [WIP] Test TF Flaubert + Add {XLM, Flaubert}{TokenClassification, MultipleC… (#5614) - (2 days ago)
* 8a8ae276 Sylvain Gugger: Use google style to document properties (#6130) - (2 days ago)
* fc64559c Julien Plu: Fix TF CTRL model naming (#6134) - (2 days ago)
* 641b873c Lysandre Debut: XLNet PLM Readme (#6121) - (2 days ago)
* 8d157c93 Timo Moeller: add deepset/xlm-roberta-large-squad2 model card (#6128) - (2 days ago)
* 6c002853 Funtowicz Morgan: Added capability to quantize a model while exporting through ONNX. (#6089) - (2 days ago)
* 25de74cc Sylvain Gugger: Use FutureWarning to deprecate (#6111) - (3 days ago)
* 640550fc Funtowicz Morgan: ONNX documentation (#5992) - (3 days ago)
```
Any idea on this @sgugger or @LysandreJik ? Otherwise I'll dig in. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6182/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6181 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6181/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6181/comments | https://api.github.com/repos/huggingface/transformers/issues/6181/events | https://github.com/huggingface/transformers/issues/6181 | 670,239,856 | MDU6SXNzdWU2NzAyMzk4NTY= | 6,181 | Failing ONNX Export test | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | CONTRIBUTOR | null | https://github.com/huggingface/transformers/runs/929962952?check_suite_focus=true
```
FAILED tests/test_onnx.py::OnnxExportTestCase::test_quantize_pytorch - TypeError
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6181/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6180 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6180/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6180/comments | https://api.github.com/repos/huggingface/transformers/issues/6180/events | https://github.com/huggingface/transformers/pull/6180 | 670,237,815 | MDExOlB1bGxSZXF1ZXN0NDYwNTE5NDUw | 6,180 | Fixed typo in Longformer | {
"login": "faiazrahman",
"id": 42232624,
"node_id": "MDQ6VXNlcjQyMjMyNjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/42232624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faiazrahman",
"html_url": "https://github.com/faiazrahman",
"followers_url": "https://api.github.com/users/faiazrahman/followers",
"following_url": "https://api.github.com/users/faiazrahman/following{/other_user}",
"gists_url": "https://api.github.com/users/faiazrahman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faiazrahman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faiazrahman/subscriptions",
"organizations_url": "https://api.github.com/users/faiazrahman/orgs",
"repos_url": "https://api.github.com/users/faiazrahman/repos",
"events_url": "https://api.github.com/users/faiazrahman/events{/privacy}",
"received_events_url": "https://api.github.com/users/faiazrahman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6180/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6180",
"html_url": "https://github.com/huggingface/transformers/pull/6180",
"diff_url": "https://github.com/huggingface/transformers/pull/6180.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6180.patch",
"merged_at": 1596277249000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6179 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6179/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6179/comments | https://api.github.com/repos/huggingface/transformers/issues/6179/events | https://github.com/huggingface/transformers/issues/6179 | 670,233,002 | MDU6SXNzdWU2NzAyMzMwMDI= | 6,179 | HANS Dataset: Incorrect `label_list` and `label`. | {
"login": "HanGuo97",
"id": 18187806,
"node_id": "MDQ6VXNlcjE4MTg3ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/18187806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HanGuo97",
"html_url": "https://github.com/HanGuo97",
"followers_url": "https://api.github.com/users/HanGuo97/followers",
"following_url": "https://api.github.com/users/HanGuo97/following{/other_user}",
"gists_url": "https://api.github.com/users/HanGuo97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HanGuo97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HanGuo97/subscriptions",
"organizations_url": "https://api.github.com/users/HanGuo97/orgs",
"repos_url": "https://api.github.com/users/HanGuo97/repos",
"events_url": "https://api.github.com/users/HanGuo97/events{/privacy}",
"received_events_url": "https://api.github.com/users/HanGuo97/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes indeed!\r\nSomehow, I was sure it was corrected on the master but I guess it was only on my private branch.\r\nLet me open a PR, thanks for pointing that out @HanGuo97!",
"Great, appreciate the help!"
] | 1,596 | 1,597 | 1,596 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: n/a
- Python version: n/a
- PyTorch version (GPU?): n/a
- Tensorflow version (GPU?): n/a
- Using GPU in script?: n/a
- Using distributed or parallel set-up in script?: n/a
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
tensorflow: @jplu
documentation: @sgugger
-->
@VictorSanh
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
### Incorrect `label_list`
See line: https://github.com/huggingface/transformers/blob/master/examples/adversarial/utils_hans.py#L259
```python
def get_labels(self):
"""See base class."""
return ["contradiction", "entailment", "neutral"]
```
The HANS dataset has only two labels, non-entailment and entailment, but three are given here. Similarly, when mapping from the text label to a label id, the label "non-entailment" (which exists in the task but not in the labels defined above) falls through to the default `0` in the line below. I'm curious whether this is intentional; if so, it would be great to add a warning/comment, as this might cause subtle errors in the future.
https://github.com/huggingface/transformers/blob/master/examples/adversarial/utils_hans.py#L311
```python
label = label_map[example.label] if example.label in label_map else 0
```
### Incorrect `label` index
The line below uses the last column as the label. However, the HANS dataset stores the label in the first column.
https://github.com/huggingface/transformers/blob/master/examples/adversarial/utils_hans.py#L271
```python
label = line[-1]
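# A hedged sketch of the apparent fix (assuming HANS keeps its two-label
# scheme and stores the gold label in the first column of each row):
# label = line[0]
# ...with get_labels() returning ["non-entailment", "entailment"] to match.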
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6179/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6178 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6178/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6178/comments | https://api.github.com/repos/huggingface/transformers/issues/6178/events | https://github.com/huggingface/transformers/issues/6178 | 670,222,167 | MDU6SXNzdWU2NzAyMjIxNjc= | 6,178 | Why are the `device()` and `dtype()` functions in `modeling_utils.py` needed? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | NONE | null | Hello,
For BERT and RoBERTa HuggingFace pre-trained models, why are the `device()` and `dtype()` functions in `modeling_utils.py` needed?
See: https://github.com/huggingface/transformers/blob/8edfaaa81b9995cedea2f8805e4c18c2b6cb5bfc/src/transformers/modeling_utils.py#L158
https://github.com/huggingface/transformers/blob/8edfaaa81b9995cedea2f8805e4c18c2b6cb5bfc/src/transformers/modeling_utils.py#L177
Would it be possible for my RoBERTa model to function without error if I modify these `device()` and `dtype()` functions so that they always return `cpu` and `torch.float32` (or `torch.float64`), respectively?
Also, while it is easy to modify the original code to do this, I am not sure how to get my HuggingFace RoBERTa model to pick up those modified functions. How can I do this?
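One thing I considered, without editing the library source, is shadowing those properties on the model class. This is a minimal, untested sketch and purely my guess at one possible approach:

```python
import torch
from transformers import RobertaModel

model = RobertaModel.from_pretrained("roberta-base")

# Shadow the mixin's `device`/`dtype` properties on the model's class so they
# always report fixed values (note: this affects every instance of this class):
type(model).device = property(lambda self: torch.device("cpu"))
type(model).dtype = property(lambda self: torch.float32)
```

Would something like this work, or is there a better-supported way?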
Thank you (and sorry for asking so many questions)!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6178/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6177 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6177/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6177/comments | https://api.github.com/repos/huggingface/transformers/issues/6177/events | https://github.com/huggingface/transformers/issues/6177 | 670,219,925 | MDU6SXNzdWU2NzAyMTk5MjU= | 6,177 | RoBERTa for QuestionAnswering | {
"login": "mchari",
"id": 30506151,
"node_id": "MDQ6VXNlcjMwNTA2MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/30506151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchari",
"html_url": "https://github.com/mchari",
"followers_url": "https://api.github.com/users/mchari/followers",
"following_url": "https://api.github.com/users/mchari/following{/other_user}",
"gists_url": "https://api.github.com/users/mchari/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchari/subscriptions",
"organizations_url": "https://api.github.com/users/mchari/orgs",
"repos_url": "https://api.github.com/users/mchari/repos",
"events_url": "https://api.github.com/users/mchari/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchari/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @mchari, could you please post your environment information and a code sample that we can run to reproduce your error?",
"@patrickvonplaten , thanks for your reply.\r\n\r\ni am using transformers 3.0.2 that i got from a pip install. \r\nHere is the code \r\n`\r\nfrom transformers import RobertaTokenizer, RobertaForQuestionAnswering\r\nimport torch\r\nimport tensorflow\r\n\r\ntokenizer = RobertaTokenizer.from_pretrained('roberta-base')\r\nprint(\"Loaded tokenizer !!!\")\r\nmodel = RobertaForQuestionAnswering.from_pretrained('roberta-base')\r\n\r\nprint(\"Loaded QA model !!!\")\r\n\r\nquestion = \"Who was Jim Henson?\"\r\ncontext = \"Jim Henson was a nice puppet\"\r\n\r\ninput_text = \"[CLS] \" + question + \" [SEP] \" + context + \" [SEP]\"\r\n#input_text = question + \" [SEP] \" + context\r\n#print(tokenizer(input_text))\r\n\r\ninput_ids = tokenizer.encode(input_text)\r\nstart_scores, end_scores = model(torch.tensor([input_ids]))\r\nprint(input_ids)\r\n\r\ntoken_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))] \r\nstart_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))\r\nall_tokens = tokenizer.convert_ids_to_tokens(input_ids) \r\nprint(' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]))\r\n`",
"i was able to workaround the issue by using the following code that doesn't look for id 102...\r\nnot sure if it is equivalent....\r\n\r\n\r\nencoding = tokenizer.encode_plus(question,context)\r\ninput_ids, attention_mask = encoding[\"input_ids\"], encoding[\"attention_mask\"]\r\nstart_scores, end_scores = model(torch.tensor([input_ids]), attention_mask=torch.tensor([attention_mask]))",
"your workaround is correct. The link you posted above points to a very old example. Please take a look at the example of the updated model of `BertForQuestionAnswering` : https://huggingface.co/transformers/model_doc/bert.html#transformers.BertForQuestionAnswering"
] | 1,596 | 1,597 | 1,597 | NONE | null | I am trying to replicate the example in this link
https://github.com/huggingface/transformers/pull/1502/files, but I get the following error :
``
ValueError Traceback (most recent call last)
<ipython-input-23-823cc70a5d4f> in <module>
----> 1 token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]
2 start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
3 all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
4 print(' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]))
<ipython-input-23-823cc70a5d4f> in <listcomp>(.0)
----> 1 token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]
2 start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
3 all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
4 print(' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]))
ValueError: 102 is not in list
```
I see the same error discussed in https://github.com/huggingface/transformers/issues/2261
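From some digging, I suspect (this is an assumption on my part) that the root cause is that 102 is BERT's `[SEP]` id, while RoBERTa uses different special tokens, so `input_ids.index(102)` can never succeed:

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
# RoBERTa's separator is '</s>' with id 2, not 102:
print(tokenizer.sep_token, tokenizer.sep_token_id)  # -> </s> 2
```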
Any ideas on how I could resolve this issue?
Thanks in advance.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6177/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6176 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6176/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6176/comments | https://api.github.com/repos/huggingface/transformers/issues/6176/events | https://github.com/huggingface/transformers/pull/6176 | 670,186,843 | MDExOlB1bGxSZXF1ZXN0NDYwNDcyNjU3 | 6,176 | Adds comet_ml to the list of auto-experiment loggers | {
"login": "dsblank",
"id": 168568,
"node_id": "MDQ6VXNlcjE2ODU2OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/168568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dsblank",
"html_url": "https://github.com/dsblank",
"followers_url": "https://api.github.com/users/dsblank/followers",
"following_url": "https://api.github.com/users/dsblank/following{/other_user}",
"gists_url": "https://api.github.com/users/dsblank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dsblank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dsblank/subscriptions",
"organizations_url": "https://api.github.com/users/dsblank/orgs",
"repos_url": "https://api.github.com/users/dsblank/repos",
"events_url": "https://api.github.com/users/dsblank/events{/privacy}",
"received_events_url": "https://api.github.com/users/dsblank/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@LysandreJik Will do! When I run `make style` it makes corrections to many other files, and also (for example) changes the import sort order on code that I didn't touch in the code that I am editing. Am I doing something wrong? Or should I commit those changes (in the files I am editing)?",
"Sounds weird, maybe you're missing some packages in your environment. Let me know when you're finished and I'll push the styling on your branch.",
"@sgugger Should have all of the review comments addressed. Thanks to everyone!",
"Great, thanks for iterating @dsblank!",
"You're welcome, and thanks for all of the work on this project! Looking forward to productive ML!"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | This PR does three things:
* abstracts the auto-experiment-loggers' is_available() functions into an "integrations.py" file
* adds comet_ml to the list of auto-loggers available
* updates and reorganizes the docs slightly | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6176/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6176/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6176",
"html_url": "https://github.com/huggingface/transformers/pull/6176",
"diff_url": "https://github.com/huggingface/transformers/pull/6176.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6176.patch",
"merged_at": 1596727891000
} |
https://api.github.com/repos/huggingface/transformers/issues/6175 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6175/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6175/comments | https://api.github.com/repos/huggingface/transformers/issues/6175/events | https://github.com/huggingface/transformers/pull/6175 | 670,184,498 | MDExOlB1bGxSZXF1ZXN0NDYwNDcwNTE0 | 6,175 | Doc pipelines | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6175?src=pr&el=h1) Report\n> Merging [#6175](https://codecov.io/gh/huggingface/transformers/pull/6175?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d951c14ae46ee36b76981588ed6d03ab353ad766&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6175?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6175 +/- ##\n=======================================\n Coverage 79.51% 79.52% \n=======================================\n Files 146 146 \n Lines 26607 26618 +11 \n=======================================\n+ Hits 21156 21167 +11 \n Misses 5451 5451 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6175?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.79% <100.00%> (+0.42%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.71% <0.00%> (-1.13%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `80.05% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.77% <0.00%> (+23.94%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6175?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6175?src=pr&el=footer). Last update [d951c14...1ec7448](https://codecov.io/gh/huggingface/transformers/pull/6175?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | COLLABORATOR | null | Continue the improvement of the main classes documentation with pipelines.
[Preview](https://67155-155220641-gh.circle-artifacts.com/0/docs/_build/html/main_classes/pipelines.html) of the new pipeline page.
[Preview](https://67155-155220641-gh.circle-artifacts.com/0/docs/_build/html/internal/pipelines_utils.html) of the new pipeline utils page. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6175/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6175",
"html_url": "https://github.com/huggingface/transformers/pull/6175",
"diff_url": "https://github.com/huggingface/transformers/pull/6175.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6175.patch",
"merged_at": 1596469487000
} |
https://api.github.com/repos/huggingface/transformers/issues/6174 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6174/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6174/comments | https://api.github.com/repos/huggingface/transformers/issues/6174/events | https://github.com/huggingface/transformers/issues/6174 | 669,976,857 | MDU6SXNzdWU2Njk5NzY4NTc= | 6,174 | t | {
"login": "owowobread",
"id": 60356326,
"node_id": "MDQ6VXNlcjYwMzU2MzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/60356326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/owowobread",
"html_url": "https://github.com/owowobread",
"followers_url": "https://api.github.com/users/owowobread/followers",
"following_url": "https://api.github.com/users/owowobread/following{/other_user}",
"gists_url": "https://api.github.com/users/owowobread/gists{/gist_id}",
"starred_url": "https://api.github.com/users/owowobread/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/owowobread/subscriptions",
"organizations_url": "https://api.github.com/users/owowobread/orgs",
"repos_url": "https://api.github.com/users/owowobread/repos",
"events_url": "https://api.github.com/users/owowobread/events{/privacy}",
"received_events_url": "https://api.github.com/users/owowobread/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,596 | 1,596 | 1,596 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6174/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6173 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6173/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6173/comments | https://api.github.com/repos/huggingface/transformers/issues/6173/events | https://github.com/huggingface/transformers/issues/6173 | 669,742,696 | MDU6SXNzdWU2Njk3NDI2OTY= | 6,173 | My finetuned gpt2 model is taking wayy too long to generate samples, like 5-8 minutes | {
"login": "krishnerkar",
"id": 49949733,
"node_id": "MDQ6VXNlcjQ5OTQ5NzMz",
"avatar_url": "https://avatars.githubusercontent.com/u/49949733?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krishnerkar",
"html_url": "https://github.com/krishnerkar",
"followers_url": "https://api.github.com/users/krishnerkar/followers",
"following_url": "https://api.github.com/users/krishnerkar/following{/other_user}",
"gists_url": "https://api.github.com/users/krishnerkar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krishnerkar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krishnerkar/subscriptions",
"organizations_url": "https://api.github.com/users/krishnerkar/orgs",
"repos_url": "https://api.github.com/users/krishnerkar/repos",
"events_url": "https://api.github.com/users/krishnerkar/events{/privacy}",
"received_events_url": "https://api.github.com/users/krishnerkar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @Krish-Nerkar , [this](https://discuss.huggingface.co/t/speeding-up-gpt2-generation/470) might help",
"@patil-suraj Thanks Alot! will check those methods out\r\n\r\n"
] | 1,596 | 1,597 | 1,597 | NONE | null | I fine-tuned the GPT-2 model using transformers and trained it on a lyrics dataset. After training finished successfully, calling model.generate(args) takes an extremely long time to generate results.
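Roughly what I'm running (a minimal sketch; the generation arguments are illustrative rather than my exact values):

```python
import torch

model.eval()
with torch.no_grad():
    output_ids = model.generate(
        input_ids,       # tokenized lyrics prompt
        max_length=200,
        do_sample=True,
        top_k=50,
    )
```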
What should I do?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6173/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6172 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6172/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6172/comments | https://api.github.com/repos/huggingface/transformers/issues/6172/events | https://github.com/huggingface/transformers/issues/6172 | 669,662,241 | MDU6SXNzdWU2Njk2NjIyNDE= | 6,172 | 🐛 Not adding `token_type_ids` when the model is `electra` (pytorch_lightning example) | {
"login": "monologg",
"id": 28896432,
"node_id": "MDQ6VXNlcjI4ODk2NDMy",
"avatar_url": "https://avatars.githubusercontent.com/u/28896432?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monologg",
"html_url": "https://github.com/monologg",
"followers_url": "https://api.github.com/users/monologg/followers",
"following_url": "https://api.github.com/users/monologg/following{/other_user}",
"gists_url": "https://api.github.com/users/monologg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monologg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monologg/subscriptions",
"organizations_url": "https://api.github.com/users/monologg/orgs",
"repos_url": "https://api.github.com/users/monologg/repos",
"events_url": "https://api.github.com/users/monologg/events{/privacy}",
"received_events_url": "https://api.github.com/users/monologg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @monologg , yes toke_type_ids should be there for electra,\r\n\r\neasy fix would be somethin like\r\n```python3\r\nif self.config.model_type not in [\"xlm\", \"roberta\", \"distilbert\", \"camembert\", \"longformer\"]:\r\n inputs[\"token_type_ids\"] = batch[2] \r\n```\r\n\r\nThis is how it's done in the squad dataset\r\n\r\n@LysandreJik , if yes, I can open a PR",
"@patil-suraj, I also think that is the best to way to fix this issue:)\r\n\r\nhttps://github.com/huggingface/transformers/blob/838dc06ff5a438159ac25f531d622e8f344476f5/examples/text-classification/run_pl_glue.py#L98-L102\r\n\r\nAnd not only in `training_step()`, also `validation_step()` has to be fixed.",
"Yes, totally forgot about that, thanks! ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | CONTRIBUTOR | null | ### Who can help
@sshleifer (examples issue)
## Information
Model I am using (Bert, XLNet ...): `ELECTRA`
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## About Issue
https://github.com/huggingface/transformers/blob/838dc06ff5a438159ac25f531d622e8f344476f5/examples/text-classification/run_pl_glue.py#L38-L39
As above, it seems that `token_type_ids` is not included when `model_type == 'electra'`, even though ELECTRA also uses `token_type_ids`.
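A minimal sketch of one possible fix, mirroring how the SQuAD utilities handle this (untested; the exclusion list is copied from there and may need adjusting):

```python
# Add token_type_ids for every architecture that uses them (ELECTRA does);
# only skip the model types that genuinely have none:
if self.config.model_type not in ["xlm", "roberta", "distilbert", "camembert", "longformer"]:
    inputs["token_type_ids"] = batch[2]
```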
I think the code should be changed along those lines. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6172/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6171 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6171/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6171/comments | https://api.github.com/repos/huggingface/transformers/issues/6171/events | https://github.com/huggingface/transformers/pull/6171 | 669,568,021 | MDExOlB1bGxSZXF1ZXN0NDU5OTIxNDk0 | 6,171 | Update convert_pytorch_checkpoint_to_tf2.py | {
"login": "sunyanhust",
"id": 61798996,
"node_id": "MDQ6VXNlcjYxNzk4OTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/61798996?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunyanhust",
"html_url": "https://github.com/sunyanhust",
"followers_url": "https://api.github.com/users/sunyanhust/followers",
"following_url": "https://api.github.com/users/sunyanhust/following{/other_user}",
"gists_url": "https://api.github.com/users/sunyanhust/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sunyanhust/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunyanhust/subscriptions",
"organizations_url": "https://api.github.com/users/sunyanhust/orgs",
"repos_url": "https://api.github.com/users/sunyanhust/repos",
"events_url": "https://api.github.com/users/sunyanhust/events{/privacy}",
"received_events_url": "https://api.github.com/users/sunyanhust/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello! What do you want to do with this PR?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6171?src=pr&el=h1) Report\n> Merging [#6171](https://codecov.io/gh/huggingface/transformers/pull/6171?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c0b93a1c7a961e30b30d02d641c9d22120ef5d73&el=desc) will **decrease** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6171?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6171 +/- ##\n==========================================\n- Coverage 79.82% 79.76% -0.06% \n==========================================\n Files 146 146 \n Lines 26597 26597 \n==========================================\n- Hits 21231 21216 -15 \n- Misses 5366 5381 +15 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6171?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.18% <0.00%> (-34.62%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+1.50%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.19% <0.00%> (+63.97%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6171?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6171?src=pr&el=footer). Last update [c0b93a1...37a803b](https://codecov.io/gh/huggingface/transformers/pull/6171?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> Hello! What do you want to do with this PR?\r\n\r\nHi! The original code has the following problems:\r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6171/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6171",
"html_url": "https://github.com/huggingface/transformers/pull/6171",
"diff_url": "https://github.com/huggingface/transformers/pull/6171.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6171.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6170 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6170/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6170/comments | https://api.github.com/repos/huggingface/transformers/issues/6170/events | https://github.com/huggingface/transformers/issues/6170 | 669,517,063 | MDU6SXNzdWU2Njk1MTcwNjM= | 6,170 | [Benchmark] | {
"login": "julio3361",
"id": 67687496,
"node_id": "MDQ6VXNlcjY3Njg3NDk2",
"avatar_url": "https://avatars.githubusercontent.com/u/67687496?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julio3361",
"html_url": "https://github.com/julio3361",
"followers_url": "https://api.github.com/users/julio3361/followers",
"following_url": "https://api.github.com/users/julio3361/following{/other_user}",
"gists_url": "https://api.github.com/users/julio3361/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julio3361/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julio3361/subscriptions",
"organizations_url": "https://api.github.com/users/julio3361/orgs",
"repos_url": "https://api.github.com/users/julio3361/repos",
"events_url": "https://api.github.com/users/julio3361/events{/privacy}",
"received_events_url": "https://api.github.com/users/julio3361/received_events",
"type": "User",
"site_admin": false
} | [] | closed | true | null | [] | [
"`j`",
"Y",
"Grgj6r",
"Yes mi bro",
"Yes mi bro ",
"K",
"J"
] | 1,596 | 1,596 | 1,596 | NONE | spam | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6170/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6169 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6169/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6169/comments | https://api.github.com/repos/huggingface/transformers/issues/6169/events | https://github.com/huggingface/transformers/pull/6169 | 669,480,667 | MDExOlB1bGxSZXF1ZXN0NDU5ODQzMjg2 | 6,169 | Create README.md | {
"login": "kolk",
"id": 9049591,
"node_id": "MDQ6VXNlcjkwNDk1OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9049591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kolk",
"html_url": "https://github.com/kolk",
"followers_url": "https://api.github.com/users/kolk/followers",
"following_url": "https://api.github.com/users/kolk/following{/other_user}",
"gists_url": "https://api.github.com/users/kolk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kolk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kolk/subscriptions",
"organizations_url": "https://api.github.com/users/kolk/orgs",
"repos_url": "https://api.github.com/users/kolk/repos",
"events_url": "https://api.github.com/users/kolk/events{/privacy}",
"received_events_url": "https://api.github.com/users/kolk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6169?src=pr&el=h1) Report\n> Merging [#6169](https://codecov.io/gh/huggingface/transformers/pull/6169?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c0b93a1c7a961e30b30d02d641c9d22120ef5d73&el=desc) will **decrease** coverage by `0.87%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6169?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6169 +/- ##\n==========================================\n- Coverage 79.82% 78.94% -0.88% \n==========================================\n Files 146 146 \n Lines 26597 26597 \n==========================================\n- Hits 21231 20997 -234 \n- Misses 5366 5600 +234 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6169?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6169/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6169/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.18% <0.00%> (-34.62%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6169/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6169/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6169/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6169/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6169/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.19% <0.00%> (+63.97%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6169?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6169?src=pr&el=footer). Last update [c0b93a1...ea1f76a](https://codecov.io/gh/huggingface/transformers/pull/6169?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks @kolk!"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | README for MiniLM-L12-H384-uncased for QA | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6169/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6169",
"html_url": "https://github.com/huggingface/transformers/pull/6169",
"diff_url": "https://github.com/huggingface/transformers/pull/6169.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6169.patch",
"merged_at": 1596184116000
} |
https://api.github.com/repos/huggingface/transformers/issues/6168 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6168/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6168/comments | https://api.github.com/repos/huggingface/transformers/issues/6168/events | https://github.com/huggingface/transformers/pull/6168 | 669,462,327 | MDExOlB1bGxSZXF1ZXN0NDU5ODI2Njc2 | 6,168 | Albert pretrain datasets/ datacollator | {
"login": "yl-to",
"id": 23205976,
"node_id": "MDQ6VXNlcjIzMjA1OTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/23205976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yl-to",
"html_url": "https://github.com/yl-to",
"followers_url": "https://api.github.com/users/yl-to/followers",
"following_url": "https://api.github.com/users/yl-to/following{/other_user}",
"gists_url": "https://api.github.com/users/yl-to/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yl-to/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yl-to/subscriptions",
"organizations_url": "https://api.github.com/users/yl-to/orgs",
"repos_url": "https://api.github.com/users/yl-to/repos",
"events_url": "https://api.github.com/users/yl-to/events{/privacy}",
"received_events_url": "https://api.github.com/users/yl-to/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger @LysandreJik thanks for the advice, will update soon!",
"Great, let us know when this PR is ready to review again!",
"@LysandreJik @sgugger Ready for reviewing again! Thanks for all the suggestions. \r\nI noticed that the check_code_quality test is failed however I can't see the actual failing part, please let me know if this matters.\r\nAnd also please let me know if there are addition modification required, thanks guys!",
"@LysandreJik @sgugger tests added and style check modification were done. Please help me to review this if got time, thanks!\r\n\r\nBesides, I did black reformat all the check_code_quality required files. However it could not pass the tests in CI, have no idea why this happen.\r\n\r\n```\r\nblack --line-length 119 --target-version py35 src/transformers/data/datasets/language_modeling.py\r\nAll done! ✨ 🍰 ✨\r\n1 file left unchanged.\r\n```",
"This is because your black/isort versions aren't up to date. This is not a problem, I just pushed to your branch with the fix, but there's a remaining issue with flake8 that you will have to fix:\r\n\r\n```\r\nsrc/transformers/data/data_collator.py:253:13: F841 local variable 'attention_padding_mask' is assigned to but never used\r\nsrc/transformers/data/datasets/language_modeling.py:153:21: F541 f-string is missing placeholders\r\n```",
"Thanks for adding the test, it's great!",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6168?src=pr&el=h1) Report\n> Merging [#6168](https://codecov.io/gh/huggingface/transformers/pull/6168?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ed71c21d6afcbfa2d8e5bb03acbb88ae0e0ea56a?el=desc) will **decrease** coverage by `0.38%`.\n> The diff coverage is `96.58%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6168?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6168 +/- ##\n==========================================\n- Coverage 79.51% 79.13% -0.39% \n==========================================\n Files 164 164 \n Lines 31022 31137 +115 \n==========================================\n- Hits 24668 24641 -27 \n- Misses 6354 6496 +142 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6168?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.31% <ø> (ø)` | |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `92.94% <95.23%> (+2.13%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.18% <100.00%> (+0.88%)` | :arrow_up: |\n| [src/transformers/data/datasets/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL19faW5pdF9fLnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `19.71% <0.00%> (-72.34%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.66% <0.00%> (-0.55%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6168?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6168?src=pr&el=footer). Last update [ed71c21...911b5b4](https://codecov.io/gh/huggingface/transformers/pull/6168?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> This is because your black/isort versions aren't up to date. This is not a problem, I just pushed to your branch with the fix, but there's a remaining issue with flake8 that you will have to fix:\r\n> \r\n> ```\r\n> src/transformers/data/data_collator.py:253:13: F841 local variable 'attention_padding_mask' is assigned to but never used\r\n> src/transformers/data/datasets/language_modeling.py:153:21: F541 f-string is missing placeholders\r\n> ```\r\n\r\nresolved. @LysandreJik \r\nPlease help to review again, thanks!",
"Thanks for all your efforts on this!",
"> Thanks for all your efforts on this!\r\n\r\nthanks for reviewing!",
"> Very cool, I updated the style again.\r\n> \r\n> Thanks for iterating!\r\n\r\nthanks for your help!",
"First of all I have to admit, I am new here, so still trying to understand how different modalities work in hugging face.\r\nGoing through the documents, it seems that the modifications proposed by @yl-to are only for PyTorch. Right ?\r\nPretraining of ALBERT with TF is still not supported ? ",
"That is correct @UmarSpa! However, models trained in PyTorch can easily be ported to TensorFlow if you're looking to serve a model using TensorFlow."
] | 1,596 | 1,603 | 1,599 | CONTRIBUTOR | null | partially fix #5984
Adds support for ALBERT model pretraining (a rough sketch of the sentence-order step follows the list below):
- Add `AlbertTextDataset` class
- Create `segment_ids` and `sentence_order_labels` attributes for the sentence order prediction task
- Add `DataCollatorForAlbertPretrain` class
  - inherited from `DataCollatorForLanguageModeling` class
  - creates `attention_mask` for both masked and padding tokens
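To make the sentence-order-prediction (SOP) part concrete, here is a minimal sketch; the helper name and token ids are placeholders, not the code actually added in this PR:

```python
# Hypothetical SOP example builder for ALBERT pretraining (illustrative only).
import random
from typing import List, Tuple

import torch


def make_sop_example(
    seg_a: List[int], seg_b: List[int], cls_id: int = 2, sep_id: int = 3
) -> Tuple[torch.Tensor, torch.Tensor, int]:
    """Return input_ids, segment_ids and a sentence_order_label (0 = in order)."""
    label = 0
    if random.random() < 0.5:  # swap the two consecutive segments half the time
        seg_a, seg_b = seg_b, seg_a
        label = 1
    input_ids = [cls_id] + seg_a + [sep_id] + seg_b + [sep_id]
    segment_ids = [0] * (len(seg_a) + 2) + [1] * (len(seg_b) + 1)
    return torch.tensor(input_ids), torch.tensor(segment_ids), label
```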
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6168/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6168",
"html_url": "https://github.com/huggingface/transformers/pull/6168",
"diff_url": "https://github.com/huggingface/transformers/pull/6168.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6168.patch",
"merged_at": 1599738989000
} |
https://api.github.com/repos/huggingface/transformers/issues/6167 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6167/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6167/comments | https://api.github.com/repos/huggingface/transformers/issues/6167/events | https://github.com/huggingface/transformers/pull/6167 | 669,303,516 | MDExOlB1bGxSZXF1ZXN0NDU5NjkxOTYx | 6,167 | fix the slow tests doc | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Nope. They appear again, 2 paras later.\r\n\r\nMoreover, the 2 deleted instructions are themselves a problem as they are identical ;)"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | remove unnecessary duplication wrt `RUN_SLOW=yes`
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6167/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6167",
"html_url": "https://github.com/huggingface/transformers/pull/6167",
"diff_url": "https://github.com/huggingface/transformers/pull/6167.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6167.patch",
"merged_at": 1596806253000
} |
https://api.github.com/repos/huggingface/transformers/issues/6166 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6166/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6166/comments | https://api.github.com/repos/huggingface/transformers/issues/6166/events | https://github.com/huggingface/transformers/pull/6166 | 669,283,923 | MDExOlB1bGxSZXF1ZXN0NDU5Njc1ODI3 | 6,166 | [wip] diagnose MT metrics regression from pl 0.8.5 upgrade | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 2206883508,
"node_id": "MDU6TGFiZWwyMjA2ODgzNTA4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/lightning",
"name": "lightning",
"color": "a707bc",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6166?src=pr&el=h1) Report\n> Merging [#6166](https://codecov.io/gh/huggingface/transformers/pull/6166?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c0b93a1c7a961e30b30d02d641c9d22120ef5d73&el=desc) will **decrease** coverage by `1.38%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6166?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6166 +/- ##\n==========================================\n- Coverage 79.82% 78.44% -1.39% \n==========================================\n Files 146 146 \n Lines 26597 26597 \n==========================================\n- Hits 21231 20863 -368 \n- Misses 5366 5734 +368 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6166?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.18% <0.00%> (-34.62%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.32% <0.00%> (+23.67%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.19% <0.00%> (+63.97%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6166?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6166?src=pr&el=footer). Last update [c0b93a1...68319f0](https://codecov.io/gh/huggingface/transformers/pull/6166?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | **Base Command:**
```bash
BS=8
GAS=4
MAX_LEN=128
python finetune.py \
--learning_rate=3e-5 \
--do_train \
--val_check_interval=0.25 \
--adam_eps 1e-06 \
--num_train_epochs 6 --src_lang en_XX --tgt_lang ro_RO \
--data_dir $ENRO_DIR \
--max_source_length $MAX_LEN --max_target_length $MAX_LEN --val_max_target_length $MAX_LEN --test_max_target_length $MAX_LEN \
--train_batch_size=$BS --eval_batch_size=$BS --gradient_accumulation_steps=$GAS \
--task translation \
--warmup_steps 500 \
--freeze_embeds \
--model_name_or_path=facebook/mbart-large-cc25 \
--label_smoothing 0.1 --freeze_embeds --gpus 1 --logger_name wandb --sortish_sampler \
$@
```
**Clues:**
- many more steps per epoch in wandb on `distillmbart` branch
- lr reasonable on both branches.
- loss much **higher** on `distillmbart` branch.
- val_avg_bleu after ¼ epoch much higher (23 vs 19)
- fp32 loss goes to NaN after 0.25 epochs (`bru_baseline_pl85_fp32`).
**Suspects:**
- not lr scheduler, though lr schedules differ (because the step counts differ, I presume)
- **early stopping**
- maybe fp16_opt_level being used regardless of `--fp16`?
  (maybe the scaler line in PL?)
- optimizer_step unchanged besides `lr_scheduler.step` and the scheduler is clearly stepping. Feels wrong.
- dataloader shuffle/`setup`
- src_lens change in LineByLine ds
- just a change in the way val metrics are computed?
**TLDR**:
can get test BLEU = 26.27 with gradient accumulation steps=1 and no early stopping:
```bash
./train_mbart_cc25_enro.sh --output_dir bru_pl85_long --label_smoothing 0.1 --freeze_embeds --logger_name wandb --sortish_sampler --fp16_opt_level O1 --gpus 1
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6166/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6166",
"html_url": "https://github.com/huggingface/transformers/pull/6166",
"diff_url": "https://github.com/huggingface/transformers/pull/6166.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6166.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6165 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6165/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6165/comments | https://api.github.com/repos/huggingface/transformers/issues/6165/events | https://github.com/huggingface/transformers/pull/6165 | 669,269,216 | MDExOlB1bGxSZXF1ZXN0NDU5NjYzMjAw | 6,165 | update min tf requirements | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello !\r\n\r\nWhich tests do not pass precisely? They are all green for me.\r\n\r\nFor the log that is not displayed this is because no logger is set in `benchmark_test.py`. I will review that for later, thanks!",
"> Which tests do not pass precisely? They are all green for me.\r\n\r\nMost (all?) tests. Try:\r\n```\r\npip install tensorflow==2.0.1\r\npytest -ra tests/test_benchmark.py \r\n```\r\n\r\nBut unrelated to tests if the runtime requires `foo>=x.y.z`, then the requirements/setup need to require that exact version. A user may (1) already have `foo` installed and then runtime fails (2) a different package may require a lower version of `foo` and thus `pip` won't upgrade it to the latest version available.\r\n\r\n> For the log that is not displayed this is because no logger is set in `benchmark_test.py`. I will review that for later, thanks!\r\n\r\nThat was just an example, as I said most, if not all tests fail with the same cryptic error.",
"OK, I see better now. Thanks!!\n\nI have fixed this in another PR by updating the piece of code you raised by an assert inside the `__init__` of the TFTrainer and now your example works fine. We do not want to fix the TensorFlow version for the entire lib but only for the trainer, at least for now.",
"I understand.\r\n\r\nPlease let me know which PR if you'd like me to re-test, or when it gets merged and I will re-test then.\r\n\r\nThank you, @jplu ",
"Now the fix is merged :)",
"Thank you for remembering to ping me. I re-tested with master and the tests now work with tensorflow==2.0.1 - thank you very much, @jplu "
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | All of the test suite is failing w/o this update - need to re-run `pip install -e .[dev]`
note that the failing tests don't show `"You need to run the TensorFlow trainer with at least the version 2.2.0, your version is {` anywhere, so perhaps the test fixtures need some extra tweaks; but since non-tf tests fail too, the problem is in the core.
Not sure if perhaps this code needs to be replaced with an assert - then the error message will always be there regardless of where it's used.
```
if parse(tf.__version__).release < (2, 2, 0):
logger.info(
"You need to run the TensorFlow trainer with at least the version 2.2.0, your version is {}".format(
tf.__version__
)
)
sys.exit(1)
```
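For illustration, an assert-based variant might look like the sketch below (illustrative only, not necessarily the fix that should be merged):

```python
# Sketch: fail fast with the same message instead of logging and sys.exit(1).
import tensorflow as tf
from packaging.version import parse

assert parse(tf.__version__).release >= (2, 2, 0), (
    "You need to run the TensorFlow trainer with at least the version 2.2.0, "
    f"your version is {tf.__version__}"
)
```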
Currently, if I run **any** test, including pytorch-only tests, with tf < 2.2 I get:
```
____________________________________________________________ ERROR collecting tests/test_benchmark.py ____________________________________________________________
tests/test_benchmark.py:6: in <module>
from transformers import AutoConfig, is_torch_available
src/transformers/__init__.py:659: in <module>
from .trainer_tf import TFTrainer
src/transformers/trainer_tf.py:34: in <module>
sys.exit(1)
E SystemExit: 1
======================================================================== warnings summary ========================================================================
```
To reproduce:
```
pip install tensorflow==2.0.1
pytest -ra tests/test_benchmark.py
```
@jplu
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6165/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6165",
"html_url": "https://github.com/huggingface/transformers/pull/6165",
"diff_url": "https://github.com/huggingface/transformers/pull/6165.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6165.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6164 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6164/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6164/comments | https://api.github.com/repos/huggingface/transformers/issues/6164/events | https://github.com/huggingface/transformers/issues/6164 | 669,251,445 | MDU6SXNzdWU2NjkyNTE0NDU= | 6,164 | RoBERTa ``tokenizer.decode`` does not produce the same sentence. | {
"login": "flyaway1217",
"id": 1570846,
"node_id": "MDQ6VXNlcjE1NzA4NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1570846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flyaway1217",
"html_url": "https://github.com/flyaway1217",
"followers_url": "https://api.github.com/users/flyaway1217/followers",
"following_url": "https://api.github.com/users/flyaway1217/following{/other_user}",
"gists_url": "https://api.github.com/users/flyaway1217/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flyaway1217/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flyaway1217/subscriptions",
"organizations_url": "https://api.github.com/users/flyaway1217/orgs",
"repos_url": "https://api.github.com/users/flyaway1217/repos",
"events_url": "https://api.github.com/users/flyaway1217/events{/privacy}",
"received_events_url": "https://api.github.com/users/flyaway1217/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Seems to be an edge case from cleaning up tokenization on decoding :\r\n\r\nhttps://github.com/huggingface/transformers/blob/c0b93a1c7a961e30b30d02d641c9d22120ef5d73/src/transformers/tokenization_utils_base.py#L2688\r\n\r\n---\r\n\r\nFor this specific case, a work-around can be :\r\n\r\n`ss = tokenizer.decode(input_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)`\r\n\r\n---\r\n\r\nBut I think it's a bug. Is there any way to improve `clean_up_tokenization()` function ? \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.15.0-74-generic-x86_64-with-glibc2.27
- Python version: 3.8.0
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No.
- Using distributed or parallel set-up in script?: No.
### Who can help
@mfuntowicz
## Information
Model I am using (Bert, XLNet ...): RoBERTa
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
This code example should reproduce the issue:
```python
from transformers import RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
s = """Meanwhile, Tucci's 'straight guy', the emphatic doctor Seger, is not developed into a more interesting character, like the fallible 'straight guys' Cuddy and Wilson."""
outputs = tokenizer(s)
input_ids = outputs['input_ids']
ss = tokenizer.decode(input_ids, skip_special_tokens=True)
print('s='+s)
print('ss='+ss)
```
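As pointed out in the comments on this issue, a workaround is to skip the decode-time cleanup step (this sidesteps rather than fixes the underlying `clean_up_tokenization()` behavior):

```python
# Workaround from the thread: disable clean-up so the spaces are preserved.
ss = tokenizer.decode(input_ids, skip_special_tokens=True,
                      clean_up_tokenization_spaces=False)
```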
## Expected behavior
I expect ``s`` and ``ss`` to be exactly the same. However, they are not. The outputs are:
```bash
s=Meanwhile, Tucci's 'straight guy', the emphatic doctor Seger, is not developed into a more interesting character, like the fallible 'straight guys' Cuddy and Wilson.
ss=Meanwhile, Tucci's'straight guy', the emphatic doctor Seger, is not developed into a more interesting character, like the fallible'straight guys' Cuddy and Wilson.
```
A space is missing before ``'straight guy'`` and before ``'straight guys'``.
I am not sure if this behavior is expected or it is a bug.
The thing is I want to use the sentence produced by the ``decode`` function and I find the output is not exactly the same as the original sentence.
Thanks for the help!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6164/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6163 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6163/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6163/comments | https://api.github.com/repos/huggingface/transformers/issues/6163/events | https://github.com/huggingface/transformers/pull/6163 | 669,219,987 | MDExOlB1bGxSZXF1ZXN0NDU5NjIxMDkz | 6,163 | correct the correction | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6163?src=pr&el=h1) Report\n> Merging [#6163](https://codecov.io/gh/huggingface/transformers/pull/6163?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a2f6d521c1d7ebd7e079bc62bee014c8d00b2547&el=desc) will **increase** coverage by `1.02%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6163?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6163 +/- ##\n==========================================\n+ Coverage 78.59% 79.61% +1.02% \n==========================================\n Files 146 146 \n Lines 26597 26597 \n==========================================\n+ Hits 20904 21176 +272 \n+ Misses 5693 5421 -272 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6163?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6163/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.50% <ø> (ø)` | |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6163/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `60.56% <0.00%> (-35.22%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6163/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.18% <0.00%> (-34.62%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6163/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.65% <0.00%> (-23.68%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6163/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `81.70% <0.00%> (-4.77%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6163/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.19% <0.00%> (+63.97%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6163/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6163?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6163?src=pr&el=footer). Last update [a2f6d52...4d3f303](https://codecov.io/gh/huggingface/transformers/pull/6163?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,597 | 1,596 | CONTRIBUTOR | null | proved to be a different file, so extra path corrections.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6163/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6163",
"html_url": "https://github.com/huggingface/transformers/pull/6163",
"diff_url": "https://github.com/huggingface/transformers/pull/6163.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6163.patch",
"merged_at": 1596146403000
} |
https://api.github.com/repos/huggingface/transformers/issues/6162 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6162/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6162/comments | https://api.github.com/repos/huggingface/transformers/issues/6162/events | https://github.com/huggingface/transformers/pull/6162 | 669,160,451 | MDExOlB1bGxSZXF1ZXN0NDU5NTY5MTA3 | 6,162 | typos | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6162/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6162",
"html_url": "https://github.com/huggingface/transformers/pull/6162",
"diff_url": "https://github.com/huggingface/transformers/pull/6162.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6162.patch",
"merged_at": 1596143907000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6161 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6161/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6161/comments | https://api.github.com/repos/huggingface/transformers/issues/6161/events | https://github.com/huggingface/transformers/issues/6161 | 669,152,961 | MDU6SXNzdWU2NjkxNTI5NjE= | 6,161 | Padding Strategy Code missing an else case (maybe?) | {
"login": "amanpreet692",
"id": 42522643,
"node_id": "MDQ6VXNlcjQyNTIyNjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/42522643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amanpreet692",
"html_url": "https://github.com/amanpreet692",
"followers_url": "https://api.github.com/users/amanpreet692/followers",
"following_url": "https://api.github.com/users/amanpreet692/following{/other_user}",
"gists_url": "https://api.github.com/users/amanpreet692/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amanpreet692/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amanpreet692/subscriptions",
"organizations_url": "https://api.github.com/users/amanpreet692/orgs",
"repos_url": "https://api.github.com/users/amanpreet692/repos",
"events_url": "https://api.github.com/users/amanpreet692/events{/privacy}",
"received_events_url": "https://api.github.com/users/amanpreet692/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue also applies to the `truncation` parameter.\r\n\r\nI assumed the enums are supposed to be used directly because the release notes (https://github.com/huggingface/transformers/releases/tag/v3.0.0) explicitly mention the `TensorType` enum, which is defined right below the `PaddingStrategy` and `TruncationStrategy` enums.\r\n\r\nI agree that this is a problem that should be fixed, if the enums are meant to be used.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I think we should fix this, wdyt @LysandreJik ?",
"I believe this was already fixed by https://github.com/huggingface/transformers/pull/7610",
"Nice, thanks!"
] | 1,596 | 1,604 | 1,604 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: macOS 10.15.5
- Python version: 3.7
- PyTorch version (GPU?): 1.5 GPU-Yes
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
tokenizers: @mfuntowicz
Summarization: @sshleifer
T5: @patrickvonplaten
## Information
Model I am using (T5 via AutoTokenizer):
The problem arises when using:
```python
tokenizer([line], max_length=max_length, padding='max_length' if pad_to_max_length else False,
          truncation=True, return_tensors=return_tensors, **extra_kw)
```
In batch encoding, the latest code decides on a padding strategy:
```python
def _get_padding_truncation_strategies(
    self, padding=False, truncation=False, max_length=None, pad_to_multiple_of=None, verbose=True, **kwargs
):
    ...
    elif padding is not False:
        if padding is True:
            padding_strategy = PaddingStrategy.LONGEST  # Default to pad to the longest sequence in the batch
        elif not isinstance(padding, PaddingStrategy):
            padding_strategy = PaddingStrategy(padding)
```
While calling the tokenizer, instead of 'max_length' I first gave the actual PaddingStrategy.MAX_LENGTH Enum as argument,
but the above code throws an error as 'padding_strategy' is not defined.
## To reproduce
Call the tokenizer as:
```python
tokenizer([line], max_length=max_length, padding=PaddingStrategy.MAX_LENGTH if pad_to_max_length else False,
          truncation=True, return_tensors=return_tensors, **extra_kw)
```
## Expected behavior
The PaddingStrategy enum should be accepted and assigned without issue.
## Suggested Solution
```python
    elif padding is not False:
        if padding is True:
            padding_strategy = PaddingStrategy.LONGEST  # Default to pad to the longest sequence in the batch
        elif not isinstance(padding, PaddingStrategy):
            padding_strategy = PaddingStrategy(padding)
        else:
            padding_strategy = padding
```
It's a one line fix basically, I can raise a PR for the same, unless PaddingStrategy wasn't designed to be used directly?
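For completeness, here is a usage sketch assuming the fix above is applied; the model name and inputs are placeholders, and the import path is my best guess for v3.0.x:

```python
# Hypothetical usage once the else branch exists: pass the enum directly.
from transformers import AutoTokenizer
from transformers.tokenization_utils_base import PaddingStrategy

tokenizer = AutoTokenizer.from_pretrained("t5-small")
batch = tokenizer(
    ["translate English to German: Hello"],
    max_length=32,
    padding=PaddingStrategy.MAX_LENGTH,
    truncation=True,
)
print(len(batch["input_ids"][0]))  # 32, i.e. padded to max_length
```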
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6161/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6160 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6160/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6160/comments | https://api.github.com/repos/huggingface/transformers/issues/6160/events | https://github.com/huggingface/transformers/issues/6160 | 669,152,126 | MDU6SXNzdWU2NjkxNTIxMjY= | 6,160 | run_squad.py eval metrics meaning | {
"login": "batyas",
"id": 66080205,
"node_id": "MDQ6VXNlcjY2MDgwMjA1",
"avatar_url": "https://avatars.githubusercontent.com/u/66080205?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/batyas",
"html_url": "https://github.com/batyas",
"followers_url": "https://api.github.com/users/batyas/followers",
"following_url": "https://api.github.com/users/batyas/following{/other_user}",
"gists_url": "https://api.github.com/users/batyas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/batyas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/batyas/subscriptions",
"organizations_url": "https://api.github.com/users/batyas/orgs",
"repos_url": "https://api.github.com/users/batyas/repos",
"events_url": "https://api.github.com/users/batyas/events{/privacy}",
"received_events_url": "https://api.github.com/users/batyas/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The model assign correctness probability to every answer it produce. if this probability crossing the threshold it means that the model predict their is no answer to the question. The threshold is picked as the threshold that achieve the best f1/exact score on the dev set. The best f1/exact is the result achieved with the best threshold found.\r\n\r\nUnfortunately, their is a [bug](https://github.com/huggingface/transformers/pull/7319) in run_squad.py code which was used for the training and evaluation of most of the models in the library and the results you see in their model cards are incorrect. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,596 | 1,614 | 1,614 | NONE | null | I am having difficulty understanding what exactly the best_f1 and best_exact scores that are outputted in the run_squad.py evaluation mean. (The scores are computed in the squad_metrics script, found [here](https://github.com/huggingface/transformers/blob/master/src/transformers/data/metrics/squad_metrics.py)).
What are the "scores" the best calculations are working with, what do the metrics represent, and when/is the best_threshold value employed during training?
Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6160/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6159 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6159/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6159/comments | https://api.github.com/repos/huggingface/transformers/issues/6159/events | https://github.com/huggingface/transformers/issues/6159 | 669,067,906 | MDU6SXNzdWU2NjkwNjc5MDY= | 6,159 | OSError: Unable to load weights from pytorch checkpoint file. | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! Do you mind pasting your environment information here so that we may take a look?",
"Try to delete cache directory files.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hey, I trained my model on GPT2-small but I am not able to load it! It gives off the following error:\r\n\r\n Unable to load weights from pytorch checkpoint file for '{pretrained_model_name_or_path}' OSError: Unable to load weights \r\n from pytorch checkpoint file for '/mounted/models/train-on-test1/' at '/mounted/models/train-on-test1/pytorch_model.bin' If you \r\n tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.\r\n\r\n@leeivan @LysandreJik ",
"Please open a new issue with your specific problem, alongside all the information related to your environment as asked in the template. Thank you.",
"Same as <https://github.com/huggingface/transformers/issues/6620>, i guess it is because the checkpoint file is not a true checkpoint file: such as a git lfs file.",
"#6159, #6970, #6620 are all same issue. \r\nIn my case I cloned the checkpoint file using git lfs and issue was resolved. Earlier I had used pointer to avoid git lfs however it gave this error. \r\nFor some changing torch, transformer, tokenizer versions helped. \r\nAlso you can go through the docs and check some from_pretrained parameters like force_download, from_tf and try.",
"For futuer visitors: [check this](https://discuss.pytorch.org/t/getting-an-error-unpicklingerror-invalid-load-key-v-in-pytorch-model-deploying-in-streamlit/107768/4)"
] | 1,596 | 1,625 | 1,602 | NONE | null | Hello,
When I try to load the `Roberta-large` pre-trained model, I get the following error:
```python
model_RobertaForMultipleChoice = RobertaForMultipleChoice.from_pretrained('roberta-large', output_hidden_states = True)
OUT:
OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```
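Remedies suggested in the comments on this issue include deleting the cache directory files or forcing a fresh download, in case the cached checkpoint is truncated or is a git-lfs pointer file rather than the real weights:

```python
# From the thread: re-download in case the cached checkpoint file is corrupted.
from transformers import RobertaForMultipleChoice

model_RobertaForMultipleChoice = RobertaForMultipleChoice.from_pretrained(
    'roberta-large', output_hidden_states=True, force_download=True
)
```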
How can I solve this issue? Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6159/reactions",
"total_count": 11,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6159/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6158 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6158/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6158/comments | https://api.github.com/repos/huggingface/transformers/issues/6158/events | https://github.com/huggingface/transformers/pull/6158 | 669,052,715 | MDExOlB1bGxSZXF1ZXN0NDU5NDcyMDY3 | 6,158 | Add CircleCI config to run TPU tests. | {
"login": "zcain117",
"id": 14796584,
"node_id": "MDQ6VXNlcjE0Nzk2NTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/14796584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zcain117",
"html_url": "https://github.com/zcain117",
"followers_url": "https://api.github.com/users/zcain117/followers",
"following_url": "https://api.github.com/users/zcain117/following{/other_user}",
"gists_url": "https://api.github.com/users/zcain117/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zcain117/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zcain117/subscriptions",
"organizations_url": "https://api.github.com/users/zcain117/orgs",
"repos_url": "https://api.github.com/users/zcain117/repos",
"events_url": "https://api.github.com/users/zcain117/events{/privacy}",
"received_events_url": "https://api.github.com/users/zcain117/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6158?src=pr&el=h1) Report\n> Merging [#6158](https://codecov.io/gh/huggingface/transformers/pull/6158?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ec0267475c16a1913e64cb4f81fd54d153e3d815&el=desc) will **decrease** coverage by `1.02%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6158?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6158 +/- ##\n==========================================\n- Coverage 79.38% 78.36% -1.03% \n==========================================\n Files 146 146 \n Lines 26454 26454 \n==========================================\n- Hits 21001 20730 -271 \n- Misses 5453 5724 +271 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6158?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.75%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <0.00%> (+1.16%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+10.71%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+12.87%)` | :arrow_up: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.80% <0.00%> (+25.66%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.32% <0.00%> (+29.90%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6158?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6158?src=pr&el=footer). Last update [ec02674...a662ce9](https://codecov.io/gh/huggingface/transformers/pull/6158?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"`check_code_quality` failed with:\r\n\r\n```\r\n torch from https://files.pythonhosted.org/packages/38/53/914885a93a44b96c0dd1c36f36ff10afe341f091230aad68f7228d61db1e/torch-1.6.0-cp36-cp36m-manylinux1_x86_64.whl#sha256=7669f4d923b5758e28b521ea749c795ed67ff24b45ba20296bc8cff706d08df8 (from transformers==3.0.2):\r\n Expected sha256 7669f4d923b5758e28b521ea749c795ed67ff24b45ba20296bc8cff706d08df8\r\n Got 36bbf4ab202de410d764b9156f3925b7d7037ad046f20690e576725a3826a2ac\r\n```\r\n\r\nI don't see how this latest commit could have caused this. I'll retry later",
"See https://github.com/huggingface/transformers/pull/6219"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | For every incoming commit, this PR will create a Docker image containing the commit's latest code and will run that Docker image on Google Kubernetes Engine on a TPU. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6158/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6158",
"html_url": "https://github.com/huggingface/transformers/pull/6158",
"diff_url": "https://github.com/huggingface/transformers/pull/6158.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6158.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6157 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6157/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6157/comments | https://api.github.com/repos/huggingface/transformers/issues/6157/events | https://github.com/huggingface/transformers/pull/6157 | 669,042,387 | MDExOlB1bGxSZXF1ZXN0NDU5NDYyNjYx | 6,157 | Harmonize both Trainers API | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6157?src=pr&el=h1) Report\n> Merging [#6157](https://codecov.io/gh/huggingface/transformers/pull/6157?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ec0267475c16a1913e64cb4f81fd54d153e3d815&el=desc) will **increase** coverage by `0.40%`.\n> The diff coverage is `67.47%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6157?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6157 +/- ##\n==========================================\n+ Coverage 79.38% 79.79% +0.40% \n==========================================\n Files 146 146 \n Lines 26454 26607 +153 \n==========================================\n+ Hits 21001 21230 +229 \n+ Misses 5453 5377 -76 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6157?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (ø)` | |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.50% <ø> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <ø> (ø)` | |\n| [src/transformers/tokenization\\_bert\\_japanese.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9qYXBhbmVzZS5weQ==) | `32.05% <0.00%> (+1.56%)` | :arrow_up: |\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `13.09% <ø> (-3.05%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.14% <23.94%> (-1.83%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.36% <84.34%> (+0.86%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <88.88%> (+0.44%)` | :arrow_up: |\n| [src/transformers/hf\\_argparser.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcmdwYXJzZXIucHk=) | `69.23% <100.00%> (+2.96%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.83% <100.00%> (+1.39%)` | :arrow_up: |\n| ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6157?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6157?src=pr&el=footer). 
Last update [603cd81...d95e283](https://codecov.io/gh/huggingface/transformers/pull/6157?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | COLLABORATOR | null | As discussed after the latest rework of TFTrainer.
Also renamed references to "master" processes in our API to "main", keeping the old names behind deprecation warnings.
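Roughly the deprecation pattern this implies (a sketch with illustrative names, not the exact diff):
```python
import warnings


class TrainingArguments:
    @property
    def main_process_index(self) -> int:  # hypothetical new name
        return 0

    @property
    def master_process_index(self) -> int:  # hypothetical old name, kept as an alias
        warnings.warn(
            "master_process_index is deprecated, use main_process_index instead.",
            FutureWarning,
        )
        return self.main_process_index
```
 | {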
"url": "https://api.github.com/repos/huggingface/transformers/issues/6157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6157/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6157",
"html_url": "https://github.com/huggingface/transformers/pull/6157",
"diff_url": "https://github.com/huggingface/transformers/pull/6157.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6157.patch",
"merged_at": 1596203004000
} |
https://api.github.com/repos/huggingface/transformers/issues/6156 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6156/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6156/comments | https://api.github.com/repos/huggingface/transformers/issues/6156/events | https://github.com/huggingface/transformers/issues/6156 | 668,946,837 | MDU6SXNzdWU2Njg5NDY4Mzc= | 6,156 | should mBART-large-en-ro have decoder_start_token_id by default? | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 2009457320,
"node_id": "MDU6TGFiZWwyMDA5NDU3MzIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/translation",
"name": "translation",
"color": "b2d2f4",
"default": false,
"description": "machine translation utilities and models"
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @sshleifer, I'd like to contribute and help out here if still needed. My thinking is to remove ```decoder_start_token_id``` from run_eval.py and generation_utils.py and change the following code:\r\n\r\nhttps://github.com/huggingface/transformers/blob/6028ed92bd9e5471e6f8a1d23cfd95a3a63018fb/src/transformers/generation_utils.py#L403-L409\r\n\r\nto:\r\n\r\n input_ids = torch.full(\r\n (effective_batch_size * num_beams, 1),\r\n 250020,\r\n dtype=torch.long,\r\n device=next(self.parameters()).device,\r\n )",
"I dont think that change will do anything since decoder_start_token_id = 250020.\r\n\r\nWhat I would do is change the 250020 to a bos_token_id (0, I think) or a pad_token_id (1) and see what the BLEU score is. \r\n",
"Ah yes that makes sense. I tried those two and the eos_token_id and got the following results:\r\n\r\nID | BLEU Score\r\n-- | --\r\neos_token_id (2) | 28.22\r\ndecoder_start_token_id (250020) | 28.06\r\npad_token_id (1) | 26.79\r\nbos_token_id (0) | 26.01\r\n\r\n",
"Super interesting, thanks for running that. It seems like I should change decoder_start_token_id in the mbart-large-en-ro config to 2. Do you have opinions on mbart-large-cc25?",
"No problem! Yes I think configuring decoder_start_token_id to 2 is a good idea. Unfortunately, I'm getting the same issues you're getting with mbart-large-cc25 (output's in English not Romanian and missing the first word when I use bos_token_id or 250020 and gibberish with eos/pad_token_id) and don't understand why that's the case. I'll investigate and post any useful findings.\r\n\r\n\r\n",
"I think I fixed this another way in #6526 \r\non master\r\n```\r\npython run_eval.py facebook/mbart-large-en-ro $ENRO_DIR/test.source eos_baseline_enro_test_generations.txt \\\r\n--reference_path $ENRO_DIR/test.target \\\r\n--score_path baseline_test_bleu_eos.json --bs 32 --task translation --fp16\r\n```\r\n=> {'bleu': 26.81}\r\n\r\n\r\n```\r\npython run_eval.py facebook/mbart-large-en-ro $ENRO_DIR/test.source \\\r\neos_baseline_enro_test_generations.txt --reference_path $ENRO_DIR/test.target \\\r\n--score_path baseline_test_bleu_eos.json --bs 32 --task translation --fp16 \\\r\n--decoder_start_token_id 2\r\n```\r\n{'bleu': 11.57} (and takes 40 mins!)\r\n\r\nin the original fairseq I get 26.83.",
"Gunna close this since the score is now basically the same as fairseq. Thanks for your help!"
] | 1,596 | 1,601 | 1,598 | CONTRIBUTOR | null | Hypothesis: since the argument `prepend_bos` is set to "False" in fairseq/examples/README.md, mbart-large-en-ro does not need `decoder_start_token_id`.
TODO:
- create branch that deletes `decoder_start_token_id`. Setting it to None in the config might not be enough.
- verify that decoder_start_token_id is in fact not being used by setting a breakpoint in `generate` (see the sketch after this list).
- run_eval.py on wmt-en-ro/test and see if BLEU is >= 26.46, the score with decoder_start_token_id=250020.
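A minimal sketch of that verification step (hedged: assumes a recent `transformers` with `AutoModelForSeq2SeqLM`; the input sentence is arbitrary, and `decoder_start_token_id=2` is the best-scoring value reported in the comments):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/mbart-large-en-ro")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-en-ro")

batch = tok(["UN Chief Says There Is No Military Solution in Syria"], return_tensors="pt")
generated = model.generate(**batch, decoder_start_token_id=2)  # override under test
print(generated[0][:3])  # the leading ids show which token decoding actually started from
```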
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6156/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6156/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6155 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6155/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6155/comments | https://api.github.com/repos/huggingface/transformers/issues/6155/events | https://github.com/huggingface/transformers/pull/6155 | 668,906,054 | MDExOlB1bGxSZXF1ZXN0NDU5MzM5NzQ2 | 6,155 | Model output test | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6155?src=pr&el=h1) Report\n> Merging [#6155](https://codecov.io/gh/huggingface/transformers/pull/6155?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/91cb95461e438dc57555c4f57f8ce95a56328036&el=desc) will **increase** coverage by `0.10%`.\n> The diff coverage is `75.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6155?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6155 +/- ##\n==========================================\n+ Coverage 78.35% 78.46% +0.10% \n==========================================\n Files 146 146 \n Lines 26454 26454 \n==========================================\n+ Hits 20729 20758 +29 \n+ Misses 5725 5696 -29 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6155?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <ø> (ø)` | |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <66.66%> (-0.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `95.49% <100.00%> (-0.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <0.00%> (-63.98%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `89.21% <0.00%> (-2.24%)` | :arrow_down: |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `81.55% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.71% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `81.75% <0.00%> (ø)` | |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6155/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6155?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6155?src=pr&el=footer). Last update [91cb954...33ebdb9](https://codecov.io/gh/huggingface/transformers/pull/6155?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | COLLABORATOR | null | Step 2 of the strategy for the new model outputs as outlined on the [forum](https://discuss.huggingface.co/t/new-model-output-types/195/8).
Use the `return_dict` argument introduced in #6138 in all tests and remove all unpacking from the tests.
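A sketch of the kind of change this implies in a test (the tiny model and inputs below are placeholders):
```python
import torch
from transformers import BertConfig, BertForSequenceClassification

model = BertForSequenceClassification(BertConfig(num_hidden_layers=1))  # tiny placeholder
inputs = {"input_ids": torch.tensor([[101, 102]]), "labels": torch.tensor([0])}

# Before: positional unpacking of the output tuple.
loss, logits = model(**inputs)[:2]

# After: named attribute access on the returned output object.
output = model(**inputs, return_dict=True)
loss, logits = output.loss, output.logits
```
 | {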
"url": "https://api.github.com/repos/huggingface/transformers/issues/6155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6155/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6155",
"html_url": "https://github.com/huggingface/transformers/pull/6155",
"diff_url": "https://github.com/huggingface/transformers/pull/6155.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6155.patch",
"merged_at": 1596203077000
} |
https://api.github.com/repos/huggingface/transformers/issues/6154 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6154/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6154/comments | https://api.github.com/repos/huggingface/transformers/issues/6154/events | https://github.com/huggingface/transformers/issues/6154 | 668,822,836 | MDU6SXNzdWU2Njg4MjI4MzY= | 6,154 | Hidden State Embedding-Transformers | {
"login": "DaniMlk",
"id": 28568281,
"node_id": "MDQ6VXNlcjI4NTY4Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/28568281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DaniMlk",
"html_url": "https://github.com/DaniMlk",
"followers_url": "https://api.github.com/users/DaniMlk/followers",
"following_url": "https://api.github.com/users/DaniMlk/following{/other_user}",
"gists_url": "https://api.github.com/users/DaniMlk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DaniMlk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DaniMlk/subscriptions",
"organizations_url": "https://api.github.com/users/DaniMlk/orgs",
"repos_url": "https://api.github.com/users/DaniMlk/repos",
"events_url": "https://api.github.com/users/DaniMlk/events{/privacy}",
"received_events_url": "https://api.github.com/users/DaniMlk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Did you have a look at [this older issue](https://github.com/huggingface/transformers/issues/1950)?",
"> Hi! Did you have a look at [this older issue](https://github.com/huggingface/transformers/issues/1950)?\r\n\r\nYes I did, but my concern is that if I want to fine-tune it on my raw text data (language model with LM head) then how should I use it for sentence embedding? Can I just remove the LM head of it?",
"I think this is not necessary since, according to the [documentation](https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm), you can access every hidden states of your model without removing the LM head.",
"> I think this is not necessary since, according to the [documentation](https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm), you can access every hidden state of your model without removing the LM head.\r\n\r\nYes, thank you for your reply, I figured it out the language model network can be split into two parts which are \r\n```\r\nself.bert = BertModel(config)\r\nself.cls = BertOnlyMLMHead(config)\r\n```\r\nthen I just need to get the output from self.bert if I want to access the hidden states."
] | 1,596 | 1,596 | 1,596 | NONE | null | Hi everybody, I want to use a BERT model to get an embedding for a sentence after fine-tuning it on raw text. I was wondering whether that is possible, and whether anybody could help me with it?
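A minimal sketch of one common approach (hedged: the fine-tuned checkpoint path is a placeholder, and mean pooling is just one possible choice):
```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Loading an LM-fine-tuned checkpoint into BertModel keeps the encoder and drops the LM head.
model = BertModel.from_pretrained("path/to/finetuned-lm")  # placeholder path

inputs = tokenizer("An example sentence.", return_tensors="pt")
with torch.no_grad():
    last_hidden = model(**inputs)[0]   # shape: (batch, seq_len, hidden_size)
embedding = last_hidden.mean(dim=1)    # mean-pooled sentence embedding
```
 | {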
"url": "https://api.github.com/repos/huggingface/transformers/issues/6154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6154/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6153 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6153/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6153/comments | https://api.github.com/repos/huggingface/transformers/issues/6153/events | https://github.com/huggingface/transformers/pull/6153 | 668,613,073 | MDExOlB1bGxSZXF1ZXN0NDU5MDc3NjAw | 6,153 | readme m3hrdadfi/albert-fa-base-v2 | {
"login": "m3hrdadfi",
"id": 2601833,
"node_id": "MDQ6VXNlcjI2MDE4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2601833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/m3hrdadfi",
"html_url": "https://github.com/m3hrdadfi",
"followers_url": "https://api.github.com/users/m3hrdadfi/followers",
"following_url": "https://api.github.com/users/m3hrdadfi/following{/other_user}",
"gists_url": "https://api.github.com/users/m3hrdadfi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/m3hrdadfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/m3hrdadfi/subscriptions",
"organizations_url": "https://api.github.com/users/m3hrdadfi/orgs",
"repos_url": "https://api.github.com/users/m3hrdadfi/repos",
"events_url": "https://api.github.com/users/m3hrdadfi/events{/privacy}",
"received_events_url": "https://api.github.com/users/m3hrdadfi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6153?src=pr&el=h1) Report\n> Merging [#6153](https://codecov.io/gh/huggingface/transformers/pull/6153?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d24ea708d742263efe4f4b8d525402f2d916c96c&el=desc) will **increase** coverage by `1.88%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6153?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6153 +/- ##\n==========================================\n+ Coverage 77.19% 79.08% +1.88% \n==========================================\n Files 146 146 \n Lines 26403 26403 \n==========================================\n+ Hits 20382 20880 +498 \n+ Misses 6021 5523 -498 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6153?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <0.00%> (-63.98%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+1.80%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+2.73%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.71% <0.00%> (+5.76%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `81.00% <0.00%> (+14.00%)` | :arrow_up: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <0.00%> (+17.09%)` | :arrow_up: |\n| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6153?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6153?src=pr&el=footer). Last update [d24ea70...1adb2ce](https://codecov.io/gh/huggingface/transformers/pull/6153?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"That's great, thanks for sharing this very detailed model card 🤗 \r\n\r\n➡️ **[model page](https://huggingface.co/m3hrdadfi/albert-fa-base-v2)**\r\n\r\nWould you like to add sample inputs for Persian, either to https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts (open a Pull request) or to your specific model card?",
"> That's great, thanks for sharing this very detailed model card 🤗\r\n> \r\n> ➡️ **[model page](https://huggingface.co/m3hrdadfi/albert-fa-base-v2)**\r\n> \r\n> Would you like to add sample inputs for Persian, either to https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts (open a Pull request) or to your specific model card?\r\n\r\nYes, sure, why not! I have added a couple of samples to `DefaultWidget.ts` and opened a PL!"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | model_card readme for m3hrdadfi/albert-fa-base-v2 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6153/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6153",
"html_url": "https://github.com/huggingface/transformers/pull/6153",
"diff_url": "https://github.com/huggingface/transformers/pull/6153.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6153.patch",
"merged_at": 1596190747000
} |
https://api.github.com/repos/huggingface/transformers/issues/6152 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6152/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6152/comments | https://api.github.com/repos/huggingface/transformers/issues/6152/events | https://github.com/huggingface/transformers/issues/6152 | 668,546,527 | MDU6SXNzdWU2Njg1NDY1Mjc= | 6,152 | Using BertWordPiece Tokenizer | {
"login": "bhavaygg",
"id": 43617111,
"node_id": "MDQ6VXNlcjQzNjE3MTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/43617111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavaygg",
"html_url": "https://github.com/bhavaygg",
"followers_url": "https://api.github.com/users/bhavaygg/followers",
"following_url": "https://api.github.com/users/bhavaygg/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavaygg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavaygg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavaygg/subscriptions",
"organizations_url": "https://api.github.com/users/bhavaygg/orgs",
"repos_url": "https://api.github.com/users/bhavaygg/repos",
"events_url": "https://api.github.com/users/bhavaygg/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavaygg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! Are you trying to load a BERT vocabulary in a RoBERTa tokenizer? This unfortunately won't work, as the mechanisms between the WordPiece and Byte level BPE are inherently different.",
"Hi, I could not find documentation for it so I thought id try. Another thing I wanted to ask is that I am working on a dataset of recipes so I have a list of ingredients in order. I now remove random ingredients and predict them using various models. I have tried Seq2Seq models and Roberta to solve this problem but both give poor results. In my opinion, this problem is somewhat similar to NLP problems but significantly different because tokenizing like BERT does not give any advantage and creates more problems. Do you have any architecture in mind that might be more suited to tackle this problem?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
The BERT WordPiece tokenizer only saves a vocab file when the model is saved, while other tokenizers such as the byte-level BPE one also save a merges file. When I try to load the tokenizer after saving it, with `RobertaTokenizerFast.from_pretrained("./EsperBERTo_italian", max_len=512)`, I get the following error:
```
OSError: Model name './EsperBERTo_italian' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed './EsperBERTo_italian' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.
```
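The mismatch, sketched (hedged: a WordPiece `vocab.txt` pairs with the BERT tokenizer classes, while byte-level BPE classes such as `RobertaTokenizerFast` expect `vocab.json` plus `merges.txt`):
```python
from transformers import BertTokenizerFast

# Same directory, but loaded with the class that matches the WordPiece vocab file.
tokenizer = BertTokenizerFast.from_pretrained("./EsperBERTo_italian", model_max_length=512)
```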
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6152/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6151 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6151/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6151/comments | https://api.github.com/repos/huggingface/transformers/issues/6151/events | https://github.com/huggingface/transformers/pull/6151 | 668,454,633 | MDExOlB1bGxSZXF1ZXN0NDU4OTQwMTA0 | 6,151 | Add Pytorch Native AMP support in Trainer | {
"login": "prajjwal1",
"id": 24690051,
"node_id": "MDQ6VXNlcjI0NjkwMDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/24690051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prajjwal1",
"html_url": "https://github.com/prajjwal1",
"followers_url": "https://api.github.com/users/prajjwal1/followers",
"following_url": "https://api.github.com/users/prajjwal1/following{/other_user}",
"gists_url": "https://api.github.com/users/prajjwal1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prajjwal1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prajjwal1/subscriptions",
"organizations_url": "https://api.github.com/users/prajjwal1/orgs",
"repos_url": "https://api.github.com/users/prajjwal1/repos",
"events_url": "https://api.github.com/users/prajjwal1/events{/privacy}",
"received_events_url": "https://api.github.com/users/prajjwal1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | PyTorch 1.6 introduces native AMP support. This eliminates the need to build and install Apex, adds flexibility, and addresses the problems highlighted in [Apex #818](https://github.com/NVIDIA/apex/issues/818). This is the recommended way to use AMP.
With this PR, Trainer will automatically use PyTorch's native AMP if PyTorch 1.6 is installed; otherwise, it will use Apex.
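For context, this is the native pattern being adopted (a generic `torch.cuda.amp` sketch, not the exact Trainer diff):
```python
import torch

model = torch.nn.Linear(10, 2).cuda()                  # toy stand-in for the real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()                   # new in PyTorch 1.6

for _ in range(3):                                     # toy training loop
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                    # forward pass runs in mixed precision
        loss = model(torch.randn(8, 10, device="cuda")).sum()
    scaler.scale(loss).backward()                      # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)                             # unscales grads, then optimizer.step()
    scaler.update()
```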
This PR will close [#6115](https://github.com/huggingface/transformers/issues/6115). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6151/reactions",
"total_count": 23,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 7,
"confused": 0,
"heart": 6,
"rocket": 5,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6151/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6151",
"html_url": "https://github.com/huggingface/transformers/pull/6151",
"diff_url": "https://github.com/huggingface/transformers/pull/6151.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6151.patch",
"merged_at": 1596183810000
} |
https://api.github.com/repos/huggingface/transformers/issues/6150 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6150/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6150/comments | https://api.github.com/repos/huggingface/transformers/issues/6150/events | https://github.com/huggingface/transformers/issues/6150 | 668,370,397 | MDU6SXNzdWU2NjgzNzAzOTc= | 6,150 | 🐛 T5 Tokenizer ignores \n \t characters and more than one whitespace together | {
"login": "misrasaurabh1",
"id": 1271289,
"node_id": "MDQ6VXNlcjEyNzEyODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1271289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/misrasaurabh1",
"html_url": "https://github.com/misrasaurabh1",
"followers_url": "https://api.github.com/users/misrasaurabh1/followers",
"following_url": "https://api.github.com/users/misrasaurabh1/following{/other_user}",
"gists_url": "https://api.github.com/users/misrasaurabh1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/misrasaurabh1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/misrasaurabh1/subscriptions",
"organizations_url": "https://api.github.com/users/misrasaurabh1/orgs",
"repos_url": "https://api.github.com/users/misrasaurabh1/repos",
"events_url": "https://api.github.com/users/misrasaurabh1/events{/privacy}",
"received_events_url": "https://api.github.com/users/misrasaurabh1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@TevenLeScao @sshleifer @patrickvonplaten This looks like a serious problem with the T5 Tokenizer. Is this behavior expected?",
"Closing this issue as sentencepiece for T5 removes more than one whitespace as a standard https://github.com/google-research/text-to-text-transfer-transformer/issues/390#issuecomment-688417703"
] | 1,596 | 1,599 | 1,599 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2 (master)
- Platform: Linux-4.9.0-12-amd64-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten @sshleifer
## Information
The T5 tokenizer is based on SentencePiece, where whitespace is treated as a basic symbol, but the huggingface tokenizer simply collapses any run of more than one whitespace character.
Consider that all of the following examples tokenize to the same thing:
```
from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("t5-base")
print(tokenizer.tokenize("Hi there I'm good"))
>> ['▁Hi', '▁there', '▁I', "'", 'm', '▁good']
print(tokenizer.tokenize("Hi there I'm good"))
>> ['▁Hi', '▁there', '▁I', "'", 'm', '▁good']
print(tokenizer.tokenize("Hi there I'm good\n"))
>> ['▁Hi', '▁there', '▁I', "'", 'm', '▁good']
print(tokenizer.tokenize("Hi there \n I'm good\n"))
>> ['▁Hi', '▁there', '▁I', "'", 'm', '▁good']
print(tokenizer.tokenize("Hi there \n I'm good\n"))
>> ['▁Hi', '▁there', '▁I', "'", 'm', '▁good']
print(tokenizer.tokenize("Hi there \n \t I'm good\n"))
>> ['▁Hi', '▁there', '▁I', "'", 'm', '▁good']
print(tokenizer.tokenize("Hi there\nI'm good"))
>> ['▁Hi', '▁there', '▁I', "'", 'm', '▁good']
```
All these examples should tokenize to different representations. Also, ignoring newlines outright means that any application that relies on newlines fails.
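The collapse also shows up when mapping tokens back to a string with the public `convert_tokens_to_string`:
```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
tokens = tokenizer.tokenize("Hi there \n \t I'm good\n")
print(tokenizer.convert_tokens_to_string(tokens))
# >> Hi there I'm good   (the newline/tab structure is not recoverable)
```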
Model I am using (Bert, XLNet ...): T5
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
All of the whitespace variants should produce different tokenizations. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6150/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6150/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6149 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6149/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6149/comments | https://api.github.com/repos/huggingface/transformers/issues/6149/events | https://github.com/huggingface/transformers/pull/6149 | 668,323,175 | MDExOlB1bGxSZXF1ZXN0NDU4ODM2MjA2 | 6,149 | [s2s] add support for overriding config params | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6149?src=pr&el=h1) Report\n> Merging [#6149](https://codecov.io/gh/huggingface/transformers/pull/6149?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/54f9fbeff822ec0547fd23d0338654456925f6b7&el=desc) will **increase** coverage by `1.32%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6149?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6149 +/- ##\n==========================================\n+ Coverage 78.35% 79.68% +1.32% \n==========================================\n Files 146 146 \n Lines 26403 26403 \n==========================================\n+ Hits 20689 21039 +350 \n+ Misses 5714 5364 -350 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6149?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6149/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <0.00%> (-63.98%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6149/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `60.56% <0.00%> (-35.22%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6149/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6149/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.71% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6149/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6149/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+34.61%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6149/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6149?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6149?src=pr&el=footer). Last update [54f9fbe...5476a9e](https://codecov.io/gh/huggingface/transformers/pull/6149?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"The code looks perfect.",
"> I'm surprised you didn't need to change the `CHEAP_ARGS` constant in the tests.\r\n\r\nbecause the new args are optional? Unless you mean something else.\r\n\r\n...Working on the tests. ",
"Added tests as suggested. ",
"good alias\r\n```bash\r\nsty () {\r\n\tmake style\r\n\tflake8 examples templates tests src utils\r\n}\r\n```"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | add support for overriding model params:
```
python finetune.py --encoder_layerdrop 0.1 --decoder_layerdrop 0.1 --dropout 0.1 --attention_dropout 0.1
```
as requested at https://github.com/huggingface/transformers/issues/6018
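The mechanism is roughly the following (a sketch, not the exact diff):
```python
from argparse import Namespace
from transformers import AutoConfig

# Stand-ins for the parsed CLI args and the model config.
hparams = Namespace(encoder_layerdrop=0.1, decoder_layerdrop=0.1, dropout=0.1, attention_dropout=0.1)
config = AutoConfig.from_pretrained("facebook/bart-large")

# Copy any CLI-supplied values onto the config before the model is built.
for name in ("encoder_layerdrop", "decoder_layerdrop", "dropout", "attention_dropout"):
    value = getattr(hparams, name, None)
    if value is not None:
        setattr(config, name, value)
```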
`README.md` seems to be mostly the editor removing superfluous whitespace - not sure why github shows it - normally it doesn't. The only added doc section is https://github.com/stas00/transformers/blob/seq2seq-train_params-1/examples/seq2seq/README.md#finetuning-training-params
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6149/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6149",
"html_url": "https://github.com/huggingface/transformers/pull/6149",
"diff_url": "https://github.com/huggingface/transformers/pull/6149.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6149.patch",
"merged_at": 1596085787000
} |
https://api.github.com/repos/huggingface/transformers/issues/6148 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6148/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6148/comments | https://api.github.com/repos/huggingface/transformers/issues/6148/events | https://github.com/huggingface/transformers/issues/6148 | 668,320,961 | MDU6SXNzdWU2NjgzMjA5NjE= | 6,148 | tokenize cache for examples/language-modeling | {
"login": "Jiaxin-Wen",
"id": 48146603,
"node_id": "MDQ6VXNlcjQ4MTQ2NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/48146603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jiaxin-Wen",
"html_url": "https://github.com/Jiaxin-Wen",
"followers_url": "https://api.github.com/users/Jiaxin-Wen/followers",
"following_url": "https://api.github.com/users/Jiaxin-Wen/following{/other_user}",
"gists_url": "https://api.github.com/users/Jiaxin-Wen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jiaxin-Wen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jiaxin-Wen/subscriptions",
"organizations_url": "https://api.github.com/users/Jiaxin-Wen/orgs",
"repos_url": "https://api.github.com/users/Jiaxin-Wen/repos",
"events_url": "https://api.github.com/users/Jiaxin-Wen/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jiaxin-Wen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"as it takes me about 7 minutes to tokenize a train set (size is 400w) every time I start training. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | NONE | null | # 🚀 Feature request
I see that transformers already has a cache for tokenized results in examples/token-classification.
I think examples/language-modeling, which typically works with a much larger dataset, needs that too.
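A minimal sketch of the kind of cache meant here (hedged: file names, paths, and the GPT-2 tokenizer are placeholders):
```python
import os

import torch
from transformers import GPT2Tokenizer

train_path, block_size = "train.txt", 512  # placeholders
cache_file = f"cached_lm_{os.path.basename(train_path)}_{block_size}"

if os.path.exists(cache_file):
    input_ids = torch.load(cache_file)  # cache hit: skip re-tokenizing on every run
else:
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    lines = [line for line in open(train_path, encoding="utf-8") if line.strip()]
    input_ids = tokenizer(lines, truncation=True, max_length=block_size)["input_ids"]
    torch.save(input_ids, cache_file)  # cache miss: tokenize once and save
```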
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6148/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6147 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6147/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6147/comments | https://api.github.com/repos/huggingface/transformers/issues/6147/events | https://github.com/huggingface/transformers/issues/6147 | 668,315,884 | MDU6SXNzdWU2NjgzMTU4ODQ= | 6,147 | the documents for transformer don't work | {
"login": "dutyhong",
"id": 7098332,
"node_id": "MDQ6VXNlcjcwOTgzMzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7098332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dutyhong",
"html_url": "https://github.com/dutyhong",
"followers_url": "https://api.github.com/users/dutyhong/followers",
"following_url": "https://api.github.com/users/dutyhong/following{/other_user}",
"gists_url": "https://api.github.com/users/dutyhong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dutyhong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dutyhong/subscriptions",
"organizations_url": "https://api.github.com/users/dutyhong/orgs",
"repos_url": "https://api.github.com/users/dutyhong/repos",
"events_url": "https://api.github.com/users/dutyhong/events{/privacy}",
"received_events_url": "https://api.github.com/users/dutyhong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The link works, but the documentation was not properly done at this version. You should check a more recent version.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | NONE | null | # ❓ Questions & Help
## I find the documentation at https://huggingface.co/transformers/v2.5.0/model_doc/bert.html#bertmodel doesn't work; the class links are broken
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6147/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6146 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6146/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6146/comments | https://api.github.com/repos/huggingface/transformers/issues/6146/events | https://github.com/huggingface/transformers/issues/6146 | 668,290,839 | MDU6SXNzdWU2NjgyOTA4Mzk= | 6,146 | 🌟 Mirostat decoding algorithm | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,684 | 1,602 | CONTRIBUTOR | null | # 🌟 Mirostat: A Perplexity-Controlled Neural Text Decoding Algorithm
## Description
Paper : https://arxiv.org/pdf/2007.14966.pdf
Abstract :
> [...] We use this analysis to design a feedback-based adaptive top-k text decoding algorithm called mirostat that generates text (of any length) with a predetermined value of perplexity, and thereby high-quality text without any tuning. [...] Mirostat avoids both traps: experiments show that cross-entropy has a near-linear relation with repetition in generated text. This relation is almost independent of the sampling method but slightly dependent on the model used. Hence, for a given language model, control over perplexity also gives control over repetitions.
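For intuition, here is a rough sketch of the feedback loop the abstract describes (a paraphrase of the idea, not the authors' exact algorithm; `tau` is the target surprise in bits and `mu` a running truncation threshold):
```python
import torch

def mirostat_like_step(logits, mu, tau=3.0, eta=0.1):
    # One decoding step: drop tokens whose surprise exceeds mu, sample from the rest,
    # then nudge mu so the observed surprise tracks the target tau.
    probs = torch.softmax(logits, dim=-1)
    surprise = -torch.log2(probs)
    allowed = surprise < mu
    if not allowed.any():
        allowed[probs.argmax()] = True  # always keep at least the most likely token
    filtered = torch.where(allowed, probs, torch.zeros_like(probs))
    filtered = filtered / filtered.sum()
    token = torch.multinomial(filtered, 1).item()
    mu = mu - eta * (surprise[token].item() - tau)  # feedback update
    return token, mu
```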
## Open source status
* [x] the model implementation is available: https://github.com/basusourya/mirostat
* [x] the model weights are available: _Not applicable_
* [x] who are the authors: @basusourya
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6146/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6146/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6145 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6145/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6145/comments | https://api.github.com/repos/huggingface/transformers/issues/6145/events | https://github.com/huggingface/transformers/issues/6145 | 668,266,622 | MDU6SXNzdWU2NjgyNjY2MjI= | 6,145 | TOKENIZER: truncation not working for batch | {
"login": "PyAntony",
"id": 24689636,
"node_id": "MDQ6VXNlcjI0Njg5NjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/24689636?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PyAntony",
"html_url": "https://github.com/PyAntony",
"followers_url": "https://api.github.com/users/PyAntony/followers",
"following_url": "https://api.github.com/users/PyAntony/following{/other_user}",
"gists_url": "https://api.github.com/users/PyAntony/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PyAntony/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PyAntony/subscriptions",
"organizations_url": "https://api.github.com/users/PyAntony/orgs",
"repos_url": "https://api.github.com/users/PyAntony/repos",
"events_url": "https://api.github.com/users/PyAntony/events{/privacy}",
"received_events_url": "https://api.github.com/users/PyAntony/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have updated to version 3.0.2 now and the tokenizer is working properly. I am closing this issue."
] | 1,596 | 1,596 | 1,596 | NONE | null | ## Environment info
- `transformers` version: 2.11.0
- Platform: Linux-4.15.0-106-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.4
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
examples/distillation: @VictorSanh
## Information
Model I am using: DistilBertForSequenceClassification.
The tokenizer does not truncate when I pass a list of strings. It only works when I pass a single string.
## To reproduce
Copy/paste (or just read) the code below.
```python
from transformers import DistilBertTokenizer
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
sentence = "Submit a bug report to help us improve transformers."
output = tokenizer(sentence, padding=True, truncation=True, max_length=4)
print(output['input_ids'])
# 4 tokens as expected from *max_length=4*
# out: [101, 12040, 1037, 102]
# now let's test with multiple sentences
sentences = [
"Submit a bug report to help us improve transformers.",
"Benchmark a part of this library and share your results"
]
output = tokenizer(sentences, padding=True, truncation=True, max_length=4)
print(output['input_ids'])
# output is returning all tokens, it is not truncating to max_length!
# out: [[101, 12040, 1037, 11829, 3189, 2000, 2393, 2149, 5335, 19081, 1012, 102, 0],
# [101, 6847, 10665, 1037, 2112, 1997, 2023, 3075, 1998, 3745, 2115, 3463, 102]]
```
## Expected
```python
# output truncated to max_length (4 as in the example)
# out: [[101, 12040, 1037, 102],
# [101, 6847, 10665, 102]]
```
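For anyone verifying the fix after upgrading (see the closing comment above), a quick sanity check reusing the variables from the snippet:
```python
# Every row should respect max_length once the batch-truncation bug is fixed.
output = tokenizer(sentences, padding=True, truncation=True, max_length=4)
assert all(len(ids) == 4 for ids in output["input_ids"])
```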
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6145/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6144 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6144/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6144/comments | https://api.github.com/repos/huggingface/transformers/issues/6144/events | https://github.com/huggingface/transformers/issues/6144 | 668,188,239 | MDU6SXNzdWU2NjgxODgyMzk= | 6,144 | Question-Answering pipeline doesn't work anymore with long text | {
"login": "dipanjanS",
"id": 3448263,
"node_id": "MDQ6VXNlcjM0NDgyNjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3448263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dipanjanS",
"html_url": "https://github.com/dipanjanS",
"followers_url": "https://api.github.com/users/dipanjanS/followers",
"following_url": "https://api.github.com/users/dipanjanS/following{/other_user}",
"gists_url": "https://api.github.com/users/dipanjanS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dipanjanS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dipanjanS/subscriptions",
"organizations_url": "https://api.github.com/users/dipanjanS/orgs",
"repos_url": "https://api.github.com/users/dipanjanS/repos",
"events_url": "https://api.github.com/users/dipanjanS/events{/privacy}",
"received_events_url": "https://api.github.com/users/dipanjanS/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"Also if I look back at my code,\r\n\r\n```\r\n!pip install transformers==2.11.0\r\n```\r\n\r\n\r\n\r\n\r\nStill works for me with a larger context (same code as above). Any idea which is the default model being used there and if that would still work for transfomers 3.x ?\r\n\r\n",
"@LysandreJik , @sshleifer would be great if you could look into this, assign this to the right folks.",
"Assigned @mfuntowicz, the master of pipelines. He's in holidays right now, so I'll try to look into it in the coming days.",
"It isn't just long contexts. I was running some QA on SQuAD2.0 and came across an instance where I received that error for a given context and question but the context is not that long. \r\n```\r\nfrom transformers import pipeline\r\n\r\nmodel_path = \"twmkn9/distilbert-base-uncased-squad2\"\r\n\r\nhfreader = pipeline('question-answering', model=model_path, tokenizer=model_path, device=0)\r\n\r\ncontext = \"\"\"\r\nThe Norman dynasty had a major political, cultural and military impact on \r\nmedieval Europe and even the Near East. The Normans were famed for their \r\nmartial spirit and eventually for their Christian piety, becoming exponents of \r\nthe Catholic orthodoxy into which they assimilated. They adopted the \r\nGallo-Romance language of the Frankish land they settled, their dialect \r\nbecoming known as Norman, Normaund or Norman French, an important literary \r\nlanguage. The Duchy of Normandy, which they formed by treaty with the French \r\ncrown, was a great fief of medieval France, and under Richard I of Normandy was \r\nforged into a cohesive and formidable principality in feudal tenure. The \r\nNormans are noted both for their culture, such as their unique Romanesque \r\narchitecture and musical traditions, and for their significant military \r\naccomplishments and innovations. Norman adventurers founded the Kingdom of \r\nSicily under Roger II after conquering southern Italy on the Saracens and \r\nByzantines, and an expedition on behalf of their duke, William the Conqueror, \r\nled to the Norman conquest of England at the Battle of Hastings in 1066. Norman \r\ncultural and military influence spread from these new European centres to the \r\nCrusader states of the Near East, where their prince Bohemond I founded the \r\nPrincipality of Antioch in the Levant, to Scotland and Wales in Great Britain, \r\nto Ireland, and to the coasts of north Africa and the Canary Islands.\r\n\"\"\"\r\n\r\nquestion2 = \"Who assimilted the Roman language?\"\r\n\r\nhfreader(question=question2, context=context)\r\n\r\n```\r\n\r\n### Error Message:\r\n```\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\n<ipython-input-144-45135f680e80> in <module>()\r\n----> 1 hfreader(question=question2, context=context)\r\n\r\n1 frames\r\n/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in <listcomp>(.0)\r\n 1314 ),\r\n 1315 }\r\n-> 1316 for s, e, score in zip(starts, ends, scores)\r\n 1317 ]\r\n 1318 \r\n\r\nKeyError: 0\r\n```\r\n\r\n\r\nBut if I changed the question and keep the same context, the pipeline completes the execution. \r\n```\r\nquestion1 = \"Who was famed for their Christian spirit?\"\r\nhfreader(question=question1, context=context)\r\n```\r\n\r\n### Output\r\n```\r\n{'answer': 'Normans', 'end': 127, 'score': 0.5337043597899815, 'start': 120}\r\n```",
"Thanks @melaniebeck for this, even I encountered this just earlier today. Would definitely be great if the team can figure out how these could be resolved in v3.x for transformers.",
"i also encountered this issue (keyerror : 0)\r\n\r\nit's not even long text (about 8-12 words length)\r\n\r\nsometime it occured when i'm changing some word in the question with oov word\r\n\r\n```\r\n rv = self.dispatch_request()\r\n0|QA | File \"/home/samsul/.local/lib/python3.6/site-packages/flask/app.py\", line 1935, in dispatch_request\r\n0|QA | return self.view_functions[rule.endpoint](**req.view_args)\r\n0|QA | File \"/home/samsul/question-answering/app.py\", line 23, in search\r\n0|QA | answer = nlp({'question': question,'context': context})\r\n0|QA | File \"/home/samsul/.local/lib/python3.6/site-packages/transformers/pipelines.py\", line 1316, in __call__\r\n0|QA | for s, e, score in zip(starts, ends, scores)\r\n0|QA | File \"/home/samsul/.local/lib/python3.6/site-packages/transformers/pipelines.py\", line 1316, in <listcomp>\r\n0|QA | for s, e, score in zip(starts, ends, scores)\r\n0|QA | KeyError: 0\r\n```",
"Hello! There has been a few fixes on the pipelines since version v3.0.2 came out. I can reproduce this issue on v3.0.1 and v3.0.2, but not on the master branch, as it has probably been fixed already.\r\n\r\nCould you try installing from source (`pip install git+https://github.com/huggingface/transformers`) and let me know if that fixes your issue?",
"hi @LysandreJik \r\n\r\nseems the problem still occurred but now its keyerror 17\r\n\r\n**input**\r\n```\r\n!pip install git+https://github.com/huggingface/transformers\r\nfrom transformers import pipeline\r\n\r\nnlp = pipeline('question-answering',model='a-ware/xlmroberta-squadv2',device=0)\r\nnlp({'question': \"siapa istri samsul?\",'context': \"nama saya samsul, saya adalah suami raisa\"})\r\n```\r\n\r\n**Error**\r\n```\r\n/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)\r\n 1676 ),\r\n 1677 }\r\n-> 1678 for s, e, score in zip(starts, ends, scores)\r\n 1679 ]\r\n 1680 \r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in <listcomp>(.0)\r\n 1676 ),\r\n 1677 }\r\n-> 1678 for s, e, score in zip(starts, ends, scores)\r\n 1679 ]\r\n 1680 \r\n\r\nKeyError: 17\r\n```\r\n\r\ni also try the case from @dipanjanS (the first post)\r\n\r\nstill got some error:\r\n```\r\n/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in <dictcomp>(.0)\r\n 1636 with torch.no_grad():\r\n 1637 # Retrieve the score for the context tokens only (removing question tokens)\r\n-> 1638 fw_args = {k: torch.tensor(v, device=self.device) for (k, v) in fw_args.items()}\r\n 1639 start, end = self.model(**fw_args)[:2]\r\n 1640 start, end = start.cpu().numpy(), end.cpu().numpy()\r\n\r\nValueError: expected sequence of length 384 at dim 1 (got 317)\r\n```\r\n\r\n",
"https://github.com/huggingface/transformers/blob/f6cb0f806efecb64df40c946dacaad0adad33d53/src/transformers/pipelines.py#L1618 is causing this issue. Padding to max_length solves this problem.\r\nCurrently, if the text is long, the final span is not padded to the max_seq_len of the model.",
"Yes agreed I think that is related to the recent code push based on the PR\nlinked earlier. Would be great if this could be looked into HF team!\n\n\n\nOn Tue, Aug 11, 2020 at 11:18 PM Binoy Dalal <[email protected]>\nwrote:\n\n>\n> https://github.com/huggingface/transformers/blob/f6cb0f806efecb64df40c946dacaad0adad33d53/src/transformers/pipelines.py#L1618\n> <https://mailtrack.io/trace/link/26fa516997f20e87e713b4c04065c74bbadf3226?url=https%3A%2F%2Fgithub.com%2Fhuggingface%2Ftransformers%2Fblob%2Ff6cb0f806efecb64df40c946dacaad0adad33d53%2Fsrc%2Ftransformers%2Fpipelines.py%23L1618&userId=3535544&signature=c1f087ce57177138>\n> is causing this issue. Padding to max_length solves this problem.\n> Currently, if the text is long, the final span is not padded to the\n> max_seq_len of the model.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://mailtrack.io/trace/link/4b6aa40826e8c36d7aebe9207d4f60b6bd245a74?url=https%3A%2F%2Fgithub.com%2Fhuggingface%2Ftransformers%2Fissues%2F6144%23issuecomment-672130943&userId=3535544&signature=a98099ac20ab30b6>,\n> or unsubscribe\n> <https://mailtrack.io/trace/link/2613e10aaae39303a4e72607615d815ac84ac486?url=https%3A%2F%2Fgithub.com%2Fnotifications%2Funsubscribe-auth%2FAA2J3R3U2QJ6XORN26GC5JTSAF767ANCNFSM4PMDZHVQ&userId=3535544&signature=aef7c99cd5c66eaa>\n> .\n>\n",
"Solved by https://github.com/huggingface/transformers/issues/6875",
"Awesome thanks folks!"
] | 1,596 | 1,599 | 1,598 | NONE | null | Transformers version: 3.0.2
The question-answering models don't seem to work with long text anymore; any idea why this is happening? I have tried with the default model in `pipeline` as well as with specific models.
For example:
__Sample Code:__
```
from transformers import pipeline
nlp_qa = pipeline('question-answering') # 1st try
nlp_qa = pipeline('question-answering', model='deepset/roberta-base-squad2') # 2nd try
context = """
Coronaviruses are a large family of viruses which may cause illness in animals or humans.
In humans, several coronaviruses are known to cause respiratory infections ranging from the
common cold to more severe diseases such as Middle East Respiratory Syndrome (MERS) and Severe Acute Respiratory Syndrome (SARS).
The most recently discovered coronavirus causes coronavirus disease COVID-19.
COVID-19 is the infectious disease caused by the most recently discovered coronavirus.
This new virus and disease were unknown before the outbreak began in Wuhan, China, in December 2019.
COVID-19 is now a pandemic affecting many countries globally.
The most common symptoms of COVID-19 are fever, dry cough, and tiredness.
Other symptoms that are less common and may affect some patients include aches
and pains, nasal congestion, headache, conjunctivitis, sore throat, diarrhea,
loss of taste or smell or a rash on skin or discoloration of fingers or toes.
These symptoms are usually mild and begin gradually.
Some people become infected but only have very mild symptoms.
Most people (about 80%) recover from the disease without needing hospital treatment.
Around 1 out of every 5 people who gets COVID-19 becomes seriously ill and develops difficulty breathing.
Older people, and those with underlying medical problems like high blood pressure, heart and lung problems,
diabetes, or cancer, are at higher risk of developing serious illness.
However, anyone can catch COVID-19 and become seriously ill.
People of all ages who experience fever and/or cough associated with difficulty breathing/shortness of breath,
chest pain/pressure, or loss of speech or movement should seek medical attention immediately.
If possible, it is recommended to call the health care provider or facility first,
so the patient can be directed to the right clinic.
People can catch COVID-19 from others who have the virus.
The disease spreads primarily from person to person through small droplets from the nose or mouth,
which are expelled when a person with COVID-19 coughs, sneezes, or speaks.
These droplets are relatively heavy, do not travel far and quickly sink to the ground.
People can catch COVID-19 if they breathe in these droplets from a person infected with the virus.
This is why it is important to stay at least 1 meter away from others.
These droplets can land on objects and surfaces around the person such as tables, doorknobs and handrails.
People can become infected by touching these objects or surfaces, then touching their eyes, nose or mouth.
This is why it is important to wash your hands regularly with soap and water or clean with alcohol-based hand rub.
Practicing hand and respiratory hygiene is important at ALL times and is the best way to protect others and yourself.
When possible maintain at least a 1 meter distance between yourself and others.
This is especially important if you are standing by someone who is coughing or sneezing.
Since some infected persons may not yet be exhibiting symptoms or their symptoms may be mild,
maintaining a physical distance with everyone is a good idea if you are in an area where COVID-19 is circulating.
"""
nlp_qa(context=context, question='What is a coronavirus ?')
```
__Error Message:__
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-15-ddac1f9cb68e> in <module>()
----> 1 nlp_qa(context=context, question='What is a coronavirus ?')
1 frames
/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in <listcomp>(.0)
1314 ),
1315 }
-> 1316 for s, e, score in zip(starts, ends, scores)
1317 ]
1318
KeyError: 0
```
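Until a fix lands, here is a hedged workaround sketch (the helper below is made up, not a library API): run the pipeline over overlapping character windows and keep the highest-scoring answer.
```python
def qa_over_windows(nlp_qa, question, context, window=1200, stride=600):
    best = None
    for start in range(0, len(context), stride):
        chunk = context[start : start + window]
        try:
            result = nlp_qa(question=question, context=chunk)
        except KeyError:
            continue  # skip windows that still trigger the KeyError
        if best is None or result["score"] > best["score"]:
            best = result
    return best
```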
This used to work before version 3, as I remember; I would really appreciate some help on this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6144/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6143 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6143/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6143/comments | https://api.github.com/repos/huggingface/transformers/issues/6143/events | https://github.com/huggingface/transformers/pull/6143 | 668,133,926 | MDExOlB1bGxSZXF1ZXN0NDU4Njc4NjY2 | 6,143 | Tf trainer cleanup | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6143?src=pr&el=h1) Report\n> Merging [#6143](https://codecov.io/gh/huggingface/transformers/pull/6143?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/54f9fbeff822ec0547fd23d0338654456925f6b7&el=desc) will **increase** coverage by `1.02%`.\n> The diff coverage is `25.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6143?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6143 +/- ##\n==========================================\n+ Coverage 78.35% 79.38% +1.02% \n==========================================\n Files 146 146 \n Lines 26403 26416 +13 \n==========================================\n+ Hits 20689 20970 +281 \n+ Misses 5714 5446 -268 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6143?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.14% <25.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6143?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6143?src=pr&el=footer). Last update [54f9fbe...46b6cb4](https://codecov.io/gh/huggingface/transformers/pull/6143?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | COLLABORATOR | null | New version of #6015
I will harmonize the public customization hooks and document everything properly once this is merged. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6143/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6143",
"html_url": "https://github.com/huggingface/transformers/pull/6143",
"diff_url": "https://github.com/huggingface/transformers/pull/6143.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6143.patch",
"merged_at": 1596114797000
} |
https://api.github.com/repos/huggingface/transformers/issues/6142 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6142/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6142/comments | https://api.github.com/repos/huggingface/transformers/issues/6142/events | https://github.com/huggingface/transformers/pull/6142 | 668,069,895 | MDExOlB1bGxSZXF1ZXN0NDU4NjI0MTcz | 6,142 | Fix FlauBERT GPU test | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6142?src=pr&el=h1) Report\n> Merging [#6142](https://codecov.io/gh/huggingface/transformers/pull/6142?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/54f9fbeff822ec0547fd23d0338654456925f6b7&el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6142?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6142 +/- ##\n==========================================\n- Coverage 78.35% 78.32% -0.04% \n==========================================\n Files 146 146 \n Lines 26403 26403 \n==========================================\n- Hits 20689 20679 -10 \n- Misses 5714 5724 +10 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6142?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6142/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `86.61% <100.00%> (ø)` | |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6142/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6142/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6142?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6142?src=pr&el=footer). Last update [54f9fbe...7034997](https://codecov.io/gh/huggingface/transformers/pull/6142?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6142/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6142/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6142",
"html_url": "https://github.com/huggingface/transformers/pull/6142",
"diff_url": "https://github.com/huggingface/transformers/pull/6142.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6142.patch",
"merged_at": 1596121908000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6141 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6141/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6141/comments | https://api.github.com/repos/huggingface/transformers/issues/6141/events | https://github.com/huggingface/transformers/issues/6141 | 668,058,851 | MDU6SXNzdWU2NjgwNTg4NTE= | 6,141 | Bug in language_modeling.py calling tokenizer.num_special_tokens_to_add | {
"login": "frarito",
"id": 930259,
"node_id": "MDQ6VXNlcjkzMDI1OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/930259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frarito",
"html_url": "https://github.com/frarito",
"followers_url": "https://api.github.com/users/frarito/followers",
"following_url": "https://api.github.com/users/frarito/following{/other_user}",
"gists_url": "https://api.github.com/users/frarito/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frarito/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frarito/subscriptions",
"organizations_url": "https://api.github.com/users/frarito/orgs",
"repos_url": "https://api.github.com/users/frarito/repos",
"events_url": "https://api.github.com/users/frarito/events{/privacy}",
"received_events_url": "https://api.github.com/users/frarito/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I was using a wrong approch, it works if I train the tokenizer, save de params, and load into a FastTokenizer impl"
] | 1,596 | 1,596 | 1,596 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.4.0-1081-aws-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
tokenizers: @mfuntowicz
## Information
Model I am using: GPT-2
The class `TextDataset` calls `tokenizer.num_special_tokens_to_add(pair=False)`, but the fast tokenizer's argument is named `is_pair`. I assume the bugfix belongs in the transformers repo.
## To reproduce
https://github.com/huggingface/transformers/blob/e49393c3617e877f0370f7bad7c7e823808c5bfb/src/transformers/data/datasets/language_modeling.py#L27
https://github.com/huggingface/tokenizers/blob/master/bindings/python/tokenizers/implementations/base_tokenizer.py#L20
I'm using transformers 3.0.2 and tokenizers 0.8.1rc1 (I tried to update, but it says the versions are incompatible).
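To make the mismatch concrete, a hedged local workaround (the helper name is made up) is to pass the flag positionally, which satisfies either parameter name:
```python
def adjusted_block_size(tokenizer, block_size):
    # Positional call sidesteps the keyword mismatch: transformers' slow tokenizers
    # name the parameter `pair`, while tokenizers' BaseTokenizer names it `is_pair`.
    return block_size - tokenizer.num_special_tokens_to_add(False)
```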
## Expected behavior
Calling `TextDataset` should not raise a `TypeError` from the keyword-argument mismatch.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6141/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6140 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6140/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6140/comments | https://api.github.com/repos/huggingface/transformers/issues/6140/events | https://github.com/huggingface/transformers/issues/6140 | 668,031,823 | MDU6SXNzdWU2NjgwMzE4MjM= | 6,140 | Copyright date and owner not filled out in LICENSE file | {
"login": "Meadosc",
"id": 18249206,
"node_id": "MDQ6VXNlcjE4MjQ5MjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/18249206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Meadosc",
"html_url": "https://github.com/Meadosc",
"followers_url": "https://api.github.com/users/Meadosc/followers",
"following_url": "https://api.github.com/users/Meadosc/following{/other_user}",
"gists_url": "https://api.github.com/users/Meadosc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Meadosc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Meadosc/subscriptions",
"organizations_url": "https://api.github.com/users/Meadosc/orgs",
"repos_url": "https://api.github.com/users/Meadosc/repos",
"events_url": "https://api.github.com/users/Meadosc/events{/privacy}",
"received_events_url": "https://api.github.com/users/Meadosc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Thanks @Meadosc for raising this, we're not sure if this is a requirement of the license in order to add the copyright date and owner.\r\n\r\nWe only saw this issue now but think it would be good to reopen the issue (as it was auto-closed by the bot) to be addressed by the repository maintainers if possible!"
] | 1,596 | 1,607 | 1,601 | NONE | null | In transformers/LICENSE, the copyright date and owner are not filled out.
https://github.com/huggingface/transformers/blob/master/LICENSE#L179-L190 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6140/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6139 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6139/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6139/comments | https://api.github.com/repos/huggingface/transformers/issues/6139/events | https://github.com/huggingface/transformers/issues/6139 | 668,000,980 | MDU6SXNzdWU2NjgwMDA5ODA= | 6,139 | Applying hugging face transformer in sequence labeling problem | {
"login": "Michael95-m",
"id": 64765786,
"node_id": "MDQ6VXNlcjY0NzY1Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/64765786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Michael95-m",
"html_url": "https://github.com/Michael95-m",
"followers_url": "https://api.github.com/users/Michael95-m/followers",
"following_url": "https://api.github.com/users/Michael95-m/following{/other_user}",
"gists_url": "https://api.github.com/users/Michael95-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Michael95-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Michael95-m/subscriptions",
"organizations_url": "https://api.github.com/users/Michael95-m/orgs",
"repos_url": "https://api.github.com/users/Michael95-m/repos",
"events_url": "https://api.github.com/users/Michael95-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/Michael95-m/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"you just use the ner code for POS tagging problem, the only difference is the set of target classes"
] | 1,596 | 1,596 | 1,596 | NONE | null | Hello! Thanks for your great framework. What I'd like to know is: can I apply these Hugging Face transformer models to **sequence labeling** problems like **part-of-speech tagging** and **word segmentation** (I only see an **ner** model in the examples folder)? If so, **how** can I do that? Could I get some help, like **example scripts**, on applying these transformers to sequence labeling problems? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6139/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6138 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6138/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6138/comments | https://api.github.com/repos/huggingface/transformers/issues/6138/events | https://github.com/huggingface/transformers/pull/6138 | 667,998,750 | MDExOlB1bGxSZXF1ZXN0NDU4NTY3NDUw | 6,138 | Switch from return_tuple to return_dict | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6138?src=pr&el=h1) Report\n> Merging [#6138](https://codecov.io/gh/huggingface/transformers/pull/6138?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8a8ae27617e3c4dafb34bcbbaadf4ceee28583bd&el=desc) will **increase** coverage by `0.99%`.\n> The diff coverage is `71.96%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6138?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6138 +/- ##\n==========================================\n+ Coverage 78.49% 79.48% +0.99% \n==========================================\n Files 146 146 \n Lines 26335 26441 +106 \n==========================================\n+ Hits 20671 21017 +346 \n+ Misses 5664 5424 -240 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6138?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.48% <ø> (ø)` | |\n| [src/transformers/modeling\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <ø> (ø)` | |\n| [src/transformers/modeling\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG1fcm9iZXJ0YS5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `40.96% <ø> (-0.04%)` | :arrow_down: |\n| [src/transformers/training\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `47.45% <0.00%> (ø)` | |\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.54% <8.66%> (+0.06%)` | :arrow_up: |\n| [src/transformers/modeling\\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `23.47% <11.11%> (-0.63%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `80.56% <68.08%> (-1.65%)` | :arrow_down: |\n| ... and [40 more](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6138?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6138?src=pr&el=footer). 
Last update [8a8ae27...60928b0](https://codecov.io/gh/huggingface/transformers/pull/6138?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@sgugger thanks very much for this PR!\r\n\r\n`return_dict` seems to work with the `from_pretrained()` method for models, but what if I didn't want to use `from_pretrained()` and simply instantiated the model from scratch as follows:\r\n\r\n```\r\nconfig_class = GPT2Config\r\nmodel_class = GPT2DoubleHeadsModel\r\nconfig = config_class.from_pretrained(\"gpt2\")\r\nmodel = model_class(config)\r\n```\r\n\r\nI still want to be able to use `return_dict`. How would I go about doing that?\r\n\r\nIt looks like I could pass `return_dict` explicitly in the `forward()` for the from-scratch case. However, I want the `forward()` call in my code to be consistent across the from-scratch and the `from_pretrained()` settings, in order to decouple the model instantiation from the actual trainer loop.\r\n\r\nHow should this be handled?\r\n\r\nWould the solution be something like this:\r\n\r\n```\r\nconfig_class = GPT2Config\r\nmodel_class = GPT2DoubleHeadsModel\r\nconfig = config_class.from_pretrained(\"gpt2\", use_return_dict=True)\r\nmodel = model_class(config)\r\n```\r\n\r\nI tried this solution but it didn't work, it gave me the following error:\r\n\r\n```\r\n>>> from transformers import GPT2Config\r\n>>> config = GPT2Config.from_pretrained(\"gpt2\", use_return_dict=True)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py\", line 312, in from_pretrained\r\n return cls.from_dict(config_dict, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py\", line 406, in from_dict\r\n setattr(config, key, value)\r\nAttributeError: can't set attribute\r\n```",
"The right line is:\r\n```\r\nconfig = config_class.from_pretrained(\"gpt2\", return_dict=True)\r\n```\r\n`use_return_dict` is an inner attribute that combines `return_dict` and `torchscript` (since torchscript is incompatible with `return_dict=True`)"
] | 1,596 | 1,600 | 1,596 | COLLABORATOR | null | This is the first step in the change of model outputs as described on [the forum](https://discuss.huggingface.co/t/new-model-output-types/195/8).
This PR removes the argument `return_tuple` and introduces `return_dict` (which works the other way round): all models now return tuples by default (100% backward compatible) unless you opt in to the new model output types with `return_dict=True`. The model output class is changed to the dict-like one, which should work equally well for TensorFlow.
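A minimal usage sketch of the opt-in behavior:
```python
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", return_dict=True)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)      # dict-like ModelOutput instead of a plain tuple
logits = outputs.logits        # attribute access...
same = outputs["logits"]       # ...or key access, both work
```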
I have updated all examples in the docs to instantiate the model with `return_dict=True`; more docs will follow in other PRs. For the tests, I have set `return_dict=True` in one of the common tests just to make sure it actually works. Step 2 (in a follow-up PR) will be to use it in all tests.
Step 3 is then going to update the TensorFlow models to use this `ModelOutput`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6138/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6138",
"html_url": "https://github.com/huggingface/transformers/pull/6138",
"diff_url": "https://github.com/huggingface/transformers/pull/6138.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6138.patch",
"merged_at": 1596115020000
} |
https://api.github.com/repos/huggingface/transformers/issues/6137 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6137/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6137/comments | https://api.github.com/repos/huggingface/transformers/issues/6137/events | https://github.com/huggingface/transformers/issues/6137 | 667,984,796 | MDU6SXNzdWU2Njc5ODQ3OTY= | 6,137 | StopIteration error when using HuggingFace Transformer models | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,601 | 1,601 | NONE | null | Hello,
I am trying to use the RobertaForMultipleChoice model, and when I try to compute the mc_loss, the following StopIteration error is generated:
```python
>>> mc_loss = model(input_ids = input_ids, attention_mask = attention_mask, labels = mc_labels)[0]
Traceback (most recent call last):
File "STAT946_final_project_code_v4.py", line 625, in <module>
success_rate_list_diag_normal = main_function_diag_normal('/home/ec2-user/test.txt', 'test_ans_num.txt', num_iter, log_interval)
File "STAT946_final_project_code_v4.py", line 415, in main_function_diag_normal
best_model_RobertaForMultipleChoice_diag_normal = train_loop(model_RobertaForMultipleChoice, tokenizer, optimizer_1, scheduler_1, log_interval, svi_diag_normal, guide_diag_normal, best_model_RobertaForMultipleChoice_diag_normal)
File "STAT946_final_project_code_v4.py", line 342, in train_loop
optimizer, scheduler, log_interval, svi, guide, epoch)
File "STAT946_final_project_code_v4.py", line 237, in train_mc_head
mc_loss = model(input_ids = input_ids, attention_mask = attention_mask, labels = mc_labels)[0]
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/pyro/nn/module.py", line 413, in __call__
return super().__call__(*args, **kwargs)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_roberta.py", line 441, in forward
output_hidden_states=output_hidden_states,
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/pyro/nn/module.py", line 413, in __call__
return super().__call__(*args, **kwargs)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py", line 732, in forward
extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, device)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 228, in get_extended_attention_mask
extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 159, in dtype
first_tuple = next(gen)
StopIteration
```
The error seems to be generated by the HuggingFace code below (note that, per the traceback, the failure is actually in the analogous `dtype` property, which follows the same pattern):
```python
@property
def device(self) -> device:
try:
return next(self.parameters()).device
except StopIteration:
# For nn.DataParallel compatibility in PyTorch 1.5
def find_tensor_attributes(module: nn.Module) -> List[Tuple[str, Tensor]]:
tuples = [(k, v) for k, v in module.__dict__.items() if torch.is_tensor(v)]
return tuples
gen = self._named_members(get_members_fn=find_tensor_attributes)
first_tuple = next(gen)
return first_tuple[1].device
```
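A minimal check that narrows this down (hypothetical, not part of the original report; `model` is the wrapped `RobertaForMultipleChoice` instance from above):

```python
import torch

# The dtype/device fallback scans the tensor attributes of every submodule,
# so empty results here would reproduce the StopIteration seen above.
params = list(model.parameters())
tensors = [
    (name, value)
    for module in model.modules()
    for name, value in module.__dict__.items()
    if torch.is_tensor(value)
]
print(len(params), len(tensors))
```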
What is the cause of this error, and how can I fix it?
Thank you | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6137/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6136 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6136/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6136/comments | https://api.github.com/repos/huggingface/transformers/issues/6136/events | https://github.com/huggingface/transformers/issues/6136 | 667,974,524 | MDU6SXNzdWU2Njc5NzQ1MjQ= | 6,136 | frequent checkpoints have worse performance | {
"login": "wyin-Salesforce",
"id": 53835505,
"node_id": "MDQ6VXNlcjUzODM1NTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/53835505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wyin-Salesforce",
"html_url": "https://github.com/wyin-Salesforce",
"followers_url": "https://api.github.com/users/wyin-Salesforce/followers",
"following_url": "https://api.github.com/users/wyin-Salesforce/following{/other_user}",
"gists_url": "https://api.github.com/users/wyin-Salesforce/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wyin-Salesforce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wyin-Salesforce/subscriptions",
"organizations_url": "https://api.github.com/users/wyin-Salesforce/orgs",
"repos_url": "https://api.github.com/users/wyin-Salesforce/repos",
"events_url": "https://api.github.com/users/wyin-Salesforce/events{/privacy}",
"received_events_url": "https://api.github.com/users/wyin-Salesforce/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"hi, can you post the link on stackoverflow\r\n\r\nBtw, I also face this issue when working with an RTE dataset and have raised an issue here..[https://github.com/huggingface/transformers/issues/5863](url). My dev values after each epoch don't match up when the total number of epoch changes. Now its making me wonder if its RTE specific. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | NONE | null | # ❓ Questions & Help
Hi all, I often notice an issue when training a model and evaluating on the dev set.
Usually we evaluate on the dev set after each epoch; let's call this setting A.
But we often want to check the system more frequently, so we may evaluate, for example, every 1/5 of an epoch; let's call this setting B.
What I noticed is that A and B end up with totally different performance. Since B checks more often, and in particular still evaluates at 5/5, 10/5, etc. of the training set (the same points as A), I expected B to reach the same or at least very close performance. But they are very different. For example, when I train a textual entailment model on the RTE dataset, A gives me about 86% accuracy on dev, but B only gives about 80%.
What's the issue here? Thanks.
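A hypothetical sanity check (names and numbers are illustrative, not from the original setup): fix the seed and keep the LR schedule identical in A and B, so that only the evaluation frequency differs:

```python
import random

import numpy as np
import torch

def set_seed(seed: int = 42):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(42)
# num_training_steps must be the same in both settings, e.g. when using
# transformers.get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps)
```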
## Details
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6136/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6135 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6135/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6135/comments | https://api.github.com/repos/huggingface/transformers/issues/6135/events | https://github.com/huggingface/transformers/issues/6135 | 667,970,558 | MDU6SXNzdWU2Njc5NzA1NTg= | 6,135 | How to combine the encoded representations of two transformers | {
"login": "jlim13",
"id": 36393441,
"node_id": "MDQ6VXNlcjM2MzkzNDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/36393441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlim13",
"html_url": "https://github.com/jlim13",
"followers_url": "https://api.github.com/users/jlim13/followers",
"following_url": "https://api.github.com/users/jlim13/following{/other_user}",
"gists_url": "https://api.github.com/users/jlim13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlim13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlim13/subscriptions",
"organizations_url": "https://api.github.com/users/jlim13/orgs",
"repos_url": "https://api.github.com/users/jlim13/repos",
"events_url": "https://api.github.com/users/jlim13/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlim13/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,601 | 1,601 | NONE | null | Say I have two transformer models operating on two different domains; what is a good way to combine the features? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6135/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6134 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6134/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6134/comments | https://api.github.com/repos/huggingface/transformers/issues/6134/events | https://github.com/huggingface/transformers/pull/6134 | 667,964,979 | MDExOlB1bGxSZXF1ZXN0NDU4NTM5MjU5 | 6,134 | Fix TF CTRL model naming | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6134?src=pr&el=h1) Report\n> Merging [#6134](https://codecov.io/gh/huggingface/transformers/pull/6134?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/641b873c1341f553b40fd82c990b80884b585f0b&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6134?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6134 +/- ##\n==========================================\n- Coverage 78.64% 78.63% -0.01% \n==========================================\n Files 146 146 \n Lines 26326 26333 +7 \n==========================================\n+ Hits 20704 20708 +4 \n- Misses 5622 5625 +3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6134?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6134/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.84% <100.00%> (+0.05%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6134/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.20% <0.00%> (-2.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6134/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `83.44% <0.00%> (-0.65%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6134/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+10.71%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6134?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6134?src=pr&el=footer). Last update [641b873...1bc31c0](https://codecov.io/gh/huggingface/transformers/pull/6134?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,600 | 1,596 | CONTRIBUTOR | null | This PR fixes an issue with the naming of some layers in the TensorFlow version of CTRL. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6134/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6134",
"html_url": "https://github.com/huggingface/transformers/pull/6134",
"diff_url": "https://github.com/huggingface/transformers/pull/6134.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6134.patch",
"merged_at": 1596039601000
} |
https://api.github.com/repos/huggingface/transformers/issues/6133 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6133/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6133/comments | https://api.github.com/repos/huggingface/transformers/issues/6133/events | https://github.com/huggingface/transformers/pull/6133 | 667,932,585 | MDExOlB1bGxSZXF1ZXN0NDU4NTEyNjA4 | 6,133 | bart-large-mnli-yahoo-answers model card | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6133?src=pr&el=h1) Report\n> Merging [#6133](https://codecov.io/gh/huggingface/transformers/pull/6133?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6c002853a68906a5b1c2dd2ebb416770f1fc322b&el=desc) will **increase** coverage by `0.08%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6133?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6133 +/- ##\n==========================================\n+ Coverage 77.77% 77.86% +0.08% \n==========================================\n Files 146 146 \n Lines 26326 26326 \n==========================================\n+ Hits 20476 20499 +23 \n+ Misses 5850 5827 -23 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6133?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.95% <0.00%> (-5.27%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6133?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6133?src=pr&el=footer). Last update [6c00285...5c85f49](https://codecov.io/gh/huggingface/transformers/pull/6133?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Feel free to merge whenever ready"
] | 1,596 | 1,598 | 1,596 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6133/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6133/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6133",
"html_url": "https://github.com/huggingface/transformers/pull/6133",
"diff_url": "https://github.com/huggingface/transformers/pull/6133.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6133.patch",
"merged_at": 1596207393000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6132 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6132/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6132/comments | https://api.github.com/repos/huggingface/transformers/issues/6132/events | https://github.com/huggingface/transformers/issues/6132 | 667,898,600 | MDU6SXNzdWU2Njc4OTg2MDA= | 6,132 | MBartTokenizerTrimmed to support truncated embeddings | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 2009457320,
"node_id": "MDU6TGFiZWwyMDA5NDU3MzIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/translation",
"name": "translation",
"color": "b2d2f4",
"default": false,
"description": "machine translation utilities and models"
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,596 | 1,614 | 1,614 | CONTRIBUTOR | null | Motivation:
The embedding table for MBART is huge, but only ~40K of its entries are used/finetuned for most WMT tasks (a rough trimming sketch follows the notes below). See https://github.com/pytorch/fairseq/issues/2120
- needs vocab.json (fairseq Dictionary)
- needs to call `encode_as_pieces` with a restricted vocabulary.
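A minimal sketch of the trimming idea (the function and `ids_to_keep` are hypothetical; the kept ids would come from the fairseq Dictionary / vocab.json above):

```python
import torch

def trim_embeddings(embedding: torch.nn.Embedding, ids_to_keep) -> torch.nn.Embedding:
    # Keep only the rows of the embedding table that the restricted vocab uses.
    ids = torch.tensor(sorted(ids_to_keep), dtype=torch.long)
    trimmed = torch.nn.Embedding(len(ids), embedding.embedding_dim)
    trimmed.weight.data = embedding.weight.data[ids].clone()
    return trimmed
```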
I will take this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6132/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6131 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6131/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6131/comments | https://api.github.com/repos/huggingface/transformers/issues/6131/events | https://github.com/huggingface/transformers/pull/6131 | 667,874,074 | MDExOlB1bGxSZXF1ZXN0NDU4NDYzODYw | 6,131 | Enable ONNX/ONNXRuntime optimizations through converter script | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @tianleiwu @yufenglee 💪 ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6131?src=pr&el=h1) Report\n> Merging [#6131](https://codecov.io/gh/huggingface/transformers/pull/6131?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6c002853a68906a5b1c2dd2ebb416770f1fc322b&el=desc) will **increase** coverage by `0.71%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6131?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6131 +/- ##\n==========================================\n+ Coverage 77.77% 78.49% +0.71% \n==========================================\n Files 146 146 \n Lines 26326 26326 \n==========================================\n+ Hits 20476 20664 +188 \n+ Misses 5850 5662 -188 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6131?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `60.56% <0.00%> (-35.22%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.69% <0.00%> (-6.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.09% <0.00%> (-4.88%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6131?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6131?src=pr&el=footer). Last update [6c00285...7cb55ae](https://codecov.io/gh/huggingface/transformers/pull/6131?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | MEMBER | null | Introduce `--optimize` CLI argument and `optimize()` method to allow ONNXRuntime to operates all the possible optimizations on the raw ONNX IR.
Added documentation for this parameter in the ONNX/ONNXRuntime section of the docs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6131/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6131",
"html_url": "https://github.com/huggingface/transformers/pull/6131",
"diff_url": "https://github.com/huggingface/transformers/pull/6131.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6131.patch",
"merged_at": 1596181513000
} |
https://api.github.com/repos/huggingface/transformers/issues/6130 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6130/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6130/comments | https://api.github.com/repos/huggingface/transformers/issues/6130/events | https://github.com/huggingface/transformers/pull/6130 | 667,869,851 | MDExOlB1bGxSZXF1ZXN0NDU4NDYwMzE5 | 6,130 | Use google style to document properties | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6130?src=pr&el=h1) Report\n> Merging [#6130](https://codecov.io/gh/huggingface/transformers/pull/6130?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6c002853a68906a5b1c2dd2ebb416770f1fc322b&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6130?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6130 +/- ##\n=======================================\n Coverage 77.77% 77.78% \n=======================================\n Files 146 146 \n Lines 26326 26328 +2 \n=======================================\n+ Hits 20476 20478 +2 \n Misses 5850 5850 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6130?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6130/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.96% <ø> (ø)` | |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6130/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `98.62% <100.00%> (+<0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6130/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6130/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `70.32% <0.00%> (-26.66%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6130/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6130/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6130/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6130?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6130?src=pr&el=footer). Last update [6c00285...0c01513](https://codecov.io/gh/huggingface/transformers/pull/6130?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | COLLABORATOR | null | It's cleaner this way and avoids redundancy. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6130/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6130",
"html_url": "https://github.com/huggingface/transformers/pull/6130",
"diff_url": "https://github.com/huggingface/transformers/pull/6130.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6130.patch",
"merged_at": 1596040093000
} |
https://api.github.com/repos/huggingface/transformers/issues/6129 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6129/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6129/comments | https://api.github.com/repos/huggingface/transformers/issues/6129/events | https://github.com/huggingface/transformers/pull/6129 | 667,825,766 | MDExOlB1bGxSZXF1ZXN0NDU4NDIzMjA3 | 6,129 | Add new pre-trained models BERTweet and PhoBERT | {
"login": "datquocnguyen",
"id": 2412555,
"node_id": "MDQ6VXNlcjI0MTI1NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2412555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/datquocnguyen",
"html_url": "https://github.com/datquocnguyen",
"followers_url": "https://api.github.com/users/datquocnguyen/followers",
"following_url": "https://api.github.com/users/datquocnguyen/following{/other_user}",
"gists_url": "https://api.github.com/users/datquocnguyen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/datquocnguyen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/datquocnguyen/subscriptions",
"organizations_url": "https://api.github.com/users/datquocnguyen/orgs",
"repos_url": "https://api.github.com/users/datquocnguyen/repos",
"events_url": "https://api.github.com/users/datquocnguyen/events{/privacy}",
"received_events_url": "https://api.github.com/users/datquocnguyen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"> I'd like to add pre-trained [BERTweet](https://github.com/VinAIResearch/BERTweet/) and [PhoBERT](https://github.com/VinAIResearch/PhoBERT/) models to the `transformers` library.\r\n> \r\n> Users now can use these models directly from `transformers`. E.g:\r\n> \r\n> ```\r\n> bertweettokenizer = BertweetTokenizer.from_pretrained(\"vinai/bertweet-base\")\r\n> bertweetmodel = BertweetModel.from_pretrained(\"vinai/bertweet-base\")\r\n> \r\n> phoberttokenizer = PhobertTokenizer.from_pretrained(\"vinai/phobert-large\")\r\n> phobertmodel = PhobertModel.from_pretrained(\"vinai/phobert-large\")\r\n> ```\r\n> \r\n> [BERTweet: A pre-trained language model for English Tweets](https://github.com/VinAIResearch/BERTweet/)\r\n> [PhoBERT: Pre-trained language models for Vietnamese](https://github.com/VinAIResearch/PhoBERT/)\r\n\r\nWhether I can get any support from huggingface w.r.t. this pull request @julien-c ? Thanks.",
"Hello @datquocnguyen ! As you've said, BERTweet and PhoBERT reimplement the RoBERTa model without adding any special behavior. I don't think it's necessary to reimplement them then, is it? Uploading them on the hub should be enough to load them into RoBERTa architectures, right?",
"Hi @LysandreJik \r\nThey use different tokenizers (i.e. fastBPE), so we cannot load their tokenizers using RoBERTa. \r\nPlease see a loading example using RoBERTa: https://github.com/VinAIResearch/BERTweet#transformers \r\nAn issue related to this is at: #5965 \r\n\r\n\r\n",
"I hope both BERTweet and PhoBERT could be incorporated into `transformers` in a similar manner to as their counterparts (e.g. CamemBERT and FlauBERT). @LysandreJik Please let me know what I can do for this. Thanks.",
"Yes, I understand, that makes sense. There shouldn't be any issue in incorporating them into `transformers`.",
"I've taken a quick look at it, and it looks very cool! Something that we can maybe do better, is regarding the tokenizers:\r\n\r\n- They're currently untested, but they're the main contribution of this PR so they definitely should be tested.\r\n- If possible, we would like not to add an additional dependency (in this case FastBPE). It would be great to leverage the already existing library `huggingface/tokenizers`\r\n- On that front, given it's a BPE tokenizer, it should be easy enough to leverage the OpenAI GPT (not GPT-2) tokenizer, which seems very similar. It might even be possible to load the vocab/merge files directly in `OpenAIGPTTokenizer`.\r\n\r\nLet me know what you think!",
"Haven't tried it directly, but as seen with @n1t0, since you're not doing any fancy pre-processing it might be as simple as the following:\r\n\r\n```py\r\nclass PhobertTokenizerFast(PreTrainedTokenizerFast):\r\n vocab_files_names = VOCAB_FILES_NAMES\r\n pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP\r\n max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES\r\n model_input_names = [\"attention_mask\"]\r\n def __init__(self, vocab_file, merges_file, unk_token=\"<unk>\", **kwargs):\r\n kwargs.setdefault(\"unk_token\", unk_token)\r\n super().__init__(\r\n CharBPETokenizer(vocab_file=vocab_file, merges_file=merges_file, unk_token=unk_token, lowercase=False, bert_normalizer=False, split_on_whitespace_only=True),\r\n **kwargs,\r\n )\r\n```",
"Thanks very much @LysandreJik I will revise the code following your comments and inform you as soon as I complete it. ",
"@datquocnguyen Yeah, these models are cool. Lovin' it. I think we can try to figure out how to convert `fastBPE` formats to our compatible format before adding it directly to our dependency (I believe `XLM` uses `fastBPE`). so would you hold on a little when we try to figure it out? We have to be cautious when adding dependencies! Thanks!\r\ncc @LysandreJik ",
"Yes. Thanks @JetRunner ",
"some tokenizer function (decode, convert_ids_to_tokens) hasn't implemented for PhoBertTokenizer right?",
"@datquocnguyen Thank you for this pull request. I tried the Bertweet model and met a problem that the tokenizer encoded special symbols like \"\\<pad\\>\" not as a whole token. Instead, it would split the string into characters like \"< p a d >\". I fixed the problem by modifying the code at `` as below:\r\n```python\r\n--- a/BERTweet/transformers/tokenization_bertweet.py\r\n+++ b/BERTweet/transformers/tokenization_bertweet.py\r\n@@ -242,9 +242,14 @@ class BertweetTokenizer(PreTrainedTokenizer):\r\n text = self.normalizeTweet(text)\r\n return self.bpe.apply([text])[0].split()\r\n\r\n- def convert_tokens_to_ids(self, tokens):\r\n- \"\"\" Converts a list of str tokens into a list of ids using the vocab.\"\"\"\r\n- return self.vocab.encode_line(\" \".join(tokens), append_eos=False, add_if_not_exist=False).long().tolist()\r\n+ def _convert_token_to_id(self, token):\r\n+ #\"\"\" Converts a list of str tokens into a list of ids using the vocab.\"\"\"\r\n+ #return self.vocab.encode_line(\" \".join(tokens), append_eos=False, add_if_not_exist=False).long().tolist()\r\n+ return self.vocab.encode_line(token, append_eos=False, add_if_not_exist=False).long().tolist()[0]\r\n+\r\n+ @property\r\n+ def vocab_size(self) -> int:\r\n+ return len(self.vocab)\r\n```\r\nFrom my understanding, to encode a sentence, the order of the interfaces called in this case are `PreTrainedTokenizerBase::encode`\r\n->`PreTrainedTokenizer::_encode_plus`\r\n->`PreTrainedTokenizer::convert_tokens_to_ids`\r\n->`PreTrainedTokenizer::_convert_token_to_id_with_added_voc`\r\n->`BertweetTokenizer::_convert_token_to_id` for non-special tokens or `PreTrainedTokenizer::added_tokens_encoder` for special tokens.\r\nSo in the class `BertweetTokenizer`, it should implement the interface `_convert_token_to_id` rather than `convert_tokens_to_ids`.",
"I will have a look soon. Thanks @Miopas.",
"**I have just tried \"BertweetTokenizer\" and got this error:**\r\n\r\n\"ImportError: cannot import name 'BertweetTokenizer' from 'transformers' (/home/apps/anaconda3/lib/python3.7/site-packages/transformers/__init__.py)\"\r\n\r\n**Is there any solution to it?**\r\n\r\n**I have also tried:**\r\n\r\ntokenizer2 = BertTokenizer.from_pretrained(\"vinai/bertweet-base\")\r\ntrained = tokenizer2.encode(\"oops!! pelosi & dems admit numbers submitted to cbo are false! someurl #tcot #tlot #sgp #hcr #p2\")\r\n\r\nand got:\r\ntrained = [None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None]\r\n\r\n**Is there any solution to it?**\r\n\r\nthks!",
"Hi @LysandreJik @JetRunner \r\nIt's for your information both PhoBERT and BERTweet are now can be used in the Auto mode and without an external dependency fastBPE. Please help review this pull request (7/8 successful checks). Thanks a lot. \r\n\r\n@justinphan3110 @Miopas @SergioBarretoJr Please update the repository. Both models should work now. Thanks.",
"[run_tests_torch_and_tf.output.txt](https://github.com/huggingface/transformers/files/5174353/run_tests_torch_and_tf.output.txt)\r\n\r\nHi @LysandreJik @JetRunner @julien-c @sshleifer \r\nI am wondering whether I can get a support from huggingface to incorporate BERTweet and PhoBERT into the `transformers` master branch ?\r\nThere is only a failed test of `FAILED tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_pipeline - ...` for `run_tests_torch_and_tf` which is not related to BERTweet and PhoBERT, thus out of my control. So my pull request could not pass this test (please see details in the attachment file). Please could you help review my pull request?\r\nThank you very much.\r\n",
"Dear @LysandreJik, \r\n\r\nPlease can you kindly help to add the PhoBERT model as I really want to use it with your great `transformers` tool in a Vietnamese text challenge? \r\n",
"@datquocnguyen Thanks for your contribution. We discussed internally and given that the modeling part of both BERTweet and PhoBERT is out-of-the-box RoBERTa, we would like to avoid duplicating model files, and instead support them by leveraging https://github.com/huggingface/transformers/pull/6995\r\n\r\ni.e. we would only need to add files for the tokenizers (and associated tests)\r\n\r\nPotentially we could also help to make those new tokenizers more general/configurable. What do you think?\r\n",
"@julien-c That sounds a nice idea. \r\nPlease inform me when the new configuration type is integrated into the master branch. \r\nI will then adapt our tokenizers & config files for it. Thanks!",
"Hi @datquocnguyen, the PR @julien-c linked is now merged!\r\n\r\nThis should greatly simplify your PR, in that you only need to contribute your tokenizers as well as their tests. Let us know if you can make the change!",
"Hi @LysandreJik thanks for your information. \r\nYes, I will make the change soon (it should be done early next week).",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6129?src=pr&el=h1) Report\n> Merging [#6129](https://codecov.io/gh/huggingface/transformers/pull/6129?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b0cbcdb05b39e6c81db049d2b4d7dfc5d823210d?el=desc) will **decrease** coverage by `0.24%`.\n> The diff coverage is `71.14%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6129?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6129 +/- ##\n==========================================\n- Coverage 80.32% 80.08% -0.25% \n==========================================\n Files 168 170 +2 \n Lines 32285 32642 +357 \n==========================================\n+ Hits 25932 26140 +208 \n- Misses 6353 6502 +149 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6129?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bertweet.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydHdlZXQucHk=) | `63.18% <63.18%> (ø)` | |\n| [src/transformers/tokenization\\_phobert.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGhvYmVydC5weQ==) | `83.45% <83.45%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.34% <100.00%> (+<0.01%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `92.06% <100.00%> (+0.26%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.05% <0.00%> (-63.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.10% <0.00%> (-12.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| ... 
and [21 more](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6129?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6129?src=pr&el=footer). Last update [b0cbcdb...257b9f1](https://codecov.io/gh/huggingface/transformers/pull/6129?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@datquocnguyen can you also upload your model files on https://huggingface.co/vinai/bertweet-base\r\n\r\nI still get this error: \r\n\r\n> ⚠️ Model name 'vinai/bertweet-base' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'vinai/bertweet-base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\r\n",
"@datquocnguyen I looked a the PR and looking forward to this merge. I have a few suggestions:\r\n\r\n1. I find the Phobert and Bertweet models to be quite similar. This makes the tokenizers also similar so we should not need a seperate tokenizer for both. Given that both these tokenizers just load fastBPE tokenizer data format, we can simply call them fastBPETokenizer. \r\n\r\n2. Looking at this other code which also uses fastBPE <sup>[1]</sup> can't we just follow it to convert the fastBPE tokenizer files to the huggingface format. \r\n\r\n - You can easily convert your `bpe.codes` into `merges.txt` file and then use the Roberta tokenizer. \r\n - The format is the same and you only need to drop the 3rd column in your BPE.codes and add a top line for comment. \r\n - In your code you are not even using the last column values. \r\n - Your `merges.txt` can have the following as the first line `#version: 1` (look at merges.txt file of Roberta <sup>[2]</sup>)\r\n\r\n[1]: https://github.com/huggingface/transformers/blob/b23d3a5ad4aa08decd10671f85be5950767dd052/model_cards/allegro/herbert-klej-cased-v1/README.md\r\n[2]: https://huggingface.co/roberta-base#list-files",
"Hi @napsternxg The model had been already uploaded to https://huggingface.co/vinai/bertweet-base. For now, you would have to install `transformers` from our development branch (as it has not merged to the master branch of `transformers` yet). Did you try the following commands?\r\n\r\n - Python version >= 3.6\r\n - [PyTorch](http://pytorch.org/) version >= 1.4.0\r\n - Install `transformers` from our development branch:\r\n - `git clone https://github.com/datquocnguyen/transformers.git`\r\n - `cd transformers`\r\n - `pip install --upgrade .`\r\n - Install `emoji`: `pip3 install emoji`\r\n\r\nThanks for your suggestions. BertweetTokenizer is specifically designed to work on Tweet data, incorporating a TwitterTokenizer while PhoBERT does not. Note that both our `vocab.txt` and `bpe.codes` are also used in loading our models in `fairseq`. So I would prefer to keep them intact rather than converting them into another format. ",
"Btw, I should mention that BERTweet is accepted as an EMNLP-2020 demo paper while PhoBERT gets a slot in the Findings of EMNLP-2020 volume. Please help review this pull request so that others might benefit from using them directly from the master branch of `transformers`. Thanks. @LysandreJik @JetRunner @julien-c\r\nAll checks have passed and you only need to merge files for the tokenizers and associated tests.",
"Thanks that makes sense. \r\n@datquocnguyen I was trying to use it from the models website. \r\nMy suggestion on the bpe.codes file was not to remove it but to generate the merges.txt file from it, which will make it compatible with the huggingface tokenizer. ",
"@napsternxg Please remove your \"transformers\" cache folder from `~/.cache/torch` and reinstall `transformers` from our development branch. I am sure that `bertweet` would work smoothly:\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModel, AutoTokenizer\r\n\r\nbertweet = AutoModel.from_pretrained(\"vinai/bertweet-base\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"vinai/bertweet-base\")\r\n\r\n# INPUT TWEET IS ALREADY NORMALIZED!\r\nline = \"SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:\"\r\n\r\ninput_ids = torch.tensor([tokenizer.encode(line)])\r\n\r\nwith torch.no_grad():\r\n features = bertweet(input_ids) # Models outputs are now tuples\r\n```",
"@datquocnguyen great work and I am looking forward to seeing the PR gets merged so that I can use the models directly from the huggingface transformers.",
"Will merge today unless @julien-c, @JetRunner have comments."
] | 1,596 | 1,605 | 1,600 | CONTRIBUTOR | null | I'd like to add pre-trained [BERTweet](https://github.com/VinAIResearch/BERTweet/) and [PhoBERT](https://github.com/VinAIResearch/PhoBERT/) models to the `transformers` library.
Users can now use these models directly from `transformers`, e.g.:

```
bertweettokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base")
bertweetmodel = BertweetModel.from_pretrained("vinai/bertweet-base")

phoberttokenizer = PhobertTokenizer.from_pretrained("vinai/phobert-large")
phobertmodel = PhobertModel.from_pretrained("vinai/phobert-large")
```
[BERTweet: A pre-trained language model for English Tweets](https://github.com/VinAIResearch/BERTweet/)
[PhoBERT: Pre-trained language models for Vietnamese](https://github.com/VinAIResearch/PhoBERT/)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6129/reactions",
"total_count": 18,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 9,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6129/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6129",
"html_url": "https://github.com/huggingface/transformers/pull/6129",
"diff_url": "https://github.com/huggingface/transformers/pull/6129.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6129.patch",
"merged_at": 1600449404000
} |
https://api.github.com/repos/huggingface/transformers/issues/6128 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6128/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6128/comments | https://api.github.com/repos/huggingface/transformers/issues/6128/events | https://github.com/huggingface/transformers/pull/6128 | 667,800,275 | MDExOlB1bGxSZXF1ZXN0NDU4NDAxNzA1 | 6,128 | add deepset/xlm-roberta-large-squad2 model card | {
"login": "Timoeller",
"id": 3264870,
"node_id": "MDQ6VXNlcjMyNjQ4NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3264870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Timoeller",
"html_url": "https://github.com/Timoeller",
"followers_url": "https://api.github.com/users/Timoeller/followers",
"following_url": "https://api.github.com/users/Timoeller/following{/other_user}",
"gists_url": "https://api.github.com/users/Timoeller/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Timoeller/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Timoeller/subscriptions",
"organizations_url": "https://api.github.com/users/Timoeller/orgs",
"repos_url": "https://api.github.com/users/Timoeller/repos",
"events_url": "https://api.github.com/users/Timoeller/events{/privacy}",
"received_events_url": "https://api.github.com/users/Timoeller/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6128/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6128",
"html_url": "https://github.com/huggingface/transformers/pull/6128",
"diff_url": "https://github.com/huggingface/transformers/pull/6128.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6128.patch",
"merged_at": 1596036857000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6127 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6127/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6127/comments | https://api.github.com/repos/huggingface/transformers/issues/6127/events | https://github.com/huggingface/transformers/issues/6127 | 667,780,751 | MDU6SXNzdWU2Njc3ODA3NTE= | 6,127 | Initializing XLMRobertaTokenizer using pretrained tokenizer expects serialized vocab | {
"login": "aoxolotl",
"id": 53764708,
"node_id": "MDQ6VXNlcjUzNzY0NzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/53764708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aoxolotl",
"html_url": "https://github.com/aoxolotl",
"followers_url": "https://api.github.com/users/aoxolotl/followers",
"following_url": "https://api.github.com/users/aoxolotl/following{/other_user}",
"gists_url": "https://api.github.com/users/aoxolotl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aoxolotl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aoxolotl/subscriptions",
"organizations_url": "https://api.github.com/users/aoxolotl/orgs",
"repos_url": "https://api.github.com/users/aoxolotl/repos",
"events_url": "https://api.github.com/users/aoxolotl/events{/privacy}",
"received_events_url": "https://api.github.com/users/aoxolotl/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! The XLM-R tokenizer only accepts SentencePiece files, which cannot be created yet with the `tokenizers` library (soon!). You should use the official SentencePiece library for that."
] | 1,596 | 1,596 | 1,596 | NONE | null | Hi,
I am training an XLMRoberta model from scratch on Hindi. I am using a sentencepiece tokenizer trained exclusively on monolingual data following the steps mentioned in the [tokenizers repository](https://github.com/huggingface/tokenizers/tree/704cf3fdd2f607ead58a561b892b510b49c301db/bindings/python#using-the-provided-tokenizers). This results in the creation of `vocab.json` and `merges.txt`.
However, when I try to initialize the tokenizer using `XLMRobertaTokenizer.from_pretrained`, I get an error saying:
```assumed 'models/sentencepiece' was a path, a model identifier, or url to a directory containing vocabulary files named ['sentencepiece.bpe.model'] but couldn't find such vocabulary files at this path or url. ```
I am assuming this is a serialized file, based on the [huggingface.co model](https://s3.amazonaws.com/models.huggingface.co/bert/xlm-roberta-base-sentencepiece.bpe.model), but I don't know how to serialize my `vocab.json` file. I have already tried using `pickle` and `numpy`.
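As noted in the comments, the XLM-R tokenizer expects a trained SentencePiece model rather than a `vocab.json`/`merges.txt` pair. A minimal sketch with the official `sentencepiece` package (the corpus path, vocab size, and model type here are placeholder assumptions):

```python
import sentencepiece as spm
from transformers import XLMRobertaTokenizer

# writes sentencepiece.bpe.model and sentencepiece.bpe.vocab to the working directory
spm.SentencePieceTrainer.Train(
    "--input=hindi_corpus.txt --model_prefix=sentencepiece.bpe "
    "--vocab_size=32000 --model_type=bpe"
)

tokenizer = XLMRobertaTokenizer("sentencepiece.bpe.model")
```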
Versions used:
transformers: 2.9.1
tokenizers: 0.7.0 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6127/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6126 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6126/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6126/comments | https://api.github.com/repos/huggingface/transformers/issues/6126/events | https://github.com/huggingface/transformers/issues/6126 | 667,746,197 | MDU6SXNzdWU2Njc3NDYxOTc= | 6,126 | Add decoding inputs to generate | {
"login": "guyeyal",
"id": 3502557,
"node_id": "MDQ6VXNlcjM1MDI1NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3502557?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guyeyal",
"html_url": "https://github.com/guyeyal",
"followers_url": "https://api.github.com/users/guyeyal/followers",
"following_url": "https://api.github.com/users/guyeyal/following{/other_user}",
"gists_url": "https://api.github.com/users/guyeyal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guyeyal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guyeyal/subscriptions",
"organizations_url": "https://api.github.com/users/guyeyal/orgs",
"repos_url": "https://api.github.com/users/guyeyal/repos",
"events_url": "https://api.github.com/users/guyeyal/events{/privacy}",
"received_events_url": "https://api.github.com/users/guyeyal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,596 | 1,596 | 1,596 | NONE | null | # 🚀 Feature request
Add decoding inputs to generate
## Motivation
When generating with an encoder-decoder model, one may want to provide additional context to the decoder.
I'm currently working on summarization where I already know some parts of the ground truth, but other use cases come to mind.
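To make the request concrete, a sketch of how the proposed argument could be used; the model name, article text, and summary prefix are placeholder assumptions, and passing `decoder_input_ids` to `generate` is the extension proposed below, not the released API:

```python
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

input_ids = tokenizer.encode("long source article ...", return_tensors="pt")
# seed the decoder with a summary prefix that is already known
prefix_ids = tokenizer.encode("The city council", return_tensors="pt", add_special_tokens=False)

summary_ids = model.generate(input_ids, decoder_input_ids=prefix_ids, num_beams=4, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```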
## Your contribution
```
@torch.no_grad()
def generate(
self,
input_ids: Optional[torch.LongTensor] = None,
max_length: Optional[int] = None,
min_length: Optional[int] = None,
do_sample: Optional[bool] = None,
early_stopping: Optional[bool] = None,
num_beams: Optional[int] = None,
temperature: Optional[float] = None,
top_k: Optional[int] = None,
top_p: Optional[float] = None,
repetition_penalty: Optional[float] = None,
bad_words_ids: Optional[Iterable[int]] = None,
bos_token_id: Optional[int] = None,
pad_token_id: Optional[int] = None,
eos_token_id: Optional[int] = None,
length_penalty: Optional[float] = None,
no_repeat_ngram_size: Optional[int] = None,
num_return_sequences: Optional[int] = None,
attention_mask: Optional[torch.LongTensor] = None,
decoder_start_token_id: Optional[int] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = None,
use_cache: Optional[bool] = None,
**model_specific_kwargs
) -> torch.LongTensor:
r""" Generates sequences for models with a LM head. The method currently supports greedy decoding, beam-search decoding, sampling with temperature, sampling with top-k or nucleus sampling.
Adapted in part from Facebook's XLM beam search code_.
.. _Facebook's XLM beam search code:
https://github.com/facebookresearch/XLM/blob/9e6f6814d17be4fe5b15f2e6c43eb2b2d76daeb4/src/model/transformer.py#L529
Parameters:
input_ids: (`optional`) `torch.LongTensor` of shape `(batch_size, sequence_length)`
The sequence used as a prompt for the generation. If `None` the method initializes
it as an empty `torch.LongTensor` of shape `(1,)`.
max_length: (`optional`) int
The max length of the sequence to be generated. Between `min_length` and infinity. Default to 20.
min_length: (`optional`) int
The min length of the sequence to be generated. Between 0 and infinity. Default to 0.
do_sample: (`optional`) bool
If set to `False` greedy decoding is used. Otherwise sampling is used. Defaults to `False` as defined in `configuration_utils.PretrainedConfig`.
early_stopping: (`optional`) bool
if set to `True` beam search is stopped when at least `num_beams` sentences finished per batch. Defaults to `False` as defined in `configuration_utils.PretrainedConfig`.
num_beams: (`optional`) int
Number of beams for beam search. Must be between 1 and infinity. 1 means no beam search. Default to 1.
temperature: (`optional`) float
The value used to module the next token probabilities. Must be strictly positive. Default to 1.0.
top_k: (`optional`) int
The number of highest probability vocabulary tokens to keep for top-k-filtering. Between 1 and infinity. Default to 50.
top_p: (`optional`) float
The cumulative probability of parameter highest probability vocabulary tokens to keep for nucleus sampling. Must be between 0 and 1. Default to 1.
repetition_penalty: (`optional`) float
The parameter for repetition penalty. Between 1.0 and infinity. 1.0 means no penalty. Default to 1.0.
pad_token_id: (`optional`) int
Padding token. Defaults to the model-specific pad_token_id or None if it does not exist.
bos_token_id: (`optional`) int
BOS token. Defaults to `bos_token_id` as defined in the models config.
eos_token_id: (`optional`) int
EOS token. Defaults to `eos_token_id` as defined in the models config.
length_penalty: (`optional`) float
Exponential penalty to the length. Default to 1.
no_repeat_ngram_size: (`optional`) int
If set to int > 0, all ngrams of size `no_repeat_ngram_size` can only occur once.
bad_words_ids: (`optional`) list of lists of int
`bad_words_ids` contains tokens that are not allowed to be generated. In order to get the tokens of the words that should not appear in the generated text, use `tokenizer.encode(bad_word, add_prefix_space=True)`.
num_return_sequences: (`optional`) int
The number of independently computed returned sequences for each element in the batch. Default to 1.
attention_mask (`optional`) obj: `torch.LongTensor` of same shape as `input_ids`
Mask to avoid performing attention on padding token indices.
Mask values selected in ``[0, 1]``:
``1`` for tokens that are NOT MASKED, ``0`` for MASKED tokens.
Defaults to `None`.
`What are attention masks? <../glossary.html#attention-mask>`__
decoder_start_token_id=None: (`optional`) int
If an encoder-decoder model starts decoding with a different token than BOS.
Defaults to `None` and is changed to `BOS` later.
use_cache: (`optional`) bool
If `use_cache` is True, past key values are used to speed up decoding if applicable to model. Defaults to `True`.
model_specific_kwargs: (`optional`) dict
Additional model specific kwargs will be forwarded to the `forward` function of the model.
Return:
output: `torch.LongTensor` of shape `(batch_size * num_return_sequences, sequence_length)`
sequence_length is either equal to max_length or shorter if all batches finished early due to the `eos_token_id`
Examples::
tokenizer = AutoTokenizer.from_pretrained('distilgpt2') # Initialize tokenizer
model = AutoModelWithLMHead.from_pretrained('distilgpt2') # Download model and configuration from S3 and cache.
outputs = model.generate(max_length=40) # do greedy decoding
print('Generated: {}'.format(tokenizer.decode(outputs[0], skip_special_tokens=True)))
tokenizer = AutoTokenizer.from_pretrained('openai-gpt') # Initialize tokenizer
model = AutoModelWithLMHead.from_pretrained('openai-gpt') # Download model and configuration from S3 and cache.
input_context = 'The dog'
input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context
outputs = model.generate(input_ids=input_ids, num_beams=5, num_return_sequences=3, temperature=1.5) # generate 3 independent sequences using beam search decoding (5 beams) with sampling from initial context 'The dog'
for i in range(3): # 3 output sequences were generated
print('Generated {}: {}'.format(i, tokenizer.decode(outputs[i], skip_special_tokens=True)))
tokenizer = AutoTokenizer.from_pretrained('distilgpt2') # Initialize tokenizer
model = AutoModelWithLMHead.from_pretrained('distilgpt2') # Download model and configuration from S3 and cache.
input_context = 'The dog'
input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context
outputs = model.generate(input_ids=input_ids, max_length=40, temperature=0.7, num_return_sequences=3) # 3 generate sequences using by sampling
for i in range(3): # 3 output sequences were generated
print('Generated {}: {}'.format(i, tokenizer.decode(outputs[i], skip_special_tokens=True)))
tokenizer = AutoTokenizer.from_pretrained('ctrl') # Initialize tokenizer
model = AutoModelWithLMHead.from_pretrained('ctrl') # Download model and configuration from S3 and cache.
input_context = 'Legal My neighbor is' # "Legal" is one of the control codes for ctrl
input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context
outputs = model.generate(input_ids=input_ids, max_length=50, temperature=0.7, repetition_penalty=1.2) # generate sequences
print('Generated: {}'.format(tokenizer.decode(outputs[0], skip_special_tokens=True)))
tokenizer = AutoTokenizer.from_pretrained('gpt2') # Initialize tokenizer
model = AutoModelWithLMHead.from_pretrained('gpt2') # Download model and configuration from S3 and cache.
input_context = 'My cute dog' # "Legal" is one of the control codes for ctrl
bad_words_ids = [tokenizer.encode(bad_word, add_prefix_space=True) for bad_word in ['idiot', 'stupid', 'shut up']]
input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context
outputs = model.generate(input_ids=input_ids, max_length=100, do_sample=True, bad_words_ids=bad_words_ids) # generate sequences without allowing bad_words to be generated
"""
# We cannot generate if the model does not have a LM head
if self.get_output_embeddings() is None:
raise AttributeError(
"You tried to generate sequences with a model that does not have a LM Head."
"Please use another model class (e.g. `OpenAIGPTLMHeadModel`, `XLNetLMHeadModel`, `GPT2LMHeadModel`, `CTRLLMHeadModel`, `T5WithLMHeadModel`, `TransfoXLLMHeadModel`, `XLMWithLMHeadModel`, `BartForConditionalGeneration` )"
)
max_length = max_length if max_length is not None else self.config.max_length
min_length = min_length if min_length is not None else self.config.min_length
do_sample = do_sample if do_sample is not None else self.config.do_sample
early_stopping = early_stopping if early_stopping is not None else self.config.early_stopping
use_cache = use_cache if use_cache is not None else self.config.use_cache
num_beams = num_beams if num_beams is not None else self.config.num_beams
temperature = temperature if temperature is not None else self.config.temperature
top_k = top_k if top_k is not None else self.config.top_k
top_p = top_p if top_p is not None else self.config.top_p
repetition_penalty = repetition_penalty if repetition_penalty is not None else self.config.repetition_penalty
bos_token_id = bos_token_id if bos_token_id is not None else self.config.bos_token_id
pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id
eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id
length_penalty = length_penalty if length_penalty is not None else self.config.length_penalty
no_repeat_ngram_size = (
no_repeat_ngram_size if no_repeat_ngram_size is not None else self.config.no_repeat_ngram_size
)
bad_words_ids = bad_words_ids if bad_words_ids is not None else self.config.bad_words_ids
num_return_sequences = (
num_return_sequences if num_return_sequences is not None else self.config.num_return_sequences
)
decoder_start_token_id = (
decoder_start_token_id if decoder_start_token_id is not None else self.config.decoder_start_token_id
)
if input_ids is not None:
batch_size = input_ids.shape[0]  # overridden by the input batch_size
else:
batch_size = 1
assert isinstance(max_length, int) and max_length > 0, "`max_length` should be a strictly positive integer."
assert isinstance(min_length, int) and min_length >= 0, "`min_length` should be a positive integer."
assert isinstance(do_sample, bool), "`do_sample` should be a boolean."
assert isinstance(early_stopping, bool), "`early_stopping` should be a boolean."
assert isinstance(use_cache, bool), "`use_cache` should be a boolean."
assert isinstance(num_beams, int) and num_beams > 0, "`num_beams` should be a strictly positive integer."
assert temperature > 0, "`temperature` should be strictly positive."
assert isinstance(top_k, int) and top_k >= 0, "`top_k` should be a positive integer."
assert 0 <= top_p <= 1, "`top_p` should be between 0 and 1."
assert repetition_penalty >= 1.0, "`repetition_penalty` should be >= 1."
assert input_ids is not None or (
isinstance(bos_token_id, int) and bos_token_id >= 0
), "If input_ids is not defined, `bos_token_id` should be a positive integer."
assert pad_token_id is None or (
isinstance(pad_token_id, int) and (pad_token_id >= 0)
), "`pad_token_id` should be a positive integer."
assert (eos_token_id is None) or (
isinstance(eos_token_id, int) and (eos_token_id >= 0)
), "`eos_token_id` should be a positive integer."
assert length_penalty > 0, "`length_penalty` should be strictly positive."
assert (
isinstance(no_repeat_ngram_size, int) and no_repeat_ngram_size >= 0
), "`no_repeat_ngram_size` should be a positive integer."
assert (
isinstance(num_return_sequences, int) and num_return_sequences > 0
), "`num_return_sequences` should be a strictly positive integer."
assert (
bad_words_ids is None or isinstance(bad_words_ids, list) and isinstance(bad_words_ids[0], list)
), "`bad_words_ids` is either `None` or a list of lists of tokens that should not be generated"
if input_ids is None:
assert isinstance(bos_token_id, int) and bos_token_id >= 0, (
"you should either supply a context to complete as `input_ids` input "
"or a `bos_token_id` (integer >= 0) as a first token to start the generation."
)
input_ids = torch.full(
(batch_size, 1), bos_token_id, dtype=torch.long, device=next(self.parameters()).device,
)
else:
assert input_ids.dim() == 2, "Input prompt should be of shape (batch_size, sequence length)."
# not allow to duplicate outputs when greedy decoding
if do_sample is False:
if num_beams == 1:
# no_beam_search greedy generation conditions
assert (
num_return_sequences == 1
), "Greedy decoding will always produce the same output for num_beams == 1 and num_return_sequences > 1. Please set num_return_sequences = 1"
else:
# beam_search greedy generation conditions
assert (
num_beams >= num_return_sequences
), "Greedy beam search decoding cannot return more sequences than it has beams. Please set num_beams >= num_return_sequences"
# create attention mask if necessary
# TODO (PVP): this should later be handled by the forward fn() in each model in the future see PR 3140
if (attention_mask is None) and (pad_token_id is not None) and (pad_token_id in input_ids):
attention_mask = input_ids.ne(pad_token_id).long()
elif attention_mask is None:
attention_mask = input_ids.new_ones(input_ids.shape)
# set pad_token_id to eos_token_id if not set. Important that this is done after
# attention_mask is created
if pad_token_id is None and eos_token_id is not None:
logger.warning(
"Setting `pad_token_id` to {} (first `eos_token_id`) to generate sequence".format(eos_token_id)
)
pad_token_id = eos_token_id
# current position and vocab size
if hasattr(self.config, "vocab_size"):
vocab_size = self.config.vocab_size
elif (
self.config.is_encoder_decoder
and hasattr(self.config, "decoder")
and hasattr(self.config.decoder, "vocab_size")
):
vocab_size = self.config.decoder.vocab_size
# set effective batch size and effective batch multiplier according to do_sample
if do_sample:
effective_batch_size = batch_size * num_return_sequences
effective_batch_mult = num_return_sequences
else:
effective_batch_size = batch_size
effective_batch_mult = 1
if self.config.is_encoder_decoder:
if decoder_start_token_id is None:
decoder_start_token_id = bos_token_id
assert (
decoder_start_token_id is not None
), "decoder_start_token_id or bos_token_id has to be defined for encoder-decoder generation"
assert hasattr(self, "get_encoder"), "{} should have a 'get_encoder' function defined".format(self)
assert callable(self.get_encoder), "{} should be a method".format(self.get_encoder)
# get encoder and store encoder outputs
encoder = self.get_encoder()
encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask)
# Expand input ids if num_beams > 1 or num_return_sequences > 1
if self.config.is_encoder_decoder:
if decoder_input_ids is not None:
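# NEW (proposed): when the caller passes decoder_input_ids, use them as the
# decoding prefix instead of starting from a single decoder_start_token_id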
input_ids = decoder_input_ids
else:
# create empty decoder_input_ids
input_ids = torch.full(
(effective_batch_size * num_beams, 1),
decoder_start_token_id,
dtype=torch.long,
device=next(self.parameters()).device,
)
cur_len = 1
assert (
batch_size == encoder_outputs[0].shape[0]
), f"expected encoder_outputs[0] to have 1st dimension bs={batch_size}, got {encoder_outputs[0].shape[0]} "
# expand batch_idx to assign correct encoder output for expanded input_ids (due to num_beams > 1 and num_return_sequences > 1)
expanded_batch_idxs = (
torch.arange(batch_size)
.view(-1, 1)
.repeat(1, num_beams * effective_batch_mult)
.view(-1)
.to(input_ids.device)
)
# expand encoder_outputs
encoder_outputs = (encoder_outputs[0].index_select(0, expanded_batch_idxs), *encoder_outputs[1:])
else:
encoder_outputs = None
cur_len = input_ids.shape[-1]
assert (
cur_len < max_length
), f"The context has {cur_len} number of tokens, but `max_length` is only {max_length}. Please make sure that `max_length` is bigger than the number of tokens, by setting either `generate(max_length=...,...)` or `config.max_length = ...`"
if num_return_sequences > 1 or num_beams > 1:
input_ids_len = input_ids.shape[-1]
input_ids = input_ids.unsqueeze(1).expand(batch_size, effective_batch_mult * num_beams, input_ids_len)
attention_mask = attention_mask.unsqueeze(1).expand(
batch_size, effective_batch_mult * num_beams, attention_mask.shape[-1]
)
input_ids = input_ids.contiguous().view(
effective_batch_size * num_beams, input_ids_len
) # shape: (batch_size * num_return_sequences * num_beams, cur_len)
attention_mask = attention_mask.contiguous().view(
effective_batch_size * num_beams, attention_mask.shape[-1]
) # shape: (batch_size * num_return_sequences * num_beams, cur_len)
if num_beams > 1:
output = self._generate_beam_search(
input_ids,
cur_len=cur_len,
max_length=max_length,
min_length=min_length,
do_sample=do_sample,
early_stopping=early_stopping,
temperature=temperature,
top_k=top_k,
top_p=top_p,
repetition_penalty=repetition_penalty,
no_repeat_ngram_size=no_repeat_ngram_size,
bad_words_ids=bad_words_ids,
pad_token_id=pad_token_id,
eos_token_id=eos_token_id,
batch_size=effective_batch_size,
num_return_sequences=num_return_sequences,
length_penalty=length_penalty,
num_beams=num_beams,
vocab_size=vocab_size,
encoder_outputs=encoder_outputs,
attention_mask=attention_mask,
use_cache=use_cache,
decoder_attention_mask=decoder_attention_mask,
model_specific_kwargs=model_specific_kwargs,
)
else:
output = self._generate_no_beam_search(
input_ids,
cur_len=cur_len,
max_length=max_length,
min_length=min_length,
do_sample=do_sample,
temperature=temperature,
top_k=top_k,
top_p=top_p,
repetition_penalty=repetition_penalty,
no_repeat_ngram_size=no_repeat_ngram_size,
bad_words_ids=bad_words_ids,
pad_token_id=pad_token_id,
eos_token_id=eos_token_id,
batch_size=effective_batch_size,
encoder_outputs=encoder_outputs,
attention_mask=attention_mask,
use_cache=use_cache,
decoder_attention_mask=decoder_attention_mask,
model_specific_kwargs=model_specific_kwargs,
)
return output
def _generate_no_beam_search(
self,
input_ids,
cur_len,
max_length,
min_length,
do_sample,
temperature,
top_k,
top_p,
repetition_penalty,
no_repeat_ngram_size,
bad_words_ids,
pad_token_id,
eos_token_id,
batch_size,
encoder_outputs,
attention_mask,
use_cache,
decoder_attention_mask,
model_specific_kwargs,
):
""" Generate sequences for each example without beam search (num_beams == 1).
All returned sequences are generated independently.
"""
# length of generated sentences / unfinished sentences
unfinished_sents = input_ids.new(batch_size).fill_(1)
sent_lengths = input_ids.new(batch_size).fill_(max_length)
past = (encoder_outputs, None) if encoder_outputs is not None else None
while cur_len < max_length:
model_inputs = self.prepare_inputs_for_generation(
input_ids, past=past, attention_mask=attention_mask, use_cache=use_cache, **model_specific_kwargs
)
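# NEW (proposed): forward the caller-supplied decoder_attention_mask so padded prefix tokens are masked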
model_inputs['decoder_attention_mask'] = decoder_attention_mask
outputs = self(**model_inputs)
next_token_logits = outputs[0][:, -1, :]
scores = self.postprocess_next_token_scores(
scores=next_token_logits,
input_ids=input_ids,
no_repeat_ngram_size=no_repeat_ngram_size,
bad_words_ids=bad_words_ids,
cur_len=cur_len,
min_length=min_length,
max_length=max_length,
eos_token_id=eos_token_id,
repetition_penalty=repetition_penalty,
batch_size=batch_size,
num_beams=1,
)
# if model has past, then set the past variable to speed up decoding
if self._use_cache(outputs, use_cache):
past = outputs[1]
if do_sample:
# Temperature (higher temperature => more likely to sample low probability tokens)
if temperature != 1.0:
scores = scores / temperature
# Top-p/top-k filtering
next_token_logscores = top_k_top_p_filtering(scores, top_k=top_k, top_p=top_p)
# Sample
probs = F.softmax(next_token_logscores, dim=-1)
next_token = torch.multinomial(probs, num_samples=1).squeeze(1)
else:
# Greedy decoding
next_token = torch.argmax(next_token_logits, dim=-1)
# update generations and finished sentences
if eos_token_id is not None:
# pad finished sentences if eos_token_id exist
tokens_to_add = next_token * unfinished_sents + (pad_token_id) * (1 - unfinished_sents)
else:
tokens_to_add = next_token
# add token and increase length by one
input_ids = torch.cat([input_ids, tokens_to_add.unsqueeze(-1)], dim=-1)
cur_len = cur_len + 1
if eos_token_id is not None:
eos_in_sents = tokens_to_add == eos_token_id
# if sentence is unfinished and the token to add is eos, sent_lengths is filled with current length
is_sents_unfinished_and_token_to_add_is_eos = unfinished_sents.mul(eos_in_sents.long()).bool()
sent_lengths.masked_fill_(is_sents_unfinished_and_token_to_add_is_eos, cur_len)
# unfinished_sents is set to zero if eos in sentence
unfinished_sents.mul_((~eos_in_sents).long())
# stop when there is a </s> in each sentence, or if we exceed the maximum length
if unfinished_sents.max() == 0:
break
# extend attention_mask for new generated input if only decoder
if self.config.is_encoder_decoder is False:
attention_mask = torch.cat(
[attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1
)
return input_ids
def _generate_beam_search(
self,
input_ids,
cur_len,
max_length,
min_length,
do_sample,
early_stopping,
temperature,
top_k,
top_p,
repetition_penalty,
no_repeat_ngram_size,
bad_words_ids,
pad_token_id,
eos_token_id,
batch_size,
num_return_sequences,
length_penalty,
num_beams,
vocab_size,
encoder_outputs,
attention_mask,
use_cache,
decoder_attention_mask,
model_specific_kwargs,
):
""" Generate sequences for each example with beam search.
"""
# generated hypotheses
generated_hyps = [
BeamHypotheses(num_beams, max_length, length_penalty, early_stopping=early_stopping)
for _ in range(batch_size)
]
# scores for each sentence in the beam
beam_scores = torch.zeros((batch_size, num_beams), dtype=torch.float, device=input_ids.device)
# for greedy decoding it is made sure that only tokens of the first beam are considered to avoid sampling the exact same tokens three times
if do_sample is False:
beam_scores[:, 1:] = -1e9
beam_scores = beam_scores.view(-1) # shape (batch_size * num_beams,)
# cache compute states
past = (encoder_outputs, None) if encoder_outputs is not None else None
# done sentences
done = [False for _ in range(batch_size)]
while cur_len < max_length:
model_inputs = self.prepare_inputs_for_generation(
input_ids, past=past, attention_mask=attention_mask, use_cache=use_cache, **model_specific_kwargs
)
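# NEW (proposed): as in the no-beam-search path, forward the caller-supplied decoder_attention_mask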
model_inputs['decoder_attention_mask'] = decoder_attention_mask
outputs = self(**model_inputs) # (batch_size * num_beams, cur_len, vocab_size)
next_token_logits = outputs[0][:, -1, :] # (batch_size * num_beams, vocab_size)
# if model has past, then set the past variable to speed up decoding
if self._use_cache(outputs, use_cache):
past = outputs[1]
if self.config.is_encoder_decoder and do_sample is False:
# TODO (PVP) still a bit hacky here - there might be a better solution
next_token_logits = self.adjust_logits_during_generation(
next_token_logits, cur_len=cur_len, max_length=max_length
)
scores = F.log_softmax(next_token_logits, dim=-1) # (batch_size * num_beams, vocab_size)
scores = self.postprocess_next_token_scores(
scores=scores,
input_ids=input_ids,
no_repeat_ngram_size=no_repeat_ngram_size,
bad_words_ids=bad_words_ids,
cur_len=cur_len,
min_length=min_length,
max_length=max_length,
eos_token_id=eos_token_id,
repetition_penalty=repetition_penalty,
batch_size=batch_size,
num_beams=num_beams,
)
assert scores.shape == (batch_size * num_beams, vocab_size), "Shapes of scores: {} != {}".format(
scores.shape, (batch_size * num_beams, vocab_size)
)
if do_sample:
_scores = scores + beam_scores[:, None].expand_as(scores) # (batch_size * num_beams, vocab_size)
# Temperature
if temperature != 1.0:
_scores = _scores / temperature
# Top-p/top-k filtering
_scores = top_k_top_p_filtering(
_scores, top_k=top_k, top_p=top_p, min_tokens_to_keep=2
) # (batch_size * num_beams, vocab_size)
# re-organize to group the beam together to sample from all beam_idxs
_scores = _scores.contiguous().view(
batch_size, num_beams * vocab_size
) # (batch_size, num_beams * vocab_size)
# Sample 2 next tokens for each beam (so we have some spare tokens and match output of greedy beam search)
probs = F.softmax(_scores, dim=-1)
next_tokens = torch.multinomial(probs, num_samples=2 * num_beams) # (batch_size, num_beams * 2)
# Compute next scores
next_scores = torch.gather(_scores, -1, next_tokens) # (batch_size, num_beams * 2)
# sort the sampled vector to make sure that the first num_beams samples are the best
next_scores, next_scores_indices = torch.sort(next_scores, descending=True, dim=1)
next_tokens = torch.gather(next_tokens, -1, next_scores_indices) # (batch_size, num_beams * 2)
else:
next_scores = scores + beam_scores[:, None].expand_as(scores) # (batch_size * num_beams, vocab_size)
# re-organize to group the beam together (we are keeping top hypotheses across beams)
next_scores = next_scores.view(
batch_size, num_beams * vocab_size
) # (batch_size, num_beams * vocab_size)
next_scores, next_tokens = torch.topk(next_scores, 2 * num_beams, dim=1, largest=True, sorted=True)
assert next_scores.size() == next_tokens.size() == (batch_size, 2 * num_beams)
# next batch beam content
next_batch_beam = []
# for each sentence
for batch_idx in range(batch_size):
# if we are done with this sentence, add a pad token
if done[batch_idx]:
assert (
len(generated_hyps[batch_idx]) >= num_beams
), "Batch can only be done if at least {} beams have been generated".format(num_beams)
assert (
eos_token_id is not None and pad_token_id is not None
), "generated beams >= num_beams -> eos_token_id and pad_token have to be defined"
next_batch_beam.extend([(0, pad_token_id, 0)] * num_beams) # pad the batch
continue
# next sentence beam content, this will get added to next_batch_beam
next_sent_beam = []
# next tokens for this sentence
for beam_token_rank, (beam_token_id, beam_token_score) in enumerate(
zip(next_tokens[batch_idx], next_scores[batch_idx])
):
# get beam and token IDs
beam_id = beam_token_id // vocab_size
token_id = beam_token_id % vocab_size
effective_beam_id = batch_idx * num_beams + beam_id
# add to generated hypotheses if end of sentence
if (eos_token_id is not None) and (token_id.item() == eos_token_id):
# if beam_token does not belong to top num_beams tokens, it should not be added
is_beam_token_worse_than_top_num_beams = beam_token_rank >= num_beams
if is_beam_token_worse_than_top_num_beams:
continue
generated_hyps[batch_idx].add(
input_ids[effective_beam_id].clone(), beam_token_score.item(),
)
else:
# add next predicted token since it is not eos_token
next_sent_beam.append((beam_token_score, token_id, effective_beam_id))
# once the beam for next step is full, don't add more tokens to it.
if len(next_sent_beam) == num_beams:
break
# Check if we are done so that we can save a pad step if all(done)
done[batch_idx] = done[batch_idx] or generated_hyps[batch_idx].is_done(
next_scores[batch_idx].max().item(), cur_len
)
# update next beam content
assert len(next_sent_beam) == num_beams, "Beam should always be full"
next_batch_beam.extend(next_sent_beam)
assert len(next_batch_beam) == num_beams * (batch_idx + 1), "We should have added num_beams each step"
# stop when we are done with each sentence
if all(done):
break
# sanity check / prepare next batch
assert len(next_batch_beam) == batch_size * num_beams
beam_scores = beam_scores.new([x[0] for x in next_batch_beam])
beam_tokens = input_ids.new([x[1] for x in next_batch_beam])
beam_idx = input_ids.new([x[2] for x in next_batch_beam])
# re-order batch and update current length
input_ids = input_ids[beam_idx, :]
input_ids = torch.cat([input_ids, beam_tokens.unsqueeze(1)], dim=-1)
cur_len = cur_len + 1
# re-order internal states
if past is not None:
past = self._reorder_cache(past, beam_idx)
# extend attention_mask for new generated input if only decoder
if self.config.is_encoder_decoder is False:
attention_mask = torch.cat(
[attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1
)
# finalize all open beam hypotheses and add to generated hypotheses
for batch_idx in range(batch_size):
if done[batch_idx]:
continue
# test that beam scores match previously calculated scores if not eos and batch_idx not done
if eos_token_id is not None and all(
(token_id % vocab_size).item() != eos_token_id for token_id in next_tokens[batch_idx]
):
assert torch.all(
next_scores[batch_idx, :num_beams] == beam_scores.view(batch_size, num_beams)[batch_idx]
), "If batch_idx is not done, final next scores: {} have to equal to accumulated beam_scores: {}".format(
next_scores[:, :num_beams][batch_idx], beam_scores.view(batch_size, num_beams)[batch_idx],
)
# need to add best num_beams hypotheses to generated hyps
for beam_id in range(num_beams):
effective_beam_id = batch_idx * num_beams + beam_id
final_score = beam_scores[effective_beam_id].item()
final_tokens = input_ids[effective_beam_id]
generated_hyps[batch_idx].add(final_tokens, final_score)
# depending on whether greedy generation is wanted or not define different output_batch_size and output_num_return_sequences_per_batch
output_batch_size = batch_size if do_sample else batch_size * num_return_sequences
output_num_return_sequences_per_batch = 1 if do_sample else num_return_sequences
# select the best hypotheses
sent_lengths = input_ids.new(output_batch_size)
best = []
# retrieve best hypotheses
for i, hypotheses in enumerate(generated_hyps):
sorted_hyps = sorted(hypotheses.beams, key=lambda x: x[0])
for j in range(output_num_return_sequences_per_batch):
effective_batch_idx = output_num_return_sequences_per_batch * i + j
best_hyp = sorted_hyps.pop()[1]
sent_lengths[effective_batch_idx] = len(best_hyp)
best.append(best_hyp)
# shorter batches are padded
if sent_lengths.min().item() != sent_lengths.max().item():
assert pad_token_id is not None, "`Pad_token_id` has to be defined"
sent_max_len = min(sent_lengths.max().item() + 1, max_length)
decoded = input_ids.new(output_batch_size, sent_max_len).fill_(pad_token_id)
# fill with hypothesis and eos_token_id if necessary
for i, hypo in enumerate(best):
decoded[i, : sent_lengths[i]] = hypo
if sent_lengths[i] < max_length:
decoded[i, sent_lengths[i]] = eos_token_id
else:
# none of the hypotheses have an eos_token
assert (len(hypo) == max_length for hypo in best)
decoded = torch.stack(best).type(torch.long).to(next(self.parameters()).device)
return decoded
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6126/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6125 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6125/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6125/comments | https://api.github.com/repos/huggingface/transformers/issues/6125/events | https://github.com/huggingface/transformers/issues/6125 | 667,744,938 | MDU6SXNzdWU2Njc3NDQ5Mzg= | 6,125 | problem about geting hidden_states using TFBertModel | {
"login": "jianrui1995",
"id": 20520524,
"node_id": "MDQ6VXNlcjIwNTIwNTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/20520524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jianrui1995",
"html_url": "https://github.com/jianrui1995",
"followers_url": "https://api.github.com/users/jianrui1995/followers",
"following_url": "https://api.github.com/users/jianrui1995/following{/other_user}",
"gists_url": "https://api.github.com/users/jianrui1995/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jianrui1995/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jianrui1995/subscriptions",
"organizations_url": "https://api.github.com/users/jianrui1995/orgs",
"repos_url": "https://api.github.com/users/jianrui1995/repos",
"events_url": "https://api.github.com/users/jianrui1995/events{/privacy}",
"received_events_url": "https://api.github.com/users/jianrui1995/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello!\r\n\r\nI should be solved here https://github.com/huggingface/transformers/pull/5468. Nevertheless, you have to avoid using boolean tensors for `output_hidden_states` and `output_attentions` otherwise it won't work.",
"Hello!\r\n\r\ncould you tell me more details about how to avoid using boolean tensors? or how to achieve this goal in version 3.0.2 ? \r\nHow could I got your fixed code? or How long time we can see your fixed code in new version? \r\nthanks for your answers.",
"Fix has been merged in master!",
"Should be fixed on `master`",
"@jianrui1995 Hi, could you tell me how you solved the discrepancy in the behaviour with or without @tf.function, I am facing the same issue, tf version: 2.3.0 and transformers 3.0.2\r\n\r\n```\r\nclass WrappedModel(tf.Module):\r\n\tdef __init__(self):\r\n\t\tsuper(WrappedModel, self).__init__()\r\n\t\tself.model = TFDistilBertModel.from_pretrained('distilbert-base-uncased', output_hidden_states=True)\r\n\t# @tf.function\r\n\tdef __call__(self, x):\r\n\t\treturn self.model(x)\r\n```",
"I didn't solved this problem in version 3.0.2 but I got the bug code in **line 765 modeling_tf_tuils.py** \r\ncode as follow:\r\n```\r\ndef cast_bool_to_primitive(bool_variable, default_tensor_to_true=False):\r\n \"\"\"Function arguments can be inserted as boolean tensor\r\n and bool variables to cope with keras serialization\r\n we need to cast `output_attentions` to correct bool\r\n if it is a tensor\r\n\r\n Args:\r\n default_tensor_to_true: bool, if tensor should default to True\r\n in case tensor has no numpy attribute\r\n \"\"\"\r\n # if bool variable is tensor and has numpy value\r\n if tf.is_tensor(bool_variable):\r\n if hasattr(bool_variable, \"numpy\"):\r\n return bool(bool_variable.numpy())\r\n elif default_tensor_to_true:\r\n return True\r\n\r\n # else variable is bool\r\n return bool_variable\r\n```\r\nthe code witch calling the method in **line 407 in modeling_tf_bert.py** \r\n```\r\n if cast_bool_to_primitive(output_hidden_states) is True:\r\n all_hidden_states = all_hidden_states + (hidden_states,)\r\n```\r\naccording to tensorflow2's document, if you add **tf.tfunction**, tf will change the model from enger to graph by Autograph. The method's return True will change to bool tensor. it is different from true.\r\n\r\naccording the jplu's answer, this problem has been solved in master. you colud get the master version.",
"@jianrui1995 Thanks for your response, yes I was able to do it with the updated code in master. Was more curious about the version of transformers that has the support for this, since maintainability is an issue.",
"This part is still a work in progress, what there is in master is just a tiny workaround and doesn't work for several cases. We will push an update when we will have found a proper solution."
] | 1,596 | 1,598 | 1,596 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Pycharm
- Python version: 3.6
- PyTorch version (GPU?): None
- Tensorflow version (GPU?): 2.2.0 GPU
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik @jplu
## Information
Model I am using: Bert. I want the outputs of the **TFBertModel** class to include **hidden_states**. Unfortunately, neither of the two methods provided in the transformers documentation achieves this goal.
The first method: **output_hidden_states=True** is passed to `call()`. Code as follows:
```
import tensorflow as tf
import transformers

class Model(tf.keras.Model):
    def __init__(self):  # note: removed the unused `init` argument so `Model()` below works
        super(Model, self).__init__()
        conf = transformers.BertConfig.from_json_file("model/chinese_L-12_H-768_A-12/config.json")  # unused in this first method
        self.bertmodel = transformers.TFBertModel.from_pretrained("bert-base-chinese")

    @tf.function
    def call(self, inputs, training=None, mask=None):
        out_bert = self.bertmodel(inputs, output_hidden_states=True)
        return out_bert

if __name__ == "__main__":
    tokenizer = transformers.BertTokenizer("model/chinese_L-12_H-768_A-12/vocab.txt")
    text_2 = tokenizer.batch_encode_plus(["你买啊,买了就是成都人", "你来啊,来了就是深圳人"], max_length=20, pad_to_max_length=True)
    print(text_2)
    model = Model()
    out = model([tf.convert_to_tensor(text_2["input_ids"]), tf.convert_to_tensor(text_2["attention_mask"])])
    print("out", out)
```
The console print shows only two tensors: **last_hidden_state** and **pooler_output**.
The second method: set **config.output_hidden_states=True**. Code as follows:
config.json:
```
{
"attention_probs_dropout_prob": 0.1,
"directionality": "bidi",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform",
"type_vocab_size": 2,
"vocab_size": 21128,
"output_hidden_states": true
}
```
I set **"output_hidden_states": true** at last line.
code:
```
import tensorflow as tf
import transformers

class Model(tf.keras.Model):
    def __init__(self):  # note: removed the unused `init` argument so `Model()` below works
        super(Model, self).__init__()
        conf = transformers.BertConfig.from_json_file("model/chinese_L-12_H-768_A-12/config.json")
        self.bertmodel = transformers.TFBertModel.from_pretrained("bert-base-chinese", config=conf)

    @tf.function
    def call(self, inputs, training=None, mask=None):
        out_bert = self.bertmodel(inputs)
        return out_bert

if __name__ == "__main__":
    tokenizer = transformers.BertTokenizer("model/chinese_L-12_H-768_A-12/vocab.txt")
    text_2 = tokenizer.batch_encode_plus(["你买啊,买了就是成都人", "你来啊,来了就是深圳人"], max_length=20, pad_to_max_length=True)
    print(text_2)
    model = Model()
    out = model([tf.convert_to_tensor(text_2["input_ids"]), tf.convert_to_tensor(text_2["attention_mask"])])
    print("out", out)
```
The console output is the same as with the first method, but if I remove **@tf.function** from `call()`, the output is what I expect.
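For reference, a minimal workaround sketch until the fix is released, assuming it is acceptable to trade graph-mode speed for eager semantics (API name as of TF 2.2):

```python
import tensorflow as tf

# force tf.function-decorated calls to run eagerly, so boolean flags such as
# output_hidden_states stay Python bools instead of symbolic graph tensors
tf.config.experimental_run_functions_eagerly(True)
```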
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6125/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6124 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6124/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6124/comments | https://api.github.com/repos/huggingface/transformers/issues/6124/events | https://github.com/huggingface/transformers/issues/6124 | 667,704,753 | MDU6SXNzdWU2Njc3MDQ3NTM= | 6,124 | convert_pytorch_checkpoint_to_tf2.py AttributeError: cls.seq_relationship.weight not found in PyTorch model | {
"login": "DenisStenyushkin",
"id": 32551417,
"node_id": "MDQ6VXNlcjMyNTUxNDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/32551417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DenisStenyushkin",
"html_url": "https://github.com/DenisStenyushkin",
"followers_url": "https://api.github.com/users/DenisStenyushkin/followers",
"following_url": "https://api.github.com/users/DenisStenyushkin/following{/other_user}",
"gists_url": "https://api.github.com/users/DenisStenyushkin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DenisStenyushkin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DenisStenyushkin/subscriptions",
"organizations_url": "https://api.github.com/users/DenisStenyushkin/orgs",
"repos_url": "https://api.github.com/users/DenisStenyushkin/repos",
"events_url": "https://api.github.com/users/DenisStenyushkin/events{/privacy}",
"received_events_url": "https://api.github.com/users/DenisStenyushkin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @DenisStenyushkin,\r\n\r\nIt seems like you have the PyTorch model trained with our library, so you can simply do:\r\n\r\n```python\r\nfrom transformers import TFBertModel\r\nmodel = TFBertModel.from_pretrained(\"./rubert-base-cased-pt\", from_pt=True)\r\n\r\nmodel.save(\"./rubert-base-cased\") # this adds a TF model file (tf_model.h5) to your directory\r\n```\r\n\r\nLet me know if this does not solve your problem!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,603 | 1,603 | NONE | null | Hi all. I'm trying to convert a pre-trained BERT model from PyTorch into TF2 format and facing a problem.
Model being converted: DeepPavlov/rubert-base-cased. I downloaded all files into a local folder.
File convert_pytorch_checkpoint_to_tf2.py from master branch - I downloaded it separately.
The command I'm running:
`python convert_pytorch_checkpoint_to_tf2.py --tf_dump_path ./rubert-base-cased_tf2/ --model_type bert --pytorch_checkpoint_path ./rubert-base-cased-pt/pytorch_model.bin --config_file ./rubert-base-cased-pt/config.json`
And the stacktrace:
```
Traceback (most recent call last):
File "convert_pytorch_checkpoint_to_tf2.py", line 364, in <module>
convert_all_pt_checkpoints_to_tf(
File "convert_pytorch_checkpoint_to_tf2.py", line 298, in convert_all_pt_checkpoints_to_tf
convert_pt_checkpoint_to_tf(
File "convert_pytorch_checkpoint_to_tf2.py", line 209, in convert_pt_checkpoint_to_tf
tf_model = load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path)
File "/Users/denisstenyushkin/.virtualenvs/tf2_pytorch/lib/python3.8/site-packages/transformers/modeling_tf_pytorch_utils.py", line 92, in load_pytorch_checkpoint_in_tf2_model
return load_pytorch_weights_in_tf2_model(
File "/Users/denisstenyushkin/.virtualenvs/tf2_pytorch/lib/python3.8/site-packages/transformers/modeling_tf_pytorch_utils.py", line 166, in load_pytorch_weights_in_tf2_model
raise AttributeError("{} not found in PyTorch model".format(name))
AttributeError: cls.seq_relationship.weight not found in PyTorch model
```
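For reference, a sketch of the workaround suggested in the comments, assuming the checkpoint folder contains `config.json` and `pytorch_model.bin`; `save_pretrained` writes a `tf_model.h5` alongside the config:

```python
from transformers import TFBertModel

model = TFBertModel.from_pretrained("./rubert-base-cased-pt", from_pt=True)
model.save_pretrained("./rubert-base-cased_tf2")
```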
Versions:
- `transformers` version: 3.0.2
- Platform: macOS-10.15.5-x86_64-i386-64bit
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6124/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6123 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6123/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6123/comments | https://api.github.com/repos/huggingface/transformers/issues/6123/events | https://github.com/huggingface/transformers/pull/6123 | 667,683,365 | MDExOlB1bGxSZXF1ZXN0NDU4MzAzODQ4 | 6,123 | Create README.md | {
"login": "AMontgomerie",
"id": 7648722,
"node_id": "MDQ6VXNlcjc2NDg3MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7648722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AMontgomerie",
"html_url": "https://github.com/AMontgomerie",
"followers_url": "https://api.github.com/users/AMontgomerie/followers",
"following_url": "https://api.github.com/users/AMontgomerie/following{/other_user}",
"gists_url": "https://api.github.com/users/AMontgomerie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AMontgomerie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AMontgomerie/subscriptions",
"organizations_url": "https://api.github.com/users/AMontgomerie/orgs",
"repos_url": "https://api.github.com/users/AMontgomerie/repos",
"events_url": "https://api.github.com/users/AMontgomerie/events{/privacy}",
"received_events_url": "https://api.github.com/users/AMontgomerie/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Thanks!"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6123/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6123",
"html_url": "https://github.com/huggingface/transformers/pull/6123",
"diff_url": "https://github.com/huggingface/transformers/pull/6123.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6123.patch",
"merged_at": 1596577374000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6122 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6122/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6122/comments | https://api.github.com/repos/huggingface/transformers/issues/6122/events | https://github.com/huggingface/transformers/pull/6122 | 667,681,262 | MDExOlB1bGxSZXF1ZXN0NDU4MzAyMTE5 | 6,122 | [T5Tokenizer] add prepare_seq2seq_batch method | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"@sshleifer , @sgugger I have made changes regarding the suggestions. Thanks !",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6122?src=pr&el=h1) Report\n> Merging [#6122](https://codecov.io/gh/huggingface/transformers/pull/6122?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/92f8ce2ed65f23f91795ce6eafb8cce1e226cd38&el=desc) will **increase** coverage by `0.08%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6122?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6122 +/- ##\n==========================================\n+ Coverage 78.51% 78.59% +0.08% \n==========================================\n Files 146 146 \n Lines 26326 26347 +21 \n==========================================\n+ Hits 20669 20708 +39 \n+ Misses 5657 5639 -18 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6122?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `96.73% <100.00%> (+0.96%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.09% <0.00%> (-4.88%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.71% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+12.87%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6122?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6122?src=pr&el=footer). Last update [92f8ce2...a84bb5b](https://codecov.io/gh/huggingface/transformers/pull/6122?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@sshleifer , @patrickvonplaten , all green :)"
] | 1,596 | 1,597 | 1,597 | MEMBER | null | This PR adds a `prepare_seq2seq_batch` method to `T5Tokenizer`, as per the proposal in #6080
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6122/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6122",
"html_url": "https://github.com/huggingface/transformers/pull/6122",
"diff_url": "https://github.com/huggingface/transformers/pull/6122.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6122.patch",
"merged_at": 1597687040000
} |
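For context on the record above: a minimal, hedged sketch of how the `prepare_seq2seq_batch` method added in #6122 might be called. The argument names follow the proposal in #6080, but the exact signature and the returned keys are assumptions here, not a confirmed API reference.

```python
# Hedged usage sketch for T5Tokenizer.prepare_seq2seq_batch (see #6080 / #6122).
# Argument names and return keys below are assumptions drawn from the proposal.
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")

batch = tokenizer.prepare_seq2seq_batch(
    src_texts=["translate English to German: How old are you?"],
    tgt_texts=["Wie alt bist du?"],
    max_length=64,
    return_tensors="pt",
)

# The batch is expected to bundle encoder inputs with tokenized targets,
# ready for a seq2seq model's training step.
print(sorted(batch.keys()))
```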
https://api.github.com/repos/huggingface/transformers/issues/6121 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6121/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6121/comments | https://api.github.com/repos/huggingface/transformers/issues/6121/events | https://github.com/huggingface/transformers/pull/6121 | 667,675,017 | MDExOlB1bGxSZXF1ZXN0NDU4Mjk3MTQ5 | 6,121 | XLNet PLM Readme | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6121?src=pr&el=h1) Report\n> Merging [#6121](https://codecov.io/gh/huggingface/transformers/pull/6121?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/92f8ce2ed65f23f91795ce6eafb8cce1e226cd38&el=desc) will **decrease** coverage by `0.73%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6121?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6121 +/- ##\n==========================================\n- Coverage 78.51% 77.77% -0.74% \n==========================================\n Files 146 146 \n Lines 26326 26326 \n==========================================\n- Hits 20669 20476 -193 \n- Misses 5657 5850 +193 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6121?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+12.87%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6121?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6121?src=pr&el=footer). Last update [92f8ce2...f465894](https://codecov.io/gh/huggingface/transformers/pull/6121?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | MEMBER | null | Add information on XLNet and its PLM objective in the language-modeling README. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6121/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6121",
"html_url": "https://github.com/huggingface/transformers/pull/6121",
"diff_url": "https://github.com/huggingface/transformers/pull/6121.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6121.patch",
"merged_at": 1596037096000
} |
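As background for the README change in #6121: a hedged sketch of the permutation language modeling (PLM) setup that README documents, built around `DataCollatorForPermutationLanguageModeling`. The hyperparameter values below are illustrative assumptions, not values taken from the README itself.

```python
# Hedged sketch of XLNet's PLM training setup (context for #6121).
# plm_probability and max_span_length values are illustrative assumptions.
from transformers import (
    DataCollatorForPermutationLanguageModeling,
    XLNetLMHeadModel,
    XLNetTokenizer,
)

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")

# The collator builds the permutation masks XLNet trains with: spans of up to
# max_span_length tokens are selected for prediction so that roughly
# plm_probability of each sequence is masked.
data_collator = DataCollatorForPermutationLanguageModeling(
    tokenizer=tokenizer,
    plm_probability=1 / 6,
    max_span_length=5,
)
```

A `Trainer` would then consume `model` and `data_collator` as in the other language-modeling examples.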
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.