| Field | Type | Lengths / values |
|---|---|---|
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/8825
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8825/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8825/comments
https://api.github.com/repos/huggingface/transformers/issues/8825/events
https://github.com/huggingface/transformers/pull/8825
752,480,127
MDExOlB1bGxSZXF1ZXN0NTI4Nzk0Mzc3
8,825
Model parallel tests should return, not pass in non model parallel se…
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
MEMBER
null
…ttings. `pass` does not skip the test; `return` does.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8825/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8825/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8825", "html_url": "https://github.com/huggingface/transformers/pull/8825", "diff_url": "https://github.com/huggingface/transformers/pull/8825.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8825.patch", "merged_at": 1606513290000 }
https://api.github.com/repos/huggingface/transformers/issues/8824
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8824/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8824/comments
https://api.github.com/repos/huggingface/transformers/issues/8824/events
https://github.com/huggingface/transformers/pull/8824
752,466,838
MDExOlB1bGxSZXF1ZXN0NTI4Nzg0MDQw
8,824
suggest a numerical limit of 50MB for determining @slow
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
CONTRIBUTOR
null
This is a follow up to https://github.com/huggingface/transformers/issues/7250 which adds a guideline to when making a test `@slow` based on the download requirements if any. The suggested value is >50MB, and we can adjust it later if it's too large or small. Fixes: https://github.com/huggingface/transformers/issues/7250 @LysandreJik, @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8824/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8824/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8824", "html_url": "https://github.com/huggingface/transformers/pull/8824", "diff_url": "https://github.com/huggingface/transformers/pull/8824.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8824.patch", "merged_at": 1606511095000 }
https://api.github.com/repos/huggingface/transformers/issues/8823
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8823/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8823/comments
https://api.github.com/repos/huggingface/transformers/issues/8823/events
https://github.com/huggingface/transformers/pull/8823
752,451,873
MDExOlB1bGxSZXF1ZXN0NTI4NzcxODk5
8,823
[s2s trainer] fix DP mode
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "moving the discussion out of the review commentary as it disappears as soon as it's resolved, so it's best to discuss it in the normal comments as this is what this PR is trying to solve.\r\n\r\n-------------\r\n\r\nOh, I see - thank you for catching that. So I didn't solve the actual problem, but had a luck of hiding it under the carpet.\r\n\r\nThe problem is that the `distributed=...` is wrong here - it is currently coded to expect ddp when `distributed==True` and not dp. dp doesn't have `get_world_size()`/etc and so it fails, so should that arg be called `dpp` instead of `distributed`? But in any case the correct solution is then:\r\n```\r\n self.train_dataset.make_sortish_sampler(\r\n self.args.per_device_train_batch_size, distributed=self.args.local_rank != -1)\r\n```\r\nor re-coded to handle dp too? I don't know the initial intention - should it support `sortish_sampler` under dp or not?\r\n\r\nwe need to know whether to:\r\n\r\n1. recode `make_sortish_sampler` to support dp (can't use `get_world_size()`/etc)\r\n2. recode `make_sortish_sampler` to change its `distributed` arg to `dpp`, so that it only does the special case for dpp.\r\n\r\nAnd somewhat unrelated to the actual bug, I'd like to repeat the request at https://github.com/huggingface/transformers/issues/8822 - let's have a simple flag so that the downstream code knows which mode it is under and not via checking ranks and n_gpus which is very confusing and error-prone.", "Here is where the problem happens with dp:\r\n\r\nhttps://github.com/huggingface/transformers/blob/9995a341c9d68a9963d86c506d17330b3ad813f9/examples/seq2seq/utils.py#L361-L368\r\n\r\nSo `dist.is_available()` returns `True` under `dp`, but `dist.get_world_size()` fails, since it only works under `dpp` and requires `torch.distributed.init_process_group()` which doesn't get called under `dp`.", "In `DataParallel` mode, you don't need to do anything to your datalaoder (only in DistributedDataParallel where you need to split the batches across the various processes somehow) so you should make a regular datalaoder in that case.\r\nIn general, the only proper way to detect if you are in distributed data parallel is to look at the test `local_rank != -1` as `torch.distributed` can give you false information there. I agree it would all be much easier if the training arguments contained something that directly gives the distributed environment. ", "> In `DataParallel` mode, you don't need to do anything to your datalaoder (only in DistributedDataParallel where you need to split the batches across the various processes somehow) so you should make a regular datalaoder in that case.\r\n\r\nGreat, so then should we change the signature to make it clear ddp is wanted and not any distributed:\r\n\r\n```\r\n- def make_sortish_sampler(self, batch_size, distributed=False, shuffle=True, **kwargs):\r\n+ def make_sortish_sampler(self, batch_size, ddp=False, shuffle=True, **kwargs):\r\n```\r\n\r\nand adjust the invocations accordingly?\r\n\r\n> In general, the only proper way to detect if you are in distributed data parallel is to look at the test `local_rank != -1` as `torch.distributed` can give you false information there. I agree it would all be much easier if the training arguments contained something that directly gives the distributed environment.\r\n\r\nGreat. Should we create a feature request for that?\r\n", "I think there is a misunderstanding on the terminology: `DataParallel` is not distributed: distributed means launching several processes with the same script. 
The package `torch.distributed` does not return anything useful for `DataParallel` and ddp stands for *distributed* data parallel, so leaving that argument as distributed seems better to me.\r\n\r\n> Great. Should we create a feature request for that?\r\n\r\nWe can do that, yes.", "If you stick to the specific implementation, yes, dpp is the only distributed mode. But logically it doesn't make sense. DP is just as distributed as DPP, just isn't using the `torch.distributed`, so it's not a very clear distinction and will lead to such confusions all over. \r\n\r\nAs an example if you look at this function usage pattern it's mostly `dataset.make_sortish_sampler(batch_size, distributed=self.hparams.gpus > 1)` which clearly implies for any multi gpu mode (and erroneously so).", "I disagree, in the sense that code use PyTorch should stick with the PyTorch naming conventions. They chose to have a not distributed `DataParallel`, so we should honor that in our naming as well. In Distributed data parallel, you have to use a `DistributedSampler` (but not in `DataParallel`) etc. Those are all *parallel* modes (as you're training with multiple GPUs) but only one is *distributed*.", "That is a reasonable choice to follow. I'm only flagging how this leads to coding errors when a developer assumes that n_gpu> 1 == ddp. So perhaps some extra support is needed there.", "Let's see how it goes once we add the \"distributed_env\" to `TrainingArguments`!", "@sgugger, please kindly review at your convenience - I addressed all the issues you have raised - all should be good - CI failures are unrelated. Thank you!", "> Perfect, thanks a lot for humoring me and my annoying comments :-)\r\n\r\nOn the contrary, your comments were excellent and to the point. \r\n\r\nI was just slow on getting your point of view since in my mind if we solve a problem on multiple gpus it's distributed across multiple-gpus, regardless of the way it's implemented. But here distributed means distributed across multiple processes. Different semantics.", "So this is probably wrong too:\r\n\r\n```\r\n# examples/seq2seq/finetune.py: \r\nsampler = dataset.make_sortish_sampler(batch_size, distributed=self.hparams.gpus > 1)\r\n```\r\n\r\nBut that's code base on PL.\r\n\r\n@patil-suraj, may be you could have a look when you start working at this one? I suspect that it should do a different check for distributed and not check the number of gpus. Let me know if you prefer that I open a separate issue.\r\n", "Dunno how PL works.", "> Let's see how it goes once we add the \"distributed_env\" to `TrainingArguments`!\r\n\r\nAdded a feature request: https://github.com/huggingface/transformers/issues/8858", "Thank you HuggingFace Team and @stas00 , I cannot express how much I appreciate your efforts. " ]
1,606
1,606
1,606
CONTRIBUTOR
null
This PR: * [x] fixes https://github.com/huggingface/transformers/issues/8822 which currently crashes under multigpu and w/o an explicit ddp mode * [x] adds tests * [x] makes `finetune_trainer.py` executable/runnable @patrickvonplaten, @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8823/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8823/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8823", "html_url": "https://github.com/huggingface/transformers/pull/8823", "diff_url": "https://github.com/huggingface/transformers/pull/8823.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8823.patch", "merged_at": 1606769756000 }
https://api.github.com/repos/huggingface/transformers/issues/8822
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8822/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8822/comments
https://api.github.com/repos/huggingface/transformers/issues/8822/events
https://github.com/huggingface/transformers/issues/8822
752,451,561
MDU6SXNzdWU3NTI0NTE1NjE=
8,822
[s2s finetune_trainer] a mess around distributed
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, thanks @stas00 , I would be grateful to integrate this fix, I am currently dealing with this issue and using this script. thanks.", "Once https://github.com/huggingface/transformers/pull/8823 is merged (hopefully Monday), it will be in master, but feel free to use that branch until then.\r\n", "Awesome, I am really thankful, it helps me a lot.\n\nOn Sun, Nov 29, 2020 at 1:48 AM Stas Bekman <[email protected]>\nwrote:\n\n> Once #8823 <https://github.com/huggingface/transformers/pull/8823> is\n> merged (hopefully Monday), it will be in master, but feel free to use that\n> branch until then.\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/8822#issuecomment-735311547>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCDF7U67LXQTURXHFRTSSGK6TANCNFSM4UFIIMLA>\n> .\n>\n", "@rabeehk, here is a quick update - as @sgugger pointed out my fix wasn't correct if you wanted the sortish_sampler and we are trying to figure out how to fix it correctly. If all you want is to use sortish_sample with dpp only then the correct fix is most likely this:\r\n```\r\n self.train_dataset.make_sortish_sampler(\r\n self.args.per_device_train_batch_size, distributed=self.args.local_rank != -1)\r\n```\r\nplease watch the development in https://github.com/huggingface/transformers/pull/8823", "OK, the right fix has been merged into master https://github.com/huggingface/transformers/pull/8823 so just update the master and you should have it working, @rabeehk \r\n" ]
1,606
1,606
1,606
CONTRIBUTOR
null
Currently `examples/seq2seq/finetune_trainer.py` bails with multigpu and w/o an explicit ddp mode invoked w/ `-m torch.distributed.launch` - it tries to get the world size thinking it's under ddp, when it's actually under dp. ``` Traceback (most recent call last): File "finetune_trainer.py", line 310, in <module> main() File "finetune_trainer.py", line 254, in main trainer.train( File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py", line 595, in train train_dataloader = self.get_train_dataloader() File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py", line 390, in get_train_dataloader train_sampler = self._get_train_sampler() File "/mnt/nvme1/code/huggingface/transformers-deepspeed/examples/seq2seq/seq2seq_trainer.py", line 124, in _get_train_sampler self.train_dataset.make_sortish_sampler( File "/mnt/nvme1/code/huggingface/transformers-deepspeed/examples/seq2seq/utils.py", line 156, in make_sortish_sampler return DistributedSortishSampler(self, batch_size, shuffle=shuffle, **kwargs) File "/mnt/nvme1/code/huggingface/transformers-deepspeed/examples/seq2seq/utils.py", line 368, in __init__ num_replicas = dist.get_world_size() File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 671, in get_world_size return _get_group_size(group) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 233, in _get_group_size default_pg = _check_default_pg() File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 225, in _check_default_pg raise RuntimeError("Default process group is not initialized") RuntimeError: Default process group is not initialized ``` The problem is that the HF trainer doesn't have a very clear way about the different dist modes. There is a bunch of different checks at different places and no simple single flag to tell the downstream code which mode it is in, leading to such bugs. I sent a fix PR, with: ``` distributed=(self.args.n_gpu > 1 and self.args.local_rank != -1), ``` but it just shows how fragile the downstream code is because there is no loud and clear flag :( I propose to set a new attribute`self.distributed_mode={None|dp|ddp}` perhaps in `_setup_devices` in `training_args.py`? @patrickvonplaten, @sgugger, @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8822/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8822/timeline
completed
null
null
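The issue above proposes a single flag (e.g. `self.distributed_mode={None|dp|ddp}` set in `_setup_devices`) instead of scattered rank and GPU-count checks. A hedged sketch of such a helper, following the conventions in the traceback (`local_rank == -1` means no process group, `n_gpu > 1` means `DataParallel`); the function name and return values are illustrative, not the API that was eventually added:

```python
def parallel_mode(local_rank: int, n_gpu: int) -> str:
    """Illustrative classification of the run: 'ddp' for torch.distributed.launch,
    'dp' for single-process multi-GPU, 'single' otherwise."""
    if local_rank != -1:
        return "ddp"  # a default process group is (or will be) initialized
    if n_gpu > 1:
        return "dp"   # nn.DataParallel inside one process
    return "single"


# e.g. the sortish sampler only needs its distributed branch under DDP:
# use_distributed_sampler = parallel_mode(args.local_rank, args.n_gpu) == "ddp"
```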
https://api.github.com/repos/huggingface/transformers/issues/8821
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8821/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8821/comments
https://api.github.com/repos/huggingface/transformers/issues/8821/events
https://github.com/huggingface/transformers/issues/8821
752,446,020
MDU6SXNzdWU3NTI0NDYwMjA=
8,821
Shared vocabulary with EncoderDecoderModel
{ "login": "bayanbatn", "id": 605959, "node_id": "MDQ6VXNlcjYwNTk1OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/605959?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bayanbatn", "html_url": "https://github.com/bayanbatn", "followers_url": "https://api.github.com/users/bayanbatn/followers", "following_url": "https://api.github.com/users/bayanbatn/following{/other_user}", "gists_url": "https://api.github.com/users/bayanbatn/gists{/gist_id}", "starred_url": "https://api.github.com/users/bayanbatn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bayanbatn/subscriptions", "organizations_url": "https://api.github.com/users/bayanbatn/orgs", "repos_url": "https://api.github.com/users/bayanbatn/repos", "events_url": "https://api.github.com/users/bayanbatn/events{/privacy}", "received_events_url": "https://api.github.com/users/bayanbatn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I believe the `EncoderDecoderConfig` class has an argument `tie_encoder_decoder` which can be used to share weights between the encoder and decoder.\r\n\r\nIs this what you're looking for?\r\n\r\n@patrickvonplaten ", "Thanks @LysandreJik; it's close enough to what I'm looking for. Closing this issue.", "Hey @bayanbatn, \r\n\r\nWe are currently working on a function (see #8224) that automatically ties encoder and decoder word embeddings only in an automatic fashion...for now what one can do is to simply set it yourself via\r\n\r\n```python\r\nmodel.decoder.word_embeddings = model.encoder.word_embeddings\r\n```" ]
1,606
1,606
1,606
NONE
null
# 🚀 Feature request Currently, two separate models are instantiated as encoder/decoder when using this model class. It would be useful in a lot of fine-tuning applications (i.e. summarization) to share the same embeddings between the encoder / decoder classes -- is this something that could be supported with the library?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8821/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8821/timeline
completed
null
null
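The comments above give two routes to a shared vocabulary: the `tie_encoder_decoder` config flag and manually assigning the decoder's word embeddings to the encoder's. A minimal sketch of the manual route using the generic embedding accessors (checkpoint names are examples; for fully automatic tying see the work referenced in #8224):

```python
from transformers import EncoderDecoderModel

# Example checkpoints; any encoder/decoder pair with a shared vocabulary works.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# Share one embedding matrix between encoder and decoder via the generic
# PreTrainedModel accessors, so no architecture-specific module path is needed.
model.decoder.set_input_embeddings(model.encoder.get_input_embeddings())
# (if the decoder ties its LM head to its input embeddings, re-run
# model.decoder.tie_weights() afterwards)
```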
https://api.github.com/repos/huggingface/transformers/issues/8820
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8820/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8820/comments
https://api.github.com/repos/huggingface/transformers/issues/8820/events
https://github.com/huggingface/transformers/pull/8820
752,435,382
MDExOlB1bGxSZXF1ZXN0NTI4NzU4MDk4
8,820
Update README.md
{ "login": "moniquebm", "id": 60358442, "node_id": "MDQ6VXNlcjYwMzU4NDQy", "avatar_url": "https://avatars.githubusercontent.com/u/60358442?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moniquebm", "html_url": "https://github.com/moniquebm", "followers_url": "https://api.github.com/users/moniquebm/followers", "following_url": "https://api.github.com/users/moniquebm/following{/other_user}", "gists_url": "https://api.github.com/users/moniquebm/gists{/gist_id}", "starred_url": "https://api.github.com/users/moniquebm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/moniquebm/subscriptions", "organizations_url": "https://api.github.com/users/moniquebm/orgs", "repos_url": "https://api.github.com/users/moniquebm/repos", "events_url": "https://api.github.com/users/moniquebm/events{/privacy}", "received_events_url": "https://api.github.com/users/moniquebm/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "[https://drive.google.com/file/d/1DDIs0MsvmpJU402o1v7eM-8BS3ACbcKV/view?usp=drivesdk]() ", "#\r\nDuplicate of #" ]
1,606
1,607
1,607
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8820/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8820", "html_url": "https://github.com/huggingface/transformers/pull/8820", "diff_url": "https://github.com/huggingface/transformers/pull/8820.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8820.patch", "merged_at": 1607696661000 }
https://api.github.com/repos/huggingface/transformers/issues/8819
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8819/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8819/comments
https://api.github.com/repos/huggingface/transformers/issues/8819/events
https://github.com/huggingface/transformers/pull/8819
752,294,184
MDExOlB1bGxSZXF1ZXN0NTI4NjQ1NzQy
8,819
[Examples] fix few typos in help messages and arguments
{ "login": "baeseongsu", "id": 32122993, "node_id": "MDQ6VXNlcjMyMTIyOTkz", "avatar_url": "https://avatars.githubusercontent.com/u/32122993?v=4", "gravatar_id": "", "url": "https://api.github.com/users/baeseongsu", "html_url": "https://github.com/baeseongsu", "followers_url": "https://api.github.com/users/baeseongsu/followers", "following_url": "https://api.github.com/users/baeseongsu/following{/other_user}", "gists_url": "https://api.github.com/users/baeseongsu/gists{/gist_id}", "starred_url": "https://api.github.com/users/baeseongsu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/baeseongsu/subscriptions", "organizations_url": "https://api.github.com/users/baeseongsu/orgs", "repos_url": "https://api.github.com/users/baeseongsu/repos", "events_url": "https://api.github.com/users/baeseongsu/events{/privacy}", "received_events_url": "https://api.github.com/users/baeseongsu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @baeseongsu \r\nthanks for opening this. I am actually working on a fairly big PR to revamp these distillation scripts. I'll integrate your modifications directly there to have everything in a single place!\r\nI **hope** to do this by end of week\r\nVictor" ]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? - fix typos in help message - consistently use `gpus`, instead of `n_gpu` (followed by #6315) (it's not working because not fully converted) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @VictorSanh
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8819/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8819", "html_url": "https://github.com/huggingface/transformers/pull/8819", "diff_url": "https://github.com/huggingface/transformers/pull/8819.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8819.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8818
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8818/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8818/comments
https://api.github.com/repos/huggingface/transformers/issues/8818/events
https://github.com/huggingface/transformers/issues/8818
752,285,880
MDU6SXNzdWU3NTIyODU4ODA=
8,818
Slower training time per batch for increasing dataset size
{ "login": "mattivi", "id": 1651448, "node_id": "MDQ6VXNlcjE2NTE0NDg=", "avatar_url": "https://avatars.githubusercontent.com/u/1651448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mattivi", "html_url": "https://github.com/mattivi", "followers_url": "https://api.github.com/users/mattivi/followers", "following_url": "https://api.github.com/users/mattivi/following{/other_user}", "gists_url": "https://api.github.com/users/mattivi/gists{/gist_id}", "starred_url": "https://api.github.com/users/mattivi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mattivi/subscriptions", "organizations_url": "https://api.github.com/users/mattivi/orgs", "repos_url": "https://api.github.com/users/mattivi/repos", "events_url": "https://api.github.com/users/mattivi/events{/privacy}", "received_events_url": "https://api.github.com/users/mattivi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Mmm, this looks like a problem in Datasets, @lhoestq @thomwolf ?", "Could you check if the same speed differences appear if you're iterating through the `tokenized_datasets` ? \r\nFor example doing\r\n```python\r\nfor i in range(0, len(tokenized_datasets), batch_size):\r\n batch = tokenized_datasets[i:i + batch_size]\r\n```\r\nIf so, please open an issue on the Datasets repo so we can investigate", "I have run the following code for both dataset 10M rows and 100M rows, and the speed is a bit slower for the 100M dataset compared to the 10M dataset. \r\nHowever, when training BERT, I have much higher differences (e.g. in the timing above is 3s Vs. 0.25s). \r\n\r\n```python\r\nprint(\"--- Starting test for 10M batches ---\")\r\nnum_batches = 10000000\r\nbatch_size = 128\r\nimport time\r\nstart_time = time.time()\r\nfor i in range(0, num_batches, batch_size):\r\n batch = tokenized_datasets['train'][i:i + batch_size]\r\nend_time = time.time() - start_time\r\nprint(\"--- %3.3f seconds per 10M batches ---\" % (end_time))\r\n\r\n```\r\n\r\n| Number of lines in dataset | Seconds |\r\n|---|---|\r\n| 10M | 241 |\r\n| 100M | 303 |", "It looks like the major slowdown comes from somewhere else then.\r\nIt could either be from the PyTorch DataLoader but my best guess would be the PyTorch RandomSampler.\r\nFor big datasets the sampler takes a ton of RAM as mentioned in https://github.com/huggingface/datasets/issues/610#issuecomment-731725078, it could slowdown your training signficantly.\r\n\r\nCould you run the same experiment with the dataloader and the random sampler to make sure ?", "To debug I have replaced RandomSampler with SequentialSampler in Trainer class, https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py at line 374. \r\nWith SequentialSampler it works as expected, with no slower time.\r\n\r\nHow to fix RandomSampler now? ", "`RandomSampler` comes from PyTorch, so you can open an issue there. You will most likely need to implement your own random-ih sampler that goes faster than PyTorch.", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.", "This is still an issue today. I got a 10x speed up in training on a larger dataset, by switching to the sequential sampler (due to the RandomSampler bottleneck). Monkeypatch here to switch to the seqential sampler in case it's useful:\r\n\r\n```python\r\nimport transformers.trainer as trainer\r\nfrom transformers.trainer import SequentialSampler\r\n\r\n\r\ndef sampler_monkey_patch(dataset, generator):\r\n return SequentialSampler(dataset)\r\n\r\n\r\ntrainer.RandomSampler = sampler_monkey_patch\r\n```\r\n\r\nVersions this was used with:\r\n```\r\ntransformers==4.26.1\r\npytorch==1.13.1\r\ndatasets==2.10.1\r\npython==3.9.16\r\n```\r\n\r\nTo detect if this is an issue for you, it's useful to compare the rate at which samples are processed (and gpu utilization), for a small dataset slice versus a large one.", "You can also use an IterableDataset :\r\n\r\n```python \r\ntrain_dataset = train_dataset.to_iterable_dataset()\r\n```\r\n\r\nPS : pass `num_shards=` with a factor of `num_workers` to distribute the data evenly across DataLoader workers \r\n\r\nPS2 : for distributed, see the \"Distributed\" section at https://huggingface.co/docs/datasets/use_with_pytorch", "@sgugger - this issue was closed prematurely. 
See the comment above where monkeypatching `Trainer` to use a `SequentialSampler` instead of a `RandomSampler` results in a 10x speedup. Could you please re-open this issue?\r\n\r\nI disagree that this is simply a PyTorch issue. Even if this was purely caused by a bug in PyTorch (which is by no means obvious or clear; see the corresponding PyTorch issue [here](https://github.com/pytorch/pytorch/issues/50689)), HF should simply not include this class in its codebase if it entails causing such a bug to users of the `Trainer` class. At the very least, there should be an official workaround." ]
1,606
1,701
1,614
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.0.0-rc-1 - `tokenizers` version: 0.9.4 - `datasets` version: 1.1.3 - Platform: Linux-3.10.0-862.el7.x86_64-x86_64-with-centos-7.8.2003-Core - Python version: 3.7.2 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes, V100 - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten @sgugger @LysandreJik ## Information Model I am using: BERT, language modelling task. The problem arises when using: * [ x] the official example scripts: /examples/language-modeling/run_mlm.py The tasks I am working on is: * [ x] my own task or dataset: BERT MLM pre-training with own dataset I need to pre-train a BERT base model from scratch with own dataset. The dataset has millions of lines, each line is a short document. I am experiencing slower training time given increasing size of the dataset. To debug the problem, I have already tried to split original datasets into several smaller files (issue https://github.com/huggingface/datasets/issues/610), switch on/off the caching mechanism, but no improvements. What could it be? I am not able to find the origin of the problem. Thanks a lot! ## To reproduce Steps to reproduce the behavior: 1. run_mlm.py with increasing dataset sizes. 2. This results in slower training time for each batch. Below some stats, each batch is 128, and I am running run_mlm.py with --line_by_line option. The increase seems not linear. | Number lines in dataset | seconds per batch | |---|---| | 100k | 0.16 | | 10M | 0.25 | | 100M | 3 | BERT parameters: ``` Model config BertConfig { "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 18, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 0, "position_embedding_type": "absolute", "type_vocab_size": 2, "vocab_size": 32000 } ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> I expect that the training time for each batch will remain constant given different datasets' sizes.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8818/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8818/timeline
completed
null
null
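The workaround above swaps `RandomSampler` for `SequentialSampler`. A small, self-contained way to check whether the sampler is the bottleneck on a given machine, independent of the Trainer (the in-memory tensor is a stand-in for the tokenized corpus, so it will not reproduce the on-disk Arrow access pattern of `datasets` exactly):

```python
import time

import torch
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler, TensorDataset

# Stand-in for a large tokenized dataset: 1M rows of token ids, seq length 18
dataset = TensorDataset(torch.randint(0, 32000, (1_000_000, 18)))

for sampler_cls in (SequentialSampler, RandomSampler):
    loader = DataLoader(dataset, batch_size=128, sampler=sampler_cls(dataset))
    start = time.time()
    for step, _batch in enumerate(loader):
        if step == 500:  # time only the first 500 batches
            break
    print(f"{sampler_cls.__name__}: {time.time() - start:.2f}s for 500 batches")
```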
https://api.github.com/repos/huggingface/transformers/issues/8817
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8817/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8817/comments
https://api.github.com/repos/huggingface/transformers/issues/8817/events
https://github.com/huggingface/transformers/issues/8817
752,279,065
MDU6SXNzdWU3NTIyNzkwNjU=
8,817
cache reuse
{ "login": "Jiaxin-Wen", "id": 48146603, "node_id": "MDQ6VXNlcjQ4MTQ2NjAz", "avatar_url": "https://avatars.githubusercontent.com/u/48146603?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jiaxin-Wen", "html_url": "https://github.com/Jiaxin-Wen", "followers_url": "https://api.github.com/users/Jiaxin-Wen/followers", "following_url": "https://api.github.com/users/Jiaxin-Wen/following{/other_user}", "gists_url": "https://api.github.com/users/Jiaxin-Wen/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jiaxin-Wen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jiaxin-Wen/subscriptions", "organizations_url": "https://api.github.com/users/Jiaxin-Wen/orgs", "repos_url": "https://api.github.com/users/Jiaxin-Wen/repos", "events_url": "https://api.github.com/users/Jiaxin-Wen/events{/privacy}", "received_events_url": "https://api.github.com/users/Jiaxin-Wen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You should use [`save_to_disk`](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=save#datasets.DatasetDict.save_to_disk) and [`load_from_disk`](https://huggingface.co/docs/datasets/package_reference/loading_methods.html?highlight=save_to_disk#datasets.load_from_disk):\r\n\r\n```python\r\nimport datasets\r\n# Will download and preprocess the dataset\r\ndata = datasets.load_dataset(\"wikitext\", \"wikitext-2-raw-v1\")\r\n# Save it in a folder that you can copy\r\ndata.save_to_disk(\"PATH/TO/FOLDER\")\r\n\r\n# On the other machine - reload the ready to use dataset from the copied folder\r\ndata = datasets.load_from_disk(\"PATH/TO/FOLDER\")\r\n```", "thanks!" ]
1,606
1,606
1,606
NONE
null
I have downloaded wikitext by `load_dataset("wikitext", "wikitext-2-raw-v1")`, and get the cache file in `.cache/huggingface/datasets`, then I try to copy the `huggingface/datasets` folder to the lab server to reuse but fails. What's the proper way to reuse the cache downloaded on another pc?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8817/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8817/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8816
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8816/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8816/comments
https://api.github.com/repos/huggingface/transformers/issues/8816/events
https://github.com/huggingface/transformers/pull/8816
752,251,556
MDExOlB1bGxSZXF1ZXN0NTI4NjExMzg2
8,816
[Flax test] Add require pytorch to flix flax test
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes Flaky CI. Currently the flax tests are failing which IMO is because of a missing `require_torch` in the flax test. This PR should fix it -> @mfuntowicz could you take a look? ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8816/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8816/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8816", "html_url": "https://github.com/huggingface/transformers/pull/8816", "diff_url": "https://github.com/huggingface/transformers/pull/8816.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8816.patch", "merged_at": 1606484442000 }
https://api.github.com/repos/huggingface/transformers/issues/8815
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8815/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8815/comments
https://api.github.com/repos/huggingface/transformers/issues/8815/events
https://github.com/huggingface/transformers/pull/8815
752,232,311
MDExOlB1bGxSZXF1ZXN0NTI4NTk2MjY4
8,815
Fixed typo in README.md of bert-base-greek-uncased-v1
{ "login": "mdermentzi", "id": 7224988, "node_id": "MDQ6VXNlcjcyMjQ5ODg=", "avatar_url": "https://avatars.githubusercontent.com/u/7224988?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mdermentzi", "html_url": "https://github.com/mdermentzi", "followers_url": "https://api.github.com/users/mdermentzi/followers", "following_url": "https://api.github.com/users/mdermentzi/following{/other_user}", "gists_url": "https://api.github.com/users/mdermentzi/gists{/gist_id}", "starred_url": "https://api.github.com/users/mdermentzi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mdermentzi/subscriptions", "organizations_url": "https://api.github.com/users/mdermentzi/orgs", "repos_url": "https://api.github.com/users/mdermentzi/repos", "events_url": "https://api.github.com/users/mdermentzi/events{/privacy}", "received_events_url": "https://api.github.com/users/mdermentzi/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Thanks! Looks good to me, just pinging @iliaschalkidis for information/validation", "Wow, thanks for the fix @mdermentzi. If I recall correctly, I was trying things in an initial scratch python script that was using a universal `text` variable back to back and I though it would be better to rename those in 3 different variables to make it clearer. It seems I was quite unwary... ", "No worries @iliaschalkidis! Just a small typo. ;) Thank you for publishing this model. I am so happy I can play with it for my current uni project. " ]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? The tokenizer called at the input_ids var of Example 2 is currently encoding text_1. This PR is changing the input to text_2. Motivation and context for this change: I am considering using this model for a uni assignment and it took me a while to understand why the example code was yielding the wrong results. Hopefully, the next person who is eager to try these examples will not get confused by that typo. - [x] This PR fixes a typo or improves the docs. documentation: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8815/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8815", "html_url": "https://github.com/huggingface/transformers/pull/8815", "diff_url": "https://github.com/huggingface/transformers/pull/8815.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8815.patch", "merged_at": 1606484098000 }
https://api.github.com/repos/huggingface/transformers/issues/8814
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8814/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8814/comments
https://api.github.com/repos/huggingface/transformers/issues/8814/events
https://github.com/huggingface/transformers/issues/8814
752,222,055
MDU6SXNzdWU3NTIyMjIwNTU=
8,814
I can not find a Linear layer in the end of Multi-Head Attention layer like Figure 2 right, could someone help me solve it
{ "login": "BYRTIMO", "id": 31283481, "node_id": "MDQ6VXNlcjMxMjgzNDgx", "avatar_url": "https://avatars.githubusercontent.com/u/31283481?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BYRTIMO", "html_url": "https://github.com/BYRTIMO", "followers_url": "https://api.github.com/users/BYRTIMO/followers", "following_url": "https://api.github.com/users/BYRTIMO/following{/other_user}", "gists_url": "https://api.github.com/users/BYRTIMO/gists{/gist_id}", "starred_url": "https://api.github.com/users/BYRTIMO/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BYRTIMO/subscriptions", "organizations_url": "https://api.github.com/users/BYRTIMO/orgs", "repos_url": "https://api.github.com/users/BYRTIMO/repos", "events_url": "https://api.github.com/users/BYRTIMO/events{/privacy}", "received_events_url": "https://api.github.com/users/BYRTIMO/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "A bit more context would be appreciated here" ]
1,606
1,606
1,606
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8814/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8814/timeline
completed
null
null
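The issue title above asks where the output `Linear` from Figure 2 (right) of "Attention Is All You Need" lives. In the BERT implementation it sits in the attention output block rather than inside the self-attention module; a short way to inspect it (checkpoint name is an example):

```python
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")
layer0 = model.encoder.layer[0]

# Per-head projections (W_Q, W_K, W_V):
print(layer0.attention.self.query, layer0.attention.self.key, layer0.attention.self.value)

# The output projection W_O, i.e. the "Linear" box on top of multi-head
# attention in the figure, lives in the attention output block:
print(layer0.attention.output.dense)
```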
https://api.github.com/repos/huggingface/transformers/issues/8813
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8813/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8813/comments
https://api.github.com/repos/huggingface/transformers/issues/8813/events
https://github.com/huggingface/transformers/pull/8813
752,139,549
MDExOlB1bGxSZXF1ZXN0NTI4NTIwOTIx
8,813
Fix check copies
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Replacing the error by a warning defeats the purpose of the check as it will make the CI pass when it should fail. We can see if we want to move it in another command, I'm just afraid it will make the failures in the CI (and what the corresponding fixes are) less understandable to the user.", "Let me think about this over the weekend, I'll try to find a solution by Monday :-)", "Awesome! I let you handle this, so I'm closing the PR." ]
1,606
1,686
1,606
CONTRIBUTOR
null
# What does this PR do? The target `make quality` fails when generating the new model table when at least one of the optional packages TF, PT or Flax is not installed. We should not force to have everything installed to do a simple quality check, this can be added to an extra target such as `make full-quality` or something like this. The fix does a condition checking and replace the raised error by a simple warning.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8813/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8813/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8813", "html_url": "https://github.com/huggingface/transformers/pull/8813", "diff_url": "https://github.com/huggingface/transformers/pull/8813.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8813.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8812
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8812/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8812/comments
https://api.github.com/repos/huggingface/transformers/issues/8812/events
https://github.com/huggingface/transformers/pull/8812
752,089,468
MDExOlB1bGxSZXF1ZXN0NTI4NDgxMzEz
8,812
Ctrl for sequence classification
{ "login": "elk-cloner", "id": 5828101, "node_id": "MDQ6VXNlcjU4MjgxMDE=", "avatar_url": "https://avatars.githubusercontent.com/u/5828101?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elk-cloner", "html_url": "https://github.com/elk-cloner", "followers_url": "https://api.github.com/users/elk-cloner/followers", "following_url": "https://api.github.com/users/elk-cloner/following{/other_user}", "gists_url": "https://api.github.com/users/elk-cloner/gists{/gist_id}", "starred_url": "https://api.github.com/users/elk-cloner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elk-cloner/subscriptions", "organizations_url": "https://api.github.com/users/elk-cloner/orgs", "repos_url": "https://api.github.com/users/elk-cloner/repos", "events_url": "https://api.github.com/users/elk-cloner/events{/privacy}", "received_events_url": "https://api.github.com/users/elk-cloner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Same as GPT-2, this would benefit from also handling padding on the left; I'll work on this in another PR." ]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #7623 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. @LysandreJik -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8812/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8812/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8812", "html_url": "https://github.com/huggingface/transformers/pull/8812", "diff_url": "https://github.com/huggingface/transformers/pull/8812.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8812.patch", "merged_at": 1606812567000 }
https://api.github.com/repos/huggingface/transformers/issues/8811
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8811/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8811/comments
https://api.github.com/repos/huggingface/transformers/issues/8811/events
https://github.com/huggingface/transformers/issues/8811
752,010,056
MDU6SXNzdWU3NTIwMTAwNTY=
8,811
HuggingFace pipeline sentiment analysis giving wrong results.
{ "login": "vishwa30", "id": 53945080, "node_id": "MDQ6VXNlcjUzOTQ1MDgw", "avatar_url": "https://avatars.githubusercontent.com/u/53945080?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vishwa30", "html_url": "https://github.com/vishwa30", "followers_url": "https://api.github.com/users/vishwa30/followers", "following_url": "https://api.github.com/users/vishwa30/following{/other_user}", "gists_url": "https://api.github.com/users/vishwa30/gists{/gist_id}", "starred_url": "https://api.github.com/users/vishwa30/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vishwa30/subscriptions", "organizations_url": "https://api.github.com/users/vishwa30/orgs", "repos_url": "https://api.github.com/users/vishwa30/repos", "events_url": "https://api.github.com/users/vishwa30/events{/privacy}", "received_events_url": "https://api.github.com/users/vishwa30/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @vishwa30 - what exactly do you mean by incorrect output? -> The model obviously doesn't always classify the input correctly. Since this issue doesn't seem to be a bug, could you maybe look in the forum whether people asked a similar question before and if not post this one there? :-)\r\n\r\nhttps://discuss.huggingface.co/", "@patrickvonplaten Sure! I will do that. Thanks!!" ]
1,606
1,606
1,606
NONE
null
I am just starting with Hugging Face and following the official docs. When using the sentiment-analysis pipeline I am getting incorrect output, and I am not sure what the reason behind it is. ``` from transformers import pipeline classifier=pipeline('sentiment-analysis') text='This is just a statement' a=classifier(text) print(a) ``` Giving output as: [{'label': 'NEGATIVE', 'score': 0.9583144783973694}] I have tried different input sentences; it has problems with neutral statements but is able to predict the positive and negative statements, which have words like {"awesome","good","bad"}. Statements I have tried and their respective outputs: 1. 'Today is thursday' {'label': 'POSITIVE', 'score': 0.987697184085846} 2. 'Give me my water bottle' {'label': 'NEGATIVE', 'score': 0.855629563331604} 3. 'Its raining outside' {'label': 'POSITIVE', 'score': 0.8293998837471008} 4. 'You are awesome' {'label': 'POSITIVE', 'score': 0.9998681545257568} 5. 'I hate you' {'label': 'NEGATIVE', 'score': 0.9991129040718079}
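One way to see how (un)certain the model really is on such neutral inputs - a sketch that assumes the pipeline's `return_all_scores` flag is available in the installed version - is to look at both class scores instead of only the argmax label:

```python
from transformers import pipeline

# The default sentiment model is binary (POSITIVE/NEGATIVE), so neutral text is
# forced into one of the two classes; the raw scores make that visible.
classifier = pipeline("sentiment-analysis", return_all_scores=True)
print(classifier("Today is thursday"))
# e.g. [[{'label': 'NEGATIVE', 'score': ...}, {'label': 'POSITIVE', 'score': ...}]]
```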
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8811/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8811/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8810
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8810/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8810/comments
https://api.github.com/repos/huggingface/transformers/issues/8810/events
https://github.com/huggingface/transformers/pull/8810
751,853,187
MDExOlB1bGxSZXF1ZXN0NTI4Mjg2OTAy
8,810
typo
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
CONTRIBUTOR
null
s/FSTM/FSMT/
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8810/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8810/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8810", "html_url": "https://github.com/huggingface/transformers/pull/8810", "diff_url": "https://github.com/huggingface/transformers/pull/8810.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8810.patch", "merged_at": 1606430856000 }
https://api.github.com/repos/huggingface/transformers/issues/8809
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8809/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8809/comments
https://api.github.com/repos/huggingface/transformers/issues/8809/events
https://github.com/huggingface/transformers/pull/8809
751,852,317
MDExOlB1bGxSZXF1ZXN0NTI4Mjg2MjE5
8,809
[model loading] remove pointless log entries
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@LysandreJik, I can totally see your point.\r\n\r\nBut this one says the same thing twice:\r\n```\r\n f\"All model checkpoint weights were used when initializing {model.__class__.__name__}.\\n\"\r\n f\"All the weights of {model.__class__.__name__} were initialized from the model checkpoint at {pretrained_model_name_or_path}.\\n\"\r\n```\r\n\r\nThey aren't exactly the same but they say twice that there was no problem in loading the model\r\n\r\nSo as with your excellent example the following would be fitting:\r\n```\r\n\"Loading model {name} at {path}\"\r\n(any exceptions go here)\r\n\"Model loaded\"\r\n```\r\n\r\nWould you be open if changed this PR to follow this strategy?", "They don't really say the same thing, I would see the first one as:\r\n\r\n```\r\nChecking if all checkpoints weights were used in the model ...\r\nAll checkpoint weights were used.\r\n[...]\r\nChecking if all weights of the model were initialized by the checkpoint ...\r\nAll model weights are initialized.\r\n```\r\nI think both serve a purpose:\r\n1. Is the checkpoint meant for that architecture, or was it trained for another one.\r\n2. Is the model perfectly initialized from that checkpoint, or will it require some fine-tuning.", "The full current log for point 2 is:\r\n``` \r\n logger.info(\r\n f\"All the weights of {model.__class__.__name__} were initialized from the model checkpoint at {pretrained_model_name_or_path}.\\n\"\r\n f\"If your task is similar to the task the model of the checkpoint was trained on, \"\r\n f\"you can already use {model.__class__.__name__} for predictions without further training.\"\r\n )\r\n```\r\n\r\nSo that log's line 2+3 are still there. I didn't suggest to remove those.\r\n\r\nBut I feel that this is bordering on splitting hairs (from my side), so I will just let it be. \r\n\r\nYour explanation of its purpose makes sense, @LysandreJik. I'd have just tuned it up to make it more factual and less verbose ...\r\n\r\nMy issue is that I read those logs and not all of them feel like very readable to me...\r\n\r\n\r\n" ]
1,606
1,606
1,606
CONTRIBUTOR
null
This PR removes 2 IMO-pointless log entries that literally say "all is well". Log entries are useful for debugging problems, but when they state the obvious they just add noise that makes it harder to see the useful entries, no? @sgugger, @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8809/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8809/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8809", "html_url": "https://github.com/huggingface/transformers/pull/8809", "diff_url": "https://github.com/huggingface/transformers/pull/8809.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8809.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8808
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8808/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8808/comments
https://api.github.com/repos/huggingface/transformers/issues/8808/events
https://github.com/huggingface/transformers/pull/8808
751,836,480
MDExOlB1bGxSZXF1ZXN0NTI4Mjc0MDA4
8,808
Fix dpr<>bart config for RAG
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> There was a big silent bug in the interaction of DPR and BERT. Bert added a new config parameter that DPR does not have -> so a "normal" dpr config crashes when used with BERT. This is actually a very big bug and was introduced in #8276 as pointed out by @lhoestq - thanks! Two things went wrong here. 1) We should be more careful in general when introducing new config parameters and calling them via `config.<new_parameter>` especially for models like BERT that can be used with other configs. 2) The DPR tests should have caught that, but instead of using a normal DPR config, a BERT-like DPR config was used in the tests, which is dangerous because it exactly doesn't catch errors like those. This PR fixes 1) and 2) by calling the config in the case of the newly introduced parameter only with `getattr(config, <param_name>, <default_value>)` **and** adds the config functionality to DPR as well (DPR also should have this functionality over BERT's positional embedding). IMO `getattr(config, <param_name>, <default_value>)` should be used for models like BERT in general because they could be used and wrapped in many different ways. Also the DPR test is fixed to use a DPR config instead of a BERT config. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
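A minimal sketch of the defensive config access described above (my own illustration, not the PR's diff; the attribute name `position_embedding_type` and its default value are assumptions based on the new BERT parameter):

```python
from transformers import BertConfig, DPRConfig

def position_embedding_type(config):
    # getattr with a default keeps older configs (e.g. a plain DPRConfig created
    # before the parameter existed) from raising an AttributeError.
    return getattr(config, "position_embedding_type", "absolute")

print(position_embedding_type(BertConfig()))  # reads the attribute if present
print(position_embedding_type(DPRConfig()))   # falls back to the default otherwise
```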
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8808/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8808/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8808", "html_url": "https://github.com/huggingface/transformers/pull/8808", "diff_url": "https://github.com/huggingface/transformers/pull/8808.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8808.patch", "merged_at": 1606490806000 }
https://api.github.com/repos/huggingface/transformers/issues/8807
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8807/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8807/comments
https://api.github.com/repos/huggingface/transformers/issues/8807/events
https://github.com/huggingface/transformers/pull/8807
751,814,263
MDExOlB1bGxSZXF1ZXN0NTI4MjU3MDUz
8,807
[s2s finetune trainer] potpurri of small fixes
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
CONTRIBUTOR
null
This PR makes a bunch of small readability improvements around the finetune trainer instructions and a script missing a `\` - no code changes. @sgugger, @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8807/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8807/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8807", "html_url": "https://github.com/huggingface/transformers/pull/8807", "diff_url": "https://github.com/huggingface/transformers/pull/8807.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8807.patch", "merged_at": 1606428387000 }
https://api.github.com/repos/huggingface/transformers/issues/8806
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8806/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8806/comments
https://api.github.com/repos/huggingface/transformers/issues/8806/events
https://github.com/huggingface/transformers/pull/8806
751,790,244
MDExOlB1bGxSZXF1ZXN0NTI4MjM4MjU5
8,806
Create README.md
{ "login": "hailabpucpr", "id": 55989936, "node_id": "MDQ6VXNlcjU1OTg5OTM2", "avatar_url": "https://avatars.githubusercontent.com/u/55989936?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hailabpucpr", "html_url": "https://github.com/hailabpucpr", "followers_url": "https://api.github.com/users/hailabpucpr/followers", "following_url": "https://api.github.com/users/hailabpucpr/following{/other_user}", "gists_url": "https://api.github.com/users/hailabpucpr/gists{/gist_id}", "starred_url": "https://api.github.com/users/hailabpucpr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hailabpucpr/subscriptions", "organizations_url": "https://api.github.com/users/hailabpucpr/orgs", "repos_url": "https://api.github.com/users/hailabpucpr/repos", "events_url": "https://api.github.com/users/hailabpucpr/events{/privacy}", "received_events_url": "https://api.github.com/users/hailabpucpr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8806/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8806/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8806", "html_url": "https://github.com/huggingface/transformers/pull/8806", "diff_url": "https://github.com/huggingface/transformers/pull/8806.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8806.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8805
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8805/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8805/comments
https://api.github.com/repos/huggingface/transformers/issues/8805/events
https://github.com/huggingface/transformers/pull/8805
751,785,948
MDExOlB1bGxSZXF1ZXN0NTI4MjM1MDEz
8,805
Revert "[s2s] finetune.py: specifying generation min_length"
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
MEMBER
null
Reverts huggingface/transformers#8478
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8805/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8805/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8805", "html_url": "https://github.com/huggingface/transformers/pull/8805", "diff_url": "https://github.com/huggingface/transformers/pull/8805.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8805.patch", "merged_at": 1606417922000 }
https://api.github.com/repos/huggingface/transformers/issues/8804
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8804/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8804/comments
https://api.github.com/repos/huggingface/transformers/issues/8804/events
https://github.com/huggingface/transformers/pull/8804
751,741,643
MDExOlB1bGxSZXF1ZXN0NTI4MTk5MzMz
8,804
MPNet: Masked and Permuted Pre-training for Language Understanding
{ "login": "StillKeepTry", "id": 6577458, "node_id": "MDQ6VXNlcjY1Nzc0NTg=", "avatar_url": "https://avatars.githubusercontent.com/u/6577458?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StillKeepTry", "html_url": "https://github.com/StillKeepTry", "followers_url": "https://api.github.com/users/StillKeepTry/followers", "following_url": "https://api.github.com/users/StillKeepTry/following{/other_user}", "gists_url": "https://api.github.com/users/StillKeepTry/gists{/gist_id}", "starred_url": "https://api.github.com/users/StillKeepTry/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StillKeepTry/subscriptions", "organizations_url": "https://api.github.com/users/StillKeepTry/orgs", "repos_url": "https://api.github.com/users/StillKeepTry/repos", "events_url": "https://api.github.com/users/StillKeepTry/events{/privacy}", "received_events_url": "https://api.github.com/users/StillKeepTry/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @StillKeepTry - could you maybe link the paper corresponding to your model add a small PR description? :-) That would be very helpful", "Thanks for the new PR @StillKeepTry - could you add a `test_modeling_mpnet.py` file - it would be important to test the model :-) \r\n\r\nAlso it would be amazing if you could give some context of MPNet - is there a paper, blog post, analysis, results that go along with the model? And are there pretrained weights? \r\n\r\nThanks a lot! ", "> Thanks for the new PR @StillKeepTry - could you add a `test_modeling_mpnet.py` file - it would be important to test the model :-) \n> \n> \n> \n> Also it would be amazing if you could give some context of MPNet - is there a paper, blog post, analysis, results that go along with the model? And are there pretrained weights? \n> \n> \n> \n> Thanks a lot! \n\nhttps://arxiv.org/abs/2004.09297", "> > Thanks for the new PR @StillKeepTry - could you add a `test_modeling_mpnet.py` file - it would be important to test the model :-)\r\n> > Also it would be amazing if you could give some context of MPNet - is there a paper, blog post, analysis, results that go along with the model? And are there pretrained weights?\r\n> > Thanks a lot!\r\n> \r\n> https://arxiv.org/abs/2004.09297\r\n\r\nok", "Oh and another thing to do after the merge will be to add your new model to the main README and the documentation so that people can use it! The template should give you a file for the `.rst` (or you can use `docs/model_doc/bert.rst` as an example).", "I have updated `test_modeling_mpnet.py` now.", "Hi, every reviewer. Thank you for your valuable reviews. I have fixed previous comments (like doc, format, and so on) and updated the `tokenization_mpnet.py` and `tokenization_mpnet_fast.py` by removing the inheritance. Besides, I also upload test files (`test_modeling_mpnet.py`, `test_modeling_tf_mpnet.py`) for testing, and model weights into the model hub. ", "Fantastic, thanks for working on it! Will review today.", "@patrickvonplaten Hi, are there any new comments?", "Hello!\r\n\r\nStill some comments:\r\n\r\n1. Update the inputs handling in the TF file, we have merged an update for the booleans last Friday. You can see an example in the TF BERT file if you need one.\r\n2. rebase and fix the conflicting files.\r\n3. Fix the check_code_quality test.", "I think something went wrong with the merge here :-/ Could you try to open a new PR that does not include all previous commits or fix this one? ", "> I think something went wrong with the merge here :-/ Could you try to open a new PR that does not include all previous commits or fix this one?\r\n\r\nOK, it seems something wrong when I update to the latest version. :(", "@patrickvonplaten @JetRunner @sgugger @LysandreJik @jplu The new PR is moved to [https://github.com/huggingface/transformers/pull/8971](https://github.com/huggingface/transformers/pull/8971)" ]
1,606
1,607
1,607
CONTRIBUTOR
null
# Model addition [MPNet](https://arxiv.org/abs/2004.09297) ## Model description MPNet introduces a novel self-supervised objective named masked and permuted language modeling for language understanding. It inherits the advantages of both masked language modeling (MLM) and permuted language modeling (PLM) to address the limitations of MLM/PLM, and further reduces the inconsistency between the pre-training and fine-tuning paradigms. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8804/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8804/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8804", "html_url": "https://github.com/huggingface/transformers/pull/8804", "diff_url": "https://github.com/huggingface/transformers/pull/8804.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8804.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8803
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8803/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8803/comments
https://api.github.com/repos/huggingface/transformers/issues/8803/events
https://github.com/huggingface/transformers/issues/8803
751,716,105
MDU6SXNzdWU3NTE3MTYxMDU=
8,803
Get locally cached models programatically
{ "login": "cdpierse", "id": 8831892, "node_id": "MDQ6VXNlcjg4MzE4OTI=", "avatar_url": "https://avatars.githubusercontent.com/u/8831892?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cdpierse", "html_url": "https://github.com/cdpierse", "followers_url": "https://api.github.com/users/cdpierse/followers", "following_url": "https://api.github.com/users/cdpierse/following{/other_user}", "gists_url": "https://api.github.com/users/cdpierse/gists{/gist_id}", "starred_url": "https://api.github.com/users/cdpierse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cdpierse/subscriptions", "organizations_url": "https://api.github.com/users/cdpierse/orgs", "repos_url": "https://api.github.com/users/cdpierse/repos", "events_url": "https://api.github.com/users/cdpierse/events{/privacy}", "received_events_url": "https://api.github.com/users/cdpierse/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I think that would be a cool addition! What do you think @julien-c?", "Yes, I almost wrote something like that a while ago, so go for it 👍\r\n\r\nTo remove old weights you don't use anymore @cdpierse, we could also document a unix command to `find` files sorted by last access time and `rm` them\r\n\r\n(I think @sshleifer or @patrickvonplaten had a bash alias for this at some point?)", "@julien-c I have the PR for handling cached models pushed but I've been trying to think of some way to add a function that allows model deletion, we could use the model names and etags returned by `get_cached_models()` to select specfic model `.bin` files to delete but the problem is that it will still leave behind stray config and tokenizer files which probably isn't great. The filenames for tokenizers, configs, and models don't seem to be related so I'm not sure if they can be deleted that way. The two approaches seem to be to delete from last accessed date or just delete model `.bin` files. ", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
CONTRIBUTOR
null
# 🚀 Feature request A small utility function to allow users to get a list of model binaries that are cached locally. Each list entry would be a tuple in the form `(model_url, etag, size_in_MB)`. ## Motivation I have quite a few environments on my local machine containing the package and have downloaded a number of models. Over time these begin to stack up in terms of storage usage so I thought it would be useful at the very least to be able to retrieve a list of the models that are stored locally as well as some info regarding their size. I had also thought about building on this further and providing a function to remove a model from the local cache programmatically. However, for now I think getting a list is a good start. ## Your contribution I have a PR ready to go if you think this would be a suitable feature to add. I've added it inside `file_utils.py` as this seemed like the most appropriate place. The function only adds files to the list that end with `.bin` so right now only model binaries are included. An example usage of the function is below: ```python from transformers import file_utils models = file_utils.get_cached_models() for model in models: print(model) >>> ('https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-pytorch_model.bin', '"2d19e321961949b7f761cdffefff32c0-66"', 548.118077) >>> ('https://cdn.huggingface.co/distilbert-base-uncased-finetuned-sst-2-english-pytorch_model.bin', '"1d085de7c065928ccec2efa407bd9f1e-16"', 267.844284) >>> ('https://cdn.huggingface.co/twmkn9/bert-base-uncased-squad2/pytorch_model.bin', '"e5f04c87871ae3a98e6eea90f1dec146"', 437.985356) ``` <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
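For context, a rough sketch of how such a helper could work (this relies on an assumption about the cache layout - each cached blob having a sidecar `*.json` metadata file with its `url` and `etag` - and is not the code proposed in the PR):

```python
import json
import os

from transformers.file_utils import TRANSFORMERS_CACHE  # default cache directory

def get_cached_models(cache_dir=TRANSFORMERS_CACHE):
    results = []
    for fname in os.listdir(cache_dir):
        if not fname.endswith(".json"):
            continue
        meta_path = os.path.join(cache_dir, fname)
        with open(meta_path, encoding="utf-8") as f:
            meta = json.load(f)
        url, etag = meta.get("url", ""), meta.get("etag", "")
        if not url.endswith(".bin"):
            continue  # keep only model binaries
        blob_path = meta_path[: -len(".json")]  # the blob sits next to its metadata file
        if os.path.isfile(blob_path):
            results.append((url, etag, os.path.getsize(blob_path) / 1e6))
    return results
```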
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8803/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8803/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8802
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8802/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8802/comments
https://api.github.com/repos/huggingface/transformers/issues/8802/events
https://github.com/huggingface/transformers/issues/8802
751,677,012
MDU6SXNzdWU3NTE2NzcwMTI=
8,802
Use GPT to assign sentence probability/perplexity given previous sentence?
{ "login": "TalitaAnthonio", "id": 25078987, "node_id": "MDQ6VXNlcjI1MDc4OTg3", "avatar_url": "https://avatars.githubusercontent.com/u/25078987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TalitaAnthonio", "html_url": "https://github.com/TalitaAnthonio", "followers_url": "https://api.github.com/users/TalitaAnthonio/followers", "following_url": "https://api.github.com/users/TalitaAnthonio/following{/other_user}", "gists_url": "https://api.github.com/users/TalitaAnthonio/gists{/gist_id}", "starred_url": "https://api.github.com/users/TalitaAnthonio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TalitaAnthonio/subscriptions", "organizations_url": "https://api.github.com/users/TalitaAnthonio/orgs", "repos_url": "https://api.github.com/users/TalitaAnthonio/repos", "events_url": "https://api.github.com/users/TalitaAnthonio/events{/privacy}", "received_events_url": "https://api.github.com/users/TalitaAnthonio/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!" ]
1,606
1,606
1,606
NONE
null
Hi! Is it possible to use GPT to assign a sentence probability given the previous sentences? I have seen this code here, which can be used to assign a perplexity score to a sentence: https://github.com/huggingface/transformers/issues/473 But is there a way to compute this score given a certain context (up to 1024 tokens)?
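One possible recipe (a sketch, not an official answer from the maintainers): concatenate the context and the target sentence, then mask the context positions out of the loss with `-100` so the returned cross-entropy only covers the target sentence given its context.

```python
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "The weather was terrible all week."
sentence = " Everyone stayed indoors."

context_ids = tokenizer(context, return_tensors="pt").input_ids
sentence_ids = tokenizer(sentence, return_tensors="pt").input_ids
input_ids = torch.cat([context_ids, sentence_ids], dim=-1)  # must stay within 1024 tokens

labels = input_ids.clone()
labels[:, : context_ids.shape[-1]] = -100  # context tokens are ignored by the loss

with torch.no_grad():
    loss = model(input_ids, labels=labels)[0]  # mean NLL over the sentence tokens only

print("conditional perplexity:", math.exp(loss.item()))
```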
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8802/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8802/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8801
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8801/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8801/comments
https://api.github.com/repos/huggingface/transformers/issues/8801/events
https://github.com/huggingface/transformers/issues/8801
751,670,741
MDU6SXNzdWU3NTE2NzA3NDE=
8,801
Multiprocessing behavior change 3.1.0 -> 3.2.0
{ "login": "jankrepl", "id": 18519371, "node_id": "MDQ6VXNlcjE4NTE5Mzcx", "avatar_url": "https://avatars.githubusercontent.com/u/18519371?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jankrepl", "html_url": "https://github.com/jankrepl", "followers_url": "https://api.github.com/users/jankrepl/followers", "following_url": "https://api.github.com/users/jankrepl/following{/other_user}", "gists_url": "https://api.github.com/users/jankrepl/gists{/gist_id}", "starred_url": "https://api.github.com/users/jankrepl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jankrepl/subscriptions", "organizations_url": "https://api.github.com/users/jankrepl/orgs", "repos_url": "https://api.github.com/users/jankrepl/repos", "events_url": "https://api.github.com/users/jankrepl/events{/privacy}", "received_events_url": "https://api.github.com/users/jankrepl/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[ "Sorry to insist. Could you at least share your thoughts on this? @LysandreJik @patrickvonplaten ", "Is the only dependency you change `transformers`? The PyTorch version remained the same (v.1.7.0) for both `transformers` versions? If so, I'll take a deeper look this week.", "@LysandreJik In both experiments `torch==v1.7.0`. However, downgrading to `torch==v.1.6.0` (in both experiments) leads to exactly the same problem. I did `pip install transformers==v3.1.0` and `pip install transformers==v3.2.0` back and forth so that could be the only way other dependencies got updated.\r\n\r\nThank you!", "Okay, thanks for checking. I'll have a look this week.", "Hello @LysandreJik , is there any update on this issue?\r\nThank in advance!", "This issue has been stale for 1 month.", "I think I managed to solve the problem.\r\n\r\nInstead of using the environment variable `os.environ[\"CUDA_VISIBLE_DEVICES\"] = str(gpu)` to specify the GPUs one needs to provide it via `torch.device(f\"cuda:{gpu}\")`." ]
1,606
1,615
1,615
NONE
null
## Environment info ``` - `transformers` version: 3.2.0 - Platform: Linux-4.15.0-88-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.7.0 (True) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes (Python multiprocessing) ``` ## Information I am writing a custom script that uses Python's multiprocessing. The goal of it is to have multiple child processes that run inference (using `torch.nn.Module`) **on separate GPUs**. See below a minimal example of the issue. Please note that script contains pure `torch` code, however, it seems like importing `transformers` (**and not even using it afterwards**) changes some internal states. ```python import multiprocessing as mp import os import torch import transformers # <-- Just imported, never used def diagnostics(name): """Print diagnostics.""" print(name) print(f"CUDA initialized: {torch.cuda.is_initialized()}") print(f"Is bad fork: {torch._C._cuda_isInBadFork()}") print(80 * "*") def fun(gpu): current_process = mp.current_process() diagnostics(current_process.pid) os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu) model = torch.nn.Linear(200, 300) model = model.to("cuda") # Trouble maker while True: model(torch.ones(32, 200, device="cuda")) if __name__ == "__main__": n_processes = 2 gpus = [0, 1] start_method = "fork" # fork, forkserver, spawn diagnostics("Parent") mp.set_start_method(start_method) processes = [] for i in range(n_processes): p = mp.Process(name=str(i), target=fun, kwargs={"gpu": gpus[i]}) p.start() processes.append(p) for p in processes: p.join() ``` The above script works as expected in `3.1.0` or when we do not import transformers at all. Each subprocess does inference on a separate GPU. See below the standard output. ``` Parent CUDA initialized: False Is bad fork: False ******************************************************************************** 21091 CUDA initialized: False Is bad fork: False ******************************************************************************** 21092 CUDA initialized: False Is bad fork: False ******************************************************************************** ``` However, for `3.2.0` and higher there is the following error. ``` Parent CUDA initialized: False Is bad fork: False ******************************************************************************** 21236 CUDA initialized: False Is bad fork: True ******************************************************************************** Process 0: Traceback (most recent call last): File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap self.run() File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run self._target(*self._args, **self._kwargs) File "git_example.py", line 21, in fun model = model.to("cuda") # Trouble maker File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 612, in to return self._apply(convert) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 381, in _apply param_applied = fn(param) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 610, in convert return t.to(device, dtype if t.is_floating_point() else None, non_blocking) File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 164, in _lazy_init "Cannot re-initialize CUDA in forked subprocess. " + msg) RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method 21237 CUDA initialized: False Is bad fork: True ******************************************************************************** Process 1: Traceback (most recent call last): File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap self.run() File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run self._target(*self._args, **self._kwargs) File "git_example.py", line 21, in fun model = model.to("cuda") # Trouble maker File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 612, in to return self._apply(convert) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 381, in _apply param_applied = fn(param) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 610, in convert return t.to(device, dtype if t.is_floating_point() else None, non_blocking) File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 164, in _lazy_init "Cannot re-initialize CUDA in forked subprocess. " + msg) RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method ``` Changing the `start_method` to `"forkserver"` or `"spawn"` prevents the exception from being raised. However, only a single GPU for all child processes is used.
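For reference, a sketch of the workaround mentioned later in the thread (an assumption that it generalizes to other setups, not a verified fix): pick the GPU with an explicit `torch.device` inside each child process instead of mutating `CUDA_VISIBLE_DEVICES` after the fork.

```python
import multiprocessing as mp

import torch

def fun(gpu):
    device = torch.device(f"cuda:{gpu}")  # explicit device instead of CUDA_VISIBLE_DEVICES
    model = torch.nn.Linear(200, 300).to(device)
    with torch.no_grad():
        model(torch.ones(32, 200, device=device))

if __name__ == "__main__":
    mp.set_start_method("spawn")  # spawn sidesteps the forked-CUDA restriction entirely
    processes = [mp.Process(target=fun, kwargs={"gpu": g}) for g in (0, 1)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
```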
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8801/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/8801/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8800
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8800/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8800/comments
https://api.github.com/repos/huggingface/transformers/issues/8800/events
https://github.com/huggingface/transformers/issues/8800
751,610,925
MDU6SXNzdWU3NTE2MTA5MjU=
8,800
Problem with using custom tokenizers with run_mlm.py
{ "login": "antmarakis", "id": 17463361, "node_id": "MDQ6VXNlcjE3NDYzMzYx", "avatar_url": "https://avatars.githubusercontent.com/u/17463361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/antmarakis", "html_url": "https://github.com/antmarakis", "followers_url": "https://api.github.com/users/antmarakis/followers", "following_url": "https://api.github.com/users/antmarakis/following{/other_user}", "gists_url": "https://api.github.com/users/antmarakis/gists{/gist_id}", "starred_url": "https://api.github.com/users/antmarakis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antmarakis/subscriptions", "organizations_url": "https://api.github.com/users/antmarakis/orgs", "repos_url": "https://api.github.com/users/antmarakis/repos", "events_url": "https://api.github.com/users/antmarakis/events{/privacy}", "received_events_url": "https://api.github.com/users/antmarakis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I have simplified the code to show that it is definitely the pretrained tokenizer that breaks the execution:\r\n\r\n```\r\npython run_mlm.py \\\r\n--model_name_or_path bert-base-cased \\\r\n--train_file data.txt \\\r\n--tokenizer_name custom_tokenizer \\\r\n--output_dir output \\\r\n--do_train \\\r\n--num_train_epochs 1 \\\r\n--overwrite_output_dir\r\n\r\n```", "This seems like an issue that concerns the new mlm script, `tokenizers` and `datasets` so I'll ping the holy trinity that may have an idea where the error comes from: @sgugger @n1t0 @lhoestq ", "Looks like the tokenizer returns an empty batch of elements, which causes an `IndexError` ?", "is this issue resolved? ran into the same error.", "From recent experience, I think this might happen if no `model_max_length` is set for the tokenizer.\r\n\r\nIn the directory where your tokenizer files live, do you mind adding another file called `tokenizer_config.json`, with the following information: `{\"model_max_length\": 512}`?\r\n\r\nThank you.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,606
1,619
1,619
CONTRIBUTOR
null
Hi! I have an issue with running the `run_mlm.py` script with a tokenizer I myself trained. If I use pretrained tokenizers everything works. Versions: ``` python: 3.8.3 transformers: 3.5.1 tokenizers: 0.9.4 torch: 1.7.0 ``` This is how I train my tokenizer: ``` from tokenizers import BertWordPieceTokenizer tokenizer = BertWordPieceTokenizer(lowercase=False, strip_accents=False, clean_text=True) tokenizer.train(files=['/mounts/data/proj/antmarakis/wikipedia/wikipedia_en_1M.txt'], vocab_size=350, special_tokens=[ "[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]", ]) tokenizer.save_model('wikipedia_en') ``` The above results in a vocab.txt file. And this is how I try to train my model (using the `run_mlm.py` script): ``` python run_mlm.py \ --model_type bert \ --config_name bert_custom.json \ --train_file wikipedia_en_1M.txt \ --tokenizer_name wikipedia_en \ --output_dir lm_temp \ --do_train \ --num_train_epochs 1 \ --overwrite_output_dir ``` If I use a pretrained model/tokenizer, this script works (that is, I replace `config_name` and `tokenizer_name` with `model_name_or_path roberta-base` or something). But using the above code, I get the following error message: ``` Traceback (most recent call last): File "run_mlm.py", line 392, in <module> main() File "run_mlm.py", line 334, in main tokenized_datasets = tokenized_datasets.map( File "/mounts/Users/cisintern/antmarakis/.local/lib/python3.8/site-packages/datasets/dataset_dict.py", line 283, in map { File "/mounts/Users/cisintern/antmarakis/.local/lib/python3.8/site-packages/datasets/dataset_dict.py", line 284, in <dictcomp> k: dataset.map( File "/mounts/Users/cisintern/antmarakis/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1240, in map return self._map_single( File "/mounts/Users/cisintern/antmarakis/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/mounts/Users/cisintern/antmarakis/.local/lib/python3.8/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/mounts/Users/cisintern/antmarakis/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1525, in _map_single writer.write_batch(batch) File "/mounts/Users/cisintern/antmarakis/.local/lib/python3.8/site-packages/datasets/arrow_writer.py", line 278, in write_batch pa_table = pa.Table.from_pydict(typed_sequence_examples) File "pyarrow/table.pxi", line 1531, in pyarrow.lib.Table.from_pydict File "pyarrow/array.pxi", line 295, in pyarrow.lib.asarray File "pyarrow/array.pxi", line 195, in pyarrow.lib.array File "pyarrow/array.pxi", line 107, in pyarrow.lib._handle_arrow_array_protocol File "/mounts/Users/cisintern/antmarakis/.local/lib/python3.8/site-packages/datasets/arrow_writer.py", line 100, in __arrow_array__ if trying_type and out[0].as_py() != self.data[0]: File "pyarrow/array.pxi", line 949, in pyarrow.lib.Array.__getitem__ File "pyarrow/array.pxi", line 362, in pyarrow.lib._normalize_index IndexError: index out of bounds ``` This approach used to work for previous versions, but after I upgraded to the latest releases this doesn't seem to work anymore and I do not know where it broke. Any help would be appreciated!
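A small sketch of the workaround suggested further down in the thread (assumed to apply to this setup, not verified for every tokenizers/transformers combination): give the saved tokenizer an explicit `model_max_length` by writing a `tokenizer_config.json` next to `vocab.txt`.

```python
import json
import os

tokenizer_dir = "wikipedia_en"  # directory that already contains vocab.txt
with open(os.path.join(tokenizer_dir, "tokenizer_config.json"), "w", encoding="utf-8") as f:
    json.dump({"model_max_length": 512}, f)
```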
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8800/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8800/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8799
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8799/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8799/comments
https://api.github.com/repos/huggingface/transformers/issues/8799/events
https://github.com/huggingface/transformers/pull/8799
751,519,894
MDExOlB1bGxSZXF1ZXN0NTI4MDE3NzU1
8,799
Warning about too long input for fast tokenizers too
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Failing tests seem to come from some other code (seq2seq)", "@thomwolf could you review this PR as you're the mastermind behind this code?", "@LysandreJik May I merge (failing tests and quality is linked to unrelated `finetune.py` code, I tried to rebase but it does not seem to be enough)" ]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? If truncation is not set in tokenizers, but the tokenization is too long for the model (`model_max_length`), we used to trigger a warning that The input would probably fail (which it most likely will). This PR re-enables the warning for fast tokenizers too and uses common code for the trigger to make sure it's consistent across. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @LysandreJik @thomwolf <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8799/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8799/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8799", "html_url": "https://github.com/huggingface/transformers/pull/8799", "diff_url": "https://github.com/huggingface/transformers/pull/8799.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8799.patch", "merged_at": 1606922308000 }
https://api.github.com/repos/huggingface/transformers/issues/8798
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8798/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8798/comments
https://api.github.com/repos/huggingface/transformers/issues/8798/events
https://github.com/huggingface/transformers/pull/8798
751,493,982
MDExOlB1bGxSZXF1ZXN0NTI3OTk2MDQz
8,798
Fix setup.py on Windows
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,686
1,606
CONTRIBUTOR
null
# What does this PR do? This PR fixes the target `deps_table_update` on Windows by forcing the newline to be LF.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8798/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8798/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8798", "html_url": "https://github.com/huggingface/transformers/pull/8798", "diff_url": "https://github.com/huggingface/transformers/pull/8798.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8798.patch", "merged_at": 1606497921000 }
https://api.github.com/repos/huggingface/transformers/issues/8797
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8797/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8797/comments
https://api.github.com/repos/huggingface/transformers/issues/8797/events
https://github.com/huggingface/transformers/pull/8797
751,474,167
MDExOlB1bGxSZXF1ZXN0NTI3OTc5ODI4
8,797
Minor docs typo fixes
{ "login": "guyrosin", "id": 1250162, "node_id": "MDQ6VXNlcjEyNTAxNjI=", "avatar_url": "https://avatars.githubusercontent.com/u/1250162?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guyrosin", "html_url": "https://github.com/guyrosin", "followers_url": "https://api.github.com/users/guyrosin/followers", "following_url": "https://api.github.com/users/guyrosin/following{/other_user}", "gists_url": "https://api.github.com/users/guyrosin/gists{/gist_id}", "starred_url": "https://api.github.com/users/guyrosin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guyrosin/subscriptions", "organizations_url": "https://api.github.com/users/guyrosin/orgs", "repos_url": "https://api.github.com/users/guyrosin/repos", "events_url": "https://api.github.com/users/guyrosin/events{/privacy}", "received_events_url": "https://api.github.com/users/guyrosin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Looks like you have a styling problem, could you run the command `make style` after doing a dev install with\r\n```\r\npip install -e .[dev]\r\n```\r\nin the repo?", "Oops! Done." ]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? Just a few typo fixes in the docs. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8797/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8797/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8797", "html_url": "https://github.com/huggingface/transformers/pull/8797", "diff_url": "https://github.com/huggingface/transformers/pull/8797.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8797.patch", "merged_at": 1606667221000 }
https://api.github.com/repos/huggingface/transformers/issues/8796
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8796/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8796/comments
https://api.github.com/repos/huggingface/transformers/issues/8796/events
https://github.com/huggingface/transformers/pull/8796
751,403,581
MDExOlB1bGxSZXF1ZXN0NTI3OTIyODEw
8,796
QARiB Arabic and dialects models
{ "login": "ahmed451", "id": 2007934, "node_id": "MDQ6VXNlcjIwMDc5MzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2007934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ahmed451", "html_url": "https://github.com/ahmed451", "followers_url": "https://api.github.com/users/ahmed451/followers", "following_url": "https://api.github.com/users/ahmed451/following{/other_user}", "gists_url": "https://api.github.com/users/ahmed451/gists{/gist_id}", "starred_url": "https://api.github.com/users/ahmed451/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ahmed451/subscriptions", "organizations_url": "https://api.github.com/users/ahmed451/orgs", "repos_url": "https://api.github.com/users/ahmed451/repos", "events_url": "https://api.github.com/users/ahmed451/events{/privacy}", "received_events_url": "https://api.github.com/users/ahmed451/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Thanks @ahmed451! \r\n\r\nFor context, please also read https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755" ]
1,606
1,607
1,607
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8796/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8796/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8796", "html_url": "https://github.com/huggingface/transformers/pull/8796", "diff_url": "https://github.com/huggingface/transformers/pull/8796.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8796.patch", "merged_at": 1607697518000 }
https://api.github.com/repos/huggingface/transformers/issues/8795
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8795/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8795/comments
https://api.github.com/repos/huggingface/transformers/issues/8795/events
https://github.com/huggingface/transformers/pull/8795
751,395,021
MDExOlB1bGxSZXF1ZXN0NTI3OTE2MDUx
8,795
Use model.from_pretrained for DataParallel also
{ "login": "shaie", "id": 3469932, "node_id": "MDQ6VXNlcjM0Njk5MzI=", "avatar_url": "https://avatars.githubusercontent.com/u/3469932?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shaie", "html_url": "https://github.com/shaie", "followers_url": "https://api.github.com/users/shaie/followers", "following_url": "https://api.github.com/users/shaie/following{/other_user}", "gists_url": "https://api.github.com/users/shaie/gists{/gist_id}", "starred_url": "https://api.github.com/users/shaie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shaie/subscriptions", "organizations_url": "https://api.github.com/users/shaie/orgs", "repos_url": "https://api.github.com/users/shaie/repos", "events_url": "https://api.github.com/users/shaie/events{/privacy}", "received_events_url": "https://api.github.com/users/shaie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Oh, looks like there is a last code-style issue to fix. Could you run `make style` on your branch? Then we can merge this.", "I don't have `make` installed 😄 , what is the style issue? Wonder what style issue can go wrong in such a simple patch. The only thing we added is `self.` in those 2 lines", "`check_code_quality` complains about `finetune.py`, but it's not modified by this patch", "Weird indeed. Will merge and fix if the issue persists." ]
1,606
1,606
1,606
CONTRIBUTOR
null
When training on multiple GPUs, the code wraps a model with torch.nn.DataParallel. However if the model has custom from_pretrained logic, it does not get applied during load_best_model_at_end. This commit uses the underlying model during load_best_model_at_end, and re-wraps the loaded model with DataParallel. If you choose to reject this change, then could you please move the this logic to a function, e.g. def load_best_model_checkpoint(best_model_checkpoint) or something, so that it can be overridden? # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8795/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8795/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8795", "html_url": "https://github.com/huggingface/transformers/pull/8795", "diff_url": "https://github.com/huggingface/transformers/pull/8795.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8795.patch", "merged_at": 1606752670000 }
https://api.github.com/repos/huggingface/transformers/issues/8794
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8794/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8794/comments
https://api.github.com/repos/huggingface/transformers/issues/8794/events
https://github.com/huggingface/transformers/issues/8794
751,358,172
MDU6SXNzdWU3NTEzNTgxNzI=
8,794
Can I get logits for each sequence I acqired from model.generate()?
{ "login": "RandolphShi", "id": 24260605, "node_id": "MDQ6VXNlcjI0MjYwNjA1", "avatar_url": "https://avatars.githubusercontent.com/u/24260605?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RandolphShi", "html_url": "https://github.com/RandolphShi", "followers_url": "https://api.github.com/users/RandolphShi/followers", "following_url": "https://api.github.com/users/RandolphShi/following{/other_user}", "gists_url": "https://api.github.com/users/RandolphShi/gists{/gist_id}", "starred_url": "https://api.github.com/users/RandolphShi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RandolphShi/subscriptions", "organizations_url": "https://api.github.com/users/RandolphShi/orgs", "repos_url": "https://api.github.com/users/RandolphShi/repos", "events_url": "https://api.github.com/users/RandolphShi/events{/privacy}", "received_events_url": "https://api.github.com/users/RandolphShi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Sadly not at the moment... -> we are currently thinking about how to improve the `generate()` outputs though! ", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
NONE
null
Hi, I’m currently stucked in getting logits from model.generate. I’m wondering if it is possible to get logits of each seqeucne returned by model.generate. (like logits for each token returned by model.logits)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8794/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8794/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8793
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8793/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8793/comments
https://api.github.com/repos/huggingface/transformers/issues/8793/events
https://github.com/huggingface/transformers/issues/8793
751,328,751
MDU6SXNzdWU3NTEzMjg3NTE=
8,793
Loss pooling layer parameters after Fine-tune.
{ "login": "wlhgtc", "id": 16603773, "node_id": "MDQ6VXNlcjE2NjAzNzcz", "avatar_url": "https://avatars.githubusercontent.com/u/16603773?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wlhgtc", "html_url": "https://github.com/wlhgtc", "followers_url": "https://api.github.com/users/wlhgtc/followers", "following_url": "https://api.github.com/users/wlhgtc/following{/other_user}", "gists_url": "https://api.github.com/users/wlhgtc/gists{/gist_id}", "starred_url": "https://api.github.com/users/wlhgtc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wlhgtc/subscriptions", "organizations_url": "https://api.github.com/users/wlhgtc/orgs", "repos_url": "https://api.github.com/users/wlhgtc/repos", "events_url": "https://api.github.com/users/wlhgtc/events{/privacy}", "received_events_url": "https://api.github.com/users/wlhgtc/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "The pooling layer is not used during the fine-tuning if doing MLM, so gradients are not retro-propagated through that layer; the parameters are not updated.", "@LysandreJik The pooling parameters are not needed in MLM fine-tune. But usually, we use MLM to fine-tune BERT on our own corpus, then we use the saved model weight(missed pooling parameters) in downstream task.\r\nIt's unreasonable for us to random initialize the pool parameters, we should reload google's original pooling parameter(though it was not update in MLM fine-tune).", "I see, thank you for explaining! In that case, would using the `BertForPreTraining` model fit your needs? You would only need to pass the masked LM labels, not the NSP labels, but you would still have all the layers that were used for the pre-training.\r\n\r\nThis is something we had not taken into account when implementing the `add_pooling_layer` argument cc @patrickvonplaten @sgugger ", "Hi @LysandreJik,\r\n\r\nI also tried to further pre-train BERT with new, domain specific text data using the recommended run_mlm_wwm.py file, since I read a paper which outlines the benefits of this approach. I also got the warning that the Pooling Layers are not initialized from the model checkpoint. I have a few follow up questions to that:\r\n\r\n- Does that mean that the final hidden vector of the [CLS] token is randomly initialized? That would be an issue for me since I need it in my downstream application.\r\n- If the former point is true: Why is not at least the hidden vector of the source model copied?\r\n- I think to get a proper hidden vector for [CLS], NSP would be needed. If I understand your answers in issue #6330 correctly, you don't support the NSP objective due to the results of the RoBERTa paper. Does that mean there is no code for pre-training BERT in the whole huggingface library which yields meaningful final [CLS] hidden vectors?\r\n- Is there an alternative to [CLS] for downstream tasks that use sentence/document embeddings rather than token embeddings?\r\n\r\nI would really appreciate any kind of help. Thanks a lot!\r\n\r\n", "The [CLS] token was not randomly initialized. It's a token in BERT vocabulary.\r\nWe talk about Pooling Layer in [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py#L609).\r\n", "Oh okay, I see. Only the weight matrix and the bias vector of that feed forward operation on the [CLS] vector are randomly initalized, not the [CLS] vector itself. I misunderstood a comment in another forum. Thanks for clarification @wlhgtc!", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
CONTRIBUTOR
null
According to the [code](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py#L1005): if I want to fine-tune BERT with LM, we don't init pooling layer. So we loss the original(pre-trained by Google) parameters if we save the fine-tune model and reload it. Mostly, we use this model for downstream task( text classification), this (may) lead to a worse result. This `add_pooling_layer` should be `true` for all time even if we don't update them in fine-tune. @thomwolf @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8793/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8793/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8792
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8792/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8792/comments
https://api.github.com/repos/huggingface/transformers/issues/8792/events
https://github.com/huggingface/transformers/issues/8792
751,327,544
MDU6SXNzdWU3NTEzMjc1NDQ=
8,792
[finetune_trainer] --evaluate_during_training is no more
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Found the source of breakage: https://github.com/huggingface/transformers/pull/8604 - I guess that PR needs more work" ]
1,606
1,606
1,606
CONTRIBUTOR
null
In `examples/seq2seq/builtin_trainer/` all scripts reference `--evaluate_during_training ` but it doesn't exist in pt trainer, but does exist in tf trainer: ``` grep -Ir evaluate_during builtin_trainer/finetune.sh: --do_train --do_eval --do_predict --evaluate_during_training \ builtin_trainer/train_distil_marian_enro.sh: --do_train --do_eval --do_predict --evaluate_during_training\ builtin_trainer/finetune_tpu.sh: --do_train --do_eval --evaluate_during_training \ builtin_trainer/train_distilbart_cnn.sh: --do_train --do_eval --do_predict --evaluate_during_training \ builtin_trainer/train_distil_marian_enro_tpu.sh: --do_train --do_eval --evaluate_during_training \ builtin_trainer/train_mbart_cc25_enro.sh: --do_train --do_eval --do_predict --evaluate_during_training \ ``` ``` Traceback (most recent call last): File "finetune_trainer.py", line 310, in <module> main() File "finetune_trainer.py", line 118, in main model_args, data_args, training_args = parser.parse_args_into_dataclasses() File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/transformers/hf_argparser.py", line 144, in parse_args_into_dataclasses raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}") ValueError: Some specified arguments are not used by the HfArgumentParser: ['--evaluate_during_training'] ``` Is this meant to be replaced by: `--evaluation_strategy` - this is the closest I found in `training_args.py` If so which one? `steps` or `epoch`? Also the help output is borked: ``` $ python finetune_trainer.py -h ... [--evaluation_strategy {EvaluationStrategy.NO,EvaluationStrategy.STEPS,EvaluationStrategy.EPOCH}] ``` probably this is not what what's intended, but ``` [--evaluation_strategy {no, steps, epochs} ``` But perhaps it's a bigger issue - I see `trainer.args.evaluate_during_training`: ``` src/transformers/integrations.py: ) and (not trainer.args.do_eval or not trainer.args.evaluate_during_training): ``` and also `--evaluate_during_training` in many other files under `examples/`. Thank you. @sgugger, @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8792/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/8792/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8791
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8791/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8791/comments
https://api.github.com/repos/huggingface/transformers/issues/8791/events
https://github.com/huggingface/transformers/pull/8791
751,314,243
MDExOlB1bGxSZXF1ZXN0NTI3ODQ5MDEw
8,791
[FlaxBert] Fix non-broadcastable attention mask for batched forward-passes
{ "login": "KristianHolsheimer", "id": 8200332, "node_id": "MDQ6VXNlcjgyMDAzMzI=", "avatar_url": "https://avatars.githubusercontent.com/u/8200332?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KristianHolsheimer", "html_url": "https://github.com/KristianHolsheimer", "followers_url": "https://api.github.com/users/KristianHolsheimer/followers", "following_url": "https://api.github.com/users/KristianHolsheimer/following{/other_user}", "gists_url": "https://api.github.com/users/KristianHolsheimer/gists{/gist_id}", "starred_url": "https://api.github.com/users/KristianHolsheimer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KristianHolsheimer/subscriptions", "organizations_url": "https://api.github.com/users/KristianHolsheimer/orgs", "repos_url": "https://api.github.com/users/KristianHolsheimer/repos", "events_url": "https://api.github.com/users/KristianHolsheimer/events{/privacy}", "received_events_url": "https://api.github.com/users/KristianHolsheimer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@mfuntowicz @avital\r\nI just fixed the bug that I was hitting. There might be other places that need this fix as well.", "Wuuhu - first Flax PR :-). This looks great to me @KristianHolsheimer - think you touched all necessary files as well! \r\n\r\n@mfuntowicz - maybe you can take a look as well", "**EDIT** I re-enabled GPU memory preallocation but set the mem fraction < 1/parallelism. That seemed to fix the tests. The problem with this is that future tests might fail if a model doesn't fit in 1/8th of the GPU memory.\r\n\r\n---\r\n\r\nThe flax tests time out. When I ran the tests locally with `pytest -n auto`, I did notice OOM issues due to preallocation of GPU memory by XLA. I addressed this in commit 6bb1f5e600cd35c712f4f980699df7735b4f59eb.\r\n\r\nOther than that, it's hard to debug the tests when there's no output.\r\n\r\nWould it be an option to run these tests single-threaded instead?", "I had the changes in another branche I'm working on, happy to merge this one and will rebase mine 👍.\r\n\r\nThanks for looking at it @KristianHolsheimer " ]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? Fixes #8790 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @mfuntowicz @avital @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8791/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8791/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8791", "html_url": "https://github.com/huggingface/transformers/pull/8791", "diff_url": "https://github.com/huggingface/transformers/pull/8791.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8791.patch", "merged_at": 1606479680000 }
https://api.github.com/repos/huggingface/transformers/issues/8790
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8790/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8790/comments
https://api.github.com/repos/huggingface/transformers/issues/8790/events
https://github.com/huggingface/transformers/issues/8790
751,313,654
MDU6SXNzdWU3NTEzMTM2NTQ=
8,790
[FlaxBert] Non-broadcastable attention mask in batched forward-pass
{ "login": "KristianHolsheimer", "id": 8200332, "node_id": "MDQ6VXNlcjgyMDAzMzI=", "avatar_url": "https://avatars.githubusercontent.com/u/8200332?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KristianHolsheimer", "html_url": "https://github.com/KristianHolsheimer", "followers_url": "https://api.github.com/users/KristianHolsheimer/followers", "following_url": "https://api.github.com/users/KristianHolsheimer/following{/other_user}", "gists_url": "https://api.github.com/users/KristianHolsheimer/gists{/gist_id}", "starred_url": "https://api.github.com/users/KristianHolsheimer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KristianHolsheimer/subscriptions", "organizations_url": "https://api.github.com/users/KristianHolsheimer/orgs", "repos_url": "https://api.github.com/users/KristianHolsheimer/repos", "events_url": "https://api.github.com/users/KristianHolsheimer/events{/privacy}", "received_events_url": "https://api.github.com/users/KristianHolsheimer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: Linux - Python version: 3.8 - JAX version: - jax==0.2.6 - jaxlib==0.1.57+cuda110 - flax==0.2.2 - PyTorch version (GPU?): n/a - Tensorflow version (GPU?): n/a - Using GPU in script?: **yes** (cuda 11.0) - Using distributed or parallel set-up in script?: **no** ### Who can help @mfuntowicz @avital @LysandreJik ## Information I ran the script from the recent Twitter [post](https://twitter.com/huggingface/status/1331255460033400834): ![](https://pbs.twimg.com/media/EnmRwGDW4AA0j26?format=jpg) The only thing I changed was that I fed in multiple sentences: ```python from transformers import FlaxBertModel, BertTokenizerFast, TensorType tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased') model = FlaxBertModel.from_pretrained('bert-base-cased') # apply_fn = jax.jit(model.model.apply) sentences = ["this is an example sentence", "this is another", "and a third one"] encodings = tokenizer(sentences, return_tensors=TensorType.JAX, padding=True, truncation=True) tokens, pooled = model(**encodings) ``` > ValueError: Incompatible shapes for broadcasting: ((3, 12, 7, 7), (1, 1, 3, 7)) See full stack trace: https://pastebin.com/sPUSjGVi
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8790/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8790/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8789
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8789/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8789/comments
https://api.github.com/repos/huggingface/transformers/issues/8789/events
https://github.com/huggingface/transformers/issues/8789
751,268,360
MDU6SXNzdWU3NTEyNjgzNjA=
8,789
KeyError: 'eval_loss' when fine-tuning gpt-2 with run_clm.py
{ "login": "Potomac", "id": 1340993, "node_id": "MDQ6VXNlcjEzNDA5OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/1340993?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Potomac", "html_url": "https://github.com/Potomac", "followers_url": "https://api.github.com/users/Potomac/followers", "following_url": "https://api.github.com/users/Potomac/following{/other_user}", "gists_url": "https://api.github.com/users/Potomac/gists{/gist_id}", "starred_url": "https://api.github.com/users/Potomac/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Potomac/subscriptions", "organizations_url": "https://api.github.com/users/Potomac/orgs", "repos_url": "https://api.github.com/users/Potomac/repos", "events_url": "https://api.github.com/users/Potomac/events{/privacy}", "received_events_url": "https://api.github.com/users/Potomac/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This is weird, as the script is tested for evaluation. What does your `dev.txt` file look like?", "Dev.txt contains text in english, one sentence by line.\r\nThe PC I use has 2 graphic cards, so run_clm.py uses the 2 cards for the training, perhaps the bug occurs only when 2 or more graphic card are used for the training ?", "The script is tested on 2 GPUs as well as one. Are you sure this file contains enough text to have a least one batch during evaluation? This is the only thing I can think of for not having an eval_loss returned.", "The dev.txt file contains 46 lines, the train file contains 268263 lines.\r\n\r\nthe specifications of the PC I use :\r\n\r\n- Intel Xeon E5-2650 v4 (Broadwell, 2.20GHz)\r\n- 128 Gb ram\r\n- 2 x Nvidia GeForce GTX 1080 Ti\r\n\r\n", "Like I said, the dev file is maybe too short to provide at least one batch and return a loss. You should try with a longer dev file.", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
NONE
null
## Environment info - `transformers` version: 4.0.0-rc-1 - Platform: Linux-4.19.0-12-amd64-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): 2.2.0 (True) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: default option ### Who can help albert, bert, GPT2, XLM: @LysandreJik Trainer: @sgugger ## Information Model I am using (Bert, XLNet ...): GPT2 The problem arises when using: * [x] the official example scripts: (give details below) Bug occurs when running run_clm.py file from transformers/examples/language-modeling/ , the evaluation step (--do_eval) will crash with a python error related to missing KeyError 'eval_loss', ## To reproduce Steps to reproduce the behavior: 1. Use run_clm.py file from transformers/examples/language-modeling/ 2. Try to fine-tune gpt-2 model, with your own train file and your own validation file 3. When you add "--do_eval" option in run_clm.py then an error will occur when the step "evaluation" is reached : ``` File "run_clm.py", line 353, in <module> main() File "run_clm.py", line 333, in main perplexity = math.exp(eval_output["eval_loss"]) KeyError: 'eval_loss' ``` when I try to print the content of eval_output then there is just one key : "epoch" the way I execute run_clm.py : ``` python run_clm.py \ --model_name_or_path gpt2 \ --train_file train.txt \ --validation_file dev.txt \ --do_train \ --do_eval \ --per_device_train_batch_size 2 \ --per_device_eval_batch_size 2 \ --output_dir results/test-clm ``` ## Expected behavior The evaluation step should run without problems.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8789/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8789/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8788
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8788/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8788/comments
https://api.github.com/repos/huggingface/transformers/issues/8788/events
https://github.com/huggingface/transformers/pull/8788
751,110,468
MDExOlB1bGxSZXF1ZXN0NTI3Njg2NTU5
8,788
Add QCRI Arabic and Dialectal BERT (QARiB) models
{ "login": "ahmed451", "id": 2007934, "node_id": "MDQ6VXNlcjIwMDc5MzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2007934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ahmed451", "html_url": "https://github.com/ahmed451", "followers_url": "https://api.github.com/users/ahmed451/followers", "following_url": "https://api.github.com/users/ahmed451/following{/other_user}", "gists_url": "https://api.github.com/users/ahmed451/gists{/gist_id}", "starred_url": "https://api.github.com/users/ahmed451/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ahmed451/subscriptions", "organizations_url": "https://api.github.com/users/ahmed451/orgs", "repos_url": "https://api.github.com/users/ahmed451/repos", "events_url": "https://api.github.com/users/ahmed451/events{/privacy}", "received_events_url": "https://api.github.com/users/ahmed451/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Thanks for sharing; your filenames are wrong, they should be nested inside folders named from your model id", "> Thanks for sharing; your filenames are wrong, they should be nested inside folders named from your model id\r\n\r\nThanks Julien, I have updated the branch accordingly.", "closing in favor of #8796" ]
1,606
1,607
1,607
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8788/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8788/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8788", "html_url": "https://github.com/huggingface/transformers/pull/8788", "diff_url": "https://github.com/huggingface/transformers/pull/8788.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8788.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8787
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8787/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8787/comments
https://api.github.com/repos/huggingface/transformers/issues/8787/events
https://github.com/huggingface/transformers/issues/8787
751,000,475
MDU6SXNzdWU3NTEwMDA0NzU=
8,787
QA pipeline fails during convert_squad_examples_to_features
{ "login": "TrupeshKumarPatel", "id": 47249670, "node_id": "MDQ6VXNlcjQ3MjQ5Njcw", "avatar_url": "https://avatars.githubusercontent.com/u/47249670?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TrupeshKumarPatel", "html_url": "https://github.com/TrupeshKumarPatel", "followers_url": "https://api.github.com/users/TrupeshKumarPatel/followers", "following_url": "https://api.github.com/users/TrupeshKumarPatel/following{/other_user}", "gists_url": "https://api.github.com/users/TrupeshKumarPatel/gists{/gist_id}", "starred_url": "https://api.github.com/users/TrupeshKumarPatel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TrupeshKumarPatel/subscriptions", "organizations_url": "https://api.github.com/users/TrupeshKumarPatel/orgs", "repos_url": "https://api.github.com/users/TrupeshKumarPatel/repos", "events_url": "https://api.github.com/users/TrupeshKumarPatel/events{/privacy}", "received_events_url": "https://api.github.com/users/TrupeshKumarPatel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "After updating the run_squad.py script with a newer version of transformers, it works now!\r\n\r\nThank you!", "@TrupeshKumarPatel Seems that this is not working still. What was the actual solution to this?", "Hi @aleSuglia,\r\nhere is the updated link: https://github.com/uabinf/nlp-group-project-fall-2020-deepbiocomp/blob/main/scripts/qa_script/qa_squad_v1.ipynb , see if this help. If not then please elaborate on the error or problem that you are facing. ", "I have exactly the same error that you reported: `TypeError: TextInputSequence must be str`\r\nBy debugging, I can see that the variable `truncated_query` has a list of integers (which should be the current question's token ids). However, when you pass that to the [encode_plus](https://github.com/huggingface/transformers/blob/df2af6d8b8765b1ac2cda12d2ece09bf7240fba8/src/transformers/data/processors/squad.py#L181) method, you get the error. I guess it's because `encode_plus` expects strings and not integers. Do you have any suggestion?", "If you googled this error and you are reading this post, please do the following. When you create your tokenizer make sure that you set the flag `use_fast` to `False` like this: \r\n```python\r\nAutoTokenizer.from_pretrained(tokenizer_name, use_fast=False)\r\n```\r\n\r\nThis fixes the error. However, I wonder why there is no backward compatibility...", "Had the similar issue with the above. What @aleSuglia suggested indeed works, but the issue still persists; fast version of the tokenizer should be compatible with the previous methods. In my case, I narrowed the problem down to `InputExample`, where `text_b` can be `None`, https://github.com/huggingface/transformers/blob/447808c85f0e6d6b0aeeb07214942bf1e578f9d2/src/transformers/data/processors/utils.py#L47-L48\r\n\r\nbut the tokenizer apparently doesn't accept `None` as an input. So, I found a workaround by changing\r\n```\r\nInputExample(guid=some_id, text_a=some_text, label=some_label)\r\n-> InputExample(guid=some_id, text_a=some_text, text_b='', label=some_label)\r\n```\r\nI'm not sure this completely solves the issue though.", "Potentially related issues: https://github.com/huggingface/transformers/issues/6545 https://github.com/huggingface/transformers/issues/7735 https://github.com/huggingface/transformers/issues/7011" ]
1,606
1,613
1,606
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.0.0-rc-1 - Platform: Linux-3.10.0-1062.9.1.el7.x86_64-x86_64-with-redhat-7.8-Maipo - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.0 (True) - Tensorflow version (GPU?): 2.3.1 (True) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help @LysandreJik @mfuntowicz IDK who else can help, but in sort, I am looking for someone who can help me in QA tasks. <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao Blenderbot: @patrickvonplaten Bart: @patrickvonplaten Marian: @patrickvonplaten Pegasus: @patrickvonplaten mBART: @patrickvonplaten T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSMT: @stas00 examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: run_squad.py (modifying to run using jupyter notebook, using "HfArgumentParser") The tasks I am working on is: * [x] an official GLUE/SQUaD task: SQUaD * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. modified all argparse to HfArgumentParser 2. created "ModelArguments" dataclass function for HfArgumentParser (Ref: https://github.com/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) 3. need to small changes in the whole script. The test fails with error `TypeError: TextInputSequence must be str` Complete failure result: ``` RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/data/user/tr27p/.conda/envs/DeepBioComp/lib/python3.7/multiprocessing/pool.py", line 121, in worker result = (True, func(*args, **kwds)) File "/data/user/tr27p/.conda/envs/DeepBioComp/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar return list(map(*args)) File "/data/user/tr27p/.conda/envs/DeepBioComp/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 175, in squad_convert_example_to_features return_token_type_ids=True, File "/data/user/tr27p/.conda/envs/DeepBioComp/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2439, in encode_plus **kwargs, File "/data/user/tr27p/.conda/envs/DeepBioComp/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 463, in _encode_plus **kwargs, File "/data/user/tr27p/.conda/envs/DeepBioComp/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 378, in _batch_encode_plus is_pretokenized=is_split_into_words, TypeError: TextInputSequence must be str """ The above exception was the direct cause of the following exception: TypeError Traceback (most recent call last) <ipython-input-19-263240bbee7e> in <module> ----> 1 main() <ipython-input-18-61d7f0eab618> in main() 111 # Training 112 if train_args.do_train: --> 113 train_dataset = load_and_cache_examples((model_args, train_args), tokenizer, evaluate=False, output_examples=False) 114 global_step, tr_loss = train(args, train_dataset, model, tokenizer) 115 logger.info(" global_step = %s, average loss = %s", global_step, tr_loss) <ipython-input-8-79eb3ed364c2> in load_and_cache_examples(args, tokenizer, evaluate, output_examples) 54 max_query_length=model_args.max_query_length, 55 is_training=not evaluate, ---> 56 return_dataset="pt", 57 # threads=model_args.threads, 58 ) /data/user/tr27p/.conda/envs/DeepBioComp/lib/python3.7/site-packages/transformers/data/processors/squad.py in squad_convert_examples_to_features(examples, tokenizer, max_seq_length, doc_stride, max_query_length, is_training, padding_strategy, return_dataset, threads, tqdm_enabled) 366 total=len(examples), 367 desc="convert squad examples to features", --> 368 disable=not tqdm_enabled, 369 ) 370 ) /data/user/tr27p/.conda/envs/DeepBioComp/lib/python3.7/site-packages/tqdm/std.py in __iter__(self) 1131 1132 try: -> 1133 for obj in iterable: 1134 yield obj 1135 # Update and possibly print the progressbar. /data/user/tr27p/.conda/envs/DeepBioComp/lib/python3.7/multiprocessing/pool.py in <genexpr>(.0) 323 result._set_length 324 )) --> 325 return (item for chunk in result for item in chunk) 326 327 def imap_unordered(self, func, iterable, chunksize=1): /data/user/tr27p/.conda/envs/DeepBioComp/lib/python3.7/multiprocessing/pool.py in next(self, timeout) 746 if success: 747 return value --> 748 raise value 749 750 __next__ = next # XXX TypeError: TextInputSequence must be str ``` ## Expected behavior #### for more details check here: link: https://github.com/uabinf/nlp-group-project-fall-2020-deepbiocomp/blob/cancer_ask/scripts/qa_script/qa_squad_v1.ipynb <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8787/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8787/timeline
completed
null
null
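For the `TypeError: TextInputSequence must be str` error in issue 8787 above, the workaround given in the comments is to load a slow (pure-Python) tokenizer before building SQuAD features. A minimal sketch of that workaround follows; the checkpoint name and data directory are only illustrative, and exact behaviour depends on the `transformers` version in use:

```python
from transformers import AutoTokenizer
from transformers.data.processors.squad import (
    SquadV1Processor,
    squad_convert_examples_to_features,
)

# Load the slow (pure-Python) tokenizer; the fast Rust tokenizer is what
# triggers "TextInputSequence must be str" in the squad.py feature converter.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)

processor = SquadV1Processor()
examples = processor.get_train_examples("path/to/squad")  # hypothetical data dir

features, dataset = squad_convert_examples_to_features(
    examples=examples,
    tokenizer=tokenizer,
    max_seq_length=384,
    doc_stride=128,
    max_query_length=64,
    is_training=True,
    return_dataset="pt",
)
```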
https://api.github.com/repos/huggingface/transformers/issues/8786
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8786/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8786/comments
https://api.github.com/repos/huggingface/transformers/issues/8786/events
https://github.com/huggingface/transformers/issues/8786
750,977,050
MDU6SXNzdWU3NTA5NzcwNTA=
8,786
What would be the license of the model files available in the Hugging Face repository?
{ "login": "jaingaurav3", "id": 29180919, "node_id": "MDQ6VXNlcjI5MTgwOTE5", "avatar_url": "https://avatars.githubusercontent.com/u/29180919?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jaingaurav3", "html_url": "https://github.com/jaingaurav3", "followers_url": "https://api.github.com/users/jaingaurav3/followers", "following_url": "https://api.github.com/users/jaingaurav3/following{/other_user}", "gists_url": "https://api.github.com/users/jaingaurav3/gists{/gist_id}", "starred_url": "https://api.github.com/users/jaingaurav3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaingaurav3/subscriptions", "organizations_url": "https://api.github.com/users/jaingaurav3/orgs", "repos_url": "https://api.github.com/users/jaingaurav3/repos", "events_url": "https://api.github.com/users/jaingaurav3/events{/privacy}", "received_events_url": "https://api.github.com/users/jaingaurav3/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Maybe @julien-c can answer!", "You can check on the model hub:\r\neg. apache-2.0 models: https://huggingface.co/models?filter=license:apache-2.0\r\nmit models: https://huggingface.co/models?filter=license:mit\r\n\r\netc.", "Thanks @julien-c for the update." ]
1,606
1,606
1,606
NONE
null
Dear Team, Could you clarify what the license would be for the different models pushed to the Hugging Face repo, such as legal_bert, contracts_bert, etc.? Do the model files follow the same license, i.e. Apache 2.0, as the Hugging Face library? Regards Gaurav
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8786/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8786/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8785
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8785/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8785/comments
https://api.github.com/repos/huggingface/transformers/issues/8785/events
https://github.com/huggingface/transformers/pull/8785
750,976,492
MDExOlB1bGxSZXF1ZXN0NTI3NTc3NDcy
8,785
Update README.md
{ "login": "moniquebm", "id": 60358442, "node_id": "MDQ6VXNlcjYwMzU4NDQy", "avatar_url": "https://avatars.githubusercontent.com/u/60358442?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moniquebm", "html_url": "https://github.com/moniquebm", "followers_url": "https://api.github.com/users/moniquebm/followers", "following_url": "https://api.github.com/users/moniquebm/following{/other_user}", "gists_url": "https://api.github.com/users/moniquebm/gists{/gist_id}", "starred_url": "https://api.github.com/users/moniquebm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/moniquebm/subscriptions", "organizations_url": "https://api.github.com/users/moniquebm/orgs", "repos_url": "https://api.github.com/users/moniquebm/repos", "events_url": "https://api.github.com/users/moniquebm/events{/privacy}", "received_events_url": "https://api.github.com/users/moniquebm/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Decided to delete \"inference: false\"" ]
1,606
1,606
1,606
CONTRIBUTOR
null
Disable Hosted Inference API while output inconsistency is not solved. "How to use" section. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8785/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8785/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8785", "html_url": "https://github.com/huggingface/transformers/pull/8785", "diff_url": "https://github.com/huggingface/transformers/pull/8785.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8785.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8784
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8784/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8784/comments
https://api.github.com/repos/huggingface/transformers/issues/8784/events
https://github.com/huggingface/transformers/issues/8784
750,961,168
MDU6SXNzdWU3NTA5NjExNjg=
8,784
Different outputs from code and the Hosted Inference API
{ "login": "moniquebm", "id": 60358442, "node_id": "MDQ6VXNlcjYwMzU4NDQy", "avatar_url": "https://avatars.githubusercontent.com/u/60358442?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moniquebm", "html_url": "https://github.com/moniquebm", "followers_url": "https://api.github.com/users/moniquebm/followers", "following_url": "https://api.github.com/users/moniquebm/following{/other_user}", "gists_url": "https://api.github.com/users/moniquebm/gists{/gist_id}", "starred_url": "https://api.github.com/users/moniquebm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/moniquebm/subscriptions", "organizations_url": "https://api.github.com/users/moniquebm/orgs", "repos_url": "https://api.github.com/users/moniquebm/repos", "events_url": "https://api.github.com/users/moniquebm/events{/privacy}", "received_events_url": "https://api.github.com/users/moniquebm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @moniquebm .\r\n\r\nThe hosted inference is running `nlp = pipeline('ner', model=model, tokenizer=tokenizer, grouped_entities=True)` by default.\r\nThere is currently no way to overload it.\r\n\r\nDoes that explain the difference? ", "Hi @Narsil \r\n\r\n>Does that explain the difference?\r\n\r\nI'm afraid it doesn't explain... I've just tested nlp = pipeline('ner', model=model, tokenizer=tokenizer, grouped_entities=True) and the following (correct) result is generated programmatically:\r\n\r\n[{'entity_group': 'PUB', 'score': 0.9921221256256103, 'word': 'Tribunal de Contas da União'}, {'entity_group': 'LOC', 'score': 0.9405767321586609, 'word': 'Brasília'}, {'entity_group': 'PESSOA', 'score': 0.9840216636657715, 'word': 'Rui Barbosa'}, {'entity_group': 'ORG', 'score': 0.9529051184654236, 'word': 'Veigamed'}, {'entity_group': 'ORG', 'score': 0.8636592030525208, 'word': 'Buyerbr'}]\r\n\r\nBut in fact it seems the API does not group entities:\r\n\r\n```json\r\n[\r\n {\r\n \"entity_group\": \"LOC\",\r\n \"score\": 0.8127626776695251,\r\n \"word\": \"bras\"\r\n },\r\n {\r\n \"entity_group\": \"PESSOA\",\r\n \"score\": 0.7101765692234039,\r\n \"word\": \"rui barbosa\"\r\n },\r\n {\r\n \"entity_group\": \"ORG\",\r\n \"score\": 0.7679458856582642,\r\n \"word\": \"ve\"\r\n },\r\n {\r\n \"entity_group\": \"ORG\",\r\n \"score\": 0.45047426223754883,\r\n \"word\": \"##igamed\"\r\n },\r\n {\r\n \"entity_group\": \"ORG\",\r\n \"score\": 0.8467527627944946,\r\n \"word\": \"bu\"\r\n },\r\n {\r\n \"entity_group\": \"ORG\",\r\n \"score\": 0.6024420410394669,\r\n \"word\": \"##yerbr\"\r\n }\r\n]\r\n```\r\n\r\nThe tokens are also different. One possible explaination is that the Hosted Inference API may be using English tokenizer, but my model/code used Portuguese tokenizer from this model: https://huggingface.co/neuralmind/bert-base-portuguese-cased\r\n\r\nDoes it make sense?", "You need to update your tokenizer on your model in the hub: 'monilouise/ner_pt_br' to reflect this. The hosted inference can't know how to use a different tokenizer than the one you provide.\r\n\r\nIf you are simply using the one from `neuralmind/bert-base-portuguese-case`, you probably just download theirs, and reupload it as your own following this doc: https://huggingface.co/transformers/model_sharing.html", "May I suggest moving the discussion here: \r\nhttps://discuss.huggingface.co/c/intermediate/6\r\nAs it's not really and transformers problem but a Hub one. \r\n\r\nI am closing the issue here. Feel free to comment here to show the new location of the discussion or ping me directly on discuss.\r\n" ]
1,606
1,606
1,606
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.5.1 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ## To reproduce Steps to reproduce the behavior: 1. Run the following code: ```python from transformers import BertForTokenClassification, DistilBertTokenizerFast, pipeline model = BertForTokenClassification.from_pretrained('monilouise/ner_pt_br') tokenizer = DistilBertTokenizerFast.from_pretrained('neuralmind/bert-base-portuguese-cased', model_max_length=512, do_lower_case=False) nlp = pipeline('ner', model=model, tokenizer=tokenizer) result = nlp("O Tribunal de Contas da União é localizado em Brasília e foi fundado por Rui Barbosa. Fiscaliza contratos, por exemplo com empresas como a Veigamed e a Buyerbr.") print(result) ```` It'll ouput: [{'word': 'Tribunal', 'score': 0.9858521819114685, 'entity': 'B-PUB', 'index': 2}, {'word': 'de', 'score': 0.9954801201820374, 'entity': 'I-PUB', 'index': 3}, {'word': 'Contas', 'score': 0.9929609298706055, 'entity': 'I-PUB', 'index': 4}, {'word': 'da', 'score': 0.9949454665184021, 'entity': 'I-PUB', 'index': 5}, {'word': 'União', 'score': 0.9913719296455383, 'entity': 'L-PUB', 'index': 6}, {'word': 'Brasília', 'score': 0.9405767321586609, 'entity': 'B-LOC', 'index': 10}, {'word': 'Rui', 'score': 0.979736328125, 'entity': 'B-PESSOA', 'index': 15}, {'word': 'Barbosa', 'score': 0.988306999206543, 'entity': 'L-PESSOA', 'index': 16}, {'word': 'Veiga', 'score': 0.9748793244361877, 'entity': 'B-ORG', 'index': 29}, {'word': '##med', 'score': 0.9309309124946594, 'entity': 'L-ORG', 'index': 30}, {'word': 'Bu', 'score': 0.9679405689239502, 'entity': 'B-ORG', 'index': 33}, {'word': '##yer', 'score': 0.6654638051986694, 'entity': 'L-ORG', 'index': 34}, {'word': '##br', 'score': 0.9575732350349426, 'entity': 'L-ORG', 'index': 35}] including all entity types (PUB, PESSOA, ORG and LOC) 2. In Hosted Inference API, the following result is returned for the same sentence, ignoring PUB entity type and giving incorrect and incomplete results: ```json [ { "entity_group": "LOC", "score": 0.8127626776695251, "word": "bras" }, { "entity_group": "PESSOA", "score": 0.7101765692234039, "word": "rui barbosa" }, { "entity_group": "ORG", "score": 0.7679458856582642, "word": "ve" }, { "entity_group": "ORG", "score": 0.45047426223754883, "word": "##igamed" }, { "entity_group": "ORG", "score": 0.8467527627944946, "word": "bu" }, { "entity_group": "ORG", "score": 0.6024420410394669, "word": "##yerbr" } ] ``` ## Expected behavior How can it be possible the same model file give different results?(!) May I be missing anything?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8784/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8784/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8783
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8783/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8783/comments
https://api.github.com/repos/huggingface/transformers/issues/8783/events
https://github.com/huggingface/transformers/pull/8783
750,959,948
MDExOlB1bGxSZXF1ZXN0NTI3NTYzNzI3
8,783
MPNet: Masked and Permuted Pre-training for Natural Language Understanding
{ "login": "StillKeepTry", "id": 6577458, "node_id": "MDQ6VXNlcjY1Nzc0NTg=", "avatar_url": "https://avatars.githubusercontent.com/u/6577458?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StillKeepTry", "html_url": "https://github.com/StillKeepTry", "followers_url": "https://api.github.com/users/StillKeepTry/followers", "following_url": "https://api.github.com/users/StillKeepTry/following{/other_user}", "gists_url": "https://api.github.com/users/StillKeepTry/gists{/gist_id}", "starred_url": "https://api.github.com/users/StillKeepTry/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StillKeepTry/subscriptions", "organizations_url": "https://api.github.com/users/StillKeepTry/orgs", "repos_url": "https://api.github.com/users/StillKeepTry/repos", "events_url": "https://api.github.com/users/StillKeepTry/events{/privacy}", "received_events_url": "https://api.github.com/users/StillKeepTry/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8783/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8783/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8783", "html_url": "https://github.com/huggingface/transformers/pull/8783", "diff_url": "https://github.com/huggingface/transformers/pull/8783.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8783.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8782
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8782/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8782/comments
https://api.github.com/repos/huggingface/transformers/issues/8782/events
https://github.com/huggingface/transformers/issues/8782
750,957,304
MDU6SXNzdWU3NTA5NTczMDQ=
8,782
Unexpected output from bart-large
{ "login": "jc-hou", "id": 30210529, "node_id": "MDQ6VXNlcjMwMjEwNTI5", "avatar_url": "https://avatars.githubusercontent.com/u/30210529?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jc-hou", "html_url": "https://github.com/jc-hou", "followers_url": "https://api.github.com/users/jc-hou/followers", "following_url": "https://api.github.com/users/jc-hou/following{/other_user}", "gists_url": "https://api.github.com/users/jc-hou/gists{/gist_id}", "starred_url": "https://api.github.com/users/jc-hou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jc-hou/subscriptions", "organizations_url": "https://api.github.com/users/jc-hou/orgs", "repos_url": "https://api.github.com/users/jc-hou/repos", "events_url": "https://api.github.com/users/jc-hou/events{/privacy}", "received_events_url": "https://api.github.com/users/jc-hou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@jc-hou: I just tested the above script and I get the same output as you. Why did you close the issue?", "`res = model.generate(input_ids, num_beams=1, max_length=100, forced_bos_token_id=0)` solves the issue" ]
1,606
1,635
1,607
NONE
null
I am looking this thread about generation, https://stackoverflow.com/questions/64904840/why-we-need-a-decoder-start-token-id-during-generation-in-huggingface-bart I re-run his code, Use ```facebook/bart-base``` model, ``` from transformers import * import torch model = BartForConditionalGeneration.from_pretrained('facebook/bart-base') tokenizer = BartTokenizer.from_pretrained('facebook/bart-base') input_ids = torch.LongTensor([[0, 894, 213, 7, 334, 479, 2]]) res = model.generate(input_ids, num_beams=1, max_length=100) print(res) preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True).strip() for g in res] print(preds) ``` I get output: ``` tensor([[ 2, 0, 894, 213, 7, 334, 479, 2]]) ['He go to school.'] ``` Then I just simply change the model to ```facebook/bart-large``` with everything kept the same, i.e. ``` from transformers import * import torch model = BartForConditionalGeneration.from_pretrained('facebook/bart-large') tokenizer = BartTokenizer.from_pretrained('facebook/bart-large') input_ids = torch.LongTensor([[0, 894, 213, 7, 334, 479, 2]]) res = model.generate(input_ids, num_beams=1, max_length=100) print(res) preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True).strip() for g in res] print(preds) ``` Then I get output: ``` tensor([[ 2, 894, 894, 213, 7, 334, 479, 2]]) ['HeHe go to school.'] ``` Is this normal? Thanks.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8782/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8782/timeline
completed
null
null
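A minimal sketch of the fix mentioned in the final comment of issue 8782 above: passing `forced_bos_token_id=0` to `generate()` so that `facebook/bart-large` forces `<s>` as the first decoded token instead of duplicating the first input token. This mirrors the snippet from the issue and assumes a `transformers` version whose `generate()` supports `forced_bos_token_id`:

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")

input_ids = torch.LongTensor([[0, 894, 213, 7, 334, 479, 2]])  # "He go to school."

# Force the BOS token (id 0) right after the decoder start token so the
# first real token is not repeated in the output.
res = model.generate(input_ids, num_beams=1, max_length=100, forced_bos_token_id=0)
preds = [tokenizer.decode(g, skip_special_tokens=True).strip() for g in res]
print(preds)  # per the comment, this should give ['He go to school.']
```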
https://api.github.com/repos/huggingface/transformers/issues/8781
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8781/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8781/comments
https://api.github.com/repos/huggingface/transformers/issues/8781/events
https://github.com/huggingface/transformers/pull/8781
750,906,284
MDExOlB1bGxSZXF1ZXN0NTI3NTE5NTUz
8,781
NerPipeline (TokenClassification) now outputs offsets of words
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? - It happens that the offsets are missing, it forces the user to pattern match the "word" from his input, which is not always feasible. For instance if a sentence contains the same word twice, then there is no way to know which is which. - This PR proposes to fix that by outputting 2 new keys for this pipelines outputs, "start" and "end", which correspond to the string offsets of the word. That means that we should always have the invariant: ```python input[entity["start"]: entity["end"]] == entity["entity_group"] # or entity["entity"] if not grouped ``` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Example of users that encounter problems: https://huggingface.co/dslim/bert-base-NER?text=Hello+Sarah+Jessica+Parker+who+Jessica+lives+in+New+York https://discuss.huggingface.co/t/token-positions-when-using-the-inference-api/2188 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8781/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8781/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8781", "html_url": "https://github.com/huggingface/transformers/pull/8781", "diff_url": "https://github.com/huggingface/transformers/pull/8781.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8781.patch", "merged_at": 1606763108000 }
https://api.github.com/repos/huggingface/transformers/issues/8780
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8780/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8780/comments
https://api.github.com/repos/huggingface/transformers/issues/8780/events
https://github.com/huggingface/transformers/issues/8780
750,845,308
MDU6SXNzdWU3NTA4NDUzMDg=
8,780
Can't load tokenizer for 'facebook/rag-token-base/question_encoder_tokenizer'.
{ "login": "racoutinho", "id": 4431098, "node_id": "MDQ6VXNlcjQ0MzEwOTg=", "avatar_url": "https://avatars.githubusercontent.com/u/4431098?v=4", "gravatar_id": "", "url": "https://api.github.com/users/racoutinho", "html_url": "https://github.com/racoutinho", "followers_url": "https://api.github.com/users/racoutinho/followers", "following_url": "https://api.github.com/users/racoutinho/following{/other_user}", "gists_url": "https://api.github.com/users/racoutinho/gists{/gist_id}", "starred_url": "https://api.github.com/users/racoutinho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/racoutinho/subscriptions", "organizations_url": "https://api.github.com/users/racoutinho/orgs", "repos_url": "https://api.github.com/users/racoutinho/repos", "events_url": "https://api.github.com/users/racoutinho/events{/privacy}", "received_events_url": "https://api.github.com/users/racoutinho/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Was fixed on master, could you try from master?\r\n\r\ncc @lhoestq @patrickvonplaten ", "Thanks @julien-c ! It worked using master. But I had this other issue:\r\n\r\n`Using custom data configuration dummy.psgs_w100.nq.no_index\r\nReusing dataset wiki_dpr (/Users/rcoutin/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.no_index-dummy=True,with_index=False/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2)\r\nUsing custom data configuration dummy.psgs_w100.nq.exact\r\nReusing dataset wiki_dpr (/Users/rcoutin/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.exact-80150455dfcf97d4/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2)\r\nTraceback (most recent call last):\r\n File \"/Users/rcoutin/git/examples/backup/rag.py\", line 5, in <module>\r\n model = RagTokenForGeneration.from_pretrained(\"facebook/rag-token-nq\", retriever=retriever)\r\n File \"/Users/rcoutin/git/transformers/src/transformers/modeling_utils.py\", line 947, in from_pretrained\r\n model = cls(config, *model_args, **model_kwargs)\r\n File \"/Users/rcoutin/git/transformers/src/transformers/models/rag/modeling_rag.py\", line 1009, in __init__\r\n self.rag = RagModel(config=config, question_encoder=question_encoder, generator=generator, retriever=retriever)\r\n File \"/Users/rcoutin/git/transformers/src/transformers/models/rag/modeling_rag.py\", line 487, in __init__\r\n question_encoder = AutoModel.from_config(config.question_encoder)\r\n File \"/Users/rcoutin/git/transformers/src/transformers/models/auto/modeling_auto.py\", line 615, in from_config\r\n return MODEL_MAPPING[type(config)](config)\r\n File \"/Users/rcoutin/git/transformers/src/transformers/models/dpr/modeling_dpr.py\", line 514, in __init__\r\n self.question_encoder = DPREncoder(config)\r\n File \"/Users/rcoutin/git/transformers/src/transformers/models/dpr/modeling_dpr.py\", line 155, in __init__\r\n self.bert_model = BertModel(config)\r\n File \"/Users/rcoutin/git/transformers/src/transformers/models/bert/modeling_bert.py\", line 764, in __init__\r\n self.embeddings = BertEmbeddings(config)\r\n File \"/Users/rcoutin/git/transformers/src/transformers/models/bert/modeling_bert.py\", line 181, in __init__\r\n self.position_embedding_type = config.position_embedding_type\r\nAttributeError: 'DPRConfig' object has no attribute 'position_embedding_type'`", "Not sure about this one, sorry :/ Calling the RAG gurus!", "Thanks man. I’ll try debug little more my env. Thanks!\n\nEm qua, 25 de nov de 2020 às 20:22, Julien Chaumond <\[email protected]> escreveu:\n\n> Not sure about this one, sorry :/ Calling the RAG gurus!\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/8780#issuecomment-733988235>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABBZZ6SQTOLH4JHX3MVUUTTSRWGT7ANCNFSM4UCODMUA>\n> .\n>\n", "Looks like the issue comes from the changes of #8276\r\ncc @patrickvonplaten @LysandreJik @zhiheng-huang", "Thanks a lot for spotting the bug @racoutinho and pinpointing it @lhoestq. The PR should fix it", "I love how well maintained this repo is ❤️ \r\nJust ran into this issue yesterday, and was very surprised to see it fixed just 1 day later 👍 ", "Thank you, guys!!!! You are rock stars!!!!" ]
1,606
1,606
1,606
NONE
null
Hi all! I'm getting this error when trying to run the example code: Can't load tokenizer for 'facebook/rag-token-base/question_encoder_tokenizer'. Make sure that: - 'facebook/rag-token-base/question_encoder_tokenizer' is a correct model identifier listed on 'https://huggingface.co/models' - or 'facebook/rag-token-base/question_encoder_tokenizer' is the correct path to a directory containing relevant tokenizer files
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8780/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8780/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8779
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8779/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8779/comments
https://api.github.com/repos/huggingface/transformers/issues/8779/events
https://github.com/huggingface/transformers/pull/8779
750,823,565
MDExOlB1bGxSZXF1ZXN0NTI3NDUwMjQx
8,779
Fix PPLM
{ "login": "chutaklee", "id": 6931004, "node_id": "MDQ6VXNlcjY5MzEwMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/6931004?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chutaklee", "html_url": "https://github.com/chutaklee", "followers_url": "https://api.github.com/users/chutaklee/followers", "following_url": "https://api.github.com/users/chutaklee/following{/other_user}", "gists_url": "https://api.github.com/users/chutaklee/gists{/gist_id}", "starred_url": "https://api.github.com/users/chutaklee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chutaklee/subscriptions", "organizations_url": "https://api.github.com/users/chutaklee/orgs", "repos_url": "https://api.github.com/users/chutaklee/repos", "events_url": "https://api.github.com/users/chutaklee/events{/privacy}", "received_events_url": "https://api.github.com/users/chutaklee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "PPLM is unfortunately not maintained anymore. The fix would be to pin the `transformers` version in the PPLM README.", "pinging @w4nderlust and @mimosavvy just in case", "This looks good to me, I believe the return of dict (very welcome change by the way) from the model should be the only thing breaking the pplm code, right?", "> This looks good to me, I believe the return of dict (very welcome change by the way) from the model should be the only thing breaking the pplm code, right?\r\n\r\nYeah and the named argument of model `past_key_values`. I have no issue running the provided example command in [here]( https://github.com/huggingface/transformers/tree/master/examples/text-generation/pplm) and `python run_pplm_discrim_train.py --dataset SST --epochs 1 --batch_size 8`.", "Can you run `make style` to fix the code quality check? Then we should be good for merge :-) ", "Much appreciated!" ]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? API changes break PPLM example, this PR should fix it. However I haven't test it on 'run_pplm_discrim_train.py'. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8779/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8779/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8779", "html_url": "https://github.com/huggingface/transformers/pull/8779", "diff_url": "https://github.com/huggingface/transformers/pull/8779.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8779.patch", "merged_at": 1606425817000 }
https://api.github.com/repos/huggingface/transformers/issues/8778
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8778/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8778/comments
https://api.github.com/repos/huggingface/transformers/issues/8778/events
https://github.com/huggingface/transformers/issues/8778
750,810,187
MDU6SXNzdWU3NTA4MTAxODc=
8,778
Using XLNet or Transformer-XL as an EncoderDecoder
{ "login": "gulnazaki", "id": 38190410, "node_id": "MDQ6VXNlcjM4MTkwNDEw", "avatar_url": "https://avatars.githubusercontent.com/u/38190410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gulnazaki", "html_url": "https://github.com/gulnazaki", "followers_url": "https://api.github.com/users/gulnazaki/followers", "following_url": "https://api.github.com/users/gulnazaki/following{/other_user}", "gists_url": "https://api.github.com/users/gulnazaki/gists{/gist_id}", "starred_url": "https://api.github.com/users/gulnazaki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gulnazaki/subscriptions", "organizations_url": "https://api.github.com/users/gulnazaki/orgs", "repos_url": "https://api.github.com/users/gulnazaki/repos", "events_url": "https://api.github.com/users/gulnazaki/events{/privacy}", "received_events_url": "https://api.github.com/users/gulnazaki/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "@patrickvonplaten any thoughts on this? Since, I found your work on Bert2Bert very informative :)", "Hey @gulnazaki - you can use XLNet as an encoder, but not as a decoder because it'll be very difficult to add cross-attention functionality to XLNet for the decoder...", "Thanks @patrickvonplaten , I thought so. \r\nAlso, the concept of XLNet is kinda the opposite of uni-directional. \r\n\r\nI will try to increase the sequence length of GPT2 for the output sequence.", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
NONE
null
I want to train on a long-sequence dataset (a MIDI text-event representation like the one in [MuseNet](https://openai.com/blog/musenet/#dataset)) from scratch. Since I can't split the sequences into "sentences", I am using XLNet (or Transformer-XL). I am modelling the task as a sequence-to-sequence task (with a maximum input sequence length of around 40k tokens and an output length of 4k tokens), so I want to use the encoder-decoder framework. Is it possible to use XLNet as both the encoder and the decoder, or just as the encoder, and use e.g. GPT-2 to do the decoding (because of the smaller output sequence length)? Thank you 🤗
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8778/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8778/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8777
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8777/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8777/comments
https://api.github.com/repos/huggingface/transformers/issues/8777/events
https://github.com/huggingface/transformers/pull/8777
750,759,061
MDExOlB1bGxSZXF1ZXN0NTI3Mzk1NjM3
8,777
Better booleans handling in the TF models
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks @patrickvonplaten!\r\n\r\nAs detailed in the first post, boolean parameters cannot be set during the model call in graph mode. This is the major feature brought by this PR. I wanted to focus of TF T5 and TF Bart on a later PR once this logic is ok at least for all the others.", "There is now a better warning message.", "> You say there is a breaking change in graph mode. Does it mean that currently, both eager & graph mode can handle arguments through the configuration & through the function call? I'm unsure on where we stand on this currently.\r\n\r\nYes, both can be done, but it raises issues when through the function call in graph mode. So this PR fixes this with a better handling of this case.\r\n\r\n> It seems like the tests that would be impacted by these changes are the slow tests. Have you run the slow tests? If not, could you run the slow tensorflow tests on this PR? If you don't know how to do that, happy to show you how for next time.\r\n\r\nThis PR partially fixes these tests. Remembert that they do not pass for T5 and BART for the reasons expressed by Patrick. These models, including the saved model tests, will be fixed in same time in a PR just after this one.\r\n\r\nAlso, in a future PR I will rethink the way the attributes are handled in all the layers.", "> Yes, both can be done, but it raises issues when through the function call in graph mode. So this PR fixes this with a better handling of this case.\r\n\r\nSo right now it fails, and with this PR it also fails but with better error handling?\r\n\r\n> This PR partially fixes these tests. Remembert that they do not pass for T5 and BART for the reasons expressed by Patrick. These models, including the saved model tests, will be fixed in same time in a PR just after this one.\r\n\r\nI meant *all* the slow tests, not only the saved models with saved attentions tests. And this PR doesn't only impact the T5 and BART models, so re-running all the slow tests on this PR seems necessary.", "> So right now it fails, and with this PR it also fails but with better error handling?\r\n\r\nNo, before nothing was working in graph mode when the boolean was updated through the function call. Now, I disabled this functionality and there is no more fail, and everything works properly and as expected in eager+graph mode except T5 and BART in graph mode, which will be handled in a later PR.\r\n\r\n> I meant all the slow tests, not only the saved models with saved attentions tests. And this PR doesn't only impact the T5 and BART models, so re-running all the slow tests on this PR seems necessary.\r\n\r\nOk, I will run all of them.", "@LysandreJik All the slow tests are passing but two:\r\n\r\n- `tests/test_modeling_tf_transfo_xl.py::TFTransfoXLModelLanguageGenerationTest::test_lm_generate_transfo_xl_wt103`, I started to see that with @patrickvonplaten \r\n- `tests/test_utils_check_copies.py::CopyCheckTester::test_is_copy_consistent`, @sgugger any idea why this test don't pass anymore? 
Here the output:\r\n```\r\ndef test_is_copy_consistent(self):\r\n # Base copy consistency\r\n> self.check_copy_consistency(\r\n \"# Copied from transformers.models.bert.modeling_bert.BertLMPredictionHead\",\r\n \"BertLMPredictionHead\",\r\n REFERENCE_CODE + \"\\n\",\r\n )\r\n\r\ntests\\test_utils_check_copies.py:71:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests\\test_utils_check_copies.py:59: in check_copy_consistency\r\n self.assertTrue(len(check_copies.is_copy_consistent(fname)) == 0)\r\nE AssertionError: False is not true\r\n```", "@LysandreJik Any other needs for this PR to be merged?", "I investigated why the `test_is_copy_consistent` test failed, that is probably because you launched your command from inside the `tests/` directory, and it has a path hardcoded to `src/transformers`, and therefore cannot find the path `tests/src/transformers`. \r\n\r\nNo issues there it seems! Reviewing a final time and merging if all is good.", "@patrickvonplaten you haven't approved this PR, do you want to give it a final look and merge if ok for you?", "> @LysandreJik All the slow tests are passing but two:\r\n> \r\n> * `tests/test_modeling_tf_transfo_xl.py::TFTransfoXLModelLanguageGenerationTest::test_lm_generate_transfo_xl_wt103`, I started to see that with @patrickvonplaten\r\n> * `tests/test_utils_check_copies.py::CopyCheckTester::test_is_copy_consistent`, @sgugger any idea why this test don't pass anymore? Here the output:\r\n> \r\n> ```\r\n> def test_is_copy_consistent(self):\r\n> # Base copy consistency\r\n> > self.check_copy_consistency(\r\n> \"# Copied from transformers.models.bert.modeling_bert.BertLMPredictionHead\",\r\n> \"BertLMPredictionHead\",\r\n> REFERENCE_CODE + \"\\n\",\r\n> )\r\n> \r\n> tests\\test_utils_check_copies.py:71:\r\n> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n> tests\\test_utils_check_copies.py:59: in check_copy_consistency\r\n> self.assertTrue(len(check_copies.is_copy_consistent(fname)) == 0)\r\n> E AssertionError: False is not true\r\n> ```\r\n\r\nI'll investigate for `tests/test_modeling_tf_transfo_xl.py::TFTransfoXLModelLanguageGenerationTest::test_lm_generate_transfo_xl_wt103` -> thanks for pinging me on that! PR is good for me!" ]
1,606
1,607
1,607
CONTRIBUTOR
null
# What does this PR do? This PR provides better handling of the boolean arguments. More precisely, the execution mode (eager or graph) is detected and the booleans are set accordingly so that execution works properly. Nevertheless, this brings a small breaking change in graph mode: it is no longer possible to update the booleans through the model call arguments, only through the config, and `return_dict` is forced to be `True`. Now, to activate `output_attentions` or `output_hidden_states` in graph mode, one has to create the model config like: ``` config = XConfig.from_pretrained("name", output_attentions=True, output_hidden_states=True) ```
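A minimal sketch (not part of the PR itself) of what this looks like in practice, assuming a placeholder checkpoint name such as `bert-base-uncased`: the booleans are fixed on the config before the model is built, so the model can then be traced safely in graph mode (e.g. inside `tf.function` or `model.fit`).

```python
# Hedged sketch: set the boolean options on the config rather than in the call,
# so they are fixed before any graph tracing happens. The checkpoint name is a placeholder.
from transformers import BertConfig, TFBertModel

config = BertConfig.from_pretrained(
    "bert-base-uncased", output_attentions=True, output_hidden_states=True
)
model = TFBertModel.from_pretrained("bert-base-uncased", config=config)
# In graph mode, outputs come back as a dict-like object (return_dict is forced to True).
```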
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8777/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8777/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8777", "html_url": "https://github.com/huggingface/transformers/pull/8777", "diff_url": "https://github.com/huggingface/transformers/pull/8777.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8777.patch", "merged_at": 1607090909000 }
https://api.github.com/repos/huggingface/transformers/issues/8776
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8776/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8776/comments
https://api.github.com/repos/huggingface/transformers/issues/8776/events
https://github.com/huggingface/transformers/issues/8776
750,541,624
MDU6SXNzdWU3NTA1NDE2MjQ=
8,776
Documentation and source for `RobertaClassificationHead`
{ "login": "mnschmit", "id": 2377507, "node_id": "MDQ6VXNlcjIzNzc1MDc=", "avatar_url": "https://avatars.githubusercontent.com/u/2377507?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mnschmit", "html_url": "https://github.com/mnschmit", "followers_url": "https://api.github.com/users/mnschmit/followers", "following_url": "https://api.github.com/users/mnschmit/following{/other_user}", "gists_url": "https://api.github.com/users/mnschmit/gists{/gist_id}", "starred_url": "https://api.github.com/users/mnschmit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mnschmit/subscriptions", "organizations_url": "https://api.github.com/users/mnschmit/orgs", "repos_url": "https://api.github.com/users/mnschmit/repos", "events_url": "https://api.github.com/users/mnschmit/events{/privacy}", "received_events_url": "https://api.github.com/users/mnschmit/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> which feeds the pooled output into a multilayer feedforward-network with one hidden layer and tanh activation. So this is more than only a simple linear layer.\r\n\r\nActually, the final hidden representation of the `[CLS]` token (or `<s>` token in case of RoBERTa) is not the pooled output. Applying the feedforward neural network with tanh activation on this hidden representation actually gives you the pooled output (which is a vector of size 768 in case of the base sized model). Then, after this, a linear layer called [`out_proj`](https://github.com/huggingface/transformers/blob/90d5ab3bfe8c20d9beccfe89fdfd62a8e5ac31e5/src/transformers/models/roberta/modeling_roberta.py#L1248) is used to project the pooled output of size 768 into a vector of size `num_labels`. So the documentation is still correct.\r\n\r\nFor the second question, actually BERT does the same, it is just implemented differently. In `modeling_bert.py`, they use the `pooled_output` of `BertModel`, and then apply the linear layer on top of this. This pooled output has already applied the feedforward neural network + tanh activation on top of the `[CLS]` token hidden representation, as you can see [here](https://github.com/huggingface/transformers/blob/90d5ab3bfe8c20d9beccfe89fdfd62a8e5ac31e5/examples/movement-pruning/emmental/modeling_bert_masked.py#L371). In `modeling_roberta.py`, they implement it differently: they start from the `sequence_output` (which is a tensor containing the final hidden representations of all tokens in the sequence), then get the hidden repr of the `<s>` token by typing `[:,0,:]`, then apply the feedforward nn + tanh and finally the linear projection layer.\r\n\r\nSo your confusion probably comes from the different ways in which this is implemented in BERT vs RoBERTa, and the meaning of `pooled_output`. Actually, some people use \"pooled output\" to denote the final hidden representation of the [CLS] token, but in HuggingFace transformers, this always refers to the output of a linear layer + tanh on top of this vector. ", "Thank you very much for the explanation @NielsRogge !\r\nMy confusion indeed comes from the different implementations and the meaning of \"pooled output\".\r\n\r\nSo this makes it consistent for the HuggingFace transformers library. But do you know the origin of it (now I am interested for both models)? Why is the `[CLS]` token representation transformed by a linear layer with tanh? I couldn't find any reference to tanh in the [original BERT paper](https://www.aclweb.org/anthology/N19-1423/). What they describe in section 4.1, e.g., sounds to me like there is only one linear layer on top of the [CLS] token representation. Is this a HuggingFace invention then? They don't seem to mention it in [their arXiv paper](https://arxiv.org/abs/1910.03771) either.", "Interesting question! Turns out this has [already been asked before here](https://github.com/huggingface/transformers/issues/782) and the answer by the author is [here](https://github.com/google-research/bert/issues/43#issuecomment-435980269).", "Thank you again @NielsRogge !\r\nI had only searched for issues with RoBERTa. Now it makes sense!" ]
1,606
1,606
1,606
CONTRIBUTOR
null
The docstring for `RobertaForSequenceClassification` says ``` RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks ``` Looking at the code, this does not seem correct. Here, the RoBERTa output is fed into an instance of the class `RobertaClassificationHead`, which feeds the pooled output into a multilayer feedforward-network with one hidden layer and tanh activation. So this is more than only a simple linear layer. I have two questions: 1. Should the documentation reflect this different classification head for RoBERTa? 2. Where does this classification head originally come from? I could not find a citable source where such a "deep" classification head is used. The original RoBERTa paper only seems to state that their task-specific fine-tuning procedure is the same as BERT uses (which is only a linear layer). I would be glad if someone could shed light on this.
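For readers who want to see the structure being discussed, here is a condensed paraphrase of the classification head (dimensions and the dropout value are illustrative defaults, not copied verbatim from the library): the dense + tanh step plays the role of the "pooler", and `out_proj` is the final linear classification layer.

```python
import torch
import torch.nn as nn

class RobertaClassificationHeadSketch(nn.Module):
    """Condensed paraphrase of the head discussed above (not the exact library code)."""

    def __init__(self, hidden_size=768, num_labels=2, dropout=0.1):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)    # "pooling" dense layer
        self.dropout = nn.Dropout(dropout)
        self.out_proj = nn.Linear(hidden_size, num_labels)   # final linear projection

    def forward(self, features):
        x = features[:, 0, :]            # hidden state of the <s> token
        x = self.dropout(x)
        x = torch.tanh(self.dense(x))    # dense + tanh = the "pooled output"
        x = self.dropout(x)
        return self.out_proj(x)          # project to num_labels
```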
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8776/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8776/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8775
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8775/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8775/comments
https://api.github.com/repos/huggingface/transformers/issues/8775/events
https://github.com/huggingface/transformers/issues/8775
750,413,539
MDU6SXNzdWU3NTA0MTM1Mzk=
8,775
Converting all model Config classes to dataclasses
{ "login": "norabelrose", "id": 39116809, "node_id": "MDQ6VXNlcjM5MTE2ODA5", "avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4", "gravatar_id": "", "url": "https://api.github.com/users/norabelrose", "html_url": "https://github.com/norabelrose", "followers_url": "https://api.github.com/users/norabelrose/followers", "following_url": "https://api.github.com/users/norabelrose/following{/other_user}", "gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}", "starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions", "organizations_url": "https://api.github.com/users/norabelrose/orgs", "repos_url": "https://api.github.com/users/norabelrose/repos", "events_url": "https://api.github.com/users/norabelrose/events{/privacy}", "received_events_url": "https://api.github.com/users/norabelrose/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "I think it would be nice indeed.\r\n\r\nIt's been on the TO-DO list for a long time (cc @julien-c) but I think nobody's reached it yet so feel free to tackle it :)\r\n\r\nFull (and tested) backward compatibility will be paramount though", "@thomwolf Thanks for responding— I can start working on this once I get done with my current project, which is adding support for Performer-style attention ([see issue here](https://github.com/huggingface/transformers/issues/7675)).", "Seems like this issue is a bit stale but I could use it as a first issue to start contributing. Mind if I take this?", "What do you think @sgugger @patrickvonplaten?", "Not sure it's worth the time: what exactly would we gain from it apart avoiding storing all args from the init? Since we would need to convert all configurations together (for inheritance you need to go from dataclasses to dataclasses) this is work that can't be split across several PRs and needs to happen all at once.\r\n\r\nI also had trouble several times in `TrainingArguments` where the fact it's a dataclass made things harder than they should be, so we may very well lost some of the features of configs (settings params with harmonized names for instance).", "> Not sure it's worth the time: what exactly would we gain from it apart avoiding storing all args from the init? Since we would need to convert all configurations together (for inheritance you need to go from dataclasses to dataclasses) this is work that can't be split across several PRs and needs to happen all at once.\r\n> \r\n> I also had trouble several times in `TrainingArguments` where the fact it's a dataclass made things harder than they should be, so we may very well lost some of the features of configs (settings params with harmonized names for instance).\r\n\r\nI think this is a bit easier than you thought because `class` can inherit from `dataclass` and vice versa. We don't have to have a gigantic PR that changes 150+ files. We can change `PretrainedConfig` into `dataclass` first and do the rest in separate PRs in a backward compatible fashion. See the example blow: \r\n```\r\n@dataclass\r\nclass PretrainedConfig:\r\n x: int = 10\r\n y: int = 15\r\n \r\n def __post_init__(self):\r\n # for the logic that wouldn't fit in the data class constructor, we can add it here\r\n self.z = self.x+ self.y\r\n\r\n@dataclass\r\nclass AlbertConfig(PretrainedConfig):\r\n xy: int = 100\r\n\r\n# this should be backward compatible\r\nclass BertConfig(PretrainedConfig):\r\n def __init__(self, a, b, *args, **kwargs):\r\n super().__init__(*args, **kwargs)\r\n self.a = a\r\n self.b = b\r\n```\r\nBut I do agree that this is not the best ROI and quite a bit of trivial work just to shave off some boilerplate code.", "In that case, okay for me if you want to try to convert `PretrainedConfig` first. If the tests don't pass and you struggle to fix them though, don't spend too much time on it and look for another way to contribute :-)", "Hmm this is more complicated than I thought. `PretrainedConfig`, with all that logic in the constructor, properties and class methods, is too heavy of a class to be a textbook `dataclass`. You are right that I should look for something else :)" ]
1,606
1,651
null
CONTRIBUTOR
null
It seems that we could save a lot of boilerplate code and potentially prevent some bugs if we migrated all of the model config classes over to being dataclasses. Already many of our classes (BaseModelOutput, TrainingArguments, etc.) are dataclasses, so we are already committed to having dataclasses as a dependency. It's relatively low priority, but I would be willing to help implement the change since I'm kind of a neat freak about code.
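To make the proposal concrete, here is a hedged sketch (illustrative field names only, not a real model config) of what a dataclass-based config could look like compared to hand-written `__init__` boilerplate:

```python
from dataclasses import dataclass, field, asdict

# Illustrative only: a toy config written as a dataclass. Field names and
# defaults are placeholders, not taken from any real model in the library.
@dataclass
class ToyConfig:
    vocab_size: int = 30522
    hidden_size: int = 768
    num_hidden_layers: int = 12
    id2label: dict = field(default_factory=lambda: {0: "LABEL_0", 1: "LABEL_1"})

    def to_dict(self):
        # serialization comes almost for free with dataclasses
        return asdict(self)

config = ToyConfig(hidden_size=1024)
print(config.to_dict())
```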
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8775/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8775/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/8774
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8774/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8774/comments
https://api.github.com/repos/huggingface/transformers/issues/8774/events
https://github.com/huggingface/transformers/pull/8774
750,238,291
MDExOlB1bGxSZXF1ZXN0NTI2OTM2Njc3
8,774
Big model table
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
COLLABORATOR
null
# What does this PR do? This PR adds a big table to the first page of the doc, indicating whether each of our models has support for a slow/fast tokenizer, PyTorch, TensorFlow and Flax. The result can be found [here](https://125258-155220641-gh.circle-artifacts.com/0/docs/_build/html/index.html) (scroll a bit down). It is updated automatically via `make fix-copies` and checked for updates in `make quality`, being built from the content of the auto models module. There were a few issues with the imports on the Flax side that I fixed in passing, and I renamed a constant to add the `FLAX` prefix. @mfuntowicz this doesn't really change anything but pinging you just so you're aware.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8774/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8774/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8774", "html_url": "https://github.com/huggingface/transformers/pull/8774", "diff_url": "https://github.com/huggingface/transformers/pull/8774.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8774.patch", "merged_at": 1606323736000 }
https://api.github.com/repos/huggingface/transformers/issues/8773
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8773/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8773/comments
https://api.github.com/repos/huggingface/transformers/issues/8773/events
https://github.com/huggingface/transformers/issues/8773
750,092,949
MDU6SXNzdWU3NTAwOTI5NDk=
8,773
saving checkpoints on gs bucket
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
NONE
null
Hi, when running on the cloud, saving checkpoints to a GS bucket does not work. Could you help please? Thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8773/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8773/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8772
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8772/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8772/comments
https://api.github.com/repos/huggingface/transformers/issues/8772/events
https://github.com/huggingface/transformers/issues/8772
750,068,293
MDU6SXNzdWU3NTAwNjgyOTM=
8,772
Possible to add additional features as input to TFBertForSequenceClassification?
{ "login": "brandonbell11", "id": 51493518, "node_id": "MDQ6VXNlcjUxNDkzNTE4", "avatar_url": "https://avatars.githubusercontent.com/u/51493518?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brandonbell11", "html_url": "https://github.com/brandonbell11", "followers_url": "https://api.github.com/users/brandonbell11/followers", "following_url": "https://api.github.com/users/brandonbell11/following{/other_user}", "gists_url": "https://api.github.com/users/brandonbell11/gists{/gist_id}", "starred_url": "https://api.github.com/users/brandonbell11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brandonbell11/subscriptions", "organizations_url": "https://api.github.com/users/brandonbell11/orgs", "repos_url": "https://api.github.com/users/brandonbell11/repos", "events_url": "https://api.github.com/users/brandonbell11/events{/privacy}", "received_events_url": "https://api.github.com/users/brandonbell11/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!" ]
1,606
1,606
1,606
NONE
null
Say I have a binary classification problem, but in addition to the sentence I'd also like to input some scalar value. Is it possible to just tack on this scalar as input to the last linear layer of BERT? For example, I'd like to detect whether a particular sentence is from my source data or generated, and I know that many instances of a repeated word increase the likelihood that it is a generated sentence. So I'd like to pass the sentence itself into BERT as well as a scalar feature such as the number of unique words in the sentence.
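One possible way to do this (a hedged sketch, not an existing `transformers` feature): keep `TFBertModel` as the encoder, concatenate the extra scalar with the pooled output, and train a small classification head on top. The checkpoint name and sequence length below are placeholders.

```python
import tensorflow as tf
from transformers import TFBertModel

# Hedged sketch: combine BERT's pooled output with an extra scalar feature.
bert = TFBertModel.from_pretrained("bert-base-uncased")  # placeholder checkpoint

input_ids = tf.keras.Input(shape=(128,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(128,), dtype=tf.int32, name="attention_mask")
extra_feature = tf.keras.Input(shape=(1,), dtype=tf.float32, name="extra_feature")

pooled = bert(input_ids, attention_mask=attention_mask).pooler_output
combined = tf.keras.layers.Concatenate()([pooled, extra_feature])
hidden = tf.keras.layers.Dense(64, activation="relu")(combined)
logits = tf.keras.layers.Dense(1)(hidden)  # single logit for binary classification

model = tf.keras.Model([input_ids, attention_mask, extra_feature], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))
```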
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8772/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8772/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8771
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8771/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8771/comments
https://api.github.com/repos/huggingface/transformers/issues/8771/events
https://github.com/huggingface/transformers/issues/8771
750,033,515
MDU6SXNzdWU3NTAwMzM1MTU=
8,771
Model Parallelism and Big Models
{ "login": "alexorona", "id": 11825654, "node_id": "MDQ6VXNlcjExODI1NjU0", "avatar_url": "https://avatars.githubusercontent.com/u/11825654?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexorona", "html_url": "https://github.com/alexorona", "followers_url": "https://api.github.com/users/alexorona/followers", "following_url": "https://api.github.com/users/alexorona/following{/other_user}", "gists_url": "https://api.github.com/users/alexorona/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexorona/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexorona/subscriptions", "organizations_url": "https://api.github.com/users/alexorona/orgs", "repos_url": "https://api.github.com/users/alexorona/repos", "events_url": "https://api.github.com/users/alexorona/events{/privacy}", "received_events_url": "https://api.github.com/users/alexorona/received_events", "type": "User", "site_admin": false }
[ { "id": 2627272588, "node_id": "MDU6TGFiZWwyNjI3MjcyNTg4", "url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20Parallel", "name": "Model Parallel", "color": "8B66A5", "default": false, "description": "Model Parallelilsm Implementations" }, { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "Thank you, @alexorona!\r\n\r\nI'm still in the process of gathering info/reading up and doing some small experimentation, so will post my thoughts once I have something concrete to share.\r\n\r\nHere are some resources if someone wants to join in:\r\n\r\nAbbreviations:\r\n\r\n- MP = Model Parallelism\r\n- DP = Data Parallelism\r\n- PP = Pipeline Parallelism\r\n\r\nResources:\r\n\r\n- Parallel and Distributed Training tutorials at pytorch - a handful, starting with https://pytorch.org/tutorials/beginner/dist_overview.html\r\n\r\n- fairscale\r\n * github https://github.com/facebookresearch/fairscale\r\n * the MP part of fairscale is a fork of https://github.com/NVIDIA/Megatron-LM\r\n\r\n- ZeRO and deepspeed:\r\n * paper ZeRO: Memory Optimizations Toward Training Trillion Parameter Models https://arxiv.org/abs/1910.02054\r\n * paper ZeRO-Offload: Democratizing Billion-Scale Model Training https://arxiv.org/abs/1910.02054\r\n * detailed blog posts with diagrams: \r\n - https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/\r\n - https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/\r\n - https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/\r\n * github https://github.com/microsoft/DeepSpeed\r\n * deepspeed examples git https://github.com/microsoft/DeepSpeedExamples\r\n * deepspeed in PL https://github.com/PyTorchLightning/pytorch-lightning/issues/817\r\n * deepspeed in PT https://github.com/pytorch/pytorch/issues/42849\r\n * discussion of the paper with visuals https://www.youtube.com/watch?v=tC01FRB0M7w\r\n\r\n- Pipeline Parallelism\r\n * DeepSpeed https://www.deepspeed.ai/tutorials/pipeline/\r\n * Fairscale https://fairscale.readthedocs.io/en/latest/api/nn/pipe.html\r\n * GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism https://arxiv.org/abs/1811.06965\r\n * PipeDream: Fast and Efficient Pipeline Parallel DNN Training https://arxiv.org/abs/1806.03377\r\n ", "Update: so we have \r\n* fairscale's sharded_ddp pretty much ready to go https://github.com/huggingface/transformers/pull/9208\r\n* and deepspeed is nicely coming along https://github.com/huggingface/transformers/pull/9211\r\n\r\nI don't have proper benchmarks yet, but I can definitely see 3-5 times less gpu ram usage! So these would be the first go-to solution when a model doesn't fit onto a single GPU.", "OK, so studying @alexorona's t5 MP implementation I think we have a few issues related to how we spread out the models across different devices. \r\n\r\nFor the purpose of this discussion let's use a simplistic approach of having just 2 GPUs (g1 and g2)\r\n\r\n@alexorona's current approach is to assume that encoder and decoder are of the same size and then split 1/2 encoder layers onto g1 and the other half onto g2. Repeat the same for decoder.\r\n\r\nThis approach has 3 issues:\r\n\r\n1. it doesn't work if encoder and decoder aren't of the same size, which is the case with many models. \r\n\r\n2. it introduces unnecessary copying of data from g1 to g2 in the middle of encoder and then again in the middle of decoder, rather than doing just one copy between end of encoder and beginning of decoder. 3 times vs 1 (in our simplistic 2-gpu example). \r\n\r\n3. 
it leaves out all other layers from the device map and assigns them to the first or the last device in a hardcoded way depending to where they fit better, so the user has no control over where these go.\r\n\r\nIt does make the implementation relatively simple, since we just need to move half the layers of the encoder to g1 and the other half to g2 and bring the inputs/outputs to the right devices.\r\n\r\n* Issue 1 can be fixed by providing 2 device maps - one for encoder and a different one for decoder. They would be the same if `len(encoder) == len(decoder)`. i.e. we are still using @alexorona, split-encoder and split-decoder approach.\r\n\r\n* Issue 2 can be solved again by 2 separate device maps, but the first one will map encoder - the second decoder. So there will be no splitting of the layers of encoder or decoder between separate devices. I think I may try to use this solution for Bart. \r\n```\r\nencoder_device_map > {0 => [1...6]}\r\ndecoder_device_map=> {1 => [1..6]}\r\n```\r\n(note: I'm using a non-python notation of a range here)\r\n\r\nIt will be trickier to allow overlap if the number of layers is different between encoder and decoder - say 6:9 or 6:12 - In which case it might be:\r\n```\r\nencoder_device_map > {0 => [1...6]} # 6 layer encoder\r\ndecoder_device_map=> {0 => [1..2], 1=> [3..9]} # 9 layer decoder\r\n```\r\nSo the model will need to be able to transparently handle switching layers and inputs/outputs not only through its encode/decoder layers but also from encoder to decoder - but it's quite doable.\r\n\r\nThis uneven situation would also be the case on some weird setups like mine where the gpus are of different sizes. On my setup I have one card of 8GB and another 24GB. This won't be an issue with @alexorona's current implementation.\r\n\r\n* To solve Issue 3 would be much more complicated as then almost any main layer/param can be on any device. Not sure about this one. It'd be trivial if pytorch could automatically bring inputs to the device of the params. I sent out a feeler for such possibility here https://github.com/pytorch/pytorch/issues/49961\r\n\r\nIf any of you have had a chance to think about possible solutions and some totally different ways of approaching that please share your insights.", "I was so full of hope that a simple dictionary could serve as a `device_map` for everything, but now you have shattered my blissful ignorance @stas00. But thanks so much for pointing this out! Super important! The characterization is not quite right and I think it's because you're using 2 GPUs, but the problem you identified is real. Basically both the decoder and encoder use the same map, so the first attention block of the decoder is located on the same device as the first attention block of the encoder. The performance degradation is trivial because the hand-off between GPUs when you have 8 or less is pretty efficient (when you have more, there's problems you have to work around by changing the NCCL environment variables). I thought about trying to do what you've suggested, but it meant that the `device_map` would have to get more complicated, which I was trying to avoid. However, if some of the decoder architectures have a different number of layers in the decoder than the encoder, the generalizability of the implementation will just collapse. Oh well. It was nice while it lasted.\r\n\r\nIt looks like you've really busy the last week. Responding to your comments and PRs...", "Thank you for your follow up, @alexorona. 
\r\n\r\nAs you're saying that from your experience the copying overhead is negligible then your current solution would work perfectly fine in some situations, like the balanced t5, but will need to be altered in others. So very likely it's this and that, rather than not this but that. i.e. no shuttered hopes.\r\n\r\nAnd if this doesn't fit in other situations it can be extended with a separate device_map for encoder and decoder. Perhaps for some models it'd be most efficient to keep the encoder on one set of devices and decoder on the other, and others shared. So that means we need to come with a way of accepting a variety of different device maps.\r\n\r\nPerhaps, we make the device_map to have two parts, but the second part (decoder) to be optional and if not passed then the first one is used for both? Then the simple solution remains mainly unchanged.\r\n\r\nMay I ask if you have used some existing implementation to model your current implementation after, and perhaps you have a list of various MP implementations so that we could study and find the most suitable way that would fit. So far I have only studied the way you approached it.\r\n\r\nThank you.\r\n\r\np.s. here are some examples of models with different encoder/decoder sizes:\r\n* https://huggingface.co/models?search=mbart_\r\n* https://huggingface.co/models?search=allenai%2Fwmt", "I have a few follow up questions, @alexorona \r\n\r\n1. on use of `torch.cuda.empty_cache()` - I guess as long as it remains in `deparallelize` it is not really going to interfere with whatever normal caching is going on. I don't think it will do what you intended it to do with an explicit `gc.collect()` as I explained in https://github.com/huggingface/transformers/pull/9354\r\n\r\n2. when do you think it's better to use this split as you implemented it (again simplifying to 2 gpus 6 layers in encoder and same in decoder):\r\n```\r\n encoder decoder\r\ngpu0 1 2 3 1 2 3\r\ngpu1 4 5 6 4 5 6\r\n```\r\nvs giving the whole gpu to one of them:\r\n```\r\n encoder decoder\r\ngpu0 1 2 3 4 5 6 \r\ngpu1 1 2 3 4 5 6\r\n```\r\n\r\nThank you!", "@alexorona I had a chance to briefly look at your approach to model-parallelism via explicit device map construction. What are your thoughts on extending this approach via the construction of a generic Megatron-style `mpu` object that implements basic methods such as `get_{model,data}_parallel_{rank,group,world_size}()`? My understanding is that DeepSpeed works with any model-parallelism approach that implements these methods (the `mpu` object needs to be passed to `deepspeed.initialize()`), it doesn't have to necessarily be a tensor-splicing approach like Megatron.\r\n\r\nWould it make sense to extend/tweak the device map approach to model-parallelism to fit within the `mpu` setup, as opposed to trying to get deepspeed's memory optimization primitives to work with the MP implementation without leveraging `mpu`?", "@alexorona, I think I found at least one culprit for needing `torch.cuda.set_device(id)` all over the place. There could be more than one culprit, but at least with pytorch-nightly I have to add it in a bunch of places if `apex.normalization.FusedLayerNorm` is used. https://github.com/NVIDIA/apex/issues/1022 If I remove its use, I don't need any `torch.cuda.set_device(id)`.\r\n\r\nOn the other hand I don't see `apex.normalization.FusedLayerNorm` is being used in either t5 or gpt2. So perhaps it's something else. 
I see many bug reports wrt to switching devices and some ops failing without `torch.cuda.set_device(id)` or some solid pytorch op running just before it. It sounds like a bug in some pytorch operations.\r\n", "Meanwhile I've finished porting `BartForConditionalGeneration` to MP and pretty much adopted a variation of your device_map, so it won't change much from your original design if accepted.\r\n\r\nIt supports either type of map - your split approach or the one I proposed (flat). Here are some examples:\r\n\r\n```\r\ndevice_maps_flat = {\r\n \"sshleifer/tinier_bart\": {\r\n \"encoder\": {0: [0, 1] },\r\n \"decoder\": {1: [0] },\r\n },\r\n \"sshleifer/distilbart-xsum-6-6\": {\r\n \"encoder\": {0: [0, 1, 2, 3, 4, 5] },\r\n \"decoder\": {1: [0, 1, 2, 3, 4, 5] },\r\n },\r\n}\r\n\r\n\r\ndevice_maps_split = {\r\n \"sshleifer/tinier_bart\": {\r\n \"encoder\": {0: [0],\r\n 1: [1],\r\n },\r\n \"decoder\": {1: [0] },\r\n },\r\n \"sshleifer/distilbart-xsum-6-6\": {\r\n \"encoder\": {0: [0, 1, 2],\r\n 1: [3, 4, 5],\r\n },\r\n \"decoder\": {0: [0, 1, 2],\r\n 1: [3, 4, 5],\r\n },\r\n },\r\n}\r\n```\r\n\r\nI think down the road we could support other types by simply using different keys for whatever other configuration is desired.\r\n\r\nI think eventually we will need to benchmark the different splits and see which one is more efficient. e.g. the flat approach currently suffers from the shared embeddings since they need to be constantly switched back and forth between devices!\r\n\r\nI also have much improved magical device switching functions so it should be much faster to port to MP in the future.\r\n\r\nOne other design change I will propose is to drop first/last devices and instead have `self.main_device`, so that everything happens on just one device and we only send to other devices whatever needs to be offloaded - layer/block work that is. So probably it'd mean that the main device should have less than equal number of layers/blocks assigned to it as it'll use more memory for all the inputs and outputs. I still need to polish this idea.", "We also may need to take into consideration @osalpekar's suggestion at https://github.com/pytorch/pytorch/issues/49961#issuecomment-754306157 - I haven't studied that side of things yet so can't comment at the moment. On one side it appear much more complex to setup, on the other side it might make things much easier model-side-wise. If you already familiar with that side of things please share your insights.\r\n", "And another suggestion is to potentially use Pipe Parallelism here: https://github.com/pytorch/pytorch/issues/49961#issuecomment-754326342 by @pritamdamania87\r\n\r\nThe main issue would be that it'll be enabled in pt-1.8\r\n\r\nBut @pritamdamania87 raises a super-important point - and that the current implementation doesn't take advantage of the multiple gpus, other than for their memory. So all the other gpus idle while one works, which is probably not what we want.\r\n\r\nUnless I'm missing something then this means that the current approach that we have been discussing (and released) is really a no-go. 
Please correct me if I'm wrong.", "Pipeline parallelism is already supported in DeepSpeed, although I haven't played around with it.\r\n\r\nhttps://www.deepspeed.ai/tutorials/pipeline/", "yes, and `fairscale` too!\r\n", "@alexorona, please have a look at this super-important comment https://github.com/pytorch/pytorch/issues/49961#issuecomment-754319348\r\nwhich I understand that `torch.cuda.set_device()` is not just for fixing bugs in some pytorch ops, but it's actually an essential tool to avoid back-n-forth copying of data which happens when `torch.cuda.set_device()` is not set to the device the ops are happening on. Ouch. I couldn't find any docs covering that culprit. \r\n\r\nWe were trying to get rid of it. Now it looks like we need to make sure we have it in every place we switch to a new device. So when switching to a new device we need:\r\n\r\n1. `torch.cuda.set_device(device)`\r\n2. `inputs.to(device)`\r\n3. `layer.to(device)`\r\n\r\n\r\n", "I was asked to share a sort of design/explanation of what we have implemented so far, so here you go (@alexorona please correct me if I have missed anything - thank you!)\r\n\r\n-------------------\r\n\r\n\r\n\r\nHere is an example of a `sshleifer/distilbart-xsum-6-6` `BartForConditionalGeneration` model:\r\n\r\n```\r\n (model): BartModel(\r\n (shared): Embedding(50264, 1024, padding_idx=1)\r\n (encoder): BartEncoder(\r\n (embed_tokens): Embedding(50264, 1024, padding_idx=1)\r\n (embed_positions): BartLearnedPositionalEmbedding(1026, 1024, padding_idx=1)\r\n (layers): ModuleList( 6 x BartEncoderLayer)\r\n (layernorm_embedding): FusedLayerNorm(torch.Size([1024]), eps=1e-05, elementwise_affine=True)\r\n )\r\n (decoder): BartDecoder(\r\n (embed_tokens): Embedding(50264, 1024, padding_idx=1)\r\n (embed_positions): BartLearnedPositionalEmbedding(1026, 1024, padding_idx=1)\r\n (layers): ModuleList( 6 x BartDecoderLayer)\r\n (layernorm_embedding): FusedLayerNorm(torch.Size([1024]), eps=1e-05, elementwise_affine=True)\r\n )\r\n )\r\n (lm_head): Linear(in_features=1024, out_features=50264, bias=False)\r\n)\r\n```\r\nNote that I collapsed the huge bulk of it and it's represented by just 2 lines that I wrote myself - it was not the output of the model dump.\r\n```\r\n (layers): ModuleList( 6 x BartEncoderLayer)\r\n (layers): ModuleList( 6 x BartDecoderLayer)\r\n```\r\nthis is some 90% of the model and that's what we want to spread out through multiple gpus.\r\n\r\n\r\n\r\n\r\n\r\n\r\nSo we have the bulk of memory used by 6 x `BartEncoderLayer` and 6 x `BartDecoderLayer`, plus some other components.\r\n\r\nFor the simplicity of the example let's say we have 2 gpus we want to split the model into.\r\n\r\nCurrently the idea is to put the 6 encoder layers on gpu 0 and the same for decoder layers but on gpu 1:\r\n```\r\ndevice_map = {\r\n \"encoder\": {0: [0, 1, 2, 3, 4, 5] },\r\n \"decoder\": {1: [0, 1, 2, 3, 4, 5] },\r\n }\r\n```\r\nor alternatively, splice each group as following:\r\n```\r\ndevice_map = {\r\n \"encoder\": {0: [0, 1, 2],\r\n 1: [3, 4, 5],\r\n },\r\n \"decoder\": {0: [0, 1, 2],\r\n 1: [3, 4, 5],\r\n },\r\n }\r\n```\r\nand the remaining non-encoder/decoder layer modules can be all on gpu 0 or grouped closer to where they are needed. 
We still haven't quite finalized that map.\r\n\r\nOf course, other models may have more or less layers and they don't have to have the same number of layers in encoder and decoder.\r\n\r\n\r\nNow that we have the map, we can place different layers/blocks on different devices\r\n\r\n\r\nA simplified explanation would be with the usual drawing of the deep nn (random blocks in this example)\r\n```\r\nblocks | [blk] ... [blk 2] | [blk 3] ... [blk 5] | [blk 6] ... [blk 7] | [head]\r\ndevices | 0 | 1 | 2 | 0\r\n```\r\n\r\nImplementation details:\r\n\r\n1. create model\r\n2. `model.parallelize()`: run through the model's layers and remap them to specific devices as defined by the device map by simply runnin `to(device)`\r\n3. inside `forward` we switch inputs to the same device as the layer's params using a handy wrapper I shared here: https://github.com/pytorch/pytorch/issues/49961#issuecomment-753441248\r\n4. some outputs need to be brought back to the device where the logic of the main program happens (e.g. beam search) \r\n\r\nComplications:\r\n\r\n* shared embeds are a performance issue - we have to switch them back and forth between different devices.\r\n* because some layers have params on different devices the developer has to explicitly choose which device to switch input to\r\n* looks like we may need to sort out that `torch.cuda.set_device()` which apparently is needed too - sometimes to cover for bugs in pytorch, other times for performance - I haven't figured it out yet, I opened an issue:\r\nhttps://github.com/pytorch/pytorch/issues/50112\r\n* beam search works extremely slow with this approach - 10x slowdown.\r\n\r\nTo port a model one needs to apply the device map (stage 2 above) and then gradually deal with wrong device errors, by remapping the inputs to the devices of the params of the layer. Alex was doing each variable manually, which is a huge pain. I automated this process (it's in 2 PRs that haven't been merged yet, the Bart PR has a smarter function)\r\n\r\nTransitions:\r\n\r\n- Alex defined first/last devices to work with. In Bart MP I shifted to a different mapping where everything happens on main_device (say 0), and we only ever switch devices for those stacks of encoder/decoder layers that repeat, but all the helping params remain on device 0, which greatly simplifies things.\r\n\r\n- So when we pass data to the parallelized model we `.to(main_device)` and most of the layers are already on the main_device, so now we only need to switch devices when the stacks end. So if you take the following map:\r\n\r\n```\r\ndevice_map = {\r\n \"encoder\": {0: [0, 1, 2, 3, 4, 5] },\r\n \"decoder\": {1: [0, 1, 2, 3, 4, 5] },\r\n }\r\n```\r\n\r\nHere one only need to change devices twice\r\n1. once when switching between encoder.5 and encoder.0 and\r\n2. 
once more when returning from forward of decoder.5,\r\n\r\nbut of course, since the user may choose to split them vertically as so:\r\n\r\n```\r\ndevice_map = {\r\n \"encoder\": {0: [0, 1, 2],\r\n 1: [3, 4, 5],\r\n },\r\n \"decoder\": {0: [0, 1, 2],\r\n 1: [3, 4, 5],\r\n },\r\n }\r\n```\r\n\r\nthere will be more switches here.\r\n\r\nSo with the automation of switching `forward` input to the desired device it's only a few surprises that one has to resolve, since each model has some unexpected needs.\r\n\r\nOverall, with the great foundation @alexorona laid out and with a bit of the automation I added the implementation is solid and would work just fine for those who can afford idling gpus.\r\n\r\nWhat we need to figure out next is how these idling gpus will co-operate with all the other great components we have been working on (fairscale/deepspeed/pytorch pipelines/etc.)\r\n", "Great recap @stas00 ", "update: I made t5 work with HF trainer and --model_parallel in eval mode https://github.com/huggingface/transformers/pull/9323 - needed to copy the outputs back to the first device - it's more or less fine in the training stage (it worked in the first place), **but w/ beam search size 4 it's 10x slower on eval w/ MP than w/o MP** - it gets hit badly by the back-n-forth data copying.", "The more I'm reading on various Parallelization strategies the more I see how confusing the terminology is.\r\n\r\nWhat's most call Model Parallel (MP) should probably be called \"Model Distributed\" - since all we are doing here is splitting the model across several GPUs, as such \"Model Distributed\" is a much closer to reality term.\r\n\r\nNext comes Pipeline Parallelism (PP) - where we split the mini-batch into micro-batches and feed into Model Parallel / Model Distributed, so that while a GPU that completed its `forward` idles waiting for other GPUs to compute their chunks of layers of the model and backprop, it can start on a new input. It is a Pipeline for sure, is this parallel though - I have a hard time calling it Parallel, since all the ops are sequential still.\r\n\r\nIt's much easier to understand this by studying this diagram from the [GPipe paper](https://ai.googleblog.com/2019/03/introducing-gpipe-open-source-library.html)\r\n\r\n![mp-pp](https://user-images.githubusercontent.com/10676103/104242585-3c23f280-5414-11eb-8d83-c7ac109e36f7.png)\r\n\r\nThis diagram makes it very clear why what we have implemented is what it calls a a naive MP, and you can see the huge idling with 4 GPUs.\r\n\r\nIt then shows how it tries to resolve this idling problem with Pipeline. There is still idling but less so.\r\n\r\nIt also misrepresents the length of time forward and backward paths take. From asking the experts in general backward is ~2x slower than forward. But as I was corrected on slack, the length of the bubble is about the same regardless of their execution speed. (Thanks @deepakn94)\r\n\r\nAnd Deepak also stressed out that since with PP there is a splitting into micro-batches, the effective batch size has to be big enough, otherwise PP will be idling too - so it requires experimentation to find a good batch size.\r\n\r\nBottom line, PP is an improved version of MP, according to my current understanding. I'm still still researching. \r\n\r\nI think the real Parallelization is the [ZeRO paper](https://arxiv.org/abs/1910.02054) where Sharding/Partitioning is done and then it's truly parallel processing, but I'm still trying to understand what exactly is going on there. 
(Need to find a good diagram visually showing what it does) Grr, I see others use sharding/partitioning as a replacement for parallelism... so confusing. \r\n\r\nI updated https://github.com/huggingface/transformers/issues/8771#issuecomment-733224520 with resources on PP and next need to try to convert perhaps t5 to PP and see how it works in practice. There will be issues to overcome due to BN and tied weights.", "@deepakn94 helped me to finally grasp ZeRO-powered data parallelism, as it's described on this diagram from this [blog post](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/)\r\n![DeepSpeed-Image-1](https://user-images.githubusercontent.com/10676103/104272403-df472d00-5451-11eb-94da-53017aa6631d.png)\r\n\r\nSo it's quite simple conceptually, this is just your usual DataParallel (DP), except, instead of replicating the full model params, gradients and optimizer states, each gpu stores only a slice of it. And then at run-time when the full layer params are needed just for the given layer, all gpus sync to give each other parts that they miss - this is it.\r\n\r\nConsider this simple model with 3 layers and each layer has 3 params:\r\n```\r\nLa | Lb | Lc\r\n---|----|---\r\na0 | b0 | c0\r\na1 | b1 | c1\r\na2 | b2 | c2\r\n```\r\nLx being the layer and we have 3 layers, and ax being the weights - 3 weights\r\n\r\nIf we have 3 GPUs, the Sharded DDP (= Zero DP) splits the model onto 3 GPUs like so:\r\n\r\n```\r\nGPU0:\r\nLa | Lb | Lc\r\n---|----|---\r\na0 | b0 | c0\r\n\r\nGPU1:\r\nLa | Lb | Lc\r\n---|----|---\r\na1 | b1 | c1\r\n\r\nGPU2:\r\nLa | Lb | Lc\r\n---|----|---\r\na2 | b2 | c2\r\n```\r\n\r\nIn a way this is horizontal slicing, if you imagine the typical DNN diagram. Vertical slicing is where one puts whole layer-groups on different GPUs. But it's just the starting point.\r\n\r\nNow each of these GPUs will get the usual mini-batch as it works in DP:\r\n```\r\nx0 => GPU0\r\nx1 => GPU1\r\nx2 => GPU2\r\n```\r\n\r\nThe inputs are unmodified - they think they are going to be processed by the normal model.\r\n\r\nSo the inputs first hit the first layer La.\r\n\r\nLet's focus just on GPU0: x0 needs a0, a1, a2 params to do its forward path, but GPU0 has only a0 - so what it does is it gets sent a1 from GPU1 and a2 from GPU2. Now the forward step can happen.\r\n\r\nIn parallel GPU1 gets mini-batch x1 and it only has a1, but needs a0 and a2 params, so it gets those from GPU0 and GPU2. \r\n\r\nSame happens to GPU2 that gets input x2. It gets a0 and a1 from GPU0 and GPU1. \r\n\r\nAs soon as the calculation is done, the data that is no longer needed gets dropped - it's only used during the calculation.\r\n\r\nThe same is repeated at every other stage.\r\n\r\nAnd the whole larger thing is repeated for layer Lb, then Lc forward-wise, and then backward Lc -> Lb -> La.\r\n\r\nTo me this sounds like an efficient group backpacking weight distribution strategy:\r\n\r\n1. person A carries the tent\r\n2. person B carries the stove\r\n3. person C carries the entertainment system\r\n\r\nNow each night they all share what they have with others and get from others what the don't have, and in the morning they pack up their allocated type of gear and continue on their way. This is Sharded DDP / Zero DP.\r\n\r\nCompare this strategy to the simple one where each person has to carry their own tent, stove and entertainment system, which would be far more inefficient. 
This is DataParallel in pytorch.\r\n\r\nAnd I think pretty much everywhere I read Sharded == Partitioned, so I think those are synonyms in the context of distributed models.\r\n\r\n", "**edit: 2021-02-15: Note that `finetune_trainer.py` was moved to `examples/legacy/seq2seq/`, and there is a new script `run_seq2seq.py` that took over `finetune_trainer.py`, you will find transition notes [here](https://github.com/huggingface/transformers/issues/10036)**\r\n\r\nThe simplest way to quickly reproduce the following is to switch to the transformers sha of the time this was posted, that is:\r\n\r\n```\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\ngit checkout 7e662e6a3be0ece4 \r\n```\r\n\r\n--------------\r\n\r\nThe amazing discovery of the day is DeepSpeed's [Zero-Offload](https://www.deepspeed.ai/tutorials/zero-offload/). ZeRO-Offload is a ZeRO optimization that offloads the optimizer memory and computation from the GPU to the host CPU. \r\n\r\nYou can use DeepSpeed with a single GPU and train with huge models that won't normally fit onto a single GPU.\r\n\r\nFirst let's try to finetune the huge `t5-3b` with a 24GB rtx-3090:\r\n```\r\nexport BS=1; rm -r output_dir; CUDA_VISIBLE_DEVICES=0 PYTHONPATH=../../src USE_TF=0 ./finetune_trainer.py \\\r\n--model_name_or_path t5-3b --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval \\\r\n--do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 \\\r\n--logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 \\\r\n--overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate \\\r\n--eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 \\\r\n--val_max_target_length 128 --warmup_steps 5 --n_train 60 --n_val 10 --n_test 10 --fp16\r\n```\r\nNo cookie, even with BS=1\r\n```\r\nRuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 23.70 GiB total capacity; 21.37 GiB already allocated; 45.69 MiB free; 22.05 GiB reserved in total by PyTorch)\r\n```\r\n\r\nNow update your `transformers` to master, then install deepspeed:\r\n```\r\npip install deepspeed\r\n```\r\n\r\nand let's try again:\r\n```\r\nexport BS=20; rm -r output_dir; CUDA_VISIBLE_DEVICES=0 PYTHONPATH=../../src USE_TF=0 deepspeed --num_gpus=1 \\\r\n./finetune_trainer.py --model_name_or_path t5-3b --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro \\\r\n--do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 \\\r\n--logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 \\\r\n--overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate \\\r\n--eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 \\\r\n--val_max_target_length 128 --warmup_steps 5 --n_train 60 --n_val 10 --n_test 10 --deepspeed ds_config_1gpu.json --fp16\r\n```\r\net voila! we get a BS=20 trained just fine. I can probably push BS even further. 
It OOMed at BS=30.\r\n```\r\n2021-01-12 19:06:31 | INFO | __main__ | train_n_objs = 60\r\n2021-01-12 19:06:31 | INFO | __main__ | train_runtime = 8.8511\r\n2021-01-12 19:06:35 | INFO | __main__ | val_n_objs = 10\r\n2021-01-12 19:06:35 | INFO | __main__ | val_runtime = 3.5329\r\n2021-01-12 19:06:39 | INFO | __main__ | test_n_objs = 10\r\n2021-01-12 19:06:39 | INFO | __main__ | test_runtime = 4.1123\r\n```\r\n\r\nAmazing!\r\n\r\nImportant note - I used `CUDA_VISIBLE_DEVICES=0` to single out one gpu, but deepspeed has a bug now where it ignores that env var, so it'll be using the first GPU instead. microsoft/DeepSpeed#662 But hoping it will get fixed eventually.\r\n\r\nThe config file `ds_config_1gpu.json` is:\r\n```\r\n{\r\n \"fp16\": {\r\n \"enabled\": true,\r\n \"loss_scale\": 0,\r\n \"loss_scale_window\": 1000,\r\n \"hysteresis\": 2,\r\n \"min_loss_scale\": 1\r\n },\r\n\r\n \"zero_optimization\": {\r\n \"stage\": 2,\r\n \"allgather_partitions\": true,\r\n \"allgather_bucket_size\": 2e8,\r\n \"reduce_scatter\": true,\r\n \"reduce_bucket_size\": 2e8,\r\n \"overlap_comm\": true,\r\n \"contiguous_gradients\": true,\r\n \"cpu_offload\": true\r\n },\r\n\r\n \"optimizer\": {\r\n \"type\": \"Adam\",\r\n \"params\": {\r\n \"adam_w_mode\": true,\r\n \"lr\": 3e-5,\r\n \"betas\": [ 0.9, 0.999 ],\r\n \"eps\": 1e-8,\r\n \"weight_decay\": 3e-7\r\n }\r\n },\r\n\r\n \"scheduler\": {\r\n \"type\": \"WarmupLR\",\r\n \"params\": {\r\n \"warmup_min_lr\": 0,\r\n \"warmup_max_lr\": 3e-5,\r\n \"warmup_num_steps\": 500\r\n }\r\n }\r\n}\r\n\r\n```\r\n\r\nI had to lower the ZeRO buffers from the default 5e8 to 2e8, otherwise it was OOM'ing even on BS=1.\r\n\r\n**important**: DeepSpeed made some changes in the non-released version as of this writing and so the above config won't work anymore. 
It dropped `adam_w_mode` and added a proper `AdamW` optimizer (it was always there, but just not exposed normally), so replace that section with:\r\n\r\n```\r\n \"optimizer\": {\r\n \"type\": \"AdamW\",\r\n \"params\": {\r\n \"lr\": 3e-5,\r\n \"betas\": [ 0.9, 0.999 ],\r\n \"eps\": 1e-8,\r\n \"weight_decay\": 3e-7\r\n }\r\n },\r\n```\r\n\r\nAnd it's not optimized yet, I just found at least one config that worked for this simple proof-of-concept test.\r\n\r\nGo and check it out!\r\n\r\n**edit:** I was asked about RAM usage for this task, it was 71GB peak, I re-run the same command as above with:\r\n`/usr/bin/time -v ` before `deepspeed` and got:\r\n\r\n```\r\n User time (seconds): 117.12\r\n System time (seconds): 53.46\r\n Percent of CPU this job got: 122%\r\n Elapsed (wall clock) time (h:mm:ss or m:ss): 2:19.38\r\n Average shared text size (kbytes): 0\r\n Average unshared data size (kbytes): 0\r\n Average stack size (kbytes): 0\r\n Average total size (kbytes): 0\r\n Maximum resident set size (kbytes): 70907544\r\n Average resident set size (kbytes): 0\r\n Major (requiring I/O) page faults: 3245\r\n Minor (reclaiming a frame) page faults: 31346864\r\n Voluntary context switches: 16348\r\n Involuntary context switches: 52489\r\n Swaps: 0\r\n File system inputs: 1402864\r\n File system outputs: 11143504\r\n Socket messages sent: 0\r\n Socket messages received: 0\r\n Signals delivered: 0\r\n Page size (bytes): 4096\r\n Exit status: 0\r\n```\r\n\r\nSo the peak RSS entry is 71GB:\r\n```\r\n Maximum resident set size (kbytes): 70907544\r\n```\r\n\r\nThe doc is here: https://huggingface.co/transformers/master/main_classes/trainer.html#deepspeed\r\nAnd it's already slightly outdated - I need to modify it to cover that it works with single GPUs too!\r\n\r\n@alexorona, I think you'd be super-happy about this one.\r\n\r\np.s. 
if you need to setup the dir and the data, first do:\r\n```\r\ngit clone https://github.com/huggingface/transformers/\r\ncd transformers/\r\ncd examples/seq2seq\r\nwget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz\r\ntar -xzvf wmt_en_ro.tar.gz\r\n```\r\nbefore running any of the above scripts.\r\n\r\nOh, and I'm on pytorch-nightly since that's the only version that works at the moment with rtx-3090.", "**edit: 2021-02-15: Note that `finetune_trainer.py` was moved to `examples/legacy/seq2seq/`, and there is a new script `run_seq2seq.py` that took over `finetune_trainer.py`, you will find the transition notes [here](https://github.com/huggingface/transformers/issues/10036)**\r\n\r\nThe simplest way to quickly reproduce the following is to switch to the transformers sha of the time this was posted, that is:\r\n\r\n```\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\ngit checkout 7e662e6a3be0ece4 \r\n```\r\n\r\n--------------\r\n\r\nOK and to finish the day here are some benchmarks - thank you @sgugger for letting me run those on your machine with dual titan rtx.\r\n\r\nLet's start with the results table:\r\n\r\n\r\n| Method | max BS | train time | eval time |\r\n|---------------------------|--------|------------|-----------|\r\n| baseline | 16 | 30.9458 | 56.3310 |\r\n| fp16 | 20 | 21.4943 | 53.4675 |\r\n| sharded_ddp | 30 | 25.9085 | 47.5589 |\r\n| sharded_ddp+fp16 | 30 | 17.3838 | 45.6593 |\r\n| deepspeed w/o cpu offload | 40 | **10.4007** | 34.9289 |\r\n| deepspeed w/ cpu offload | **50** | 20.9706 | **32.1409** |\r\n\r\nBaseline + data setup was:\r\n```\r\ngit clone https://github.com/huggingface/transformers/\r\ncd transformers/\r\ncd examples/seq2seq\r\nwget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz\r\ntar -xzvf wmt_en_ro.tar.gz\r\nexport BS=16; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch \\\r\n--nproc_per_node=2 ./finetune_trainer.py --model_name_or_path t5-large --output_dir output_dir \\\r\n--adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_train --evaluation_strategy=steps --freeze_embeds \\\r\n--label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 \\\r\n--max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS \\\r\n--per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler \\\r\n--task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 500 \\\r\n--n_train 2000 --n_val 500\r\n```\r\n\r\nNotes:\r\n\r\n- We are doing a small train=2000, eval=500 items to do the comparisons. Eval does by default beam search size=4, so it's slower than training with the same number of samples, that's why I used 4x less eval items\r\n- task: translation\r\n- model: t5-large\r\n- We have 2x 24GB GPUs\r\n- DeepSpeed wasn't really designed for evaluation according to its developers but you can see it rocks there too.\r\n\r\nResults: Well, Deepspeed beats all solutions that were compared - it's much faster and can fit much bigger batches into the given hardware. as you can see from the previous post https://github.com/huggingface/transformers/issues/8771#issuecomment-759176685 - the cpu offloading while is slower on training it can fit more into your hardware. 
and it's the winner for eval!\r\n\r\nNote: these benchmarks aren't perfect as they take a lot of time to handle you can see that BS numbers are pretty rounded - surely they can be somewhat bigger and speed somewhat better as a result, so I'm sure both sharded ddp and deepspeed can be optimized further.\r\n\r\nBut that's a good start. As both sharded ddp and deepspeed are now in master https://huggingface.co/transformers/master/main_classes/trainer.html#trainer-integrations please go ahead and do your own benchmarks.\r\n\r\nAnd now the raw results - sorry it's not markdown'ed:\r\n\r\n```\r\n\r\n# setup\r\n\r\nconda install -y pytorch==1.7.1 torchvision cudatoolkit=10.2 -c pytorch\r\npip install deepspeed fairscale\r\n\r\n# versions\r\n\r\nPyTorch version: 1.7.1\r\nIs debug build: False\r\nCUDA used to build PyTorch: 10.2\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 20.04.1 LTS (x86_64)\r\nGCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0\r\nClang version: 10.0.0-4ubuntu1\r\nCMake version: version 3.16.3\r\n\r\nPython version: 3.8 (64-bit runtime)\r\nIs CUDA available: True\r\nCUDA runtime version: 10.0.130\r\nGPU models and configuration:\r\nGPU 0: TITAN RTX\r\nGPU 1: TITAN RTX\r\n\r\nNvidia driver version: 450.102.04\r\ncuDNN version: Probably one of the following:\r\n/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.7.6.5\r\n\r\ntransformers_version\": \"4.2.0dev0\", (master)\r\n\r\n# baseline\r\n\r\n\r\nmax that I could fit was BS=16\r\n\r\nexport BS=16; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path t5-large --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 500 --n_train 2000 --n_val 500\r\n\r\n\r\n01/13/2021 05:31:19 - INFO - __main__ - train_runtime = 30.9458\r\n01/13/2021 05:32:15 - INFO - __main__ - val_bleu = 25.8269\r\n01/13/2021 05:32:15 - INFO - __main__ - val_runtime = 56.331\r\n\r\n# w/ --fp16\r\n\r\ncould fit BS=20\r\n\r\nexport BS=20; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path t5-large --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 500 --n_train 2000 --n_val 500 --fp16\r\n\r\n01/13/2021 05:33:49 - INFO - __main__ - train_runtime = 21.4943\r\n01/13/2021 05:34:42 - INFO - __main__ - val_bleu = 25.7895\r\n01/13/2021 05:34:42 - INFO - __main__ - val_runtime = 53.4675\r\n\r\n\r\n------------------------------------------------\r\n\r\n# w/ --sharded_ddp\r\n\r\nto compare with BS=20\r\n\r\nexport BS=20; rm -r output_dir; 
PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path t5-large --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 500 --n_train 2000 --n_val 500 --sharded_ddp\r\n\r\n\r\n01/13/2021 06:26:11 - INFO - __main__ - train_runtime = 28.9404\r\n01/13/2021 05:36:16 - INFO - __main__ - val_bleu = 25.7201\r\n01/13/2021 05:36:16 - INFO - __main__ - val_runtime = 55.0909\r\n\r\nbut can fit more now, so same with BS=30\r\n\r\nexport BS=30; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path t5-large --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 500 --n_train 2000 --n_val 500 --sharded_ddp\r\n\r\n01/13/2021 06:28:02 - INFO - __main__ - train_runtime = 25.9085\r\n01/13/2021 05:39:08 - INFO - __main__ - val_bleu = 25.7178\r\n01/13/2021 05:39:08 - INFO - __main__ - val_runtime = 47.5589\r\n\r\n\r\n# w/ --sharded_ddp --fp16\r\n\r\nexport BS=20; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path t5-large --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 500 --n_train 2000 --n_val 500 --sharded_ddp --fp16\r\n\r\n01/13/2021 06:29:08 - INFO - __main__ - train_runtime = 21.4775\r\n01/13/2021 05:41:39 - INFO - __main__ - val_bleu = 25.7162\r\n01/13/2021 05:41:39 - INFO - __main__ - val_runtime = 53.2397\r\n\r\nbut can fit more now, so same with BS=30\r\n\r\n01/13/2021 06:30:03 - INFO - __main__ - train_runtime = 17.3838\r\n01/13/2021 05:43:56 - INFO - __main__ - val_bleu = 25.7314\r\n01/13/2021 05:43:56 - INFO - __main__ - val_runtime = 45.6593\r\n\r\n# w/ --deepspeed ds_config.json (stage 2 w/o cpu offloading)\r\n\r\nI changed the config file to:\r\n\r\n \"cpu_offload\": false\r\n\r\nexport BS=40; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 deepspeed ./finetune_trainer.py --model_name_or_path t5-large --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_train --evaluation_strategy=steps --freeze_embeds 
--label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 500 --n_train 2000 --n_val 500 --deepspeed ds_config.json\r\n\r\n01/13/2021 06:32:35 - INFO - __main__ - train_runtime = 10.4007\r\n01/13/2021 06:33:10 - INFO - __main__ - val_bleu = 25.9687\r\n01/13/2021 06:33:10 - INFO - __main__ - val_runtime = 34.9289\r\n\r\n\r\n# w/ --deepspeed ds_config.json (stage 2 w/ cpu offloading)\r\n\r\nif we lower the buffers to `1.5e8` and enable cpu offloading:\r\n\r\n \"allgather_bucket_size\": 1.5e8,\r\n \"reduce_bucket_size\": 1.5e8,\r\n \"cpu_offload\": true\r\n\r\nwe can get to BS=50!\r\n\r\nBS=50 rm -r output_dir; PYTHONPATH=../../src USE_TF=0 deepspeed ./finetune_trainer.py --model_name_or_path t5-large --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 500 --n_train 2000 --n_val 500 --deepspeed ds_config.json\r\n\r\n01/13/2021 06:40:51 - INFO - __main__ - train_runtime = 20.9706\r\n01/13/2021 06:41:23 - INFO - __main__ - val_bleu = 25.9244\r\n01/13/2021 06:41:23 - INFO - __main__ - val_runtime = 32.1409\r\n\r\nI'm pretty sure if the buffers are even smaller it could do even higher BS. But it's late and I'm going to sleep.\r\n\r\n```\r\n\r\nHere is the config file that was used for deepspeed: https://github.com/huggingface/transformers/blob/69ed36063a732c37fdf72c605c65ebb5b2e85f44/examples/seq2seq/ds_config.json\r\n", "Whoah! ZeRO stage 1: sharded optimizer has been just merged into pytorch! https://github.com/pytorch/pytorch/pull/46750\r\nWith complements of @blefaudeux and the FairScale and DeepSpeed teams!\r\n\r\nPipeline too: https://github.com/pytorch/pytorch/tree/master/torch/distributed/pipeline\r\n\r\nAnd more coming later: https://github.com/pytorch/pytorch/issues/42849\r\n", "> Whoah! ZeRO stage 1: sharded optimizer has been just merged into pytorch! [pytorch/pytorch#46750](https://github.com/pytorch/pytorch/pull/46750)\r\n> With complements of @blefaudeux and the FairScale and DeepSpeed teams!\r\n> \r\n> Pipeline too: https://github.com/pytorch/pytorch/tree/master/torch/distributed/pipeline\r\n> \r\n> And more coming later: [pytorch/pytorch#42849](https://github.com/pytorch/pytorch/issues/42849)\r\n\r\nthanks ! the whole fairscale suite will take a little more time, so it's good that HF is integrated already, the work will not be lost. Great [blog post](https://github.com/huggingface/blog/pull/71) also, and thanks for the numbers ! Some improvements planned over time speed wise within fairscale/shardedddp which should trickle down automatically, thinking for instance about the experimental optimizers in pytorch which flatten the params or better bucketing for the reduce part", "These are great news, @blefaudeux! 
Thank you for sharing.\r\n\r\nI hope you create a page on github with such news, so it'd be easy to keep abreast of the speed improvements and to appraise users of the need to update to this or that version if they want certain improvements/speed ups. \r\n\r\nIf it's not too much trouble that is.\r\n\r\np.s. my fantasy is that there will be a ZeRO Central, where updates from the all collaborating ZeRO implementations get posted.\r\n\r\ne.g. DeepSpeed just released a new paper: https://arxiv.org/abs/1910.02054 - this would have been a great candidate for such sharing.", "This is very impressive work!\r\n\r\nFrom the perspective of an end-user doing seq2seq (e.g. T5), running the above examples for T5-11B (both sharded_ddp and deepspeed) doesn't appear to be performing complete model parallelism (or, at least, I am getting OOM errors on a machine with four A100-SXM4-40GBs, Python 3.7, pull of HF ~4.3.0 master from yesterday, CUDA 11.0, Pytorch 1.7.1, DeepSpeed compiled from source with the A100 8.0 arch enabled for the A100, BS=1). I understand from the blog post this is likely because sharding is only currently implemented for the optimizer and gradients, but not the model parameters? Is there an interim suggestion for easily running these large models in 4.3? It looks like there's currently confusion since --model_parallel was removed in 4.2 (and some confusion about how to run large models using the /examples/ now, e.g. #9243 ) ", "> This is very impressive work!\r\n\r\nTotally agree. Those both teams and the inventors of ZeRO are awesome!\r\n \r\n> From the perspective of an end-user doing seq2seq (e.g. T5), running the above examples for T5-11B (both sharded_ddp and deepspeed)\r\n\r\none of them - not both. Will send a PR to block such attempts. https://github.com/huggingface/transformers/pull/9712/\r\n\r\nDeepSpeed already does sharded ddp. Slowly, slowly we will get a better understanding and better documentation.\r\n\r\n> doesn't appear to be performing complete model parallelism (or, at least, I am getting OOM errors on a machine with four A100-SXM4-40GBs, Python 3.7, pull of HF ~4.3.0 master from yesterday, CUDA 11.0, Pytorch 1.7.1, DeepSpeed compiled from source with the A100 8.0 arch enabled for the A100, BS=1). I understand from the blog post this is likely because sharding is only currently implemented for the optimizer and gradients, but not the model parameters?\r\n\r\nThat's correct. Not yet.\r\n\r\n* With fairscale you get sharding or optim/grads.\r\n* With deepspeed you get all that, plus cpu-offload, plus better memory management.\r\n\r\nWe would need to have Pipeline parallelism working to support 2D parallelism, which probably should fit t5-11b onto 4 gpus. I'm working on this at the moment, but run into multiple limitations of the PP implementations https://github.com/pytorch/pytorch/pull/50693 and https://github.com/microsoft/DeepSpeed/pull/659.\r\n\r\nIn any case please update your master as I merged a bug fix some 6 hours ago, but I don't think it'd make any difference to your situation.\r\n\r\n> Is there an interim suggestion for easily running these large models in 4.3? It looks like there's currently confusion since --model_parallel was removed in 4.2 (and some confusion about how to run large models using the /examples/ now, e.g. #9243 )\r\n\r\nThe `--model_parallel` flag was half-baked so it was removed until the day we actually have something solid in place. 
but you can still use model parallelism.\r\n\r\nWhat you can do now is to activate our naive model parallelism, which I think may just fit the 45GB model over 4x 40GB GPUs. See: https://huggingface.co/transformers/model_doc/t5.html?highlight=parallel#transformers.T5EncoderModel.parallelize\r\nWe currently have t5, gpt2 and (unmerged bart pr) with this version of naive MP.\r\n\r\nBut it's going to be slow, see: https://github.com/huggingface/transformers/issues/8771#issuecomment-758250421 because 3 out of 4 gpus will be idling at any given moment. Basically, you will have a speed of a single gpu, with extra slowdown due to data being copied between gpus back and forth. We need to get PP working to overcome this.", "@PeterAJansen As Stas points out, you should use the model parallelism implementation from 4.1.0. You'll likely need somewhere around 256 GB total GPU memory to train t5-11b with max 512 input tokens and 320 GB for 1024 tokens (so p4 instance in AWS). \r\n\r\nIn 4.1.0, there's only a few changes to the code you'd need to do to accomplish this: 1) set `train_args = TrainingArguments(model_parallel = True) `and, 2) after loading the model, call `model.parallelize()` (no arguments needed -- custom device map won't help you with t5-11b). @stas00, can you confirm what the procedure is for >= 4.2.0? I haven't been able to keep up with the changes with the move. ", "@alexorona, as the doc [goes](https://huggingface.co/transformers/model_doc/t5.html?highlight=parallel#transformers.T5EncoderModel.parallelize), in the current master incarnation all you need to do is to call:\r\n```\r\nmodel.parallelize()\r\n```\r\nbefore you do the training. This then sets:\r\n\r\n`self.is_model_parallel` to `True` and the trainer does the same thing it was doing when `--model_parallel` was used. It just does it smarter now and no longer requires an extra flag.\r\n\r\nThe new logic is:\r\n```\r\n if hasattr(model, \"is_parallelizable\") and model.is_parallelizable and model.model_parallel:\r\n self.is_model_parallel = True\r\n else:\r\n self.is_model_parallel = False\r\n```\r\n\r\nThe reason `--model_parallel` was removed is because it exposed that flag to all example scripts, but the scripts like `finetune_trainer.py` weren't synced, so as a user would run `finetune_trainer.py --model_parallel` nothing would happen, that's why just that flag was removed.\r\n\r\nBut nothing else changed from your original implementation API-wise, @alexorona. The PRs I proposed which would change the device map have been parked for now.\r\n\r\nWe may re-add this flag in the future once the scripts will be able to activate MP internally.", "@stas00 The flag was exposed because `TrainingArguments` would automatically increase the batch size if more than one GPU was detected (it would default to model parallelism behavior), thus defeating the purpose of model parallelism. Did you change that behavior?", "As I was trying to convey we found a way to do the exact same thing without needing an extra flag. 
That is, it's enough to run \r\n`model.parallelize()` right after creating the model, for the trainer to do the right thing.\r\n\r\n> TrainingArguments would automatically increase the batch size if more than one GPU was detected \r\n\r\nAs you can see, it forces it to appear as having just 1 gpu, so no DP will be activated.\r\n\r\nhttps://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/trainer.py#L285-L290\r\n\r\nPlease let me know if we have missed anything in the re-shuffle.\r\n\r\n" ]
1,606
1,695
null
CONTRIBUTOR
null
# 🚀 Feature request This is a discussion issue for training/fine-tuning very large transformer models. Recently, model parallelism was added for gpt2 and t5. The current implementation is for PyTorch only and requires manually modifying the model classes for each model. Possible routes (thanks to @stas00 for identifying these): - `fairscale` to avoid individual model implementation - `deepspeed` to possibly enable even larger models to be trained
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8771/reactions", "total_count": 25, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 17, "rocket": 7, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8771/timeline
null
null
null
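The DeepSpeed discussion in the comments above quotes a working ZeRO stage-2 configuration in two fragments: the original `ds_config_1gpu.json` and the newer `AdamW` optimizer block that replaced the `adam_w_mode` flag. Below is a minimal sketch that assembles those fragments into one file; every value (buffer sizes, learning rate, warmup steps) is taken from that discussion and should be treated as a starting point rather than a recommendation, and newer DeepSpeed releases may expect different keys.

```python
# Assemble the DeepSpeed config quoted in the comments above and write it to disk.
import json

ds_config = {
    "fp16": {
        "enabled": True,
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "hysteresis": 2,
        "min_loss_scale": 1,
    },
    "zero_optimization": {
        "stage": 2,
        "allgather_partitions": True,
        "allgather_bucket_size": 2e8,   # lowered from 5e8 in the discussion to avoid OOM
        "reduce_scatter": True,
        "reduce_bucket_size": 2e8,
        "overlap_comm": True,
        "contiguous_gradients": True,
        "cpu_offload": True,
    },
    "optimizer": {
        # Newer block from the discussion: "AdamW" type instead of adam_w_mode
        "type": "AdamW",
        "params": {"lr": 3e-5, "betas": [0.9, 0.999], "eps": 1e-8, "weight_decay": 3e-7},
    },
    "scheduler": {
        "type": "WarmupLR",
        "params": {"warmup_min_lr": 0, "warmup_max_lr": 3e-5, "warmup_num_steps": 500},
    },
}

with open("ds_config_1gpu.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```

The resulting file is what the Trainer-based script consumes via `--deepspeed ds_config_1gpu.json`, as in the commands quoted above.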
https://api.github.com/repos/huggingface/transformers/issues/8770
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8770/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8770/comments
https://api.github.com/repos/huggingface/transformers/issues/8770/events
https://github.com/huggingface/transformers/pull/8770
749,953,145
MDExOlB1bGxSZXF1ZXN0NTI2Njg1MDkx
8,770
Extend typing to path-like objects in `PretrainedConfig` and `PreTrainedModel`
{ "login": "gcompagnoni", "id": 60468746, "node_id": "MDQ6VXNlcjYwNDY4NzQ2", "avatar_url": "https://avatars.githubusercontent.com/u/60468746?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gcompagnoni", "html_url": "https://github.com/gcompagnoni", "followers_url": "https://api.github.com/users/gcompagnoni/followers", "following_url": "https://api.github.com/users/gcompagnoni/following{/other_user}", "gists_url": "https://api.github.com/users/gcompagnoni/gists{/gist_id}", "starred_url": "https://api.github.com/users/gcompagnoni/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gcompagnoni/subscriptions", "organizations_url": "https://api.github.com/users/gcompagnoni/orgs", "repos_url": "https://api.github.com/users/gcompagnoni/repos", "events_url": "https://api.github.com/users/gcompagnoni/events{/privacy}", "received_events_url": "https://api.github.com/users/gcompagnoni/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Good idea!\r\n\r\nThis could be done for tokenizers as well, no? (the `from_pretrained` for tokenizers is in `tokenization_utils_base.py`)", "I have extended the same modifications to the tokenizers, as suggested by @thomwolf , and to auto classes too." ]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? In my experience, I often call the `from_pretrained` and `save_pretrained` methods of models and configurations with a path-like variable rather than a string. Since the paths are then used by various `os` functions, this works just fine: however, the relevant variables are typed as strings only, raising warnings when using an IDE :anguished: . This PR extends the typing to `Union[str, os.PathLike]` when relevant inside `PretrainedConfig` and `PreTrainedModel` methods. Since passing a path-like object is already tacitly supported in most cases, no significant changes to the code are necessary. In a few places, the relevant variable needs to be turned to a string in order to support functions such as `is_remote_url`. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. Maybe (documentation): @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8770/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8770/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8770", "html_url": "https://github.com/huggingface/transformers/pull/8770", "diff_url": "https://github.com/huggingface/transformers/pull/8770.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8770.patch", "merged_at": 1606492379000 }
https://api.github.com/repos/huggingface/transformers/issues/8769
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8769/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8769/comments
https://api.github.com/repos/huggingface/transformers/issues/8769/events
https://github.com/huggingface/transformers/issues/8769
749,905,825
MDU6SXNzdWU3NDk5MDU4MjU=
8,769
LXMERT - Visual features don't match original implementation
{ "login": "eladsegal", "id": 13485709, "node_id": "MDQ6VXNlcjEzNDg1NzA5", "avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eladsegal", "html_url": "https://github.com/eladsegal", "followers_url": "https://api.github.com/users/eladsegal/followers", "following_url": "https://api.github.com/users/eladsegal/following{/other_user}", "gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}", "starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions", "organizations_url": "https://api.github.com/users/eladsegal/orgs", "repos_url": "https://api.github.com/users/eladsegal/repos", "events_url": "https://api.github.com/users/eladsegal/events{/privacy}", "received_events_url": "https://api.github.com/users/eladsegal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @eladsegal ! \r\n\r\nThis is indeed interesting. The FRCNN config should match the exact setting used in the original demo. I will say, however, everything is finicky, but has been tested + should work.\r\n\r\nCould you let me know if you:\r\n- modified any of the settings in the config\r\n- provide a link to the script script you are using so I can see the exact changes you have made\r\n- If also possible, plot the bounding boxes on the original prediction so that I can see how the bounding boxes are different (I think that may be pretty telling)\r\n\r\nI think I should be able to figure out what may be happening if you could let me know the above!", "Thanks for the response, @eltoto1219!\r\nThe 33% I got previously was a mistake, not sure what change I made that caused it. \r\nI started from scratch now, and the results are a lot closer to the original.\r\nI've made the following changes:\r\n- https://github.com/huggingface/transformers/blob/master/examples/lxmert/utils.py#L552 - do this only if the image is from a URL (https://github.com/huggingface/transformers/issues/8333) (improves accuracy on GQA by ~4 points)\r\n\r\nAnd changes so extracting_data.py will work for batch size larger than 1:\r\n- https://github.com/huggingface/transformers/blob/master/examples/lxmert/extracting_data.py#L95 - changed to if len(batch) > 0\r\n\r\n- https://github.com/huggingface/transformers/blob/master/examples/lxmert/modeling_frcnn.py#L42 - added .view(-1, 1, 1) to allow broadcasting\r\n\r\nFor `extracting_data.py` I made a few more changes to be able to extract only for GQA's testdev images (so it will be a lot faster to run).\r\nYou can reproduce everything with the code in:\r\nhttps://github.com/eladsegal/gqa_lxmert\r\nThere's also a notebook there showing the different bounding boxes.\r\n\r\nFor some reason, inference with different batch sizes in the FRCNN model results in different features for the same image.\r\nI got the following accuracies for GQA for the following different batch sizes:\r\n6: 57.15%\r\n2: 57.64%\r\n1: 58.54%\r\nUsing features from the original LXMERT repo results in 59.29%.", "Of course! I am am real glad you were able to catch the color format error for images downloaded via a url (It would have taken me forever to find something like that)! I will have to fix that as soon as possible + do some more testing. \r\n\r\nI find the different downstream lxmert accuracies on GQA when different batch sizes are used for feature extraction really interesting aswell. In the original repo, the extraction was set up so that one image went through the faster frcnn at a time. I am thinking something may be getting unordered when using multiple images at once, so I will take a look. I think it would also be worth pointing out that there are multiple releases of visual genome images (from 2014 and 2016). If downloading directly from https://visualgenome.org/ there appears to be quite a few corrupt images. \r\n\r\nOn the other hand, all visual genome images can be downloaded from https://cs.stanford.edu/people/dorarad/gqa/about.html (which I am assuming is the latest version). That may in part be why the accuracy is lower. (GQA train + val split come from visual genome. GQA testdev split comes from the COCO test split) There may also be some very small changes to some extraction hyper-parameters (most likely the NMS threshold for the post processing of bounding boxes) which may have also resulted in slightly different inaccuracies. 
\r\n\r\nI'll go ahead and extract features across the different versions of visual genome + gqa, and compare element-wise with the features from the original lxmert repo and see if anything is any different, and if so, how different. It should take me a couple of days, but I will get back to you by then!", "Thank you, I really appreciate this!\r\n\r\n>I think it would also be worth pointing out that there are multiple releases of visual genome images (from 2014 and 2016). If downloading directly from https://visualgenome.org/ there appears to be quite a few corrupt images.\r\n> \r\n> On the other hand, all visual genome images can be downloaded from https://cs.stanford.edu/people/dorarad/gqa/about.html (which I am assuming is the latest version). That may in part be why the accuracy is lower. (GQA train + val split come from visual genome. GQA testdev split comes from the COCO test split) \r\n\r\nNot sure about this as a possible reason for the lower accuracy, as the images I used were downloaded from GQA's website, and I only did comparison on the testdev split, and no additional training at all.\r\n\r\n> There may also be some very small changes to some extraction hyper-parameters (most likely the NMS threshold for the post processing of bounding boxes) which may have also resulted in slightly different inaccuracies.\r\n\r\nSounds like a very reasonable explanation for the different and features and the small accuracy difference!\r\n\r\n\r\n", "@eltoto1219 The issue with batch-wise extraction is known:\r\nhttps://github.com/airsplay/py-bottom-up-attention/issues/3#issuecomment-624240642", "Hey @eladsegal !\r\n\r\nI stumbled upon that actually on Friday myself too. I rewrote the extraction script to only allow frcnn extraction for one image at a time. \r\n\r\nI think that the discrepancy in accuracy comes from the fact that Hao actually used a caffe-based frcnn pretrained model trained specifically to predict 36 images, while the pytorch one here was pretrained to predict 10-100 images. That and the potential of slightly different NMS thresholds. \r\n\r\nAslong as the batch-size is 1, the feature quality is most likely the same, however, if we are finetuning with the features from the model used in the aforementioned script, those technically wont be the exact same features used to pretrain lxmert, they should still get the job done if your okay with being ~1% lower than the reported accuracy.\r\n\r\n\r\nHere is the link to the fixed script:\r\n\r\nhttps://drive.google.com/file/d/1er2axVyGj8eW84QBGrV0dqTmKbxyS8F7/view?usp=sharing\r\n", "Thank you very much @eltoto1219, this has been extremely helpful!", "Why [`examples/lxmert/`](https://github.com/huggingface/transformers/blob/master/examples/lxmert/) no longer exists?", "This was an error, it has been put back a few days ago. Sorry for the inconvenience." ]
1,606
1,609
1,607
CONTRIBUTOR
null
## Environment info - `transformers` version: 4.0.0-rc-1 - Platform: Linux-4.15.0-122-generic-x86_64-with-debian-buster-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.0+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @eltoto1219 ## Information Model I am using: **unc-nlp/lxmert-gqa-uncased** The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [X] **GQA** ## Details I've tried to reproduce LXMERT results on GQA - Using the [visual features from the original repo](https://github.com/airsplay/lxmert#gqa), I got an accuracy of 59.29% on the testdev split (which is a bit less than expected, but close enough). However, when generating the visual features using the [extraction script](https://github.com/huggingface/transformers/blob/master/examples/lxmert/extracting_data.py) in the examples (which uses "unc-nlp/frcnn-vg-finetuned"), the accuracy is only ~33%. I checked further, and the bounding boxes are also different. Any idea to what could be the problem?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8769/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8769/timeline
completed
null
null
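One finding in the thread above is that images fetched from a URL ended up in a different channel order than images read locally with OpenCV, which degraded the extracted features. The sketch below loads images with a consistent channel order; the `expects_bgr` flag and the claim that Detectron-style FRCNN checkpoints usually expect BGR are assumptions to verify against the extractor actually used, and this is not the `examples/lxmert` code itself.

```python
import cv2
import numpy as np
import requests
from PIL import Image


def load_image(path_or_url: str, expects_bgr: bool = True) -> np.ndarray:
    """Return an HxWx3 uint8 array in a single, consistent channel order."""
    if path_or_url.startswith(("http://", "https://")):
        # PIL gives RGB for downloaded images.
        img = np.array(Image.open(requests.get(path_or_url, stream=True).raw).convert("RGB"))
        if expects_bgr:
            img = img[:, :, ::-1].copy()  # RGB -> BGR
    else:
        img = cv2.imread(path_or_url)  # OpenCV already returns BGR
        if not expects_bgr:
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    return img
```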
https://api.github.com/repos/huggingface/transformers/issues/8768
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8768/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8768/comments
https://api.github.com/repos/huggingface/transformers/issues/8768/events
https://github.com/huggingface/transformers/pull/8768
749,890,314
MDExOlB1bGxSZXF1ZXN0NTI2NjMyNjAy
8,768
Attempt to get a better fix for QA
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8768/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8768/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8768", "html_url": "https://github.com/huggingface/transformers/pull/8768", "diff_url": "https://github.com/huggingface/transformers/pull/8768.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8768.patch", "merged_at": 1606329938000 }
https://api.github.com/repos/huggingface/transformers/issues/8767
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8767/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8767/comments
https://api.github.com/repos/huggingface/transformers/issues/8767/events
https://github.com/huggingface/transformers/issues/8767
749,876,277
MDU6SXNzdWU3NDk4NzYyNzc=
8,767
Allow to set truncation strategy for pipeline
{ "login": "Backfighter", "id": 8530887, "node_id": "MDQ6VXNlcjg1MzA4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8530887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Backfighter", "html_url": "https://github.com/Backfighter", "followers_url": "https://api.github.com/users/Backfighter/followers", "following_url": "https://api.github.com/users/Backfighter/following{/other_user}", "gists_url": "https://api.github.com/users/Backfighter/gists{/gist_id}", "starred_url": "https://api.github.com/users/Backfighter/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Backfighter/subscriptions", "organizations_url": "https://api.github.com/users/Backfighter/orgs", "repos_url": "https://api.github.com/users/Backfighter/repos", "events_url": "https://api.github.com/users/Backfighter/events{/privacy}", "received_events_url": "https://api.github.com/users/Backfighter/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Indeed, not being able to pass tokenizer arguments is a limiting factor of the pipelines. We're working on pipelines v2 (cc @mfuntowicz) which will allow such arguments to be passed.\r\n\r\nIn the meantime, we would definitely welcome a PR offering this functionality - but it would have to be agnostic to the argument, not specific to the truncation strategy.", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
NONE
null
# 🚀 Feature request The highlevel pipeline function should allow to set the truncation strategy of the tokenizer in the pipeline. ## Motivation Some models will crash if the input sequence has too many tokens and require truncation. Additionally available memory is limited and it is often useful to shorten the amount of tokens. Sadly this is currently not possible using the pipeline-API. One has to call the tokenizer manually to set the truncation strategy or hope that the task specific pipeline has truncation turned on by default (the summarization pipeline for example has not). ## Your contribution I could potentially create a PR for this, but want to confirm first that the change is welcome.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8767/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8767/timeline
completed
null
null
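The issue above notes that, at the time, tokenizer arguments such as the truncation strategy could not be passed through `pipeline()`, so the stated workaround is to call the tokenizer manually. A hedged sketch of that workaround for summarization follows; the checkpoint name and the length limits are placeholders, not recommendations.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "sshleifer/distilbart-cnn-12-6"  # any summarization checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

long_text = "your very long document text here"

# Truncate explicitly instead of relying on the pipeline's defaults.
inputs = tokenizer(long_text, truncation=True, max_length=1024, return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```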
https://api.github.com/repos/huggingface/transformers/issues/8766
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8766/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8766/comments
https://api.github.com/repos/huggingface/transformers/issues/8766/events
https://github.com/huggingface/transformers/issues/8766
749,864,145
MDU6SXNzdWU3NDk4NjQxNDU=
8,766
[Error: PyTorch to tf]convert_pytorch_checkpoint_to_tf2: AttributeError: bert.pooler.dense.weight not found in PyTorch model
{ "login": "singhsidhukuldeep", "id": 10228227, "node_id": "MDQ6VXNlcjEwMjI4MjI3", "avatar_url": "https://avatars.githubusercontent.com/u/10228227?v=4", "gravatar_id": "", "url": "https://api.github.com/users/singhsidhukuldeep", "html_url": "https://github.com/singhsidhukuldeep", "followers_url": "https://api.github.com/users/singhsidhukuldeep/followers", "following_url": "https://api.github.com/users/singhsidhukuldeep/following{/other_user}", "gists_url": "https://api.github.com/users/singhsidhukuldeep/gists{/gist_id}", "starred_url": "https://api.github.com/users/singhsidhukuldeep/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/singhsidhukuldeep/subscriptions", "organizations_url": "https://api.github.com/users/singhsidhukuldeep/orgs", "repos_url": "https://api.github.com/users/singhsidhukuldeep/repos", "events_url": "https://api.github.com/users/singhsidhukuldeep/events{/privacy}", "received_events_url": "https://api.github.com/users/singhsidhukuldeep/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @singhsidhukuldeep, \r\n\r\nCould you maybe upload your weights to a colab so that I can reproduce the error or upload your weights to the model hub and give me a path to it? \r\n\r\nThis way, I can reproduce the error and solve it :-) \r\n\r\nThanks a lot!", "Hi @patrickvonplaten ,\r\n\r\nI tried this\r\n```Python\r\npt_model = TFBertForPreTraining.from_pretrained(model_output_location, from_pt=True)\r\n\r\nprint(\"\\n\\n>>> Saving HuggingFace to tensorflow(pb)\")\r\ntf.saved_model.save(pt_model,TF_model_output_location)\r\n```\r\n\r\nand it worked, but I am not able to understand the limitations here!", "If you want to convert from <del>TF to PT</del> PT to TF this is exactly how you should to it...", "@patrickvonplaten I am looking to convert PyTorch to TF!", "Sorry I meant PT to TF -> your approach is correct here.", "Got it! Thanks for the help. \r\n\r\nOne last thing, this gives a `*.h5` file (weights only)\r\nIs there a way to get `*.pb` file with structure and weights?", "Hi @singhsidhukuldeep, I tried this, it did save the model in assets, variables, and saved_model.pb format, but couldn't get any .h5 file that I need, am I missing something?\r\n\r\nP.S. I am trying to convert a standard config BERT MaskedLM model\r\n\r\n> Hi @patrickvonplaten ,\r\n> \r\n> I tried this\r\n> \r\n> ```python\r\n> pt_model = TFBertForPreTraining.from_pretrained(model_output_location, from_pt=True)\r\n> \r\n> print(\"\\n\\n>>> Saving HuggingFace to tensorflow(pb)\")\r\n> tf.saved_model.save(pt_model,TF_model_output_location)\r\n> ```\r\n> \r\n> and it worked, but I am not able to understand the limitations here!\r\n\r\n", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.5.1 - Platform: Linux-5.4.0-1029-gcp-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.7.0 (False) - Tensorflow version (GPU?): 2.3.1 (False) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help I think: @patrickvonplaten @LysandreJik @VictorSanh Anyone is welcome! <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao Blenderbot: @patrickvonplaten Bart: @patrickvonplaten Marian: @patrickvonplaten Pegasus: @patrickvonplaten mBART: @patrickvonplaten T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSMT: @stas00 examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Trying to convert my pytorch checkpoint to tf using the below code: ```python from transformers import convert_pytorch_checkpoint_to_tf2 convert_pytorch_checkpoint_to_tf2.convert_pt_checkpoint_to_tf( model_type = "bert", pytorch_checkpoint_path="model/pytorch_model.bin", config_file="model/config.json", tf_dump_path="TFmodel", compare_with_pt_model=False, use_cached_models=False ) ``` my model folder has a tiny bert trained using HuggingFace: contents of `model` folder are: `checkpoint-500 special_tokens_map.json config.json tokenizer_config.json eval_results_mlm_wwm.txt training_args.bin pytorch_model.bin vocab.txt` ## To reproduce Error: ``` Loading PyTorch weights from /home/3551351/bert-mlm/model/pytorch_model.bin PyTorch checkpoint contains 8,354,548 parameters Traceback (most recent call last): File "pt2tf.py", line 7, in <module> convert_pytorch_checkpoint_to_tf2.convert_pt_checkpoint_to_tf( File "/home/3551351/.conda/envs/kuldeepVenv/lib/python3.8/site-packages/transformers/convert_pytorch_checkpoint_to_tf2.py", line 283, in convert_pt_checkpoint_to_tf tf_model = load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path) File "/home/3551351/.conda/envs/kuldeepVenv/lib/python3.8/site-packages/transformers/modeling_tf_pytorch_utils.py", line 96, in load_pytorch_checkpoint_in_tf2_model return load_pytorch_weights_in_tf2_model( File "/home/3551351/.conda/envs/kuldeepVenv/lib/python3.8/site-packages/transformers/modeling_tf_pytorch_utils.py", line 172, in load_pytorch_weights_in_tf2_model raise AttributeError("{} not found in PyTorch model".format(name)) AttributeError: bert.pooler.dense.weight not found in PyTorch model ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8766/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8766/timeline
completed
null
null
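The thread above converges on loading the PyTorch checkpoint directly into the TF class and then saving in whichever TensorFlow format is needed. A short sketch of both save paths, assuming `model/` contains the `pytorch_model.bin` and `config.json` mentioned in the issue and that the output directories are placeholders:

```python
import tensorflow as tf
from transformers import TFBertForPreTraining

# Load the PyTorch weights straight into the TF model class.
tf_model = TFBertForPreTraining.from_pretrained("model/", from_pt=True)

# Hugging Face format: writes TFmodel/tf_model.h5 plus config.json
tf_model.save_pretrained("TFmodel/")

# TensorFlow SavedModel format: writes TFmodel_pb/saved_model.pb plus variables/
tf.saved_model.save(tf_model, "TFmodel_pb/")
```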
https://api.github.com/repos/huggingface/transformers/issues/8765
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8765/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8765/comments
https://api.github.com/repos/huggingface/transformers/issues/8765/events
https://github.com/huggingface/transformers/pull/8765
749,825,329
MDExOlB1bGxSZXF1ZXN0NTI2NTc4Mzk0
8,765
Fix QA argument handler
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The errors are due to connection errors." ]
1,606
1,606
1,606
MEMBER
null
The QA argument handler does not handle multiple sequences at a time anymore. This was not tested, so I added it to the tests. Fix https://github.com/huggingface/transformers/issues/8759 To test it out run the following on `master`: ```py from transformers import pipeline nlp = pipeline("question-answering") context = r""" Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune a model on a SQuAD task, you may leverage the `run_squad.py`. """ print( nlp( question=["What is extractive question answering?", "What is a good example of a question answering dataset?"], context=[context, context], ) ) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8765/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8765/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8765", "html_url": "https://github.com/huggingface/transformers/pull/8765", "diff_url": "https://github.com/huggingface/transformers/pull/8765.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8765.patch", "merged_at": 1606330936000 }
https://api.github.com/repos/huggingface/transformers/issues/8764
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8764/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8764/comments
https://api.github.com/repos/huggingface/transformers/issues/8764/events
https://github.com/huggingface/transformers/pull/8764
749,796,758
MDExOlB1bGxSZXF1ZXN0NTI2NTU0NjU5
8,764
Tokenizers - move from hardcoded configs and models to fully hosted
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,651
1,614
MEMBER
null
# What does this PR do? Fixes #8125 #8117 Tokenizer checkpoint files are now handled the same way as model files. Also: - add a `tokenizer_class_name` field in the tokenization config file `tokenizer_config.json`. This field is used when possible by `AutoTokenizer` to disambiguate the tokenizer to instantiate - concurrently configuration and vocabulary files are updated on the relevant model repo in the hub to make them independent from the code-base. - the max length of the models was always the same for all the variants in an architecture. Consequently we simplify the `max_model_input_sizes` attribute to make it a single integer instead of a mapping from string (model names) to integers. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8764/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8764/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8764", "html_url": "https://github.com/huggingface/transformers/pull/8764", "diff_url": "https://github.com/huggingface/transformers/pull/8764.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8764.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8763
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8763/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8763/comments
https://api.github.com/repos/huggingface/transformers/issues/8763/events
https://github.com/huggingface/transformers/pull/8763
749,788,584
MDExOlB1bGxSZXF1ZXN0NTI2NTQ3NTgz
8,763
Migration guide from v3.x to v4.x
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Will update the switch to fast tokenizers when a decision has been made @thomwolf @n1t0 ", "@sgugger please let me know if this is what you had in mind." ]
1,606
1,606
1,606
MEMBER
null
In addition to the release notes, this completes the migration guide to showcase the expected breaking changes from v3.x to v4.x, and how to retrieve the original behavior.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8763/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8763/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8763", "html_url": "https://github.com/huggingface/transformers/pull/8763", "diff_url": "https://github.com/huggingface/transformers/pull/8763.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8763.patch", "merged_at": 1606698788000 }
https://api.github.com/repos/huggingface/transformers/issues/8762
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8762/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8762/comments
https://api.github.com/repos/huggingface/transformers/issues/8762/events
https://github.com/huggingface/transformers/issues/8762
749,745,855
MDU6SXNzdWU3NDk3NDU4NTU=
8,762
AttributeError: type object 'T5ForConditionalGeneration' has no attribute 'from_config'
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I also tried to use automodel for this, I needed to modify the T5Config, I called it like this with automodel. thank you for your help. I need to make this work with not pretrained model. \r\n\r\n` model = AutoModel.from_config(config)\r\n`\r\nI am getting this error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"finetune_t5_trainer.py\", line 236, in <module>\r\n main()\r\n File \"finetune_t5_trainer.py\", line 89, in main\r\n model = AutoModel.from_config(config)\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/transformers/modeling_auto.py\", line 604, in from_config\r\n config.__class__, cls.__name__, \", \".join(c.__name__ for c in MODEL_MAPPING.keys())\r\nValueError: Unrecognized configuration class <class 'seq2seq.models.t5.configuration_t5.T5Config'> for this kind of AutoModel: AutoModel.\r\nModel type should be one of RetriBertConfig, T5Config, DistilBertConfig, AlbertConfig, CamembertConfig, XLMRobertaConfig, BartConfig, LongformerConfig, RobertaConfig, LayoutLMConfig, SqueezeBertConfig, BertConfig, OpenAIGPTConfig, GPT2Config, MobileBertConfig, TransfoXLConfig, XLNetConfig, FlaubertConfig, FSMTConfig, XLMConfig, CTRLConfig, ElectraConfig, ReformerConfig, FunnelConfig, LxmertConfig, BertGenerationConfig, DebertaConfig, DPRConfig, XLMProphetNetConfig, ProphetNetConfig.\r\n\r\n\r\n\r\n```", "solved with model = T5ForConditionalGeneration(config=config) syntax has been changed from 3.5.0 to 3.5.1 thanks " ]
1,606
1,606
1,606
NONE
null
## Environment info - `transformers` version: 3.5.1 - Platform: Linux - Python version: 3.7 - PyTorch version (GPU?): 1.6 - Tensorflow version (GPU?): - - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help Model Cards: @julien-c Text Generation: @patrickvonplaten @TevenLeScao T5: @patrickvonplaten examples/seq2seq: @patil-suraj ## Information Hi, I would like to use T5 untrained; here is the command I try: ` model = T5ForConditionalGeneration.from_config(config=config) ` I am getting this error, could you assist me please? Thank you. This looks like a weird error to me, since `from_config` should work based on the documentation. Thanks. ``` File "finetune_t5_trainer.py", line 88, in main model = T5ForConditionalGeneration.from_config(config=config) AttributeError: type object 'T5ForConditionalGeneration' has no attribute 'from_config' ``` ## To reproduce Please run the command given ## Expected behavior Load an untrained (not pretrained) model.
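Based on the resolution noted in the comments of this issue, a minimal sketch of how an untrained (randomly initialized) T5 can be built from a config in transformers 3.5.x; the `t5-small` config name is only a placeholder for whatever `T5Config` the script actually constructs:

```python
# Minimal sketch: T5ForConditionalGeneration has no `from_config`;
# passing the config to the constructor gives a randomly initialized model.
from transformers import T5Config, T5ForConditionalGeneration

config = T5Config.from_pretrained("t5-small")  # placeholder; any T5Config works here
model = T5ForConditionalGeneration(config)     # random weights, no pretrained checkpoint loaded
```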
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8762/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8762/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8761
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8761/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8761/comments
https://api.github.com/repos/huggingface/transformers/issues/8761/events
https://github.com/huggingface/transformers/pull/8761
749,704,905
MDExOlB1bGxSZXF1ZXN0NTI2NDc3NTQ0
8,761
Create README.md
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8761/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8761/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8761", "html_url": "https://github.com/huggingface/transformers/pull/8761", "diff_url": "https://github.com/huggingface/transformers/pull/8761.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8761.patch", "merged_at": 1606258285000 }
https://api.github.com/repos/huggingface/transformers/issues/8760
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8760/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8760/comments
https://api.github.com/repos/huggingface/transformers/issues/8760/events
https://github.com/huggingface/transformers/pull/8760
749,701,657
MDExOlB1bGxSZXF1ZXN0NTI2NDc0ODQ1
8,760
Create README.md
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8760/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8760/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8760", "html_url": "https://github.com/huggingface/transformers/pull/8760", "diff_url": "https://github.com/huggingface/transformers/pull/8760.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8760.patch", "merged_at": 1606412623000 }
https://api.github.com/repos/huggingface/transformers/issues/8759
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8759/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8759/comments
https://api.github.com/repos/huggingface/transformers/issues/8759/events
https://github.com/huggingface/transformers/issues/8759
749,686,010
MDU6SXNzdWU3NDk2ODYwMTA=
8,759
Version 3.5 broke the multi context/questions feature for the QuestionAnsweringPipeline
{ "login": "Mathieu4141", "id": 23531109, "node_id": "MDQ6VXNlcjIzNTMxMTA5", "avatar_url": "https://avatars.githubusercontent.com/u/23531109?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mathieu4141", "html_url": "https://github.com/Mathieu4141", "followers_url": "https://api.github.com/users/Mathieu4141/followers", "following_url": "https://api.github.com/users/Mathieu4141/following{/other_user}", "gists_url": "https://api.github.com/users/Mathieu4141/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mathieu4141/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mathieu4141/subscriptions", "organizations_url": "https://api.github.com/users/Mathieu4141/orgs", "repos_url": "https://api.github.com/users/Mathieu4141/repos", "events_url": "https://api.github.com/users/Mathieu4141/events{/privacy}", "received_events_url": "https://api.github.com/users/Mathieu4141/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[ "Thank you for reporting this. Fixing it in #8765" ]
1,606
1,606
1,606
NONE
null
## Environment info - `transformers` version: 3.5.1 (also in 3.5.0) - Platform: Darwin-20.1.0-x86_64-i386-64bit - Python version: 3.7.5 - PyTorch version (GPU?): 1.7.0 (False) - Tensorflow version (GPU?): 2.3.1 (False) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help tokenizers: @mfuntowicz ## Information Model I am using (Bert, XLNet ...): Default QuestionAnsweringPipeline The problem arises when using: * [x] my own modified scripts: (see below, modified from the example given here https://huggingface.co/transformers/usage.html#extractive-question-answering) The tasks I am working on is: * [x] an official GLUE/SQUaD task: Extractive Question Answering ## To reproduce Steps to reproduce the behavior: 1. Install transformers 3.5.1 (also in 3.5.0) 2. Run the following: ```python from transformers import pipeline nlp = pipeline("question-answering") context = r""" Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune a model on a SQuAD task, you may leverage the `run_squad.py`. """ print( nlp( question=["What is extractive question answering?", "What is a good example of a question answering dataset?"], context=[context, context], ) ) ``` In versions 3.5.0 and 3.5.1, I have this error: ``` multiprocessing.pool.RemoteTraceback: """ Traceback (most recent call last): File "/Users/cytadel/.pyenv/versions/3.7.5/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 121, in worker result = (True, func(*args, **kwds)) File "/Users/cytadel/.pyenv/versions/3.7.5/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar return list(map(*args)) File "/Users/cytadel/Library/Caches/pypoetry/virtualenvs/feedly.ml-cyber-attacks-4LjjtgqO-py3.7/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 110, in squad_convert_example_to_features for (i, token) in enumerate(example.doc_tokens): AttributeError: 'list' object has no attribute 'doc_tokens' """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/cytadel/feedly/ml/do_not_commit.py", line 14, in <module> context=[context, context], File "/Users/cytadel/Library/Caches/pypoetry/virtualenvs/feedly.ml-cyber-attacks-4LjjtgqO-py3.7/lib/python3.7/site-packages/transformers/pipelines.py", line 1787, in __call__ for example in examples File "/Users/cytadel/Library/Caches/pypoetry/virtualenvs/feedly.ml-cyber-attacks-4LjjtgqO-py3.7/lib/python3.7/site-packages/transformers/pipelines.py", line 1787, in <listcomp> for example in examples File "/Users/cytadel/Library/Caches/pypoetry/virtualenvs/feedly.ml-cyber-attacks-4LjjtgqO-py3.7/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 368, in squad_convert_examples_to_features disable=not tqdm_enabled, File "/Users/cytadel/Library/Caches/pypoetry/virtualenvs/feedly.ml-cyber-attacks-4LjjtgqO-py3.7/lib/python3.7/site-packages/tqdm/std.py", line 1171, in __iter__ for obj in iterable: File "/Users/cytadel/.pyenv/versions/3.7.5/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 325, in <genexpr> return (item for chunk in result for item in chunk) File "/Users/cytadel/.pyenv/versions/3.7.5/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 748, in next raise value AttributeError: 'list' object has no 
attribute 'doc_tokens' ``` ## Expected behavior Same result as in transformers version 3.4.0: `[{'score': 0.6222442984580994, 'start': 34, 'end': 96, 'answer': 'the task of extracting an answer from a text given a question.'}, {'score': 0.5115318894386292, 'start': 147, 'end': 161, 'answer': 'SQuAD dataset,'}]`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8759/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8759/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8758
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8758/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8758/comments
https://api.github.com/repos/huggingface/transformers/issues/8758/events
https://github.com/huggingface/transformers/issues/8758
749,657,202
MDU6SXNzdWU3NDk2NTcyMDI=
8,758
[Help] GPU with query answering
{ "login": "thiagomoeng", "id": 64150563, "node_id": "MDQ6VXNlcjY0MTUwNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/64150563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thiagomoeng", "html_url": "https://github.com/thiagomoeng", "followers_url": "https://api.github.com/users/thiagomoeng/followers", "following_url": "https://api.github.com/users/thiagomoeng/following{/other_user}", "gists_url": "https://api.github.com/users/thiagomoeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/thiagomoeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thiagomoeng/subscriptions", "organizations_url": "https://api.github.com/users/thiagomoeng/orgs", "repos_url": "https://api.github.com/users/thiagomoeng/repos", "events_url": "https://api.github.com/users/thiagomoeng/events{/privacy}", "received_events_url": "https://api.github.com/users/thiagomoeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!" ]
1,606
1,606
1,606
NONE
null
I want to figure out some way to get faster results from a QA model. I did some tests on google cloud with different GPUs and got some results; those tests were made with different GPUs and the same CPU using this code: ``` from transformers import AutoTokenizer, AutoModelForQuestionAnswering from transformers.pipelines import pipeline tokenizer = AutoTokenizer.from_pretrained("deepset/bert-large-uncased-whole-word-masking-squad2") model = AutoModelForQuestionAnswering.from_pretrained("deepset/bert-large-uncased-whole-word-masking-squad2") nlp_qa = pipeline('question-answering', model=model, tokenizer=tokenizer) X = nlp_qa(context = text, question=queryy, topk = 3, device = 0, max_answer_len = 50) ``` Where context is just a long string and the question a simple query, and I got those results: ``` TESTE 1: ********** 4 vCPUs 15Gb RAM NVIDIA TESLA P100X1 Tempo1: 1:45 min Tempo2: 1:40 min Tempo3: 1:45 min *************** *************** TESTE 2: ********** 4 vCPUs 15Gb RAM NVIDIA TESLA V100X1 Tempo1: 1:58 min Tempo2: 1:58 min Tempo3: 1:55 min *************** *************** TESTE 3: ********** 4 vCPUs 15Gb RAM NVIDIA TESLA K80X1 Tempo1: 2:06 min Tempo2: 2:18 min Tempo3: 2:20 min *************** *************** TESTE 4: ********** 4 vCPUs 15Gb RAM NVIDIA TESLA T4X1 Tempo1: 1:45 min Tempo2: 1:50 min Tempo3: 1:50 min *************** *************** TESTE 5: ********** 4 vCPUs 15Gb RAM NVIDIA NONE Tempo1: 2:22 min Tempo2: 1:57 min Tempo3: 1:57 min ``` I want to know if I am using the GPU wrong, or if it is normal to get almost the same results with and without a GPU on this setup? Is there any way to get faster results?
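A hedged sketch of the likely explanation for the near-identical timings above: in the `pipeline` API the `device` argument belongs to the pipeline constructor, not to the call, so passing `device=0` at call time leaves the model on CPU. The context and question strings below are placeholders:

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

model_name = "deepset/bert-large-uncased-whole-word-masking-squad2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

# device=0 here places the model on the first GPU; use device=-1 for CPU.
nlp_qa = pipeline("question-answering", model=model, tokenizer=tokenizer, device=0)

text = "Some long context paragraph..."    # placeholder
query = "What is this paragraph about?"    # placeholder
answers = nlp_qa(context=text, question=query, topk=3, max_answer_len=50)
```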
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8758/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8758/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8757
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8757/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8757/comments
https://api.github.com/repos/huggingface/transformers/issues/8757/events
https://github.com/huggingface/transformers/issues/8757
749,645,144
MDU6SXNzdWU3NDk2NDUxNDQ=
8,757
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 49: invalid start byte
{ "login": "singhsidhukuldeep", "id": 10228227, "node_id": "MDQ6VXNlcjEwMjI4MjI3", "avatar_url": "https://avatars.githubusercontent.com/u/10228227?v=4", "gravatar_id": "", "url": "https://api.github.com/users/singhsidhukuldeep", "html_url": "https://github.com/singhsidhukuldeep", "followers_url": "https://api.github.com/users/singhsidhukuldeep/followers", "following_url": "https://api.github.com/users/singhsidhukuldeep/following{/other_user}", "gists_url": "https://api.github.com/users/singhsidhukuldeep/gists{/gist_id}", "starred_url": "https://api.github.com/users/singhsidhukuldeep/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/singhsidhukuldeep/subscriptions", "organizations_url": "https://api.github.com/users/singhsidhukuldeep/orgs", "repos_url": "https://api.github.com/users/singhsidhukuldeep/repos", "events_url": "https://api.github.com/users/singhsidhukuldeep/events{/privacy}", "received_events_url": "https://api.github.com/users/singhsidhukuldeep/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "You should first convert your checkpoint to a huggingface checkpoint, using the conversion script. You can check the [docs here](https://huggingface.co/transformers/converting_tensorflow_models.html#bert).", "Hi @LysandreJik \r\nThank you so much for the response,\r\nafter training I will get a PyTorch checkpoint, right?\r\nWhat is the procedure to get a `tf` checkpoint?", "> You should first convert your checkpoint to a huggingface checkpoint, using the conversion script. You can check the [docs here](https://huggingface.co/transformers/converting_tensorflow_models.html#bert).\r\n\r\nHi @LysandreJik ,\r\nI tried the above approach, and I converted it to a huggingface checkpoint.\r\n\r\nNow when I run below command:\r\n\r\n```\r\npython run_mlm_wwm.py \\\r\n --model_name_or_path google-bert-tiny/pytorch_model.bin \\\r\n --config_name google-bert-tiny/bert_config.json \\\r\n --train_file train.txt \\\r\n --validation_file val.txt \\\r\n --do_train \\\r\n --do_eval \\\r\n --output_dir test-mlm-wwm \\\r\n --cache_dir cache\r\n```\r\n\r\n\r\nI am getting this error:\r\n```\r\nTraceback (most recent call last):\r\n File \"run_mlm_wwm.py\", line 340, in <module>\r\n main()\r\n File \"run_mlm_wwm.py\", line 236, in main\r\n tokenizer = AutoTokenizer.from_pretrained(\r\n File \"/home/3551351/.conda/envs/kuldeepVenv/lib/python3.8/site-packages/transformers/tokenization_auto.py\", line 306, in from_pretrained\r\n config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)\r\n File \"/home/3551351/.conda/envs/kuldeepVenv/lib/python3.8/site-packages/transformers/configuration_auto.py\", line 333, in from_pretrained\r\n config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/home/3551351/.conda/envs/kuldeepVenv/lib/python3.8/site-packages/transformers/configuration_utils.py\", line 391, in get_config_dict\r\n config_dict = cls._dict_from_json_file(resolved_config_file)\r\n File \"/home/3551351/.conda/envs/kuldeepVenv/lib/python3.8/site-packages/transformers/configuration_utils.py\", line 474, in _dict_from_json_file\r\n text = reader.read()\r\n File \"/home/3551351/.conda/envs/kuldeepVenv/lib/python3.8/codecs.py\", line 322, in decode\r\n (result, consumed) = self._buffer_decode(data, self.errors, final)\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte\r\n```\r\n@thomwolf ", "I believe the `model_name_or_path` should point to a directory containing both the configuration and model files, with their appropriate name (`config.json`, `pytorch_model.bin`).\r\n\r\n```\r\ndirectory \r\n - config.json\r\n - pytorch_model.bin\r\n```\r\n\r\nRegarding your question to convert a model to a TensorFlow implementation, you can first convert your model to PyTorch and then load it in TensorFlow:\r\n\r\nLet's say you saved the model in the directory `directory`:\r\n```py\r\nfrom transformers import TFBertForPreTraining\r\n\r\npt_model = BertForPreTraining.from_pretrained(directory, from_pt=True)\r\n```\r\nYou can then save it as any other TensorFlow model.", "Hi @LysandreJik\r\n\r\nAfter giving the folder to config and model,\r\n```Python\r\nfrom transformers import convert_pytorch_checkpoint_to_tf2\r\nconvert_pytorch_checkpoint_to_tf2.convert_pt_checkpoint_to_tf(\r\n model_type = \"bert\", \r\n pytorch_checkpoint_path=\"model/\", \r\n config_file=\"model/config.json\", \r\n tf_dump_path=\"TFmodel\", \r\n compare_with_pt_model=False, \r\n use_cached_models=False\r\n)\r\n```\r\n \r\nI am getting this 
error:\r\n```shell\r\nLoading PyTorch weights from /home/3551351/bert-mlm/model\r\nTraceback (most recent call last):\r\n File \"pt2tf.py\", line 7, in <module>\r\n convert_pytorch_checkpoint_to_tf2.convert_pt_checkpoint_to_tf(\r\n File \"/home/3551351/.conda/envs/kuldeepVenv/lib/python3.8/site-packages/transformers/convert_pytorch_checkpoint_to_tf2.py\", line 283, in convert_pt_checkpoint_to_tf\r\n tf_model = load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path)\r\n File \"/home/3551351/.conda/envs/kuldeepVenv/lib/python3.8/site-packages/transformers/modeling_tf_pytorch_utils.py\", line 93, in load_pytorch_checkpoint_in_tf2_model\r\n pt_state_dict = torch.load(pt_path, map_location=\"cpu\")\r\n File \"/home/3551351/.conda/envs/kuldeepVenv/lib/python3.8/site-packages/torch/serialization.py\", line 581, in load\r\n with _open_file_like(f, 'rb') as opened_file:\r\n File \"/home/3551351/.conda/envs/kuldeepVenv/lib/python3.8/site-packages/torch/serialization.py\", line 230, in _open_file_like\r\n return _open_file(name_or_buffer, mode)\r\n File \"/home/3551351/.conda/envs/kuldeepVenv/lib/python3.8/site-packages/torch/serialization.py\", line 211, in __init__\r\n super(_open_file, self).__init__(open(name, mode))\r\nIsADirectoryError: [Errno 21] Is a directory: '/home/3551351/bert-mlm/model'\r\n```\r\n\r\n", "I'm sorry, I think you misunderstood me. I was saying that about the way you launch your script, not the way you do the conversion:\r\n\r\n```\r\npython run_mlm_wwm.py \\\r\n --model_name_or_path google-bert-tiny \\\r\n --config_name google-bert-tiny \\\r\n --train_file train.txt \\\r\n --validation_file val.txt \\\r\n --do_train \\\r\n --do_eval \\\r\n --output_dir test-mlm-wwm \\\r\n --cache_dir cache\r\n```", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
NONE
null
## Environment info - `transformers` version: 3.5.1 - Platform: Linux-5.4.0-1029-gcp-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.7.0 (False) - Tensorflow version (GPU?): 2.3.1 (False) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help I think: @patrickvonplaten @LysandreJik @VictorSanh Anyone is welcome! ## Information I am using `examples/language-modeling/run_mlm_wwm.py` to train my own Tiny BERT model. ## To reproduce Using Tiny BERT from Google [https://github.com/google-research/bert/blob/master/README.md](https://github.com/google-research/bert/blob/master/README.md) Using `examples/language-modeling/run_mlm_wwm.py` from HuggingFace to train a language model on raw text. files in my `google-bert-tiny` are `bert_config.json bert_model.ckpt.data-00000-of-00001 bert_model.ckpt.index vocab.txt` Steps to reproduce the behavior: 1. install transformers torch and Tensorflow using pip 2. Get `examples/language-modeling/run_mlm_wwm.py` from HuggingFace>Transformers [Link](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm.py) 3. Running the following command: ```shell python run_mlm_wwm.py \ --model_name_or_path google-bert-tiny/bert_model.ckpt.index \ --config_name google-bert-tiny/bert_config.json \ --train_file train.txt \ --validation_file val.txt \ --do_train \ --do_eval \ --output_dir test-mlm-wwm \ --cache_dir cache ``` Error: ``` Traceback (most recent call last): File "run_mlm_wwm.py", line 340, in <module> main() File "run_mlm_wwm.py", line 236, in main tokenizer = AutoTokenizer.from_pretrained( File "/home/3551351/.conda/envs/kuldeepVenv/lib/python3.8/site-packages/transformers/tokenization_auto.py", line 306, in from_pretrained config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) File "/home/3551351/.conda/envs/kuldeepVenv/lib/python3.8/site-packages/transformers/configuration_auto.py", line 333, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/home/3551351/.conda/envs/kuldeepVenv/lib/python3.8/site-packages/transformers/configuration_utils.py", line 391, in get_config_dict config_dict = cls._dict_from_json_file(resolved_config_file) File "/home/3551351/.conda/envs/kuldeepVenv/lib/python3.8/site-packages/transformers/configuration_utils.py", line 474, in _dict_from_json_file text = reader.read() File "/home/3551351/.conda/envs/kuldeepVenv/lib/python3.8/codecs.py", line 322, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 49: invalid start byte ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Want it to train <!-- A clear and concise description of what you would expect to happen. -->
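A hedged sketch of the conversion step suggested in the comments (paths follow the `google-bert-tiny` layout described above and are placeholders): first turn the original TensorFlow checkpoint into a Hugging Face PyTorch checkpoint, then point `--model_name_or_path` at the directory that holds `config.json` and `pytorch_model.bin`:

```bash
# Convert the Google TF checkpoint to a Hugging Face PyTorch checkpoint.
# (the converted config may still need to be copied/renamed to config.json
# inside that directory so it can be loaded by name)
transformers-cli convert --model_type bert \
  --tf_checkpoint google-bert-tiny/bert_model.ckpt \
  --config google-bert-tiny/bert_config.json \
  --pytorch_dump_output google-bert-tiny/pytorch_model.bin

# Then train, pointing at the directory rather than at an individual file.
python run_mlm_wwm.py \
  --model_name_or_path google-bert-tiny \
  --config_name google-bert-tiny \
  --train_file train.txt \
  --validation_file val.txt \
  --do_train --do_eval \
  --output_dir test-mlm-wwm \
  --cache_dir cache
```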
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8757/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8757/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8756
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8756/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8756/comments
https://api.github.com/repos/huggingface/transformers/issues/8756/events
https://github.com/huggingface/transformers/issues/8756
749,603,307
MDU6SXNzdWU3NDk2MDMzMDc=
8,756
Continued training of the original BERT models (not to PyTorch)
{ "login": "singhsidhukuldeep", "id": 10228227, "node_id": "MDQ6VXNlcjEwMjI4MjI3", "avatar_url": "https://avatars.githubusercontent.com/u/10228227?v=4", "gravatar_id": "", "url": "https://api.github.com/users/singhsidhukuldeep", "html_url": "https://github.com/singhsidhukuldeep", "followers_url": "https://api.github.com/users/singhsidhukuldeep/followers", "following_url": "https://api.github.com/users/singhsidhukuldeep/following{/other_user}", "gists_url": "https://api.github.com/users/singhsidhukuldeep/gists{/gist_id}", "starred_url": "https://api.github.com/users/singhsidhukuldeep/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/singhsidhukuldeep/subscriptions", "organizations_url": "https://api.github.com/users/singhsidhukuldeep/orgs", "repos_url": "https://api.github.com/users/singhsidhukuldeep/repos", "events_url": "https://api.github.com/users/singhsidhukuldeep/events{/privacy}", "received_events_url": "https://api.github.com/users/singhsidhukuldeep/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "You should first convert your checkpoint to a huggingface checkpoint. You can check the docs on how to do that [here](https://huggingface.co/transformers/converting_tensorflow_models.html#bert).", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
NONE
null
# 🚀 Feature request I am looking for continued training of the original tiny-bert with my own raw data using masked language modelling. But I want the final model in TF. ## Motivation I tried this [LM](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm.py), but this only works from pytorch to pytorch. I tried using the original ckpt with `from_tf`, but it always results in an .h5 error. Please let me know if I can explain more OR help in any way. Basically, we should be able to use tf weights >> masked language modelling >> and have a domain-specific pre-trained TensorFlow language model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8756/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8756/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8755
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8755/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8755/comments
https://api.github.com/repos/huggingface/transformers/issues/8755/events
https://github.com/huggingface/transformers/issues/8755
749,567,125
MDU6SXNzdWU3NDk1NjcxMjU=
8,755
Why there are no such 'cls/' layers in roberta pytorch checkpoints
{ "login": "cloudyskyy", "id": 30574139, "node_id": "MDQ6VXNlcjMwNTc0MTM5", "avatar_url": "https://avatars.githubusercontent.com/u/30574139?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cloudyskyy", "html_url": "https://github.com/cloudyskyy", "followers_url": "https://api.github.com/users/cloudyskyy/followers", "following_url": "https://api.github.com/users/cloudyskyy/following{/other_user}", "gists_url": "https://api.github.com/users/cloudyskyy/gists{/gist_id}", "starred_url": "https://api.github.com/users/cloudyskyy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cloudyskyy/subscriptions", "organizations_url": "https://api.github.com/users/cloudyskyy/orgs", "repos_url": "https://api.github.com/users/cloudyskyy/repos", "events_url": "https://api.github.com/users/cloudyskyy/events{/privacy}", "received_events_url": "https://api.github.com/users/cloudyskyy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The authors of RoBERTa removed the next sentence prediction task during pre-training, as it didn't help much. See section 1 of the [paper](https://arxiv.org/pdf/1907.11692.pdf).", "> The authors of RoBERTa removed the next sentence prediction task during pre-training, as it didn't help much. See section 1 of the [paper](https://arxiv.org/pdf/1907.11692.pdf).\r\n\r\nReally appreciate your apply! However, the 2 'cls/seq_relationship/' layers are responsible for the NSP task. The rest should be responsible for the MLM task. What is more, these layers are the exact layers that I extract from the original roberta TensorFlow checkpoint published by the author of the paper... This is confusing. I am just wondering why the huggingface pytorch checkpoints just don't stay the weights of the MLM task, in UNILM, these weights are precious. Of course NSP is not that important.", "Yes you're right, sorry. I think that the masked language modeling head has a different name in Huggingface Transformers. It is simply called `lm_head`. See [here](https://github.com/huggingface/transformers/blob/a7d73cfdd497d7bf6c9336452decacf540c46e20/src/transformers/models/roberta/modeling_roberta.py#L869) for the PyTorch implementation of RoBERTa. Note that you should use `RobertaForMaskedLM` rather than `RobertaModel`, since the latter does not have a masked language modeling head on top.", "> I think that the masked language modeling head has a different name in Huggingface Transformers. It is simply called `lm_head`. See here: https://huggingface.co/transformers/_modules/transformers/modeling_tf_roberta.html#TFRobertaForMaskedLM\r\n\r\nAppreciate again! I will have a look tomorrow, and in fact it is 2 a.m. in my city right now and I am \r\ntotally in bed hahahh", "> Yes you're right, sorry. I think that the masked language modeling head has a different name in Huggingface Transformers. It is simply called `lm_head`. See [here](https://github.com/huggingface/transformers/blob/a7d73cfdd497d7bf6c9336452decacf540c46e20/src/transformers/models/roberta/modeling_roberta.py#L869) for the PyTorch implementation of RoBERTa. Note that you should use `RobertaForMaskedLM` rather than `RobertaModel`, since the latter does not have a masked language modeling head on top.\r\n\r\nThat really makes sense to me, even I am in bed. thanks a lot!", "You're welcome! Good night", "> You're welcome! Good night\r\nYour approach sovled my problem perfectly, now I have successfully converted the pytorch weights into tensorflow weights. Time to close the issue now. ^_^ " ]
1,606
1,606
1,606
NONE
null
In pytorch checkpoints of roberta in huggingface transformers, the last two layers are the "pooler layers": pooler.dense.weight pooler.dense.bias However, in the original roberta tensorflow checkpoints, the last few layers are not the pooler layers; instead, they are: cls/predictions/output_bias (DT_FLOAT) [21128] cls/predictions/transform/LayerNorm/beta (DT_FLOAT) [768] cls/predictions/transform/LayerNorm/gamma (DT_FLOAT) [768] cls/predictions/transform/dense/bias (DT_FLOAT) [768] cls/predictions/transform/dense/kernel (DT_FLOAT) [768,768] cls/seq_relationship/output_bias (DT_FLOAT) [2] cls/seq_relationship/output_weights (DT_FLOAT) [2,768] These 'cls/' layers come after the pooler layers. I converted the pytorch checkpoints into tensorflow checkpoints. Then when I try to load the weights, all I was told was: tensorflow.python.framework.errors_impl.NotFoundError: Key cls/predictions/transform/dense/kernel not found in checkpoint which means the 'cls/' layers do not exist at all! So why are these layers gone in the pytorch checkpoints provided by huggingface transformers? What should I do to get the weights of these 'cls/' layers? I am trying to use a roberta checkpoint that was trained by someone else using huggingface transformers; however, I have to convert it to a tensorflow version because my code is in tensorflow, but this problem occurs. How can I correctly convert the checkpoints?
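Following the suggestion in the comments (load the class that carries the masked-LM head, `RobertaForMaskedLM`, whose head is named `lm_head` rather than `cls/`), a minimal sketch of one way to obtain TensorFlow weights that still contain the MLM head; the checkpoint path and output directory are placeholders:

```python
from transformers import RobertaForMaskedLM, TFRobertaForMaskedLM

# Load with the MLM head attached (RobertaModel alone drops it).
pt_model = RobertaForMaskedLM.from_pretrained("path/to/pytorch_checkpoint")
pt_model.save_pretrained("roberta-mlm")

# Re-load the same weights into the TensorFlow class and save as an .h5 checkpoint.
tf_model = TFRobertaForMaskedLM.from_pretrained("roberta-mlm", from_pt=True)
tf_model.save_pretrained("roberta-mlm")
```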
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8755/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8755/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8754
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8754/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8754/comments
https://api.github.com/repos/huggingface/transformers/issues/8754/events
https://github.com/huggingface/transformers/issues/8754
749,564,651
MDU6SXNzdWU3NDk1NjQ2NTE=
8,754
Allow to provide specific params in WandbCallback
{ "login": "raphael0202", "id": 9609923, "node_id": "MDQ6VXNlcjk2MDk5MjM=", "avatar_url": "https://avatars.githubusercontent.com/u/9609923?v=4", "gravatar_id": "", "url": "https://api.github.com/users/raphael0202", "html_url": "https://github.com/raphael0202", "followers_url": "https://api.github.com/users/raphael0202/followers", "following_url": "https://api.github.com/users/raphael0202/following{/other_user}", "gists_url": "https://api.github.com/users/raphael0202/gists{/gist_id}", "starred_url": "https://api.github.com/users/raphael0202/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/raphael0202/subscriptions", "organizations_url": "https://api.github.com/users/raphael0202/orgs", "repos_url": "https://api.github.com/users/raphael0202/repos", "events_url": "https://api.github.com/users/raphael0202/events{/privacy}", "received_events_url": "https://api.github.com/users/raphael0202/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Sorry I just noticed this issue.\r\nYou can actually already do it. After your run has been created, you can do `wandb.config['frozen_layers'] = 3`\r\nWe need to add a way to let you also create a run first (instead of `Trainer`) and then let `Trainer` adds automatically the extra configuration parameters.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,606
1,619
1,619
CONTRIBUTOR
null
# 🚀 Feature request It would be nice to be able to track additional params in wandb when using the Trainer interface. For example, I need to track down how many layers were frozen in each experiment. I'm currently using a custom WandCallback class to do this. ## Your contribution I can work on a PR.
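A hedged sketch of the workaround later mentioned in the comments: once the W&B run exists (e.g. after `Trainer` and its `WandbCallback` have set it up), extra hyper-parameters can be attached directly to `wandb.config`; the `frozen_layers` key is just an example from this issue:

```python
import wandb

# After the run has been created by the Trainer's WandbCallback:
wandb.config["frozen_layers"] = 3
# or equivalently:
wandb.config.update({"frozen_layers": 3})
```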
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8754/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8754/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8753
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8753/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8753/comments
https://api.github.com/repos/huggingface/transformers/issues/8753/events
https://github.com/huggingface/transformers/pull/8753
749,563,452
MDExOlB1bGxSZXF1ZXN0NTI2MzYxMTk3
8,753
update README.txt
{ "login": "bino282", "id": 17800187, "node_id": "MDQ6VXNlcjE3ODAwMTg3", "avatar_url": "https://avatars.githubusercontent.com/u/17800187?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bino282", "html_url": "https://github.com/bino282", "followers_url": "https://api.github.com/users/bino282/followers", "following_url": "https://api.github.com/users/bino282/following{/other_user}", "gists_url": "https://api.github.com/users/bino282/gists{/gist_id}", "starred_url": "https://api.github.com/users/bino282/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bino282/subscriptions", "organizations_url": "https://api.github.com/users/bino282/orgs", "repos_url": "https://api.github.com/users/bino282/repos", "events_url": "https://api.github.com/users/bino282/events{/privacy}", "received_events_url": "https://api.github.com/users/bino282/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Closing this one as duplicate was already merged!\r\n\r\nFor context please also read https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755" ]
1,606
1,607
1,607
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8753/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8753/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8753", "html_url": "https://github.com/huggingface/transformers/pull/8753", "diff_url": "https://github.com/huggingface/transformers/pull/8753.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8753.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8752
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8752/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8752/comments
https://api.github.com/repos/huggingface/transformers/issues/8752/events
https://github.com/huggingface/transformers/pull/8752
749,364,203
MDExOlB1bGxSZXF1ZXN0NTI2MTkxODEz
8,752
Create README.md
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,606
1,606
1,606
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8752/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8752/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8752", "html_url": "https://github.com/huggingface/transformers/pull/8752", "diff_url": "https://github.com/huggingface/transformers/pull/8752.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8752.patch", "merged_at": 1606343902000 }
https://api.github.com/repos/huggingface/transformers/issues/8751
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8751/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8751/comments
https://api.github.com/repos/huggingface/transformers/issues/8751/events
https://github.com/huggingface/transformers/pull/8751
749,357,623
MDExOlB1bGxSZXF1ZXN0NTI2MTg2Mjc3
8,751
Create README.md
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Update model card" ]
1,606
1,607
1,607
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8751/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8751/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8751", "html_url": "https://github.com/huggingface/transformers/pull/8751", "diff_url": "https://github.com/huggingface/transformers/pull/8751.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8751.patch", "merged_at": 1607697614000 }
https://api.github.com/repos/huggingface/transformers/issues/8750
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8750/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8750/comments
https://api.github.com/repos/huggingface/transformers/issues/8750/events
https://github.com/huggingface/transformers/pull/8750
749,302,153
MDExOlB1bGxSZXF1ZXN0NTI2MTQxMTYy
8,750
Fix minor bug to handle dynamic sequence length
{ "login": "duyvuleo", "id": 5590702, "node_id": "MDQ6VXNlcjU1OTA3MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/5590702?v=4", "gravatar_id": "", "url": "https://api.github.com/users/duyvuleo", "html_url": "https://github.com/duyvuleo", "followers_url": "https://api.github.com/users/duyvuleo/followers", "following_url": "https://api.github.com/users/duyvuleo/following{/other_user}", "gists_url": "https://api.github.com/users/duyvuleo/gists{/gist_id}", "starred_url": "https://api.github.com/users/duyvuleo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/duyvuleo/subscriptions", "organizations_url": "https://api.github.com/users/duyvuleo/orgs", "repos_url": "https://api.github.com/users/duyvuleo/repos", "events_url": "https://api.github.com/users/duyvuleo/events{/privacy}", "received_events_url": "https://api.github.com/users/duyvuleo/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[ "Hi! Could you run `make quality` at the root of your clone so that it passes the code quality test?", "@LysandreJik : I did run `make quality` but got the following error:\r\n\r\n`(pyvenv3-transformers-forked) ➜ transformers git:(master-minor-fix-t5) make quality\r\nblack --check examples tests src utils\r\nAll done! ✨ 🍰 ✨\r\n621 files would be left unchanged.\r\nisort --check-only examples tests src utils\r\nflake8 examples tests src utils\r\npython utils/style_doc.py src/transformers docs/source --max_len 119 --check_only\r\n/Library/Developer/CommandLineTools/usr/bin/make extra_quality_checks\r\n/Users/PZ9DU5/vuh/tools/pyvenv3-transformers-forked/lib/python3.7/site-packages/setuptools/dist.py:454: UserWarning: Normalizing '4.0.0-rc-1' to '4.0.0rc1'\r\n warnings.warn(tmpl.format(**locals()))\r\nrunning deps_table_update\r\nupdating src/transformers/dependency_versions_table.py\r\npython utils/check_copies.py\r\npython utils/check_dummies.py\r\npython utils/check_repo.py\r\nChecking all models are properly tested.\r\nChecking all models are properly documented.\r\nChecking all models are in at least one auto class.\r\nTraceback (most recent call last):\r\n File \"utils/check_repo.py\", line 400, in <module>\r\n check_repo_quality()\r\n File \"utils/check_repo.py\", line 396, in check_repo_quality\r\n check_all_models_are_auto_configured()\r\n File \"utils/check_repo.py\", line 342, in check_all_models_are_auto_configured\r\n all_auto_models = get_all_auto_configured_models()\r\n File \"utils/check_repo.py\", line 316, in get_all_auto_configured_models\r\n for attr_name in dir(transformers.models.auto.modeling_auto):\r\nAttributeError: module 'transformers.models.auto' has no attribute 'modeling_auto'\r\nmake[1]: *** [extra_quality_checks] Error 1\r\nmake: *** [quality] Error 2`", "anoy nody working on htis?" ]
1,606
1,697
null
NONE
null
This PR fixes a minor bug in handling dynamic sequence length, e.g., when building the serving model(s).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8750/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8750/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8750", "html_url": "https://github.com/huggingface/transformers/pull/8750", "diff_url": "https://github.com/huggingface/transformers/pull/8750.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8750.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8749
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8749/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8749/comments
https://api.github.com/repos/huggingface/transformers/issues/8749/events
https://github.com/huggingface/transformers/issues/8749
749,300,370
MDU6SXNzdWU3NDkzMDAzNzA=
8,749
[core] transformers version number normalization
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "We're making a release today, so will change the version number to the proper format just after :-)", "Following what @sgugger said!" ]
1,606
1,608
1,608
CONTRIBUTOR
null
It looks like we need to either normalize the `transformers` version number to one of the accepted formats: ``` x.y.z x.y.z.dev0 x.y.z.rc1 ``` or silence the warning. Currently, `setuptools` doesn't like `-rc-1` and `-dev`, as you can see from: ``` python setup.py --name .../python3.8/site-packages/setuptools/dist.py:452: UserWarning: Normalizing '4.0.0-rc-1' to '4.0.0rc1' ``` with `4.0.0-dev` ``` python3.8/site-packages/setuptools/dist.py:452: UserWarning: Normalizing '4.0.0-dev' to '4.0.0.dev0' ``` Otherwise, this warning will show up all the time during `make style` and friends once https://github.com/huggingface/transformers/pull/8645 is merged. @sgugger, @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8749/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8749/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8748
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8748/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8748/comments
https://api.github.com/repos/huggingface/transformers/issues/8748/events
https://github.com/huggingface/transformers/issues/8748
749,265,320
MDU6SXNzdWU3NDkyNjUzMjA=
8,748
"AutoTokenizer.from_pretrained" does not work when loading a pretrained Albert model
{ "login": "iamfaith", "id": 16201784, "node_id": "MDQ6VXNlcjE2MjAxNzg0", "avatar_url": "https://avatars.githubusercontent.com/u/16201784?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iamfaith", "html_url": "https://github.com/iamfaith", "followers_url": "https://api.github.com/users/iamfaith/followers", "following_url": "https://api.github.com/users/iamfaith/following{/other_user}", "gists_url": "https://api.github.com/users/iamfaith/gists{/gist_id}", "starred_url": "https://api.github.com/users/iamfaith/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iamfaith/subscriptions", "organizations_url": "https://api.github.com/users/iamfaith/orgs", "repos_url": "https://api.github.com/users/iamfaith/repos", "events_url": "https://api.github.com/users/iamfaith/events{/privacy}", "received_events_url": "https://api.github.com/users/iamfaith/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Can you share your version of `transformers`, `tokenizers`?", "I can reproduce this in a Colab notebook when doing `pip install transformers`. \r\n- Transformers version 3.5.1\r\n- Tokenizers version 0.9.3\r\n\r\nMight be solved with v4? ", "I am having the same issue with AlbertTokenizer.from_pretrained", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.", "i have the same question!" ]
1,606
1,628
1,614
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: 5.4.0-53-generic #59~18.04.1-Ubuntu SMP Wed Oct 21 12:14:56 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.0 - Tensorflow version (GPU?): N/A - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao Blenderbot: @patrickvonplaten Bart: @patrickvonplaten Marian: @patrickvonplaten Pegasus: @patrickvonplaten mBART: @patrickvonplaten T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSMT: @stas00 examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Install PyTorch from the official website as well as the transformers via pip. 2. Using the following pre-trained model: ``` from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("ckiplab/albert-tiny-chinese") model = AutoModelForMaskedLM.from_pretrained("ckiplab/albert-tiny-chinese") ``` 3. 
Error: ``` Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 683/683 [00:00<00:00, 1.32MB/s] Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 112/112 [00:00<00:00, 215kB/s] Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 174/174 [00:00<00:00, 334kB/s] Traceback (most recent call last): File "/home/faith/torch_tutorials/torch_chatbot.py", line 30, in <module> tokenizer = AutoTokenizer.from_pretrained("ckiplab/albert-tiny-chinese") File "/home/faith/miniconda3/envs/torch/lib/python3.7/site-packages/transformers/tokenization_auto.py", line 341, in from_pretrained return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/home/faith/miniconda3/envs/torch/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1653, in from_pretrained resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs File "/home/faith/miniconda3/envs/torch/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1725, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/home/faith/miniconda3/envs/torch/lib/python3.7/site-packages/transformers/tokenization_albert.py", line 149, in __init__ self.sp_model.Load(vocab_file) File "/home/faith/miniconda3/envs/torch/lib/python3.7/site-packages/sentencepiece.py", line 367, in Load return self.LoadFromFile(model_file) File "/home/faith/miniconda3/envs/torch/lib/python3.7/site-packages/sentencepiece.py", line 177, in LoadFromFile return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg) TypeError: not a string ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Expect to download this model correctly with error prompting.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8748/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8748/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8747
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8747/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8747/comments
https://api.github.com/repos/huggingface/transformers/issues/8747/events
https://github.com/huggingface/transformers/pull/8747
749,214,877
MDExOlB1bGxSZXF1ZXN0NTI2MDY4MTc5
8,747
Return correct Bart hidden state tensors
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "organizations_url": "https://api.github.com/users/joeddav/orgs", "repos_url": "https://api.github.com/users/joeddav/repos", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "received_events_url": "https://api.github.com/users/joeddav/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "## UPDATE:\r\n\r\nThis PR is ready for review.\r\n\r\n@sgugger, @LysandreJik - this very-well documented issue: https://github.com/huggingface/transformers/issues/8601 shows that for some models the gradient of the outputted `hidden_states` and the `attentions` cannot be computed because the tensors are excluded from the computation graph via some `transpose`, `permute`, or `slice` operations. \r\n@joeddav found a great fix for Bart and I applied the same fix now for all other models and added a test.\r\n\r\nThe only models that are not capable of keeping the gradient in `attentions` and `hidden_states` are \r\n- Longformer -> chunked attention slice operations don't allow keeping grad\r\n- Reformer -> customized backward doesn't help here\r\n- ProphetNet -> Decoder part can't keep grad because of slice operations\r\n- TransfoXL, XLNet -> two stream attention doesn't allow to keep the grad either\r\n\r\nAll other models can keep the grad which is ensured by the test." ]
1,606
1,613
1,606
CONTRIBUTOR
null
# What does this PR do? Fixes #8601. When `output_hidden_states=True`, hidden states transposing is done _before_ being fed through the next layer. This ensures that the returned hidden state tensors lie upstream in the graph from the model outputs (allowing their gradients to be computed).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8747/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8747/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8747", "html_url": "https://github.com/huggingface/transformers/pull/8747", "diff_url": "https://github.com/huggingface/transformers/pull/8747.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8747.patch", "merged_at": 1606338365000 }
https://api.github.com/repos/huggingface/transformers/issues/8746
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8746/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8746/comments
https://api.github.com/repos/huggingface/transformers/issues/8746/events
https://github.com/huggingface/transformers/pull/8746
749,203,633
MDExOlB1bGxSZXF1ZXN0NTI2MDU5MDc4
8,746
Fix slow tests v2
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
MEMBER
null
Fix a few tests that were broken in recent PRs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8746/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8746/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8746", "html_url": "https://github.com/huggingface/transformers/pull/8746", "diff_url": "https://github.com/huggingface/transformers/pull/8746.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8746.patch", "merged_at": 1606228512000 }
https://api.github.com/repos/huggingface/transformers/issues/8745
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8745/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8745/comments
https://api.github.com/repos/huggingface/transformers/issues/8745/events
https://github.com/huggingface/transformers/pull/8745
749,201,019
MDExOlB1bGxSZXF1ZXN0NTI2MDU2OTUz
8,745
added instructions for syncing upstream master with forked master via PR
{ "login": "bdalal", "id": 3478378, "node_id": "MDQ6VXNlcjM0NzgzNzg=", "avatar_url": "https://avatars.githubusercontent.com/u/3478378?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bdalal", "html_url": "https://github.com/bdalal", "followers_url": "https://api.github.com/users/bdalal/followers", "following_url": "https://api.github.com/users/bdalal/following{/other_user}", "gists_url": "https://api.github.com/users/bdalal/gists{/gist_id}", "starred_url": "https://api.github.com/users/bdalal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bdalal/subscriptions", "organizations_url": "https://api.github.com/users/bdalal/orgs", "repos_url": "https://api.github.com/users/bdalal/repos", "events_url": "https://api.github.com/users/bdalal/events{/privacy}", "received_events_url": "https://api.github.com/users/bdalal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@LysandreJik, @sgugger \r\n\r\nWe are trying to give instructions to avoid:\r\n\r\n1. this: \r\n![snapshot_3](https://user-images.githubusercontent.com/10676103/100027351-77edf480-2da1-11eb-8d0a-4590569042c0.png)\r\nThis is just one of the many examples - a snapshot is from the bottom of https://github.com/huggingface/transformers/pull/8400\r\n\r\n You can browse recent PRs for many more of these.\r\n\r\n These are not \"legit\" references - but automatic replays of PRs.\r\n\r\n2. unnecessary notifications for the developers mentioned in PR commit messages when these are replied in user forks.\r\n\r\n`CONTRIBUTING.md` is not the most intuitive place for this, but at the moment there is no other place I could think of. At the very least if a new fork user starts doing this, we can refer them to this section.\r\n\r\nOf course, the perfect solution would be for github to give repo admins an option to not allow ping-backs from repo forks. But I don't think it's available now.\r\n" ]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #8742 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [commit](https://github.com/lucidworks/transformers/commit/46b17d206529206848116fd6219643446bac938c#commitcomment-44356945)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 --> @stas00
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8745/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8745/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8745", "html_url": "https://github.com/huggingface/transformers/pull/8745", "diff_url": "https://github.com/huggingface/transformers/pull/8745.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8745.patch", "merged_at": 1606230707000 }
https://api.github.com/repos/huggingface/transformers/issues/8744
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8744/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8744/comments
https://api.github.com/repos/huggingface/transformers/issues/8744/events
https://github.com/huggingface/transformers/pull/8744
749,196,714
MDExOlB1bGxSZXF1ZXN0NTI2MDUzMzQ4
8,744
Added instructions for syncing forked masters to avoid references
{ "login": "bdalal", "id": 3478378, "node_id": "MDQ6VXNlcjM0NzgzNzg=", "avatar_url": "https://avatars.githubusercontent.com/u/3478378?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bdalal", "html_url": "https://github.com/bdalal", "followers_url": "https://api.github.com/users/bdalal/followers", "following_url": "https://api.github.com/users/bdalal/following{/other_user}", "gists_url": "https://api.github.com/users/bdalal/gists{/gist_id}", "starred_url": "https://api.github.com/users/bdalal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bdalal/subscriptions", "organizations_url": "https://api.github.com/users/bdalal/orgs", "repos_url": "https://api.github.com/users/bdalal/repos", "events_url": "https://api.github.com/users/bdalal/events{/privacy}", "received_events_url": "https://api.github.com/users/bdalal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@stas00 I performed the merge using the steps I've written and no references or pings are made, so it works. Let me know if some improvements can be made.\r\nThanks.", "Sorry, this one should've been for merging into my fork. Created the PR in the wrong place.\r\nI'll do a rebase for merge here." ]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #8742 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [commit](https://github.com/lucidworks/transformers/commit/46b17d206529206848116fd6219643446bac938c#commitcomment-44356945)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 --> @stas00
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8744/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8744/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8744", "html_url": "https://github.com/huggingface/transformers/pull/8744", "diff_url": "https://github.com/huggingface/transformers/pull/8744.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8744.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8743
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8743/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8743/comments
https://api.github.com/repos/huggingface/transformers/issues/8743/events
https://github.com/huggingface/transformers/pull/8743
749,194,847
MDExOlB1bGxSZXF1ZXN0NTI2MDUxNzgw
8,743
MT5 should have an autotokenizer
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
MEMBER
null
MT5 should have an auto-tokenizer. Its absence currently causes a lot of slow tests to fail.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8743/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8743/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8743", "html_url": "https://github.com/huggingface/transformers/pull/8743", "diff_url": "https://github.com/huggingface/transformers/pull/8743.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8743.patch", "merged_at": 1606229426000 }
https://api.github.com/repos/huggingface/transformers/issues/8742
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8742/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8742/comments
https://api.github.com/repos/huggingface/transformers/issues/8742/events
https://github.com/huggingface/transformers/issues/8742
749,189,261
MDU6SXNzdWU3NDkxODkyNjE=
8,742
Add instructions for syncing forked masters to avoid references
{ "login": "bdalal", "id": 3478378, "node_id": "MDQ6VXNlcjM0NzgzNzg=", "avatar_url": "https://avatars.githubusercontent.com/u/3478378?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bdalal", "html_url": "https://github.com/bdalal", "followers_url": "https://api.github.com/users/bdalal/followers", "following_url": "https://api.github.com/users/bdalal/following{/other_user}", "gists_url": "https://api.github.com/users/bdalal/gists{/gist_id}", "starred_url": "https://api.github.com/users/bdalal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bdalal/subscriptions", "organizations_url": "https://api.github.com/users/bdalal/orgs", "repos_url": "https://api.github.com/users/bdalal/repos", "events_url": "https://api.github.com/users/bdalal/events{/privacy}", "received_events_url": "https://api.github.com/users/bdalal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
CONTRIBUTOR
null
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> Ticket created based on discussion https://github.com/lucidworks/transformers/commit/46b17d206529206848116fd6219643446bac938c#commitcomment-44356945 The problem is when someone on a forked repository decides to sync up their masters with upstream (HF master) using a branch and a PR, all the PR and issue references on the upstream will make their way into the forked PR's commit history, if that user creates a merge commit. Since GitHub autolinks issues and PRs on public forks, this will end up pinging the devs responsible for the referenced PRs creating unnecessary noise. The solution is to use a squashed merge. One way to educate users with forked repos about this potential issue is to add instructions on how to do so to the `CONTRIBUTING.md` file. ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md --> A PR with instructions to avoid this situation will be up shortly. cc @stas00
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8742/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8742/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8741
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8741/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8741/comments
https://api.github.com/repos/huggingface/transformers/issues/8741/events
https://github.com/huggingface/transformers/pull/8741
749,177,708
MDExOlB1bGxSZXF1ZXN0NTI2MDM3NDQx
8,741
Model parallel documentation
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
MEMBER
null
Fixes the parallelization docs
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8741/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8741/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8741", "html_url": "https://github.com/huggingface/transformers/pull/8741", "diff_url": "https://github.com/huggingface/transformers/pull/8741.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8741.patch", "merged_at": 1606180489000 }
https://api.github.com/repos/huggingface/transformers/issues/8740
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8740/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8740/comments
https://api.github.com/repos/huggingface/transformers/issues/8740/events
https://github.com/huggingface/transformers/issues/8740
749,111,184
MDU6SXNzdWU3NDkxMTExODQ=
8,740
Blank line indicates the end of a document for NER training ?
{ "login": "polodealvarado", "id": 30154911, "node_id": "MDQ6VXNlcjMwMTU0OTEx", "avatar_url": "https://avatars.githubusercontent.com/u/30154911?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polodealvarado", "html_url": "https://github.com/polodealvarado", "followers_url": "https://api.github.com/users/polodealvarado/followers", "following_url": "https://api.github.com/users/polodealvarado/following{/other_user}", "gists_url": "https://api.github.com/users/polodealvarado/gists{/gist_id}", "starred_url": "https://api.github.com/users/polodealvarado/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polodealvarado/subscriptions", "organizations_url": "https://api.github.com/users/polodealvarado/orgs", "repos_url": "https://api.github.com/users/polodealvarado/repos", "events_url": "https://api.github.com/users/polodealvarado/events{/privacy}", "received_events_url": "https://api.github.com/users/polodealvarado/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
NONE
null
Hi everyone! In <https://huggingface.co/transformers/custom_datasets.html#token-classification-with-w-nut-emerging-entities>, it says that each line of the dataset file contains either (1) a word and tag separated by a tab, or (2) a blank line indicating the end of a document. Shouldn't the blank line represent the end of a sentence instead?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8740/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8740/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8739
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8739/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8739/comments
https://api.github.com/repos/huggingface/transformers/issues/8739/events
https://github.com/huggingface/transformers/issues/8739
749,090,434
MDU6SXNzdWU3NDkwOTA0MzQ=
8,739
AttributeError: 'BertTokenizerFast' object has no attribute 'max_len'
{ "login": "zcain117", "id": 14796584, "node_id": "MDQ6VXNlcjE0Nzk2NTg0", "avatar_url": "https://avatars.githubusercontent.com/u/14796584?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zcain117", "html_url": "https://github.com/zcain117", "followers_url": "https://api.github.com/users/zcain117/followers", "following_url": "https://api.github.com/users/zcain117/following{/other_user}", "gists_url": "https://api.github.com/users/zcain117/gists{/gist_id}", "starred_url": "https://api.github.com/users/zcain117/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zcain117/subscriptions", "organizations_url": "https://api.github.com/users/zcain117/orgs", "repos_url": "https://api.github.com/users/zcain117/repos", "events_url": "https://api.github.com/users/zcain117/events{/privacy}", "received_events_url": "https://api.github.com/users/zcain117/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It is actually due to https://github.com/huggingface/transformers/pull/8604, where we removed several deprecated arguments. The `run_language_modeling.py` script is deprecated in favor of `language-modeling/run_{clm, plm, mlm}.py`.\r\n\r\nIs it possible for you to switch to one of these newer scripts? If not, the fix is to change `max_len` to `model_max_length`. We welcome PRs to fix it, but we won't be maintaining that script ourselves as there exists better alternatives now (which run on TPU too :slightly_smiling_face:)", "Thanks for taking a look! I will try out the new script", "The new runner is working for us on TPUs. Thanks again for the tip!", "Hello, Everything was a few days. I am getting the same error \" data_args.block_size = min(data_args.block_size, tokenizer.max_len)\r\n**AttributeError: 'RobertaTokenizerFast' object has no attribute 'max_len\"**. \r\n\r\nI can't switch to a new script as you mentioned. Kindly help me with this error. I do not know how to fix it. Here is my chunk of codes.\r\n```\r\n\r\n`!python \"/content/transformers/examples/contrib/legacy/run_language_modeling.py\" \\\r\n --output_dir \"/content/drive/MyDrive/Vancouver\" \\\r\n --model_name_or_path roberta-base \\\r\n --do_train \\\r\n --per_gpu_train_batch_size 8 \\\r\n --seed 42 \\\r\n --train_data_file \"/content/input_textOC.txt\" \\\r\n --block_size 256 \\\r\n --line_by_line \\\r\n --learning_rate 6e-4 \\\r\n --num_train_epochs 3 \\\r\n --save_total_limit 2 \\\r\n --save_steps 200 \\\r\n --weight_decay 0.01 \\\r\n --mlm`\r\n```", "> It is actually due to #8604, where we removed several deprecated arguments. The `run_language_modeling.py` script is deprecated in favor of `language-modeling/run_{clm, plm, mlm}.py`.\r\n> \r\n> Is it possible for you to switch to one of these newer scripts? If not, the fix is to change `max_len` to `model_max_length`. We welcome PRs to fix it, but we won't be maintaining that script ourselves as there exists better alternatives now (which run on TPU too )\r\n\r\nThe fix is mentioned above:\r\n\r\n> fix is to change `max_len` to `model_max_length`", "If you cannot switch scripts, I recommend pinning the library. You're having this error because you're using a legacy script with a `master` version that is not compatible.\r\n\r\nYou could pin it to v3.5.1.", "Thanks, I appreciate your response. However, I am still a basic learner. Can you please explain it a bit? how to pin it to v3.5.1.. Is it mean to use the old version of huggingface.?", "If you wish to stick to that deprecated example, yes! You can do so by checking out the tag v3.5.1:\r\n\r\n```\r\ngit checkout v3.5.1\r\n```\r\n\r\nIf you have installed transformers from pypi (and not from source), you should also update your transformers version:\r\n\r\n```\r\npip install -U transformers==3.5.1\r\n```\r\n\r\nPlease note that the script won't be in \"/content/transformers/examples/contrib/legacy/run_language_modeling.py\" anymore, but in \"/content/transformers/examples/language-modeling/run_language_modeling.py\"", "> It is actually due to #8604, where we removed several deprecated arguments. The `run_language_modeling.py` script is deprecated in favor of `language-modeling/run_{clm, plm, mlm}.py`.\r\n\r\nHello, I am facing the same issue with `run_language_modeling.py` (and more). Where can I find this new file `language-modeling/run_{clm, plm, mlm}.py`? Thanks!", "You can find them here https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling", "Thank you! 
", "> It is actually due to #8604, where we removed several deprecated arguments. The `run_language_modeling.py` script is deprecated in favor of `language-modeling/run_{clm, plm, mlm}.py`.\r\n> \r\n> Is it possible for you to switch to one of these newer scripts? If not, the fix is to change `max_len` to `model_max_length`. We welcome PRs to fix it, but we won't be maintaining that script ourselves as there exists better alternatives now (which run on TPU too 🙂)\r\n\r\nChange `max_len` to `model_max_length` where?" ]
1,606
1,663
1,606
CONTRIBUTOR
null
## Environment info - `transformers` version: 4.0.0-rc-1 - Platform: Linux-4.9.0-14-amd64-x86_64-with-debian-9.13 - Python version: 3.6.10 - PyTorch version (GPU?): 1.8.0a0+4ed7f36 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: 8-core TPU training - **Using TPU** ### Who can help albert, bert, GPT2, XLM: @LysandreJik ## Information Model I am using (Bert, XLNet ...): bert and roberta The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: mlm * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 2 examples of failing commands: ``` E 2020-11-18T17:38:08.657584093Z python examples/xla_spawn.py \ E 2020-11-18T17:38:08.657588780Z --num_cores 8 \ E 2020-11-18T17:38:08.657593609Z examples/contrib/legacy/run_language_modeling.py \ E 2020-11-18T17:38:08.657598646Z --logging_dir ./tensorboard-metrics \ E 2020-11-18T17:38:08.657604088Z --cache_dir ./cache_dir \ E 2020-11-18T17:38:08.657609492Z --train_data_file /datasets/wikitext-103-raw/wiki.train.raw \ E 2020-11-18T17:38:08.657614614Z --do_train \ E 2020-11-18T17:38:08.657619772Z --do_eval \ E 2020-11-18T17:38:08.657624531Z --eval_data_file /datasets/wikitext-103-raw/wiki.valid.raw \ E 2020-11-18T17:38:08.657629731Z --overwrite_output_dir \ E 2020-11-18T17:38:08.657641827Z --output_dir language-modeling \ E 2020-11-18T17:38:08.657647203Z --logging_steps 100 \ E 2020-11-18T17:38:08.657651823Z --save_steps 3000 \ E 2020-11-18T17:38:08.657656739Z --overwrite_cache \ E 2020-11-18T17:38:08.657661282Z --tpu_metrics_debug \ E 2020-11-18T17:38:08.657667598Z --mlm --model_type=bert \ E 2020-11-18T17:38:08.657672545Z --model_name_or_path bert-base-cased \ E 2020-11-18T17:38:08.657677441Z --num_train_epochs 3 \ E 2020-11-18T17:38:08.657682320Z --per_device_train_batch_size 16 \ E 2020-11-18T17:38:08.657687053Z --per_device_eval_batch_size 16 ``` ``` 2020-11-18T17:51:49.357234955Z Traceback (most recent call last): E 2020-11-18T17:51:49.357239554Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn E 2020-11-18T17:51:49.357245350Z _start_fn(index, pf_cfg, fn, args) E 2020-11-18T17:51:49.357249851Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn E 2020-11-18T17:51:49.357254654Z fn(gindex, *args) E 2020-11-18T17:51:49.357272443Z File "/transformers/examples/contrib/legacy/run_language_modeling.py", line 359, in _mp_fn E 2020-11-18T17:51:49.357277658Z main() E 2020-11-18T17:51:49.357281928Z File "/transformers/examples/contrib/legacy/run_language_modeling.py", line 279, in main E 2020-11-18T17:51:49.357287863Z data_args.block_size = tokenizer.max_len E 2020-11-18T17:51:49.357292355Z AttributeError: 'BertTokenizerFast' object has no attribute 'max_len' E ``` ``` E 2020-11-18T06:47:53.910306819Z python examples/xla_spawn.py \ E 2020-11-18T06:47:53.910310176Z --num_cores 8 \ E 2020-11-18T06:47:53.910314263Z examples/contrib/legacy/run_language_modeling.py \ E 2020-11-18T06:47:53.910319173Z --logging_dir ./tensorboard-metrics \ E 2020-11-18T06:47:53.910322683Z --cache_dir ./cache_dir \ E 2020-11-18T06:47:53.910325895Z --train_data_file /datasets/wikitext-103-raw/wiki.train.raw \ E 2020-11-18T06:47:53.910329170Z --do_train \ 
E 2020-11-18T06:47:53.910332491Z --do_eval \ E 2020-11-18T06:47:53.910335626Z --eval_data_file /datasets/wikitext-103-raw/wiki.valid.raw \ E 2020-11-18T06:47:53.910340314Z --overwrite_output_dir \ E 2020-11-18T06:47:53.910343710Z --output_dir language-modeling \ E 2020-11-18T06:47:53.910347004Z --logging_steps 100 \ E 2020-11-18T06:47:53.910350089Z --save_steps 3000 \ E 2020-11-18T06:47:53.910353259Z --overwrite_cache \ E 2020-11-18T06:47:53.910356297Z --tpu_metrics_debug \ E 2020-11-18T06:47:53.910359351Z --mlm --model_type=roberta \ E 2020-11-18T06:47:53.910362484Z --tokenizer=roberta-base \ E 2020-11-18T06:47:53.910365650Z --num_train_epochs 5 \ E 2020-11-18T06:47:53.910368797Z --per_device_train_batch_size 8 \ E 2020-11-18T06:47:53.910371843Z --per_device_eval_batch_size 8 ``` ``` 2020-11-18T06:48:27.357394365Z Traceback (most recent call last): E 2020-11-18T06:48:27.357399685Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn E 2020-11-18T06:48:27.357405353Z _start_fn(index, pf_cfg, fn, args) E 2020-11-18T06:48:27.357426600Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn E 2020-11-18T06:48:27.357448514Z fn(gindex, *args) E 2020-11-18T06:48:27.357454250Z File "/transformers/examples/contrib/legacy/run_language_modeling.py", line 359, in _mp_fn E 2020-11-18T06:48:27.357460262Z main() E 2020-11-18T06:48:27.357465843Z File "/transformers/examples/contrib/legacy/run_language_modeling.py", line 279, in main E 2020-11-18T06:48:27.357471227Z data_args.block_size = tokenizer.max_len E 2020-11-18T06:48:27.357477576Z AttributeError: 'RobertaTokenizerFast' object has no attribute 'max_len' E ``` The timing of this issue lines up with https://github.com/huggingface/transformers/pull/8586 Tests started failing on the evening of Nov 17, a few hours after that PR was submitted
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8739/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8739/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8738
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8738/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8738/comments
https://api.github.com/repos/huggingface/transformers/issues/8738/events
https://github.com/huggingface/transformers/pull/8738
749,075,961
MDExOlB1bGxSZXF1ZXN0NTI1OTU0NjIy
8,738
Fix max length in run_plm script
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
COLLABORATOR
null
# What does this PR do? The XLNet tokenizer has a ridiculously high maximum sequence length, so the `run_plm` script was failing without setting the `max_seq_length` argument. This PR fixes that by setting a default of 512 for it. Fixes #8674
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8738/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8738/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8738", "html_url": "https://github.com/huggingface/transformers/pull/8738", "diff_url": "https://github.com/huggingface/transformers/pull/8738.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8738.patch", "merged_at": 1606165352000 }
https://api.github.com/repos/huggingface/transformers/issues/8737
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8737/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8737/comments
https://api.github.com/repos/huggingface/transformers/issues/8737/events
https://github.com/huggingface/transformers/pull/8737
749,070,627
MDExOlB1bGxSZXF1ZXN0NTI1OTUwMzY1
8,737
consistent ignore keys + make private
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "oh, boy, there is also `authorized_unexpected_keys`\r\n\r\nhttps://github.com/huggingface/transformers/blob/49759c0cda29ab614b81e0869972c99f2edba7aa/src/transformers/modeling_tf_utils.py#L346-L354\r\n", "Indeed, very nice catch! How should we rename that one? ", "Current `authorized_missing_keys` and `authorized_unexpected_keys` do the same thing overall, just 2 different categories.\r\n\r\nperhaps?\r\n\r\n```\r\n - authorized_missing_keys => _keys_to_ignore_on_load_missing\r\n - authorized_unexpected_keys => _keys_to_ignore_on_load_unexpected\r\n - keys_to_never_save => _keys_to_ignore_on_save\r\n```", "For me `authorized_unexpected_keys` should be the `_keys_to_ignore_on_load`: they are in the state dict but we ignore them.\r\nThe `authorized_missing_keys` should have another name such as `_keys_missing_to_ignore_on_load` or just `_keys_missing_to_ignore`.\r\n\r\nRe- documentation. We usually document private stuff in comments in the code, so I think we should remove the public documentation and change it in comments.", "We were writing at the same time @stas00 , your names are better than mine. Go ahead!", "You can safely ignore the failed connections. It's been happening since the change to git-based repos. We're looking into fixing it with @julien-c, it happens very often.", "@LysandreJik, I trust you will document this breaking change - I just don't know where I'd do that...", "Yes, I'm currently documenting all breaking changes in the release notes." ]
1,606
1,606
1,606
CONTRIBUTOR
null
This PR addresses https://github.com/huggingface/transformers/issues/7258 (the proposal has evolved a bit since the initial PR, this comment reflects the current state) * [x] renames optional model attributes: ``` - authorized_missing_keys => _keys_to_ignore_on_load_missing - authorized_unexpected_keys => _keys_to_ignore_on_load_unexpected - keys_to_never_save => _keys_to_ignore_on_save ``` to (1) make them consistent (2) make them private * [x] removes these from the public API docstring (documents them privately as comments in place) This is a breaking change. Fixes https://github.com/huggingface/transformers/issues/7258 @LysandreJik, @sgugger p.s. if we want to postpone it for v5, this PR was a quick one-liner: ``` find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's|authorized_missing_keys|_keys_to_ignore_on_load_missing|g; s|authorized_unexpected_keys|_keys_to_ignore_on_load_unexpected|g; s|keys_to_never_save|_keys_to_ignore_on_save|g' {} \; ``` and then manually adjusting the docs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8737/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8737/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8737", "html_url": "https://github.com/huggingface/transformers/pull/8737", "diff_url": "https://github.com/huggingface/transformers/pull/8737.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8737.patch", "merged_at": 1606163593000 }
https://api.github.com/repos/huggingface/transformers/issues/8736
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8736/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8736/comments
https://api.github.com/repos/huggingface/transformers/issues/8736/events
https://github.com/huggingface/transformers/issues/8736
749,054,684
MDU6SXNzdWU3NDkwNTQ2ODQ=
8,736
[trainer] `model` argument is not the same depending on n_gpus
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Note that this all internal and a user only interacts/get used with that if they subclass `Trainer` and override the `prediction_step` method. I would keep it simple since it's touching a small part of our users that should be experienced enough to be able to read the docstrings, and just detail in the docstrings with a proper warning what this `model` argument represents.\r\n\r\nI can also live with 4 if it's the solution selected.", "As I didn't participate in the design of Trainer, and I don't know whether it's meant to be easily sub-classable or not - I currently can only think of some ideas and I trust you guys to choose the most suitable solution. I hope it makes sense.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Well, we have implemented number 4, so closing this." ]
1,606
1,611
1,611
CONTRIBUTOR
null
Extracting the discussion from https://github.com/huggingface/transformers/pull/8716 Summary of the issue: `prediction_step()` has a `model` argument which is a normal model with n_gpu < 2, and a wrapped DataParallel model with n_gpu > 1. So the API suffers from ambiguity here. The user has to really use `self.model` to be able to call methods like `model.config()` or `model.generate()`, which can't be called on the wrapped model. But it's very likely they will use `model` instead since it'll act like `self.model` unless under multi_gpu. And why do we even have that `model` argument then? Possible solutions discussed: 1. monkeypatch `torch.nn.DataParallel` to expand its API to support all the methods of the original model transparently by installing a catch-all `__getattr__` and remapping all failed method lookups to delegate to `self.module`. 2. not to call the function argument `model` anymore, since it isn't under multi gpu, but is something else. 3. remove the `model` argument completely + document to always use `self.model` - currently in `seq2seq_trainer.py`, once we switch to `self.model`, `prediction_step()` no longer needs `model` as an argument (but is it always the case?) 4. pass `self.model` as the `model` arg, and make the wrapped model available via `self.wrapped_model` if the user needs it. Summary of discussion around proposed solutions: 1. too magical 2. proposed calling it `wrapped_model`, but it's just as confusing since most of the time it's not. 3. need to check whether the wrapped model is ever needed inside user functions. 4. was not discussed yet @sgugger, @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8736/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8736/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8735
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8735/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8735/comments
https://api.github.com/repos/huggingface/transformers/issues/8735/events
https://github.com/huggingface/transformers/issues/8735
749,041,393
MDU6SXNzdWU3NDkwNDEzOTM=
8,735
Model can't be downloaded
{ "login": "moniquebm", "id": 60358442, "node_id": "MDQ6VXNlcjYwMzU4NDQy", "avatar_url": "https://avatars.githubusercontent.com/u/60358442?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moniquebm", "html_url": "https://github.com/moniquebm", "followers_url": "https://api.github.com/users/moniquebm/followers", "following_url": "https://api.github.com/users/moniquebm/following{/other_user}", "gists_url": "https://api.github.com/users/moniquebm/gists{/gist_id}", "starred_url": "https://api.github.com/users/moniquebm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/moniquebm/subscriptions", "organizations_url": "https://api.github.com/users/moniquebm/orgs", "repos_url": "https://api.github.com/users/moniquebm/repos", "events_url": "https://api.github.com/users/moniquebm/events{/privacy}", "received_events_url": "https://api.github.com/users/moniquebm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @moniquebm, \r\n\r\nI cannot reproduce this error on master.\r\n\r\nit looks like you are working with an old version of transformers. Could you try updating `transformers` to `3.5.0` to see if the error persists?\r\n\r\nAnother reason why this doesn't work might be that you have a directory locally that is also called `monilouise/ner_pt_br`, so that instead of downloading from the model hub `from.pretrained()` tries to load a local model. You can check whether this might be the problem by running the command from a different directory or checking whether you have a local dir called `monilouise`.", "@patrickvonplaten is 100% right on his first guess that it's due to using transformers < `v3.5.x`\r\n\r\nWe backport new git-based models back to the previous S3 bucket (for models to be usable on previous versions of the library) automatically, however there was a hiccup yesterday that crashed the process (it's currently sync'ing again).", "Hi @patrickvonplaten and @julien-c , \r\n\r\nI've just updated transformers version and it worked. \r\n\r\nThanks!" ]
1,606
1,606
1,606
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) ### Who can help @julien-c ## To reproduce I've recently shared a fine-tuned model - monilouise/ner_pt_br - and although its model card is already available at https://github.com/huggingface/transformers/tree/master/model_cards, the following error occurs when I try to download it: OSError: Can't load config for 'monilouise/ner_pt_br'. Make sure that: - 'monilouise/ner_pt_br' is a correct model identifier listed on 'https://huggingface.co/models' - or 'monilouise/ner_pt_br' is the correct path to a directory containing a config.json file BUT the model has a config.json available at https://huggingface.co/monilouise/ner_pt_br/tree/main. I used the following code to download the model: ```python from transformers import BertForTokenClassification model = BertForTokenClassification.from_pretrained('monilouise/ner_pt_br') ``` Is there anything else missing from the recommended sharing procedures? Thanks in advance.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8735/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8734
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8734/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8734/comments
https://api.github.com/repos/huggingface/transformers/issues/8734/events
https://github.com/huggingface/transformers/pull/8734
749,036,008
MDExOlB1bGxSZXF1ZXN0NTI1OTIyMzM0
8,734
Change default cache path
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
COLLABORATOR
null
# What does this PR do? In Datasets, the default cache path ends up in `~/.cache/huggingface/datasets`, controlled by the environment variable `HF_HOME`. This PR uses the same env variable for the default cache path. To avoid breaking changes: - it still honors old environment variable names, if set - if none is set, it moves the cache folder from the old location to the new one with a warning.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8734/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8734/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8734", "html_url": "https://github.com/huggingface/transformers/pull/8734", "diff_url": "https://github.com/huggingface/transformers/pull/8734.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8734.patch", "merged_at": 1606157805000 }
https://api.github.com/repos/huggingface/transformers/issues/8733
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8733/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8733/comments
https://api.github.com/repos/huggingface/transformers/issues/8733/events
https://github.com/huggingface/transformers/issues/8733
749,024,245
MDU6SXNzdWU3NDkwMjQyNDU=
8,733
[proposal] do not load all 3rd party packaged unless needed
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It's hard to debate without seeing actual code on this. Am I 100% happy with the current implementation? Not really. But it's simple enough that the code stays easy to read. I'm afraid something more dynamic (like importing tf only when instantiating a TFModel for instance) would mean harder code. So I reserve my judgement on seeing an actual PoC to evaluate the benefits of a different approach vs the code complexity it introduces.", "Ok, gave it a go and worked on a PoC here: https://github.com/sgugger/lazy_init\r\nIt lazily loads objects when they are actually imported, so won't load TF/PyTorch until you try to import your first model (which should speed up the `import transformers` a lot and avoid unnecessary verbosity). Let me know if you have any comments on it @stas00 !", "Looks awesome, @sgugger! Thank you for doing it!\r\n\r\nSo how do you feel about it now that you have coded it? Will this make things unnecessarily complex and possibly introduce unexpected issues?\r\n\r\nPerhaps start with just tf/pt, see how it feels - and then expand to other modules if the first experiment flows well?\r\n", "Since it's limited to the inits, I'm fine with it. The idea is to collect feedback this week and start implementing it in Transformers next week.", "next week ping ;)", "Since we need to perform some special checks for the datasets library, we first need datasets to implement some version of this. Then we can roll it out to transformers. Will ping the datasets team :-)" ]
1,606
1,610
1,610
CONTRIBUTOR
null
This is a proposal to not load everything with `import transformers`, but instead load things as they are needed. ## Background of the need * For example what is realistic usage pattern for `tensorflow` in transformers - I know we have `USE_TF=False,` but perhaps it can be made by default `False` and only load it if it's actually needed based on usage patterns and not with `import transformers`? Also there was a particular segfault with [tf/cuda-11 vs pt/cuda-10](https://github.com/pytorch/pytorch/issues/46807) - the 2 couldn't be loaded together - the issue didn't get resolved. * Same goes for integration packages (`wandb`, `comet_ml`) and probably a bunch of other packages some of which are quite big. The problem is that each of these packages tends to have various issues, e.g. [fetching old libraries](https://github.com/wandb/client/issues/1498), [impacting init](https://github.com/huggingface/transformers/pull/8410), messing with `sys.path` and [overriding global warning settings](https://github.com/mlflow/mlflow/issues/3704) (`mlflow` was imported by PL - a seq2seq issue). Last week I was hunting all these down - and most have been fixed by now I think. The problem with integrations specifically is that currently we don't assert if say `comet_ml` is misconfigured, we issue a warning which gets lost in the ocean of warnings and nobody sees it. If, for example, the user were to say "use comet_ml" and it were misconfigured their program would have died loud and clear. Much cleaner and faster for the user. * Relying on "well, it's installed, let's load it" is not always working, since often modules get installed as dependencies of dependencies and aren't necessarily the right versions or configured or else, especially if `transformers` did not specify these modules as explicit dependencies and doesn't know the requirements (versions) were enforced. * And a lot of these packages emit a lot of noise, especially if one uses more recent python and packages - deprecation warnings are many. `tf` as always takes the first place, but other packages are there too. * Loading time is important too, especially when one doesn't run a 1-10h program, but is debugging a program that fails to start. e.g. loading `tf` may take several seconds, depending on the hardware. ## Implementation Clearly `transformers` wants to be easy to use. So perhaps by default `import transformers` should remain load-it-all-I-want-things-simple. And we need `import transformers_lean_and_mean_and_clean` which wouldn't load anything by default and ask the user to specify what components she really wants. I haven't yet thought specifically of how this could be implemented but wanted to see whether others feel that a more efficient way is needed. on slack @thomwolf proposed looking at how [Optuna](https://github.com/optuna/optuna) implements lazy loading of packages. @LysandreJik, @sgugger, @patrickvonplaten, @thomwolf
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8733/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8733/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8732
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8732/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8732/comments
https://api.github.com/repos/huggingface/transformers/issues/8732/events
https://github.com/huggingface/transformers/issues/8732
749,000,096
MDU6SXNzdWU3NDkwMDAwOTY=
8,732
[Benchmark] V100/A100 benchmarks, dashboard concept
{ "login": "tlkh", "id": 5409617, "node_id": "MDQ6VXNlcjU0MDk2MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/5409617?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tlkh", "html_url": "https://github.com/tlkh", "followers_url": "https://api.github.com/users/tlkh/followers", "following_url": "https://api.github.com/users/tlkh/following{/other_user}", "gists_url": "https://api.github.com/users/tlkh/gists{/gist_id}", "starred_url": "https://api.github.com/users/tlkh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tlkh/subscriptions", "organizations_url": "https://api.github.com/users/tlkh/orgs", "repos_url": "https://api.github.com/users/tlkh/repos", "events_url": "https://api.github.com/users/tlkh/events{/privacy}", "received_events_url": "https://api.github.com/users/tlkh/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "That's incredible @tlkh!", "By the way, did you try to use the [benchmarking tool](https://huggingface.co/transformers/benchmarks.html) that we have in the library to run this evaluation?", "> By the way, did you try to use the benchmarking tool that we have in the library to run this evaluation?\r\n\r\nI did take a look at the tools, but I rolled my own in the end:\r\n\r\n- I also wanted to write my own training script and benchmark using that, also doing a test drive of the end-to-end PyTorch Lightning + HuggingFace Datasets & Transformers library\r\n- I wanted to do profiling and capture more metrics than just the time and memory (mostly out of curiosity)\r\n\r\nIn hindsight, I think the time and memory are the most important metrics and those are captured by the benchmarking tool in the library. \r\n\r\nI do think putting the benchmarks into the dashboard format would be valuable to let people visualize and see which configurations would work best for them. Can treat the one I made as a proof of concept, and we can see where to go from here if interested. ", "Very cool. I was planning on running benchmarks on exactly these cards as well but now I don't need to anymore! Is it possible to update the app in such a way that you can also see a side-by-side comparison of the V100 and the A100? I imagine that some people would like to see the benefit of the A100 easily without having to change the view for each graph. A side-by-side bar chart would be cool!", "> Very cool. I was planning on running benchmarks on exactly these cards as well but now I don't need to anymore! Is it possible to update the app in such a way that you can also see a side-by-side comparison of the V100 and the A100? I imagine that some people would like to see the benefit of the A100 easily without having to change the view for each graph. A side-by-side bar chart would be cool!\r\n\r\nThat's possible. In any case, the raw results are all in a CSV file [here](https://github.com/tlkh/transformers-benchmarking/blob/main/results.csv) so you can easily do comparisons for any scenarios you're interested in. ", "Very cool. For a minute I thought Streamlit had shipped GPU instances in their cloud and the benchmarks were computed directly there 😉 ", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
CONTRIBUTOR
null
# 🖥 Benchmarking `transformers` I benchmarked a couple of models (training) on V100 and A100 and wrapped everything into a Streamlit dashboard [link here](https://share.streamlit.io/tlkh/transformers-benchmarking/main/app.py). This dashboard shows the measured performance of GPUs when training various configurations of Transformer networks, showing throughput (seq/s) and GPU memory (VRAM) usage. The idea is to give users an easy reference for choosing model configuration (model size/batch size/sequence length) and GPU model. This is kind of a weekend project done out of curiosity. If there is potential, perhaps a more serious effort can be undertaken here. ## Benchmark Which part of `transformers` did you benchmark? Model training: `distilroberta-base`, `roberta-base`, `roberta-large` via `AutoModelForSequenceClassification` ## Set-up What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use? GPUs: V100 16GB, A100 40GB Single GPU only. More information and code: https://github.com/tlkh/transformers-benchmarking ## Results https://share.streamlit.io/tlkh/transformers-benchmarking/main/app.py
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8732/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8732/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8731
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8731/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8731/comments
https://api.github.com/repos/huggingface/transformers/issues/8731/events
https://github.com/huggingface/transformers/pull/8731
748,969,773
MDExOlB1bGxSZXF1ZXN0NTI1ODY4Nzg3
8,731
[Pegasus] Refactor Tokenizer
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Checked all slow and fast tests on GPU.", "> You removed the links to google/sentencepiece but you kept the `Based on SentencePiece.`.\r\n> \r\n> It seems to me that if we reference SentencePiece then it's good to keep a link to the library, no? Based on SentencePiece means that it's based on the library IMO, maybe you wanted to say it's based on Unigram instead?\r\n> \r\n> Great changes, thanks for taking care of it!\r\n\r\nGood point! I also think it would be nicer to have a link to it...For now, the text is always:\r\n\r\n```\r\nConstruct a \"fast\" ALBERT tokenizer (backed by HuggingFace's `tokenizers` library). Based on SentencePiece.\r\n```\r\n\r\n=> So similar to what other FastTokenizers have written in their comments. But I agree that it could be confusing as \"SentencePiece\" doesn't really exist as an entity in `tokenizers` ... I think the \"fast\" sentencepiece tokenizers are either `BPE` or `Unigram` in tokenizers, no ? @thomwolf @n1t0 . Should I change the comments and link to their respective `tokenizers` model instead? So to \r\n\r\n```\r\nConstruct a \"fast\" ALBERT tokenizer (backed by HuggingFace's `tokenizers` library). Based on `Unigram <link to unigram in tokenizers>`__ . \r\n```", "I think your proposal makes a lot of sense!" ]
1,606
1,619
1,606
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #8689, #8594, #8536 This PR refactors the Pegasus Tokenizer. 1st: It decouples the tokenizer from the Reformer Tokenizer because they don't really have much in common. 2nd: Pegasus' masked tokens are added. As stated in the [paper](https://arxiv.org/abs/1912.08777), PEGASUS has two masked tokens which are required for pre-training. Those two tokens `<mask_1>` and `<mask_2>` are added according to https://github.com/google-research/pegasus/blob/master/pegasus/ops/pretrain_parsing_ops.cc#L66 . This should solve or at least enable a solution for all three issues above. 3rd: IMO, all special tokens - which are in the case of Pegasus the tokens 2 to 104 - should be added to the `additional_special_tokens`. This is done here as well. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8731/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8731/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8731", "html_url": "https://github.com/huggingface/transformers/pull/8731", "diff_url": "https://github.com/huggingface/transformers/pull/8731.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8731.patch", "merged_at": 1606665464000 }
https://api.github.com/repos/huggingface/transformers/issues/8730
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8730/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8730/comments
https://api.github.com/repos/huggingface/transformers/issues/8730/events
https://github.com/huggingface/transformers/pull/8730
748,950,278
MDExOlB1bGxSZXF1ZXN0NTI1ODUyOTIz
8,730
fix rag index names in eval_rag.py example
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
MEMBER
null
There was a mistake in the eval_rag.py parameter choices. As specified in the RAG configuration (see [documentation](https://huggingface.co/transformers/model_doc/rag.html?highlight=rag#transformers.RagConfig)), one can choose between 'legacy', 'exact' and 'compressed'. The legacy index is the original index used for RAG/DPR, while the other two use the `datasets` library indexing implementation. This issue was reported on the forum https://discuss.huggingface.co/t/rag-retriever-hf-vs-legacy-vs-exact-vs-compressed/2135/5
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8730/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8730/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8730", "html_url": "https://github.com/huggingface/transformers/pull/8730", "diff_url": "https://github.com/huggingface/transformers/pull/8730.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8730.patch", "merged_at": 1606233887000 }
https://api.github.com/repos/huggingface/transformers/issues/8729
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8729/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8729/comments
https://api.github.com/repos/huggingface/transformers/issues/8729/events
https://github.com/huggingface/transformers/pull/8729
748,929,017
MDExOlB1bGxSZXF1ZXN0NTI1ODM1Nzg2
8,729
Create README.md
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8729/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8729/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8729", "html_url": "https://github.com/huggingface/transformers/pull/8729", "diff_url": "https://github.com/huggingface/transformers/pull/8729.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8729.patch", "merged_at": 1606497556000 }
https://api.github.com/repos/huggingface/transformers/issues/8728
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8728/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8728/comments
https://api.github.com/repos/huggingface/transformers/issues/8728/events
https://github.com/huggingface/transformers/pull/8728
748,827,304
MDExOlB1bGxSZXF1ZXN0NTI1NzUxODYw
8,728
Flax Masked Language Modeling training example
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,607
1,607
MEMBER
null
Includes a training example running with the Flax/JAX framework. (cc @avital @marcvanzee) TODOs: - [x] Make the collator work with NumPy/JAX arrays - [x] Make sure the training actually works at larger scale - [x] Make it possible to train from scratch - [x] Support TPU (`bfloat16`) - [ ] Support GPU AMP (`float16`) - [x] Improve overall UX
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8728/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8728/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8728", "html_url": "https://github.com/huggingface/transformers/pull/8728", "diff_url": "https://github.com/huggingface/transformers/pull/8728.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8728.patch", "merged_at": 1607530437000 }
https://api.github.com/repos/huggingface/transformers/issues/8727
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8727/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8727/comments
https://api.github.com/repos/huggingface/transformers/issues/8727/events
https://github.com/huggingface/transformers/pull/8727
748,720,197
MDExOlB1bGxSZXF1ZXN0NTI1NjYyODM4
8,727
[model_cards]: control input examples of Geotrend models
{ "login": "amineabdaoui", "id": 17952908, "node_id": "MDQ6VXNlcjE3OTUyOTA4", "avatar_url": "https://avatars.githubusercontent.com/u/17952908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amineabdaoui", "html_url": "https://github.com/amineabdaoui", "followers_url": "https://api.github.com/users/amineabdaoui/followers", "following_url": "https://api.github.com/users/amineabdaoui/following{/other_user}", "gists_url": "https://api.github.com/users/amineabdaoui/gists{/gist_id}", "starred_url": "https://api.github.com/users/amineabdaoui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amineabdaoui/subscriptions", "organizations_url": "https://api.github.com/users/amineabdaoui/orgs", "repos_url": "https://api.github.com/users/amineabdaoui/repos", "events_url": "https://api.github.com/users/amineabdaoui/events{/privacy}", "received_events_url": "https://api.github.com/users/amineabdaoui/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,606
1,606
1,606
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8727/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8727/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8727", "html_url": "https://github.com/huggingface/transformers/pull/8727", "diff_url": "https://github.com/huggingface/transformers/pull/8727.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8727.patch", "merged_at": 1606147791000 }
https://api.github.com/repos/huggingface/transformers/issues/8726
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8726/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8726/comments
https://api.github.com/repos/huggingface/transformers/issues/8726/events
https://github.com/huggingface/transformers/issues/8726
748,681,378
MDU6SXNzdWU3NDg2ODEzNzg=
8,726
It seems converting a multilabel classification model to ONNX is not supported?
{ "login": "MrRace", "id": 10300313, "node_id": "MDQ6VXNlcjEwMzAwMzEz", "avatar_url": "https://avatars.githubusercontent.com/u/10300313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MrRace", "html_url": "https://github.com/MrRace", "followers_url": "https://api.github.com/users/MrRace/followers", "following_url": "https://api.github.com/users/MrRace/following{/other_user}", "gists_url": "https://api.github.com/users/MrRace/gists{/gist_id}", "starred_url": "https://api.github.com/users/MrRace/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MrRace/subscriptions", "organizations_url": "https://api.github.com/users/MrRace/orgs", "repos_url": "https://api.github.com/users/MrRace/repos", "events_url": "https://api.github.com/users/MrRace/events{/privacy}", "received_events_url": "https://api.github.com/users/MrRace/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[ { "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false } ]
[ "Hi @MrRace, \r\n\r\nthanks for reporting the issue. \r\n\r\nWhat model are you using as multilabels classification support?", "> Hi @MrRace,\r\n> \r\n> thanks for reporting the issue.\r\n> \r\n> What model are you using as multilabels classification support?\r\n\r\nThanks for your reply. I use BERT to do multilabels classification. And want to export the fine tuned model to onnx. ", "Looking at it!", "> Looking at it!\r\n\r\nLooking forward to your reply!", "Hi @mfuntowicz Is there any suggestion for it?\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,606
1,619
1,619
NONE
null
I have fine-tuned my multilabel classification model (PyTorch) and tried to use `from transformers.convert_graph_to_onnx import convert` to convert the PyTorch model to an ONNX model. Since the available `pipeline_name` values are pre-defined and none of them seems suited to multilabel classification, I tried `pipeline_name="sentiment-analysis"`; however, when I reload the ONNX model, its prediction results seem wrong. Could you tell me what I should do to get the right results? Thanks a lot! (A workaround sketch is included after this record, below.)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8726/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8726/timeline
completed
null
null
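The issue above boils down to the pre-defined `pipeline_name` choices in `convert_graph_to_onnx` assuming a single-label (softmax) head, which does not match a multilabel setup. A minimal sketch of one possible workaround is shown below: export the fine-tuned checkpoint directly with `torch.onnx.export` and apply a per-label sigmoid over the logits at inference time. This is not an official answer from the maintainers; the checkpoint path `my-finetuned-bert`, the output file names, and the 0.5 decision threshold are assumptions for illustration only.

```python
# Sketch: export a fine-tuned multilabel BERT classifier to ONNX and run it with
# onnxruntime, applying a sigmoid per label instead of the softmax used by pipelines.
import numpy as np
import torch
import onnxruntime as ort
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_dir = "my-finetuned-bert"  # hypothetical path to the fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_dir)
# torchscript=True makes the model return plain tuples, which traces cleanly.
model = AutoModelForSequenceClassification.from_pretrained(model_dir, torchscript=True)
model.eval()

# Dummy inputs are only used to trace the graph during export.
dummy = tokenizer("a dummy sentence", return_tensors="pt")
torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "multilabel-bert.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "logits": {0: "batch"},
    },
    opset_version=12,
)

# Inference: run the exported graph, then post-process the raw logits yourself.
session = ort.InferenceSession("multilabel-bert.onnx", providers=["CPUExecutionProvider"])
encoded = tokenizer("some text to classify", return_tensors="np")
feeds = {
    k: v.astype(np.int64)
    for k, v in encoded.items()
    if k in ("input_ids", "attention_mask")
}
logits = session.run(None, feeds)[0]
probs = 1.0 / (1.0 + np.exp(-logits))   # independent probability per label
predicted = (probs > 0.5).astype(int)   # multilabel decision with an assumed 0.5 threshold
```

The key design choice is to skip the pipeline entirely: the exported graph only produces logits, and the multilabel-specific behaviour (sigmoid plus thresholding rather than softmax plus argmax) lives in a few lines of NumPy around the ONNX Runtime call.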