url | repository_url | labels_url | comments_url | events_url | html_url | id (int64) | node_id | number (int64) | title | user (dict) | labels (list) | state | locked (bool) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64) | updated_at (int64) | closed_at (int64, nullable) | author_association | active_lock_reason | body (nullable) | reactions (dict) | timeline_url | state_reason | draft (bool) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/11538 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11538/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11538/comments | https://api.github.com/repos/huggingface/transformers/issues/11538/events | https://github.com/huggingface/transformers/pull/11538 | 873,718,783 | MDExOlB1bGxSZXF1ZXN0NjI4NDYxNTAw | 11,538 | [Wav2vec2] Fixed tokenization mistakes while adding single-char tokens to tokenizer | {
"login": "Muktan",
"id": 31338369,
"node_id": "MDQ6VXNlcjMxMzM4MzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/31338369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Muktan",
"html_url": "https://github.com/Muktan",
"followers_url": "https://api.github.com/users/Muktan/followers",
"following_url": "https://api.github.com/users/Muktan/following{/other_user}",
"gists_url": "https://api.github.com/users/Muktan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Muktan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muktan/subscriptions",
"organizations_url": "https://api.github.com/users/Muktan/orgs",
"repos_url": "https://api.github.com/users/Muktan/repos",
"events_url": "https://api.github.com/users/Muktan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Muktan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @Muktan,\r\n\r\nThanks a lot for working on this! Could you add a test that shows how your code solves the issue? :-)\r\n\r\nIt should be in `tests/test_tokenization_wav2vec2.py`",
"Welcome @patrickvonplaten, I will add a test that shows how the code solves the issue."
] | 1,619 | 1,631 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
Fixes #10622
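To make the reviewer's request concrete, here is a hypothetical sketch of the kind of regression test asked for in the comments above (not the test actually merged in this PR). The tokenizer class is real; the checkpoint, inputs, and assertions are illustrative assumptions.
```python
from transformers import Wav2Vec2CTCTokenizer

def test_added_single_char_token():
    # Checkpoint chosen for illustration only.
    tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/wav2vec2-base-960h")
    tokenizer.add_tokens(["x"])
    # The added single-character token should get its own id past the
    # original vocabulary...
    assert tokenizer.convert_tokens_to_ids("x") >= tokenizer.vocab_size
    # ...and should be kept intact (not split or dropped) when tokenizing.
    assert "x" in tokenizer.tokenize("C x A")
```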
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11538/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11538",
"html_url": "https://github.com/huggingface/transformers/pull/11538",
"diff_url": "https://github.com/huggingface/transformers/pull/11538.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11538.patch",
"merged_at": 1620055152000
} |
https://api.github.com/repos/huggingface/transformers/issues/11537 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11537/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11537/comments | https://api.github.com/repos/huggingface/transformers/issues/11537/events | https://github.com/huggingface/transformers/pull/11537 | 873,611,574 | MDExOlB1bGxSZXF1ZXN0NjI4MzkxMjk3 | 11,537 | [Flax] Add FlaxBart models | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @stancld,\r\n\r\nOne important thing we should implement before merging is the caching mechanismh similar to how it's done in GPT2: https://github.com/huggingface/transformers/blob/0b0a598452b02278075a75f84b5ca7bb457224ad/src/transformers/models/gpt2/modeling_flax_gpt2.py#L139. @patil-suraj, could you help here maybe? :-) ",
"Sure @patrickvonplaten !\r\n\r\n@stancld let me know if you need help here or I could take this if you are okay with it :)",
"@patrickvonplaten a couple of last questions about the API\r\n\r\n- Right now the `FlaxBartPretrainedModel.__call__` method also accepts the `encoder_outputs`, but think it would be cleaner to not do that as we already have `encode/decode` methods and let the user use `encode` or `decode` if they want to run just one part of the model. So we could make `decode` method available for every model (right now it's only available for the `ForConditionalGeneration` model) and it'll return the decoder outputs and for `*ForConditionalGeneration` models, it'll also return the `logits`.\r\n\r\n- The `decode` method returns `FlaxSeq2SeqLM` output, which includes both the encoder and decoder outputs, but when calling `decode` the user already has `encoder_outputs`, so maybe we should just return decoder outputs since the `decode` method only runs the decoder.\r\n\r\nWhat do you think? \r\n",
"IMO:\r\n\r\n1) Agree, happy to remove `encoder_outptus` as an input argument from `call` & make `decode` available for all models\r\n2) Yes, it doesn't make too much sense to include all the encoder relevant output here!"
] | 1,619 | 1,623 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
This PR adds a Flax implementation of BART and classes for various downstream tasks. Fixes #11478.
Most of the code is inspired by the Flax implementation of BERT and PyTorch implementation of BART.
From Suraj:
A couple of important points to note:
- The seq2seq API is slightly different from the PyTorch BART model in that the `__call__` method of `FlaxBART` does not accept `encoder_outputs`. In PT, if `encoder_outputs` is passed, the encoder is skipped and only the decoder is called. This is not supported in `FlaxBART` to prevent unintended issues during JIT compilation, since skipping a module or passing different inputs to a function causes re-compilation. Also, the idiomatic way of accessing intermediate modules in Flax models is to expose explicit methods. The API is therefore as follows
- the `__call__` method expects both the encoder and decoder inputs and does a forward pass through both modules
- Every model has an `encode` and a `decode` method, which should be called if one wants to run just the encoder or decoder. The `decode` method only returns the decoder outputs; for `*ForConditionalGeneration` modules it also returns the `logits`
```python
# runs encoder and decoder
model(input_ids, decoder_input_ids)
# just run the encoder
encoder_outputs = model.encode(input_ids)
# run the decoder
decoder_outputs = model.decode(decoder_input_ids, encoder_outputs)
```
- For now, `past_key_values` caching is only implemented in the decoder's self-attention layer, i.e., it is not implemented for the cross-attention layer.
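A fuller, runnable version of the snippet above, as a minimal sketch: the checkpoint name is an illustrative assumption (pass `from_pt=True` if no Flax weights are published for it), and the keyword names follow the API described in this PR.
```python
import jax.numpy as jnp
from transformers import BartTokenizer, FlaxBartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
# Checkpoint is illustrative; use from_pt=True if only PyTorch weights exist.
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-base")

inputs = tokenizer("UN Chief says there is no military solution", return_tensors="np")

# Run the (expensive) encoder exactly once...
encoder_outputs = model.encode(input_ids=inputs["input_ids"])

# ...then call the decoder separately, reusing the encoder states.
decoder_input_ids = jnp.array([[model.config.decoder_start_token_id]])
outputs = model.decode(decoder_input_ids, encoder_outputs)
next_token_logits = outputs.logits[:, -1]  # logits for the next position
```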
<hr>
**Reviewers:** @patrickvonplaten @sgugger @patil-suraj (and whoever else in the community) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11537/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11537",
"html_url": "https://github.com/huggingface/transformers/pull/11537",
"diff_url": "https://github.com/huggingface/transformers/pull/11537.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11537.patch",
"merged_at": 1623663968000
} |
https://api.github.com/repos/huggingface/transformers/issues/11536 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11536/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11536/comments | https://api.github.com/repos/huggingface/transformers/issues/11536/events | https://github.com/huggingface/transformers/issues/11536 | 873,587,853 | MDU6SXNzdWU4NzM1ODc4NTM= | 11,536 | Adafactor gives RuntimeError: tensors must be 2-D | {
"login": "TJKlein",
"id": 7634373,
"node_id": "MDQ6VXNlcjc2MzQzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7634373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TJKlein",
"html_url": "https://github.com/TJKlein",
"followers_url": "https://api.github.com/users/TJKlein/followers",
"following_url": "https://api.github.com/users/TJKlein/following{/other_user}",
"gists_url": "https://api.github.com/users/TJKlein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TJKlein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TJKlein/subscriptions",
"organizations_url": "https://api.github.com/users/TJKlein/orgs",
"repos_url": "https://api.github.com/users/TJKlein/repos",
"events_url": "https://api.github.com/users/TJKlein/events{/privacy}",
"received_events_url": "https://api.github.com/users/TJKlein/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Got the same problem. Have you solved it yet?",
"Finally I got to solve this problem. This error is caused by 3-D parameters. When the optimizer gets a `[dim1, dim2, dim3]` parameter, [transformers/optimization.py Line 544](https://github.com/huggingface/transformers/blob/master/src/transformers/optimization.py#L544) sets `state[\"exp_avg_sq_row\"]` as `[dim1, dim2]` and `state[\"exp_avg_sq_col\"]` as `[dim1, dim3]`. Then the two parameters in [line 508](https://github.com/huggingface/transformers/blob/master/src/transformers/optimization.py#L508) become `[dim1, dim2, 1]` and `[1, dim1, dim3]`, and the error occurs.\r\n\r\nTo solve this issue, I create my own adafactor optimizer and change line 506-508 to \r\n```\r\nr_factor = (exp_avg_sq_row / exp_avg_sq_row.mean(dim=-1, keepdim=True)).rsqrt_().unsqueeze(-1)\r\nc_factor = exp_avg_sq_col.unsqueeze(-2).rsqrt()\r\nreturn torch.mul(r_factor, c_factor)\r\n```\r\naccording to [fairseq's implementation](https://github.com/pytorch/fairseq/blob/main/fairseq/optim/adafactor.py#L159).",
"Actually having the same problem",
"@ybch14 - do you think this could also be fixed in `transformers` Adafactor implementation?",
"> @ybch14 - do you think this could also be fixed in `transformers` Adafactor implementation?\r\n\r\nDefinitely, just change line 506-508 of [transformers/optimization.py](https://github.com/huggingface/transformers/blob/master/src/transformers/optimization.py#506) as I mentioned above then all done! I'm creating my custom optimizer just because I'm not familiar with pull request process and in a hurry with my development needs. I would really appreciate it if you can help initiate a pull request.\r\n\r\nI will attach my local test code here to help your local test:\r\n```\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\nfrom transformers.optimization import Adafactor\r\n\r\nclass Model(nn.Module):\r\n def __init__(self):\r\n super(Model, self).__init__()\r\n self.w = nn.Parameter(torch.randn(2, 3, 4), requires_grad=True)\r\n\r\n def forward(self):\r\n return self.w.mean().sigmoid()\r\n\r\ndevice = torch.device(\"cuda\")\r\ntarget = torch.tensor(1.).to(device)\r\nmodel = Model().to(device)\r\ny = model()\r\nloss = F.binary_cross_entropy(y, target)\r\nloss.backward()\r\noptimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None)\r\noptimizer.step()\r\n```\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Thanks a lot for your help here @ybch14 ! I've opened a PR to fix it just like you suggested and it seems to work just fine :-)",
"BTW, we have some guidelines here on how you can open pull requests: \r\nhttps://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md",
"@patrickvonplaten Thank you for your PR and hope pytorch gets better :)"
] | 1,619 | 1,639 | 1,639 | NONE | null | ## Environment info
- `transformers` version: 4.2.2 (also tried with the latest version, v4.5.1)
- Platform: Linux-4.4.0-1127-aws-x86_64-with-debian-stretch-sid
- Python version: 3.6.13
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
@sgugger @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
In my code, I replaced AdamW (which works just fine) with **Adafactor** and then got an error (see below). The code also uses gradient checkpointing. Using **Adafactor from fairseq** works **well**.
```
# Replacing AdamW
# optimizer = AdamW([{'params': model.parameters()}], lr=args.lr, eps=args.epsilon)
# with Adafactor
optimizer = Adafactor(
[{'params': model.parameters()}], lr=None,
eps=(1e-30, 1e-3),
clip_threshold=1.0,
decay_rate=-0.8,
beta1=None,
weight_decay=0.0,
relative_step=True,
scale_parameter=True,
warmup_init=True
)
```
Output:
```
/home/ubuntu/transformers/src/transformers/optimization.py:557: UserWarning: This overload of add_ is deprecated:
add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
add_(Tensor other, *, Number alpha) (Triggered internally at /opt/conda/conda-bld/pytorch_1607370116979/work/torch/csrc/utils/python_arg_parser.cpp:882.)
exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1))
0%|▎ | 19/6858 [00:37<3:42:15, 1.95s/it]
Traceback (most recent call last):
File "main.py", line 519, in <module>
main()
File "main.py", line 510, in main
train(allincl_model, epoch, optimizer, scheduler, criterion)
File "main.py", line 384, in train
optimizer.step()
File "/home/ubuntu/transformers/src/transformers/optimization.py", line 561, in step
update = self._approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col)
File "/home/ubuntu/transformers/src/transformers/optimization.py", line 492, in _approx_sq_grad
return torch.mm(r_factor.unsqueeze(-1), c_factor.unsqueeze(0))
RuntimeError: tensors must be 2-D
```
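For reference, a minimal sketch of the fix discussed in the comments above, following fairseq's Adafactor as suggested by @ybch14. This illustrates the broadcasting change only; it is not necessarily the exact patch that was merged.
```python
import torch

def _approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col):
    # Normalize the factored row statistics, then broadcast rows against
    # columns so the product also works for parameters with >2 dimensions.
    r_factor = (exp_avg_sq_row / exp_avg_sq_row.mean(dim=-1, keepdim=True)).rsqrt_().unsqueeze(-1)
    c_factor = exp_avg_sq_col.unsqueeze(-2).rsqrt()
    return torch.mul(r_factor, c_factor)
```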
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11536/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11535 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11535/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11535/comments | https://api.github.com/repos/huggingface/transformers/issues/11535/events | https://github.com/huggingface/transformers/pull/11535 | 873,585,086 | MDExOlB1bGxSZXF1ZXN0NjI4MzcyODMw | 11,535 | Vectorized Numpy based functions to Torch based Functions for SpecAugment. | {
"login": "01-vyom",
"id": 46242526,
"node_id": "MDQ6VXNlcjQ2MjQyNTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/46242526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/01-vyom",
"html_url": "https://github.com/01-vyom",
"followers_url": "https://api.github.com/users/01-vyom/followers",
"following_url": "https://api.github.com/users/01-vyom/following{/other_user}",
"gists_url": "https://api.github.com/users/01-vyom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/01-vyom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/01-vyom/subscriptions",
"organizations_url": "https://api.github.com/users/01-vyom/orgs",
"repos_url": "https://api.github.com/users/01-vyom/repos",
"events_url": "https://api.github.com/users/01-vyom/events{/privacy}",
"received_events_url": "https://api.github.com/users/01-vyom/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Following are the average test results for the `_compute_mask_indices` function when ran 100 times. Each X.1 subtest case is calculated with `attention_mask = None` and each X.2 subtest case is calculated with `attention_mask` calculated with the following code:\r\n```\r\nattention_mask = torch.ones((batch_size, sequence_length), device=torch_device, dtype=torch.long)\r\nattention_mask[:, -sequence_length // 2 :] = 0\r\n```\r\n1) Test - 100 times\r\nbatch_size = 4\r\nsequence_length = 60\r\nmask_prob = 0.5\r\nmask_length = 1\r\nTest **1.1** - \r\nResult - seconds \r\nNew Code GPU: 0.002872414588928223\r\nNew Code CPU: 0.0006633639335632324\r\nOld Code: 0.0003826594352722168\r\nTest **1.2** - \r\nResult - seconds\r\nNew Code GPU: 0.002973439693450928\r\nNew Code CPU: 0.0006422805786132813\r\nOld Code: 0.0004153728485107422\r\n\r\n2) Test - 100 times\r\nbatch_size = 100\r\nsequence_length = 60\r\nmask_prob = 0.5\r\nmask_length = 1\r\nTest **2.1** - \r\nResult - seconds \r\nNew Code GPU: 0.0663988971710205\r\nNew Code CPU: 0.014422652721405029\r\nOld Code: 0.008053600788116455\r\nTest **2.2** - \r\nResult - seconds\r\nNew Code GPU: 0.06568058252334595\r\nNew Code CPU: 0.01404146671295166\r\nOld Code: 0.008796172142028809\r\n\r\n3) Test - 100 times\r\nbatch_size = 1000\r\nsequence_length = 60\r\nmask_prob = 0.5\r\nmask_length = 1\r\nTest **3.1** - \r\nResult - seconds \r\nNew Code GPU: 0.6623778533935547\r\nNew Code CPU: 0.14311392545700075\r\nOld Code: 0.08917582988739013\r\nTest **3.2** - \r\nResult - seconds\r\nNew Code GPU: 0.6566315603256225\r\nNew Code CPU: 0.13569485664367675\r\nOld Code: 0.08646429538726806\r\n\r\n4) Test - 100 times\r\nbatch_size = 4\r\nsequence_length = 1000\r\nmask_prob = 0.5\r\nmask_length = 1\r\nTest **4.1** - \r\nResult - seconds \r\nNew Code GPU: 0.0031879472732543944\r\nNew Code CPU: 0.0013749027252197266\r\nOld Code: 0.00248842716217041\r\nTest **4.2** - \r\nResult - seconds\r\nNew Code GPU: 0.0031322765350341795\r\nNew Code CPU: 0.0010571050643920898\r\nOld Code: 0.0015622496604919434\r\n\r\n5) Test - 100 times\r\nbatch_size = 4\r\nsequence_length = 60\r\nmask_prob = 0.5\r\nmask_length = 4\r\nTest **5.1** - \r\nResult - seconds \r\nNew Code GPU: 0.003424525260925293\r\nNew Code CPU: 0.0008220672607421875\r\nOld Code: 0.0003489851951599121\r\nTest **5.2** - \r\nResult - seconds\r\nNew Code GPU: 0.0034962940216064454\r\nNew Code CPU: 0.0007469034194946289\r\nOld Code: 0.0003824186325073242\r\n\r\n6) Test - 100 times\r\nbatch_size = 4\r\nsequence_length = 1000\r\nmask_prob = 0.5\r\nmask_length = 4\r\nTest **6.1** - \r\nResult - seconds \r\nNew Code GPU: 0.003502027988433838\r\nNew Code CPU: 0.0014672994613647461\r\nOld Code: 0.0017711663246154786\r\nTest **6.2** - \r\nResult - seconds\r\nNew Code GPU: 0.0034971165657043455\r\nNew Code CPU: 0.0011277437210083009\r\nOld Code: 0.0011361241340637207\r\n\r\n7) Test - 100 times\r\nbatch_size = 128\r\nsequence_length = 1000\r\nmask_prob = 0.5\r\nmask_length = 4\r\nTest **7.1** - \r\nResult - seconds \r\nNew Code GPU: 0.10527128219604492\r\nNew Code CPU: 0.04762232780456543\r\nOld Code: 0.052808206081390384\r\nTest **7.2** - \r\nResult - seconds\r\nNew Code GPU: 0.1032623028755188\r\nNew Code CPU: 0.03513101100921631\r\nOld Code: 0.03523270606994629",
"Hey @01-vyom,\r\n\r\nIt looks like the git history is messed up :-/ Sorry about that! This often happens when one does a wrong `git merge` -> could you maybe open a new PR with a clear git history / git diff? Thanks a lot!",
"Ok sure",
"I am closing this PR.",
"@patrickvonplaten created the new PR."
] | 1,619 | 1,620 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
Fixes #10459
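An illustrative timing harness of the kind used for the benchmarks in the first comment (hypothetical: the private helper's import path and signature follow transformers v4.5 and may differ in other versions).
```python
import time
from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices

def bench(batch_size, seq_len, mask_prob=0.5, mask_length=1, runs=100):
    start = time.time()
    for _ in range(runs):
        _compute_mask_indices((batch_size, seq_len), mask_prob, mask_length)
    return (time.time() - start) / runs  # average seconds per call

print(f"batch 4,   seq 60: {bench(4, 60):.6f}s")
print(f"batch 100, seq 60: {bench(100, 60):.6f}s")
```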
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11535/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11535/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11535",
"html_url": "https://github.com/huggingface/transformers/pull/11535",
"diff_url": "https://github.com/huggingface/transformers/pull/11535.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11535.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11534 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11534/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11534/comments | https://api.github.com/repos/huggingface/transformers/issues/11534/events | https://github.com/huggingface/transformers/issues/11534 | 872,941,605 | MDU6SXNzdWU4NzI5NDE2MDU= | 11,534 | How to run transformer model like t5-small, facebook/bart-large-cnn without loading pretrained weights? | {
"login": "xuyeliu",
"id": 31730733,
"node_id": "MDQ6VXNlcjMxNzMwNzMz",
"avatar_url": "https://avatars.githubusercontent.com/u/31730733?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xuyeliu",
"html_url": "https://github.com/xuyeliu",
"followers_url": "https://api.github.com/users/xuyeliu/followers",
"following_url": "https://api.github.com/users/xuyeliu/following{/other_user}",
"gists_url": "https://api.github.com/users/xuyeliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xuyeliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xuyeliu/subscriptions",
"organizations_url": "https://api.github.com/users/xuyeliu/orgs",
"repos_url": "https://api.github.com/users/xuyeliu/repos",
"events_url": "https://api.github.com/users/xuyeliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/xuyeliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"For this you could either initialize a random model, save it and pass it's path as `model_name_or_path` arg.\r\nOr modify the script to create a random model instead of pre-trained. i.e to init the model use\r\n\r\n```\r\nmodel = AutoModelForSeq2SeqLM(config)\r\n```\r\ninstead of using `.from_pretrained`.",
"> For this you could either initialize a random model, save it and pass it's path as `model_name_or_path` arg.\r\n> Or modify the script to create a random model instead of pre-trained. i.e to init the model use\r\n> \r\n> ```\r\n> model = AutoModelForSeq2SeqLM(config)\r\n> ```\r\n> \r\n> instead of using `.from_pretrained`.\r\n\r\nThanks for your swift reply. I already try to use model = AutoModelForSeq2SeqLM(config)\r\ninstead of using .from_pretrained. But it has bugs:\r\n```{r}\r\n File \"examples/pytorch/summarization/run_summarization.py\", line 358, in main\r\n model = AutoModelForSeq2SeqLM(config)\r\nTypeError: __init__() takes 1 positional argument but 2 were given\r\n```\r\nIt seems I should use ```model = T5ForConditionalGeneration(config = config)``` or ```model = BartForConditionalGeneration(config = config)```\r\nwhen I want to train a Bart or T5 model from scratch without loading pretrained weights. Is that right? Thank you very much!",
"Ohh sorry,\r\n\r\nIt should be `AutoModelForSeq2SeqLM.from_config(...)`\r\n\r\nand yeah, you could also use the individual classes if you want.",
"> Ohh sorry,\r\n> \r\n> It should be `AutoModelForSeq2SeqLM.from_config(...)`\r\n> \r\n> and yeah, you could also use the individual classes if you want.\r\n\r\nThanks for your reply. One quick question, If I use ```AutoModelForSeq2SeqLM.from_config(...)```, when I mentioned t5-small or t5-base or t5-large, is it the same amone these models? Also If I use model = ```T5ForConditionalGeneration(config = config)```, which model I am using? t5-small or t5-base or t5-large? Thank you very much!",
"> which model I am using? t5-small or t5-base or t5-large? \r\n\r\nThis depends on the `config`, this initializes models according the values in the `config`, so if the `config` is of `t5-small` the model will be of `t5-small` size with random weights.",
"> > which model I am using? t5-small or t5-base or t5-large?\r\n> \r\n> This depends on the `config`, this initializes models according the values in the `config`, so if the `config` is of `t5-small` the model will be of `t5-small` size with random weights.\r\n\r\nI see, thank you so much! Last question, sorry for asking so many times lol. I am trying to train T5-large from scratch, but it is very slow even though I use gpu. Do you know how to run run_summarization. py with multi_gpu? Thank you very much! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,623 | 1,623 | NONE | null | Same with the title. When using run_summarization.py, how to run transformer models like t5-small, facebook/bart-large-cnn without loading pre-trained weights? I only want to train their original model architecture without pre-trained model. Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11534/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11533 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11533/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11533/comments | https://api.github.com/repos/huggingface/transformers/issues/11533/events | https://github.com/huggingface/transformers/pull/11533 | 872,848,920 | MDExOlB1bGxSZXF1ZXN0NjI3NzE0ODk2 | 11,533 | Update training tutorial | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Re: freezing layers, someone pointed me to your answer here as well: From your deepspeed talk: https://youtu.be/RG-yV5zgqjQ?t=2450\n\nI guess it's still a bit counterintuitive that the randomly initialized head doesn't cause havoc on the weights of base layers? Just to make sure, it's not that you unfreeze the whole thing after only training the head first? You just train the entire thing unfrozen from the very beginning? \n\nI've shared this with several folks and there was enough surprise that it could be worth mentioning this explicitly in this tutorial, as this is something I think lots of folks may not know! \n\nP.S. if it is the case that you unfreeze the entire thing from the very beginning this is such a surprising result I feel like it's worth a blog post or something! ",
"This is a surprise for someone who comes from the fastai community but insisting a lot on this for a user who don't even know what freezing layers is won't be helpful either, which is why I'm not mentioning it in the new version. I'll try to find a compromise between the two :-)\r\n\r\nAnd yes, Transformers model are usually fine-tuned without freezing anything, which is what is done in all the research papers for GLUE/Squad etc. Training only the randomly initialized head usually does not achieve anything good and the state it ends in is so bad you can't recover by fine-tuning the whole model after.",
"@sgugger thank you for clarifying, and thanks for your patience! I'm really glad I learned about this today as I have been doing it wrong all along. \n\nSeems like an interesting research project to find out \"why\" "
] | 1,619 | 1,620 | 1,620 | COLLABORATOR | null | # What does this PR do?
This PR rewrites the training tutorial, which needed a bit of a refresher. It uses a simple example on the IMDB dataset (for basic text classification) with fine-tuning using (see the sketch after this list):
- Trainer
- Keras
- Raw training loop in PyTorch
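A minimal sketch of the Trainer path the tutorial walks through; the checkpoint, subset sizes, and hyperparameters here are illustrative assumptions, not the tutorial's exact code.
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

raw = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = raw.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    # Small subsets keep the demo fast; use the full splits for real training.
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].shuffle(seed=42).select(range(500)),
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```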
One last section with the raw training loop in TensorFlow could be added in a follow-up PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11533/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11533/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11533",
"html_url": "https://github.com/huggingface/transformers/pull/11533",
"diff_url": "https://github.com/huggingface/transformers/pull/11533.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11533.patch",
"merged_at": 1620062326000
} |
https://api.github.com/repos/huggingface/transformers/issues/11532 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11532/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11532/comments | https://api.github.com/repos/huggingface/transformers/issues/11532/events | https://github.com/huggingface/transformers/issues/11532 | 872,766,868 | MDU6SXNzdWU4NzI3NjY4Njg= | 11,532 | Files not accessible via IPv6 | {
"login": "leezu",
"id": 946903,
"node_id": "MDQ6VXNlcjk0NjkwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/946903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leezu",
"html_url": "https://github.com/leezu",
"followers_url": "https://api.github.com/users/leezu/followers",
"following_url": "https://api.github.com/users/leezu/following{/other_user}",
"gists_url": "https://api.github.com/users/leezu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leezu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leezu/subscriptions",
"organizations_url": "https://api.github.com/users/leezu/orgs",
"repos_url": "https://api.github.com/users/leezu/repos",
"events_url": "https://api.github.com/users/leezu/events{/privacy}",
"received_events_url": "https://api.github.com/users/leezu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@n1t0 might be knowledgeable about this :)",
"@n1t0 depending where huggingface.co is hosted, solving this issue may just boil down to turning on dual-stack support in the hosting provider's console.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"This issue hasn't been addressed ",
"Might be interesting for @sterchelen as well as @n1t0 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"This issue hasn't been addressed",
"we are looking into this",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"This issue hasn't been addressed\r\n\r\nJust commenting to keep the bot from closing the issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"This issue hasn't been addressed. This can only be addressed by the huggingface team.",
"@leezu Can you try again now?",
"Thank you @n1t0. I verified `AutoTokenizer.from_pretrained` and `AutoModel.from_pretrained` from an IPv6-only instance (no IPv4 route to internet) works now. Thank you for enabling dual-stack support on your end!",
"Yay great job on this @n1t0! You should tweet about it :)",
"same issure!\r\n\r\nfailed: Cannot assign requsted address."
] | 1,619 | 1,700 | 1,629 | NONE | null | In certain cases (such as [1]), users only have access to the internet via IPv6. Unfortunately huggingface.co (or the domain hosting the files) does not have AAAA records and is not reachable from IPv6, causing `ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.` when triggering downloads:
```
from transformers import AutoTokenizer
AutoTokenizer.from_pretrained('facebook/mbart-large-cc25')
```
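As a quick way to confirm the diagnosis from an affected host, one can check for AAAA records directly (illustrative snippet):
```python
import socket

# Resolves only IPv6 (AAAA) addresses; raises gaierror if none are published.
try:
    infos = socket.getaddrinfo("huggingface.co", 443, socket.AF_INET6)
    print("IPv6 addresses:", sorted({info[4][0] for info in infos}))
except socket.gaierror as err:
    print("No AAAA record / not reachable over IPv6:", err)
```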
[1] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html#task-networking-vpc-dual-stack "One of the benefits of using a VPC in dual-stack mode is that tasks that are assigned an IPv6 address are able to access the internet as long as the VPC is configured with either an internet gateway or an egress-only internet gateway. NAT gateways are not needed." | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11532/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11531 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11531/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11531/comments | https://api.github.com/repos/huggingface/transformers/issues/11531/events | https://github.com/huggingface/transformers/issues/11531 | 872,760,987 | MDU6SXNzdWU4NzI3NjA5ODc= | 11,531 | Adding custom tokens makes the T5Tokenizer always strip spaces | {
"login": "suflaj",
"id": 77863921,
"node_id": "MDQ6VXNlcjc3ODYzOTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/77863921?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suflaj",
"html_url": "https://github.com/suflaj",
"followers_url": "https://api.github.com/users/suflaj/followers",
"following_url": "https://api.github.com/users/suflaj/following{/other_user}",
"gists_url": "https://api.github.com/users/suflaj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suflaj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suflaj/subscriptions",
"organizations_url": "https://api.github.com/users/suflaj/orgs",
"repos_url": "https://api.github.com/users/suflaj/repos",
"events_url": "https://api.github.com/users/suflaj/events{/privacy}",
"received_events_url": "https://api.github.com/users/suflaj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"The issue still persists and tokenizers in general still act weird with special tokens and whitespace.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hello @LysandreJik, \r\nis there any update on this? We are also facing issues with added tokens for both, Rust and Python tokenizers, when using the default mt5 tokenizer.\r\n\r\nSimilar to the issues above, we experience inconsistent behavior with spaces in the immediate surroundings of added tokens.\r\n```\r\ntokenizer_fast = MT5TokenizerFast.from_pretrained(\"google/mt5-base\")\r\ntokenizer = MT5Tokenizer.from_pretrained(\"google/mt5-base\")\r\n\r\ntokenizer_fast.add_tokens(\"<new_token>\")\r\ntokenizer.add_tokens(\"<new_token>\")\r\n\r\ntext = \"This is a test <new_token>.\"\r\n\r\ntokens = tokenizer_fast.tokenize(text)\r\nprint(tokens)\r\ntokenizer_fast.convert_tokens_to_string(tokens)\r\n```\r\n`['▁This', '▁is', '▁', 'a', '▁test', '▁', '<new_token>', '▁', '.']` \r\n`'This is a test <new_token> .'`\r\n\r\nFor the fast tokenizer, a space is inserted after the added token.\r\n\r\nFor the slow one, also spaces in front of added tokens are removed:\r\n```\r\ntokens = tokenizer.tokenize(text)\r\nprint(tokens)\r\ntokenizer.convert_tokens_to_string(tokens)\r\n```\r\n`['▁This', '▁is', '▁', 'a', '▁test', '<new_token>', '▁', '.']` \r\n`'This is a test<new_token> .'`\r\n\r\nAt least for the Python tokenizer, I believe the problem lies in the way how texts with added tokens are passed to the underlying sentence_piece tokenizer. The texts are basically split by added tokens and the remaining parts are individually passed to sp. By default, the sp tokenizer adds a space at the start of each sequence and removes them at the end:\r\n```\r\ntokenizer.sp_model.encode(\"A test \", out_type=str)\r\n```\r\n`['▁A', '▁test']` \r\n\r\nWhen tokens are converted back into a single string, only the space at the very first position is removed, but not in case there is an added token in front of it\r\n```\r\ntokenizer.sp_model.decode_pieces(['▁This', '▁is', '▁', 'a', '▁test', '<new_token>', '▁', '.'])\r\n```\r\n`'This is a test<new_token> .'` \r\n\r\nFor the slow tokenizer, we could modify the tokens manually to e.g. take into account spaces in the original string. Unfortunately we lack the Rust skills to do this for the fast tokenizer.\r\n\r\nAre there any plans to adjust this in the near future (since this issue still has the WIP tag)?",
"Pinging @SaulLu ",
"Hey! This is being talked in the PR linked above! Sorry for the late reply",
"Regarding the default MT5 problem with addition of a space, this is being handled here: #24565. The problem is not because of striping left right for ponctuation, but `rstrip` and `lstrip` are indeed ingored",
"Fixing the rust tokenizer: it's a hack so I might have to change the rust code, but for now the following will strip anything on the right and left, giving the expected results. \r\n```python \r\nclass T5Converter(SpmConverter):\r\n def vocab(self, proto):\r\n num_extra_ids = self.original_tokenizer._extra_ids\r\n vocab = [(piece.piece, piece.score) for piece in proto.pieces]\r\n vocab += [(f\"<extra_id_{i}>_\", 0.0) for i in range(num_extra_ids - 1, -1, -1)]\r\n return vocab\r\n ..........\r\n``` \r\nI tested:\r\n```python \r\n>>> from transformers import AutoTokenizer\r\n>>> tokenizer=AutoTokenizer.from_pretrained(\"google/mt5-small\", from_slow = True)\r\n>>> tokenizer.tokenize(\"Hello, <extra_id_0>, \")\r\n['▁Hello', ',', '▁<extra_id_0>', ',', '▁']\r\n```\r\n"
] | 1,619 | 1,695 | 1,695 | NONE | null | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-3.10.0-957.5.1.el7.x86_64-x86_64-with-centos-7.6.1810-Core
- Python version: 3.6.13
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
If it helps, here's also my `pip-chill`:
```text
black==19.10b0
corrupt-text==0.0.1
en-core-web-sm==3.0.0
fairseq==1.0.0a0+f6f220e
flake8==3.9.0
pep8==1.7.1
pip-chill==1.0.1
rope==0.14.0
sentencepiece==0.1.95
torchtext==0.8.0
transformers==4.5.1
wikiextractor==3.0.5
```
Note that `corrupt-text` is a custom library, and the problem persists even when it's uninstalled. It has nothing to do with the problem, as can be seen in the **to reproduce** section.
### Who can help
Since it's a tokenizer issue, probably @LysandreJik.
## Information
I'm using the `T5Tokenizer`. After adding custom tokens, any occurrence of them in tokenized text has the surrounding spaces stripped, even if I explicitly pass `add_tokens` and `add_special_tokens` a list of `AddedToken` objects with `lstrip` and `rstrip` set to `False`.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
Check out the **to reproduce** section to get an example of a code that doesn't work.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
It's not really relevant for this problem but the code is, once again, in the **to reproduce** section.
This is likely related to https://github.com/huggingface/transformers/issues/7901.
## To reproduce
Try running this code:
```python
from transformers import T5Tokenizer
from tokenizers import AddedToken
text = "Bruh doits <do_not_touch>"
tokenizer = T5Tokenizer.from_pretrained("t5-small")
tokenizer.add_tokens([AddedToken("doits", lstrip=False, rstrip=False)])
tokenizer.add_special_tokens(
{
"additional_special_tokens": [
AddedToken("<do_not_touch>", lstrip=False, rstrip=False)
]
}
)
tokens = tokenizer.tokenize(text)
ids = tokenizer(
text,
add_special_tokens=False,
padding=False,
truncation=False,
return_attention_mask=False,
)["input_ids"]
print(f"Text: {text}")
print(f"Tokens: {tokens}")
print(f"IDs: {ids}")
print(f"Text after: {tokenizer.convert_tokens_to_string(tokens)}")
```
You will get this:
```text
Text: Bruh doits <do_not_touch>
Tokens: ['▁', 'Bru', 'h', 'doits', '<do_not_touch>']
IDs: [3, 9465, 107, 32100, 32101]
Text after: Bruhdoits<do_not_touch>
```
## Expected behavior
We should get:
```text
Text: Bruh doits <do_not_touch>
Tokens: ['▁', 'Bru', 'h', '▁', 'doits', '▁', '<do_not_touch>']
IDs: [3, 9465, 107, 3, 32100, 3, 32101]
Text after: Bruh doits <do_not_touch>
```
EDIT: Updated the code to have `rstrip=False`, since I made the mistake originally, but still acts the same. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11531/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11531/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11530 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11530/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11530/comments | https://api.github.com/repos/huggingface/transformers/issues/11530/events | https://github.com/huggingface/transformers/issues/11530 | 872,755,969 | MDU6SXNzdWU4NzI3NTU5Njk= | 11,530 | generate text with inputs_embeds (instead of input_ids) for T5. | {
"login": "nrjvarshney",
"id": 19836137,
"node_id": "MDQ6VXNlcjE5ODM2MTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/19836137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nrjvarshney",
"html_url": "https://github.com/nrjvarshney",
"followers_url": "https://api.github.com/users/nrjvarshney/followers",
"following_url": "https://api.github.com/users/nrjvarshney/following{/other_user}",
"gists_url": "https://api.github.com/users/nrjvarshney/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nrjvarshney/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nrjvarshney/subscriptions",
"organizations_url": "https://api.github.com/users/nrjvarshney/orgs",
"repos_url": "https://api.github.com/users/nrjvarshney/repos",
"events_url": "https://api.github.com/users/nrjvarshney/events{/privacy}",
"received_events_url": "https://api.github.com/users/nrjvarshney/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"One possible solution is to get the `encoder_outputs` by passing `inputs_embeds` to `encoder` and then passing that `encoder_outputs` to `.generate`, so for example\r\n\r\n```\r\nwith torch.no_grad():\r\n encoder_outputs = model.get_encoder()(inputs_embeds=input_embeds)\r\n\r\ngen_ids = model.generate(input_ids=None, encoder_outputs=encoder_outputs)\r\n```",
"Thanks. "
] | 1,619 | 1,621 | 1,621 | NONE | null | model.generate() supports input_ids only
```
outs = model.model.generate(input_ids=batch['source_ids'],
attention_mask=batch['source_mask'],
output_scores=True,
max_length=model.model_arguments.max_output_seq_length)
preds_cleaned = [model.tokenizer.decode(ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
for ids in outs]
```
It would be good to have the functionality of generating text from embeddings.
model.forward() allows passing inputs_embeds instead of input_ids.
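Putting the workaround from the comments together, a minimal end-to-end sketch (the checkpoint and prompt are illustrative, and `generate` behavior with `encoder_outputs` may vary across versions):
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer("translate English to German: Hello", return_tensors="pt").input_ids
# Build the embeddings explicitly; in practice they could come from anywhere.
inputs_embeds = model.get_input_embeddings()(input_ids)

with torch.no_grad():
    encoder_outputs = model.get_encoder()(inputs_embeds=inputs_embeds)

gen_ids = model.generate(input_ids=None, encoder_outputs=encoder_outputs)
print(tokenizer.decode(gen_ids[0], skip_special_tokens=True))
```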
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11530/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11529 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11529/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11529/comments | https://api.github.com/repos/huggingface/transformers/issues/11529/events | https://github.com/huggingface/transformers/issues/11529 | 872,733,100 | MDU6SXNzdWU4NzI3MzMxMDA= | 11,529 | Deberta v2 Fast Tokenizer | {
"login": "ShubhamSanghvi",
"id": 26190273,
"node_id": "MDQ6VXNlcjI2MTkwMjcz",
"avatar_url": "https://avatars.githubusercontent.com/u/26190273?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShubhamSanghvi",
"html_url": "https://github.com/ShubhamSanghvi",
"followers_url": "https://api.github.com/users/ShubhamSanghvi/followers",
"following_url": "https://api.github.com/users/ShubhamSanghvi/following{/other_user}",
"gists_url": "https://api.github.com/users/ShubhamSanghvi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShubhamSanghvi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShubhamSanghvi/subscriptions",
"organizations_url": "https://api.github.com/users/ShubhamSanghvi/orgs",
"repos_url": "https://api.github.com/users/ShubhamSanghvi/repos",
"events_url": "https://api.github.com/users/ShubhamSanghvi/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShubhamSanghvi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Any kind soul please add fast tokenizer for Deberta V2. Would be really helpful and thanks in advance!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,623 | 1,623 | CONTRIBUTOR | null | Fast tokenizers for deberta models were requested in #10498. For the deberta (v1) model, they were implemented in #11387.
Deberta v2 fast tokenizers are yet to be implemented.
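
Until a fast version lands, a minimal workaround sketch (the checkpoint name is just an example) is to request the slow tokenizer explicitly:

```python
from transformers import AutoTokenizer

# Falls back to the slow (Python) DebertaV2Tokenizer; `use_fast=False` makes
# the fallback explicit, since no Rust-backed implementation exists yet.
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge", use_fast=False)
```
 | {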
"url": "https://api.github.com/repos/huggingface/transformers/issues/11529/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11529/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11528 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11528/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11528/comments | https://api.github.com/repos/huggingface/transformers/issues/11528/events | https://github.com/huggingface/transformers/pull/11528 | 872,471,855 | MDExOlB1bGxSZXF1ZXN0NjI3Mzc1OTQz | 11,528 | Adds Flax BERT finetuning example on GLUE | {
"login": "marcvanzee",
"id": 180100,
"node_id": "MDQ6VXNlcjE4MDEwMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/180100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcvanzee",
"html_url": "https://github.com/marcvanzee",
"followers_url": "https://api.github.com/users/marcvanzee/followers",
"following_url": "https://api.github.com/users/marcvanzee/following{/other_user}",
"gists_url": "https://api.github.com/users/marcvanzee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcvanzee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcvanzee/subscriptions",
"organizations_url": "https://api.github.com/users/marcvanzee/orgs",
"repos_url": "https://api.github.com/users/marcvanzee/repos",
"events_url": "https://api.github.com/users/marcvanzee/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcvanzee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,620 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
Adds a Flax BERT finetuning example which finetunes on one of the GLUE tasks.
I evaluated all tasks 5 times and added the average over runs, the best run, and the stdev in a table in the README. I used the seed of the best run as the default.
I also ran all experiments on three device configurations: 8 Cloud TPU-v3, 1 Cloud TPU-v3, and 1 P100 GPU. I compared the runtimes and put them in another table in the README.
This PR was discussed over Slack with @patrickvonplaten and @sgugger .
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11528/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11528",
"html_url": "https://github.com/huggingface/transformers/pull/11528",
"diff_url": "https://github.com/huggingface/transformers/pull/11528.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11528.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11527 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11527/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11527/comments | https://api.github.com/repos/huggingface/transformers/issues/11527/events | https://github.com/huggingface/transformers/pull/11527 | 872,444,398 | MDExOlB1bGxSZXF1ZXN0NjI3MzUxOTMw | 11,527 | Run model templates on master | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,619 | 1,619 | MEMBER | null | It currently runs on branches, but not on `master`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11527/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11527",
"html_url": "https://github.com/huggingface/transformers/pull/11527",
"diff_url": "https://github.com/huggingface/transformers/pull/11527.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11527.patch",
"merged_at": 1619786833000
} |
https://api.github.com/repos/huggingface/transformers/issues/11526 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11526/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11526/comments | https://api.github.com/repos/huggingface/transformers/issues/11526/events | https://github.com/huggingface/transformers/pull/11526 | 872,433,763 | MDExOlB1bGxSZXF1ZXN0NjI3MzQyODMy | 11,526 | Add Stas and Suraj as authors | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you, Sylvain ❤️ ",
"Thank you, guys! That feels good!"
] | 1,619 | 1,619 | 1,619 | COLLABORATOR | null | # What does this PR do?
In recognition of all your hard work and the amazing stuff you've added to the lib, adding @stas00 and @patil-suraj to the authors of the lib 🤗 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11526/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11526/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11526",
"html_url": "https://github.com/huggingface/transformers/pull/11526",
"diff_url": "https://github.com/huggingface/transformers/pull/11526.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11526.patch",
"merged_at": 1619787793000
} |
https://api.github.com/repos/huggingface/transformers/issues/11525 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11525/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11525/comments | https://api.github.com/repos/huggingface/transformers/issues/11525/events | https://github.com/huggingface/transformers/pull/11525 | 872,340,486 | MDExOlB1bGxSZXF1ZXN0NjI3MjYyNTM4 | 11,525 | Adding support for `pipeline("automatic-speech-recognition")`. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Ping\r\n",
"@patrickvonplaten could make another review , I think the previous PR that reads `config.architectures` alleviated any issue for this PR.\r\n\r\nwhat do you think ?",
"@sgugger\r\n\r\nMaybe a small sanity check if you don't mind (code has significantly changed since last review) ? Should be for the better."
] | 1,619 | 1,625 | 1,625 | CONTRIBUTOR | null | # What does this PR do?
Implements the default load logic to make `AutomaticSpeechRecognitionPipeline` work
like other pipelines with `pipeline(task="automatic-speech-recognition", ...)`.
The main issue with the current implementation is the `"config"` choice for AutoModel. It would be great to have
something like `AutoModelFor` that would implement
the same logic (load the config, check `architectures`, and load the first one).
Alternatives:
- Implement `AutoModelForCTC` and `AutoModelForConditionalGeneration`, allow the `ALLOWED_TASKS` mapping
to accept iterables, and try to load models accordingly.
This might enable better handling of the switch case here: https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/automatic_speech_recognition.py#L141 with an actual `isinstance` check instead of the dummy string check.
This would change the `ALLOWED_TASKS` logic but might be closer to the existing code.
Another point of discussion is the `Mixin`, which wasn't used here. The main reason is that the mixin assumes
TF is enabled, but the ASR models do not have a TF alternative right now. The main tests were still imported.
Better error handling could be added for a missing or incorrect `feature_extractor`.
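
For reference, a minimal sketch of the intended usage (the checkpoint and audio file are placeholders):

```python
from transformers import pipeline

# The pipeline picks the model class from `config.architectures`,
# as discussed above.
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
print(asr("sample.flac"))  # -> {"text": "..."}
```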
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11525/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11525",
"html_url": "https://github.com/huggingface/transformers/pull/11525",
"diff_url": "https://github.com/huggingface/transformers/pull/11525.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11525.patch",
"merged_at": 1625666808000
} |
https://api.github.com/repos/huggingface/transformers/issues/11524 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11524/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11524/comments | https://api.github.com/repos/huggingface/transformers/issues/11524/events | https://github.com/huggingface/transformers/pull/11524 | 872,264,297 | MDExOlB1bGxSZXF1ZXN0NjI3MTk2NDI5 | 11,524 | [examples, translation/summerization] resize token embeds | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,619 | 1,619 | MEMBER | null | # What does this PR do?
Resize token embedding in the summarization and translation examples.
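
For reference, a minimal sketch of the change (the checkpoint is a placeholder): after adding tokens to the tokenizer, the embedding matrix must be resized to match.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

# Keep the embedding matrix in sync with any tokens added to the tokenizer.
model.resize_token_embeddings(len(tokenizer))
```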
Fixes #11518 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11524/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11524",
"html_url": "https://github.com/huggingface/transformers/pull/11524",
"diff_url": "https://github.com/huggingface/transformers/pull/11524.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11524.patch",
"merged_at": 1619786821000
} |
https://api.github.com/repos/huggingface/transformers/issues/11523 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11523/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11523/comments | https://api.github.com/repos/huggingface/transformers/issues/11523/events | https://github.com/huggingface/transformers/issues/11523 | 872,202,929 | MDU6SXNzdWU4NzIyMDI5Mjk= | 11,523 | Distributed multi-node support for CPU cluster | {
"login": "ddkalamk",
"id": 8791375,
"node_id": "MDQ6VXNlcjg3OTEzNzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8791375?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddkalamk",
"html_url": "https://github.com/ddkalamk",
"followers_url": "https://api.github.com/users/ddkalamk/followers",
"following_url": "https://api.github.com/users/ddkalamk/following{/other_user}",
"gists_url": "https://api.github.com/users/ddkalamk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ddkalamk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ddkalamk/subscriptions",
"organizations_url": "https://api.github.com/users/ddkalamk/orgs",
"repos_url": "https://api.github.com/users/ddkalamk/repos",
"events_url": "https://api.github.com/users/ddkalamk/events{/privacy}",
"received_events_url": "https://api.github.com/users/ddkalamk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,623 | 1,623 | NONE | null | # 🚀 Feature request
## Motivation
The current distributed run supports only multi-GPU/TPU runs. This feature request is to add support for distributed CPU runs using MPI/Gloo or a recently added custom backend, e.g. the Intel oneCCL backend via the [torch-ccl](https://github.com/intel/torch-ccl) plugin.
## Your contribution
An example use of the Intel oneCCL backend (via Intel torch-ccl) can be found at https://github.com/ddkalamk/transformers/blob/pcl-v4.0.0/examples/question-answering/run_squad.py#L746
The assumption is that we launch the application using MPI (similar to Horovod) and initialize the distributed backend based on environment variables.
Here is another, more comprehensive use case from the Facebook DLRM workload:
https://github.com/facebookresearch/dlrm/blob/master/extend_distributed.py#L59
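
A minimal sketch of such an initialization (the env-var names cover Intel MPI and Open MPI launchers; the master address/port are placeholders):

```python
import os
import torch.distributed as dist

# Map the MPI launcher's environment variables onto torch.distributed's expectations.
rank = int(os.environ.get("PMI_RANK", os.environ.get("OMPI_COMM_WORLD_RANK", "0")))
world_size = int(os.environ.get("PMI_SIZE", os.environ.get("OMPI_COMM_WORLD_SIZE", "1")))
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

# "gloo" works on CPU out of the box; "ccl" would require `import torch_ccl` first.
dist.init_process_group(backend="gloo", rank=rank, world_size=world_size)
```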
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11523/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11522 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11522/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11522/comments | https://api.github.com/repos/huggingface/transformers/issues/11522/events | https://github.com/huggingface/transformers/issues/11522 | 872,175,464 | MDU6SXNzdWU4NzIxNzU0NjQ= | 11,522 | Compute probability of target sentences given an input | {
"login": "fferlito",
"id": 26039242,
"node_id": "MDQ6VXNlcjI2MDM5MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/26039242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fferlito",
"html_url": "https://github.com/fferlito",
"followers_url": "https://api.github.com/users/fferlito/followers",
"following_url": "https://api.github.com/users/fferlito/following{/other_user}",
"gists_url": "https://api.github.com/users/fferlito/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fferlito/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fferlito/subscriptions",
"organizations_url": "https://api.github.com/users/fferlito/orgs",
"repos_url": "https://api.github.com/users/fferlito/repos",
"events_url": "https://api.github.com/users/fferlito/events{/privacy}",
"received_events_url": "https://api.github.com/users/fferlito/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,623 | 1,623 | NONE | null | I need to test the probabilities that the model would produce certain outputs. Let me give you an example:
I have a source sentence X, and several possible target sentences Y1, Y2, Y3, Y4, ...
I want to know if I can compute the probability that the model would assign to each of the translations Y, given X.
Is there a function to compute these values?
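
One way to do this (a sketch; the checkpoint is just an example) is to score each candidate with the model loss, which is the mean cross-entropy over target tokens:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "Helsinki-NLP/opus-mt-en-de"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name).eval()

def log_prob(source, target):
    inputs = tokenizer(source, return_tensors="pt")
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(target, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(**inputs, labels=labels).loss
    # loss is the mean per-token cross-entropy, so the total target
    # log-probability is -loss * number_of_target_tokens.
    return -(loss * labels.shape[-1]).item()

scores = {y: log_prob("source sentence X", y) for y in ["Y1", "Y2", "Y3"]}
```
 | {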
"url": "https://api.github.com/repos/huggingface/transformers/issues/11522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11522/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11521 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11521/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11521/comments | https://api.github.com/repos/huggingface/transformers/issues/11521/events | https://github.com/huggingface/transformers/issues/11521 | 872,172,201 | MDU6SXNzdWU4NzIxNzIyMDE= | 11,521 | How to set up a custom tokenizer for distilbart | {
"login": "neptune233",
"id": 46469291,
"node_id": "MDQ6VXNlcjQ2NDY5Mjkx",
"avatar_url": "https://avatars.githubusercontent.com/u/46469291?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neptune233",
"html_url": "https://github.com/neptune233",
"followers_url": "https://api.github.com/users/neptune233/followers",
"following_url": "https://api.github.com/users/neptune233/following{/other_user}",
"gists_url": "https://api.github.com/users/neptune233/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neptune233/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neptune233/subscriptions",
"organizations_url": "https://api.github.com/users/neptune233/orgs",
"repos_url": "https://api.github.com/users/neptune233/repos",
"events_url": "https://api.github.com/users/neptune233/events{/privacy}",
"received_events_url": "https://api.github.com/users/neptune233/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,619 | 1,620 | 1,620 | NONE | null | Hi, I am currently using DistilBART. I pretrained a bart-large model with my own Chinese corpus, and I simply map each character to an id during pre-training. In distillation.py, the path or name of the tokenizer should be defined. My question is: how can I build a custom tokenizer class that can be used in the code? Assume that we have a word dictionary which maps each character to an input id, as used during pre-training.
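
A rough sketch of one way to wrap such a dictionary (all names here are hypothetical, and a real implementation would also need `save_vocabulary` for saving/loading):

```python
from transformers import PreTrainedTokenizer

class CharTokenizer(PreTrainedTokenizer):
    """Hypothetical character-level tokenizer built from a char->id dict."""

    def __init__(self, vocab, unk_token="<unk>", **kwargs):
        # Set the vocab before calling super().__init__ so special-token
        # lookups can resolve ids. `unk_token` should be present in `vocab`.
        self.vocab = dict(vocab)
        self.ids_to_tokens = {i: t for t, i in self.vocab.items()}
        super().__init__(unk_token=unk_token, **kwargs)

    @property
    def vocab_size(self):
        return len(self.vocab)

    def get_vocab(self):
        return dict(self.vocab)

    def _tokenize(self, text):
        # Character-level split, matching the pre-training scheme.
        return list(text)

    def _convert_token_to_id(self, token):
        return self.vocab.get(token, self.vocab.get(self.unk_token))

    def _convert_id_to_token(self, index):
        return self.ids_to_tokens.get(index, self.unk_token)
```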
Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11521/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11520 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11520/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11520/comments | https://api.github.com/repos/huggingface/transformers/issues/11520/events | https://github.com/huggingface/transformers/pull/11520 | 872,131,589 | MDExOlB1bGxSZXF1ZXN0NjI3MDg1MTY1 | 11,520 | [Master] Make style | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,619 | 1,619 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fix copies on master
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11520/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11520",
"html_url": "https://github.com/huggingface/transformers/pull/11520",
"diff_url": "https://github.com/huggingface/transformers/pull/11520.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11520.patch",
"merged_at": 1619769298000
} |
https://api.github.com/repos/huggingface/transformers/issues/11519 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11519/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11519/comments | https://api.github.com/repos/huggingface/transformers/issues/11519/events | https://github.com/huggingface/transformers/issues/11519 | 872,101,481 | MDU6SXNzdWU4NzIxMDE0ODE= | 11,519 | RoBERTa adds two sep tokens | {
"login": "david-waterworth",
"id": 5028974,
"node_id": "MDQ6VXNlcjUwMjg5NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/david-waterworth",
"html_url": "https://github.com/david-waterworth",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user}",
"gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions",
"organizations_url": "https://api.github.com/users/david-waterworth/orgs",
"repos_url": "https://api.github.com/users/david-waterworth/repos",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"received_events_url": "https://api.github.com/users/david-waterworth/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Is this intentional? \r\n\r\nYes. It is in line with the original implementation. (Check [link](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) for more information).\r\n\r\n> I ran into two issues - this, and the fact that as per the comment \"RoBERTa does not make use of token type ids, therefore a list of zeros is returned.\" - this comment also doesn't appear correct, from what I can see Roberta's Tokenizer simply does not return token_type_ids I've not figured out why yet\r\n\r\nThe comment is also correct. The original RoBERTa doesn't even have a token_type layer and the huggingface Roberta has one which is just full of zeros (i.e. does nothing as long as you don't do something with it) and only exists due to legacy reasons (check #2871).\r\n\r\n",
"Thanks, it appears I can put a BERT tokeniser in front of a RoBERTa mlm model and get why I want.",
"You can put every tokenizer in front of RoBERTa, but when you use the pre-trained weights you should stick to the original one as it will otherwise lead to garbage. :)\r\n",
"Yeah I understand that but I'm training from scratch."
] | 1,619 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-5.4.0-72-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.6
- PyTorch version (GPU?): NA
- Tensorflow version (GPU?): NA
- Using GPU in script?: NA
- Using distributed or parallel set-up in script?: NA
## Information
I'm using RoBERTa. I noticed when I pass a text pair, two <eos> tokens are added - see the linked code
https://github.com/huggingface/transformers/blob/f37f2adb68b186f175a81a870cc526349385b9a8/src/transformers/models/roberta/tokenization_roberta_fast.py#L230
This differs from the BERT implementation
https://github.com/huggingface/transformers/blob/60d5bda4fd0381075a300dc11903c76df694bd1c/src/transformers/models/bert/tokenization_bert_fast.py#L255
Is this intentional? I'm trying to create a hybrid BERT/RoBERTa-style training strategy. I want to pass two sentences, but I don't want to use NSP, so I was hoping to use my existing custom RoBERTa tokenizer. I ran into two issues: this one, and the comment "RoBERTa does not make use of token type ids, therefore a list of zeros is returned.", which also doesn't appear correct. From what I can see, RoBERTa's tokenizer simply does not return `token_type_ids`; I've not figured out why yet.
EDIT: it seems the default for `return_token_type_ids` differs between BERT (true) and RoBERTa (false).
Also, as far as I can see, RoBERTa will use `token_type_ids` if they're provided; it's just that the tokeniser has been coded to return all zeros.
https://github.com/huggingface/transformers/blob/f37f2adb68b186f175a81a870cc526349385b9a8/src/transformers/models/roberta/modeling_roberta.py#L79
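
A quick way to see the difference (a sketch using the stock checkpoints):

```python
from transformers import BertTokenizerFast, RobertaTokenizerFast

bert = BertTokenizerFast.from_pretrained("bert-base-uncased")
roberta = RobertaTokenizerFast.from_pretrained("roberta-base")

# BERT joins a pair with a single separator between the segments:
print(bert.decode(bert("first", "second")["input_ids"]))
# -> [CLS] first [SEP] second [SEP]

# RoBERTa inserts two separators between the segments:
print(roberta.decode(roberta("first", "second")["input_ids"]))
# -> <s>first</s></s>second</s> (roughly)
```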
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11519/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11518 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11518/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11518/comments | https://api.github.com/repos/huggingface/transformers/issues/11518/events | https://github.com/huggingface/transformers/issues/11518 | 871,864,329 | MDU6SXNzdWU4NzE4NjQzMjk= | 11,518 | BART summarization, tokenizer not working | {
"login": "shizhediao",
"id": 18120087,
"node_id": "MDQ6VXNlcjE4MTIwMDg3",
"avatar_url": "https://avatars.githubusercontent.com/u/18120087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shizhediao",
"html_url": "https://github.com/shizhediao",
"followers_url": "https://api.github.com/users/shizhediao/followers",
"following_url": "https://api.github.com/users/shizhediao/following{/other_user}",
"gists_url": "https://api.github.com/users/shizhediao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shizhediao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shizhediao/subscriptions",
"organizations_url": "https://api.github.com/users/shizhediao/orgs",
"repos_url": "https://api.github.com/users/shizhediao/repos",
"events_url": "https://api.github.com/users/shizhediao/events{/privacy}",
"received_events_url": "https://api.github.com/users/shizhediao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Did you resize the embeddings after adding the new tokens ? To resize the embedding \r\n```python3\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n```",
"> Did you resize the embeddings after adding the new tokens ? To resize the embedding\r\n> \r\n> ```python\r\n> model.resize_token_embeddings(len(tokenizer))\r\n> ```\r\n\r\nThanks so much for your reply!\r\nThe issue has been solved by adding that line.\r\nJust curious, is it a common practice to add 'resize_token_emb' function? because previously, I have not seen this happens and was wondering why it is not included in the official run_summarization code.\r\n\r\nThanks!",
"Yes, the embeddings need to be resized after adding new tokens.\r\n\r\nAnd yes, you are right, the embedding should be resized in the example."
] | 1,619 | 1,619 | 1,619 | NONE | null | @patil-suraj
When I am running pytorch/summarization, the logs are as below:
```
Adding AddedToken(content='<s>', single_word=False, lstrip=False, rstrip=False, normalized=True) to the vocabulary
Adding AddedToken(content='</s>', single_word=False, lstrip=False, rstrip=False, normalized=True) to the vocabulary
Adding AddedToken(content='<pad>', single_word=False, lstrip=False, rstrip=False, normalized=True) to the vocabulary
Adding AddedToken(content='<mask>', single_word=False, lstrip=True, rstrip=False, normalized=True) to the vocabulary
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
https://huggingface.co/facebook/bart-large-cnn/resolve/main/pytorch_model.bin not found in cache or force_download set to True, downloading to /home/sdiaoaa/.cache/huggingface/transformers/tmpama_iuh8
```
But the embedding was not resized to fit the added tokens.
The embedding size is still 50264, but the token ids range over [0, 50269].
"url": "https://api.github.com/repos/huggingface/transformers/issues/11518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11518/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11517 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11517/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11517/comments | https://api.github.com/repos/huggingface/transformers/issues/11517/events | https://github.com/huggingface/transformers/pull/11517 | 871,300,830 | MDExOlB1bGxSZXF1ZXN0NjI2MzQwMjk4 | 11,517 | rag import not on windows | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"There's a lot of changes relative to `black` - could you install the version the repo uses with:\r\n```\r\npip install -U -e .[quality]\r\n```\r\n?\r\n\r\n\r\nAlso I would put this behind a `if is_faiss_available():` rather than a platform check. Could you show the error you obtain on Windows when using an auto model?\r\n",
" > Also I would put this behind a `if is_faiss_available():` rather than a platform check. Could you show the error you obtain on Windows when using an auto model?\r\n\r\nhow should that looks like ?\r\nputting faiss import into try catch block ?\r\n\r\nthe error is an \"dll not found error\" on windows\r\n\r\n```\r\nfrom transformers import ViTFeatureExtractor, ViTForImageClassification, AutoModel, AutoTokenizer\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\transformers\\__init__.py\", line 2487, in __getattr__\r\n return super().__getattr__(name)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\transformers\\file_utils.py\", line 1700, in __getattr__\r\n value = getattr(module, name)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\transformers\\file_utils.py\", line 1699, in __getattr__\r\n module = self._get_module(self._class_to_module[name])\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\transformers\\models\\auto\\__init__.py\", line 198, in _get_module\r\n return importlib.import_module(\".\" + module_name, self.__name__)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\importlib\\__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\transformers\\models\\auto\\modeling_auto.py\", line 199, in <module>\r\n from ..rag.modeling_rag import ( # noqa: F401 - need to import all RagModels to be in globals() function\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\transformers\\models\\rag\\modeling_rag.py\", line 29, in <module>\r\n from .retrieval_rag import RagRetriever\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\transformers\\models\\rag\\retrieval_rag.py\", line 42, in <module>\r\n import faiss\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\faiss\\__init__.py\", line 17, in <module>\r\n from .loader import *\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\faiss\\loader.py\", line 39, in <module>\r\n from .swigfaiss import *\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\faiss\\swigfaiss.py\", line 13, in <module>\r\n from . import _swigfaiss\r\nImportError: DLL load failed while importing _swigfaiss: Das angegebene Modul wurde nicht gefunden.\r\n```",
"What is your transformer version? \r\nI see this in your stack-trace:\r\n```\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\transformers\\models\\rag\\retrieval_rag.py\", line 42, in <module>\r\n import faiss\r\n```\r\nBut this should be behind the `is_faiss_available()` statement:\r\n\r\nhttps://github.com/huggingface/transformers/blob/db9dd09cf9d8f5de9a5293ec16e7b3d0c01dcbbb/src/transformers/models/rag/retrieval_rag.py#L31-L38",
"latest release and master branche\r\nthen it looks like enviroment bug, I dont know which library did, but my pip freeze tells me faiss-cpu is installed on my notebook.\r\nI removed and now it's working again, so closing this PR"
] | 1,619 | 1,619 | 1,619 | CONTRIBUTOR | null | just a small fix to use automodels on windows
faiss is not available on windows, and RAG is using faiss, so the import fails | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11517/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11517",
"html_url": "https://github.com/huggingface/transformers/pull/11517",
"diff_url": "https://github.com/huggingface/transformers/pull/11517.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11517.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11516 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11516/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11516/comments | https://api.github.com/repos/huggingface/transformers/issues/11516/events | https://github.com/huggingface/transformers/issues/11516 | 871,265,447 | MDU6SXNzdWU4NzEyNjU0NDc= | 11,516 | Run_summarization not working for mbart50 | {
"login": "Aniruddha-JU",
"id": 36475622,
"node_id": "MDQ6VXNlcjM2NDc1NjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/36475622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aniruddha-JU",
"html_url": "https://github.com/Aniruddha-JU",
"followers_url": "https://api.github.com/users/Aniruddha-JU/followers",
"following_url": "https://api.github.com/users/Aniruddha-JU/following{/other_user}",
"gists_url": "https://api.github.com/users/Aniruddha-JU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aniruddha-JU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aniruddha-JU/subscriptions",
"organizations_url": "https://api.github.com/users/Aniruddha-JU/orgs",
"repos_url": "https://api.github.com/users/Aniruddha-JU/repos",
"events_url": "https://api.github.com/users/Aniruddha-JU/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aniruddha-JU/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @Aniruddha-JU \r\n\r\nRight now the `run_summarization.py` does not support fine-tuning mBART for summarization, we need to set the proper language tokens for mBART50. For now, you could easily modify the script to adapt it for mBART50 by setting the correct language tokens, as is done in the translation example.\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/examples/pytorch/translation/run_translation.py#L340-L380\r\n\r\nThe difference here would be that the source and target language will be similar.\r\n\r\nAlso, could you please post the full stack trace the error seems unrelated to mBART.",
"All the weights of MBartForConditionalGeneration were initialized from the model checkpoint at facebook/mbart-large-50.\r\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use MBartForConditionalGeneration for predictions without further training.\r\n 0%| | 0/3 [00:00<?, ?ba/s]\r\nTraceback (most recent call last):\r\n File \"run_summarization.py\", line 596, in <module>\r\n main()\r\n File \"run_summarization.py\", line 428, in main\r\n train_dataset = train_dataset.map(\r\n File \"/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1474, in map\r\n return self._map_single(\r\n File \"/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 174, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/site-packages/datasets/fingerprint.py\", line 340, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1798, in _map_single\r\n batch = apply_function_on_filtered_inputs(\r\n File \"/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1706, in apply_function_on_filtered_inputs\r\n function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n File \"run_summarization.py\", line 409, in preprocess_function\r\n with tokenizer.as_target_tokenizer():\r\n File \"/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/contextlib.py\", line 113, in __enter__\r\n return next(self.gen)\r\n File \"/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/site-packages/transformers/models/mbart/tokenization_mbart50_fast.py\", line 210, in as_target_tokenizer\r\n self.set_tgt_lang_special_tokens(self.tgt_lang)\r\n File \"/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/site-packages/transformers/models/mbart/tokenization_mbart50_fast.py\", line 235, in set_tgt_lang_special_tokens\r\n prefix_tokens_str = self.convert_ids_to_tokens(self.prefix_tokens)\r\n File \"/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py\", line 295, in convert_ids_to_tokens\r\n index = int(index)\r\nTypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'",
"@patil-suraj ",
"For translation json format is not supporting. core-dumped is happening.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
" with self.tokenizer.as_target_tokenizer():\r\n File \"/home/rahulpal/anaconda3/envs/rebel/lib/python3.7/contextlib.py\", line 112, in __enter__\r\n return next(self.gen)\r\n File \"/home/rahulpal/anaconda3/envs/rebel/lib/python3.7/site-packages/transformers/models/mbart50/tokenization_mbart50_fast.py\", line 215, in as_target_tokenizer\r\n self.set_tgt_lang_special_tokens(self.tgt_lang)\r\n File \"/home/rahulpal/anaconda3/envs/rebel/lib/python3.7/site-packages/transformers/models/mbart50/tokenization_mbart50_fast.py\", line 240, in set_tgt_lang_special_tokens\r\n prefix_tokens_str = self.convert_ids_to_tokens(self.prefix_tokens)\r\n File \"/home/rahulpal/anaconda3/envs/rebel/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py\", line 307, in convert_ids_to_tokens\r\n index = int(index)\r\nTypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'\r\n"
] | 1,619 | 1,638 | 1,623 | NONE | null | - `transformers` 4.5.0
- Platform: Linux
- Python version:
- PyTorch version (GPU?): 1.7.1
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
@patil-suraj @LysandreJik
Models: mbart
I am running the run_summarization.py script using the command below:
python examples/pytorch/summarization/run_summarization.py --model_name_or_path facebook/mbart-large-50 --do_train --do_eval --do_predict --test_file /home/aniruddha/mbart/mbart_json/bendev_mbart.json --train_file /home/aniruddha/mbart/mbart_json/bentrain_mbart.json --validation_file /home/aniruddha/mbart/mbart_json/bendev_mbart.json --text_column text --summary_column summary --output_dir mbart50_bengali-summarization --per_device_train_batch_size=1 --per_device_eval_batch_size=2 --overwrite_output_dir true --source_prefix "summarize: " --predict_with_generate yes
My dataset in json below format: I am doing it for bengali language:
{"text": "I'm sitting here in a boring room. It's just another rainy Sunday afternoon. I'm wasting my time I got nothing to do. I'm hanging around I'm waiting for you. But nothing ever happens. And I wonder", "summary": "I'm sitting in a room where I'm waiting for something to happen"}
Error:
File "/home/aniruddha/anaconda3/envs/mbart/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 295, in convert_ids_to_tokens
index = int(index)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
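
Following the suggestion in the comments, a sketch of the adaptation (the Bengali language code is an assumption on my part):

```python
from transformers import MBart50TokenizerFast

# mBART-50 needs explicit language codes; without them `tgt_lang` is None and
# `as_target_tokenizer()` fails with the TypeError shown above. For
# monolingual summarization, the source and target languages are the same.
tokenizer = MBart50TokenizerFast.from_pretrained(
    "facebook/mbart-large-50", src_lang="bn_IN", tgt_lang="bn_IN"
)

with tokenizer.as_target_tokenizer():
    labels = tokenizer("summary text", return_tensors="pt")
```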
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11516/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11515 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11515/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11515/comments | https://api.github.com/repos/huggingface/transformers/issues/11515/events | https://github.com/huggingface/transformers/issues/11515 | 871,252,650 | MDU6SXNzdWU4NzEyNTI2NTA= | 11,515 | Issues with TFGPT2ForSequenceClassification | {
"login": "cytwill",
"id": 38811872,
"node_id": "MDQ6VXNlcjM4ODExODcy",
"avatar_url": "https://avatars.githubusercontent.com/u/38811872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cytwill",
"html_url": "https://github.com/cytwill",
"followers_url": "https://api.github.com/users/cytwill/followers",
"following_url": "https://api.github.com/users/cytwill/following{/other_user}",
"gists_url": "https://api.github.com/users/cytwill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cytwill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cytwill/subscriptions",
"organizations_url": "https://api.github.com/users/cytwill/orgs",
"repos_url": "https://api.github.com/users/cytwill/repos",
"events_url": "https://api.github.com/users/cytwill/events{/privacy}",
"received_events_url": "https://api.github.com/users/cytwill/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the very in-detail issue description! @Rocketknight1 do you maybe want to give it a try here? Otherwise I'm happy to take a look :-)",
"Taking a look now!",
"Hi @cytwill, can you share a few lines of the data you're loading as X_train and y_train? If it's a private dataset, you can replace the text with random text - I just want to see the format of the data and try to reproduce the error here.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi. I am currently experiencing the same issue as the OP where the classification layer seems to be inserted before the main GPT layer. I basically have the same model summary and a similar error so I thought I'd try to reopen this.\r\n\r\nI know it's not an ideal dataset for the model but here's a copy of the Fine Tuning with Keras tutorial to illustrate the problem: https://colab.research.google.com/drive/1UJdB5QG_6L1qeWxM8Fa-CuDZQR32cshL?usp=sharing\r\n\r\nBelow the tensorflow implementation is the pytorch version that seems to work well enough."
] | 1,619 | 1,645 | 1,623 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: Google Colab
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.4.1
- Using GPU in script?: NO, but tf automatically use it
- Using distributed or parallel set-up in script?:
### Who can help
@patrickvonplaten, @LysandreJik, @Rocketknight1
## Information
Model I am using (GPT2):
The problem arises when using:
* [ ] my own modified scripts: (give details below)
When using TFGPT2ForSequenceClassification, I found that the structure of the model looks odd; see below:

Why is the classifier inserted before the GPT main layer? And when I load the PyTorch version, it looks different (inserted after the main layer):

Also, I tried to train this model as the tutorial on [fine-tuning BERT with a custom dataset](https://huggingface.co/transformers/custom_datasets.html) suggests, but it failed as follows (I loaded the pretrained classification model with 3 classes):
ValueError: in user code:
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:805 train_function *
return step_function(self, iterator)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:795 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:788 run_step **
outputs = model.train_step(data)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:758 train_step
self.compiled_metrics.update_state(y, y_pred, sample_weight)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:408 update_state
metric_obj.update_state(y_t, y_p, sample_weight=mask)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/utils/metrics_utils.py:90 decorated
update_op = update_state_fn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/metrics.py:177 update_state_fn
return ag_update_state(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/metrics.py:618 update_state **
matches = ag_fn(y_true, y_pred, **self._fn_kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/metrics.py:3315 sparse_categorical_accuracy
return math_ops.cast(math_ops.equal(y_true, y_pred), K.floatx())
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/math_ops.py:1679 equal
return gen_math_ops.equal(x, y, name=name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/gen_math_ops.py:3179 equal
name=name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/op_def_library.py:750 _apply_op_helper
attrs=attr_protos, op_def=op_def)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py:592 _create_op_internal
compute_device)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py:3536 _create_op_internal
op_def=op_def)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py:2016 __init__
control_input_ops, op_def)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py:1856 _create_c_op
raise ValueError(str(e))
ValueError: Dimensions must be equal, but are 3 and 512 for '{{node Equal}} = Equal[T=DT_FLOAT, incompatible_shape_error=true](Cast_1, Cast_2)' with input shapes: [?,3], [?,512].
The task I am working on is:
* [ ] my own task or dataset: (give details below)
The task is a multi-label classification task, where the label of each sample could be represented as a 3-dim vector like [0,0,0], [0,1,0], [1,1,0], etc.
## To reproduce
Steps to reproduce the behavior:
1. load the GPT2TokenizerFast and TFGPT2ForSequenceClassification with num_labels=3
```
my_gpt_tokenizer = GPT2TokenizerFast.from_pretrained('openai-gpt')
my_gpt_model = TFGPT2ForSequenceClassification.from_pretrained('openai-gpt',num_labels=3)
```
2. add a pad token to the tokenizer, tokenize the text as the tutorial does, and convert the encodings into dataset objects (see the note after these steps)
```
my_gpt_tokenizer.add_special_tokens({'pad_token': '[PAD]'})
gpt_train_encodings = my_gpt_tokenizer(X_train, truncation=True, padding=True)
gpt_test_encodings = my_gpt_tokenizer(X_test, truncation=True, padding=True)
gpt_train_dataset = tf.data.Dataset.from_tensor_slices((dict(gpt_train_encodings),y_train))
gpt_test_dataset = tf.data.Dataset.from_tensor_slices((dict(gpt_test_encodings),y_test))
```
3. train the model:
```
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
my_gpt_model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=['accuracy'])
history = my_gpt_model.fit(gpt_train_dataset.shuffle(500).batch(10), epochs=2, batch_size=10, validation_data=gpt_test_dataset.batch(10))
```
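As a side note on step 2 (an assumption about a likely follow-up issue, not something reported above): after adding a brand-new `[PAD]` token, the embedding matrix usually needs resizing and the model has to know the pad id, since GPT-2 sequence classification locates the last non-padding token through `pad_token_id`:
```python
# Assumed follow-up to step 2; variable names reuse the snippet above.
my_gpt_model.resize_token_embeddings(len(my_gpt_tokenizer))
my_gpt_model.config.pad_token_id = my_gpt_tokenizer.pad_token_id
```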
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The model should train successfully, as the BERT classification models do. I tried the same code with TFBertForSequenceClassification and TFDistilBertForSequenceClassification, and both worked.
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11515/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11515/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11514 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11514/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11514/comments | https://api.github.com/repos/huggingface/transformers/issues/11514/events | https://github.com/huggingface/transformers/pull/11514 | 871,211,448 | MDExOlB1bGxSZXF1ZXN0NjI2MjY4Mjk3 | 11,514 | solved coefficient issue for the TF version of gelu_fast | {
"login": "michaelbenayoun",
"id": 25418079,
"node_id": "MDQ6VXNlcjI1NDE4MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelbenayoun",
"html_url": "https://github.com/michaelbenayoun",
"followers_url": "https://api.github.com/users/michaelbenayoun/followers",
"following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions",
"organizations_url": "https://api.github.com/users/michaelbenayoun/orgs",
"repos_url": "https://api.github.com/users/michaelbenayoun/repos",
"events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelbenayoun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,619 | 1,619 | MEMBER | null | # What does this PR do?
This PR solves a bug in the Tensorflow version of gelu_fast: the two coefficients being used to compute the approximation were swapped, making the computation inaccurate. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11514/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11514",
"html_url": "https://github.com/huggingface/transformers/pull/11514",
"diff_url": "https://github.com/huggingface/transformers/pull/11514.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11514.patch",
"merged_at": 1619725646000
} |
https://api.github.com/repos/huggingface/transformers/issues/11513 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11513/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11513/comments | https://api.github.com/repos/huggingface/transformers/issues/11513/events | https://github.com/huggingface/transformers/pull/11513 | 871,204,356 | MDExOlB1bGxSZXF1ZXN0NjI2MjYyNTQ1 | 11,513 | Improve task summary docs | {
"login": "hamelsmu",
"id": 1483922,
"node_id": "MDQ6VXNlcjE0ODM5MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1483922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hamelsmu",
"html_url": "https://github.com/hamelsmu",
"followers_url": "https://api.github.com/users/hamelsmu/followers",
"following_url": "https://api.github.com/users/hamelsmu/following{/other_user}",
"gists_url": "https://api.github.com/users/hamelsmu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hamelsmu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamelsmu/subscriptions",
"organizations_url": "https://api.github.com/users/hamelsmu/orgs",
"repos_url": "https://api.github.com/users/hamelsmu/repos",
"events_url": "https://api.github.com/users/hamelsmu/events{/privacy}",
"received_events_url": "https://api.github.com/users/hamelsmu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"heh not sure why CI failed, just see this error message\r\n\r\n> Received \"killed\" signal",
"@sgugger sorry for the double comment (I commented on an old commit by accident), what I meant to say I made the changes you suggested, LMK if this does a good job of conveying the message!",
"Thanks again!"
] | 1,619 | 1,619 | 1,619 | CONTRIBUTOR | null | This PR makes various improvements to the [Summary of Tasks](file:///Users/hamelsmu/github/transformers/docs/_build/html/task_summary.html#named-entity-recognition) docs.
Instead of providing a summary of changes at the top, I added a comment to each of my changes below to give more context on why I suggested it.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11513/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11513",
"html_url": "https://github.com/huggingface/transformers/pull/11513",
"diff_url": "https://github.com/huggingface/transformers/pull/11513.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11513.patch",
"merged_at": 1619788007000
} |
https://api.github.com/repos/huggingface/transformers/issues/11512 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11512/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11512/comments | https://api.github.com/repos/huggingface/transformers/issues/11512/events | https://github.com/huggingface/transformers/issues/11512 | 871,202,837 | MDU6SXNzdWU4NzEyMDI4Mzc= | 11,512 | Piece A | {
"login": "triangle4rouge",
"id": 75120681,
"node_id": "MDQ6VXNlcjc1MTIwNjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/75120681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/triangle4rouge",
"html_url": "https://github.com/triangle4rouge",
"followers_url": "https://api.github.com/users/triangle4rouge/followers",
"following_url": "https://api.github.com/users/triangle4rouge/following{/other_user}",
"gists_url": "https://api.github.com/users/triangle4rouge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/triangle4rouge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/triangle4rouge/subscriptions",
"organizations_url": "https://api.github.com/users/triangle4rouge/orgs",
"repos_url": "https://api.github.com/users/triangle4rouge/repos",
"events_url": "https://api.github.com/users/triangle4rouge/events{/privacy}",
"received_events_url": "https://api.github.com/users/triangle4rouge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you elaborate more? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,623 | 1,623 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11512/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/11511 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11511/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11511/comments | https://api.github.com/repos/huggingface/transformers/issues/11511/events | https://github.com/huggingface/transformers/pull/11511 | 871,182,458 | MDExOlB1bGxSZXF1ZXN0NjI2MjQzNzI5 | 11,511 | Fix do_eval default value in training_args.py | {
"login": "bonniehyeon",
"id": 50580028,
"node_id": "MDQ6VXNlcjUwNTgwMDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/50580028?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bonniehyeon",
"html_url": "https://github.com/bonniehyeon",
"followers_url": "https://api.github.com/users/bonniehyeon/followers",
"following_url": "https://api.github.com/users/bonniehyeon/following{/other_user}",
"gists_url": "https://api.github.com/users/bonniehyeon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bonniehyeon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bonniehyeon/subscriptions",
"organizations_url": "https://api.github.com/users/bonniehyeon/orgs",
"repos_url": "https://api.github.com/users/bonniehyeon/repos",
"events_url": "https://api.github.com/users/bonniehyeon/events{/privacy}",
"received_events_url": "https://api.github.com/users/bonniehyeon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,622 | 1,619 | CONTRIBUTOR | null | # What does this PR do?
According to the `do_eval` description, it should be set to `True` whenever `evaluation_strategy` is different from 'no'.
But `do_eval`'s default value is `None`, so the line below can never run unless the user explicitly sets `do_eval = False`:
`if self.do_eval is False and self.evaluation_strategy != IntervalStrategy.NO: self.do_eval = True`
I think it would be better to change `do_eval`'s default value from `None` to `False`.
- How I found it:
I was trying to use `training_args.do_eval` in my script,
but it didn't work even though `evaluation_strategy` was set to 'steps'.
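As a minimal sketch of the proposed fix (the help string is abbreviated; the exact wording in training_args.py may differ):
```python
from dataclasses import dataclass, field


@dataclass
class TrainingArguments:
    # ...other fields elided...
    do_eval: bool = field(default=False, metadata={"help": "Whether to run eval on the dev set."})
```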
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11511/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11511/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11511",
"html_url": "https://github.com/huggingface/transformers/pull/11511",
"diff_url": "https://github.com/huggingface/transformers/pull/11511.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11511.patch",
"merged_at": 1619786112000
} |
https://api.github.com/repos/huggingface/transformers/issues/11510 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11510/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11510/comments | https://api.github.com/repos/huggingface/transformers/issues/11510/events | https://github.com/huggingface/transformers/pull/11510 | 871,173,953 | MDExOlB1bGxSZXF1ZXN0NjI2MjM2NzQ5 | 11,510 | [Examples] Added support for test-file in QA examples with no trainer | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"One More thing i want to mention is below code,\r\nhttps://github.com/huggingface/transformers/blob/ad1f7bef13f03287af00f819605d696138a5e6ec/examples/pytorch/question-answering/run_qa_no_trainer.py#L543-L547\r\ni changed to\r\n```python\r\n eval_dataset.set_format(type=\"torch\", columns=[\"attention_mask\", \"input_ids\"])\r\n eval_dataloader = DataLoader(eval_dataset, collate_fn=data_collator, batch_size=args.per_device_eval_batch_size)\r\n\r\n if args.do_predict:\r\n predict_dataset.set_format(type=\"torch\", columns=[\"attention_mask\", \"input_ids\"])\r\n```\r\nbecause somehow for local files `token_type_ids` was giving an error. at below line\r\nhttps://github.com/huggingface/transformers/blob/ad1f7bef13f03287af00f819605d696138a5e6ec/examples/pytorch/question-answering/run_qa_no_trainer.py#L543\r\nFor the dataset, it was working fine.\r\nWhen I remove the `token_type_ids` script run successfully for both part!",
"In the Readme.md of question-answering, i think there is a typo! in below line\r\n```\r\nexport TASK_NAME=mrpc\r\n```",
"Hi @sgugger,\r\nI have removed two columns since the eval_dataset is having following features,\r\n```\r\n['attention_mask', 'example_id', 'input_ids', 'offset_mapping']\r\n```\r\nand dataloader also had issue with `offset_mapping`",
"Hi @sgugger,\r\nThere is an error in the post_processing of `examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py`\r\n```\r\nTraceback (most recent call last):\r\n File \"transformers/examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py\", line 815, in <module>\r\n main()\r\n File \"transformers/examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py\", line 746, in main\r\n prediction = post_processing_function(eval_examples, eval_dataset, outputs_numpy)\r\n File \"transformers/examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py\", line 577, in post_processing_function\r\n prefix=stage,\r\n File \"/content/transformers/examples/pytorch/question-answering/utils_qa.py\", line 323, in postprocess_qa_predictions_with_beam_search\r\n feature_null_score = cls_logits[feature_index]\r\nIndexError: index 1 is out of bounds for dimension 0 with size 1\r\n100% 9/9 [01:42<00:00, 11.35s/it]\r\n```\r\nIt can be reproduced using this [colab](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/PostProcessingErrorInQAWithBeamSearchWithNoTrainer.ipynb),\r\n\r\nWhen I checked i found that `cls_logits` is coming like `tensor([-0.3511])` instead it should have length equal to number of samples (Five in this case) like `[-0.45879194 -0.46871808 -0.3622135 -0.4451167 -0.4400767 ]` \r\n",
"Hi @sgugger,\r\nPlease let me know if the above changes don't seem fine.\r\nI think there was a typo earlier it should be like this. Please correct me if I am wrong! ",
"Yes, thanks for catching that last problem! I believe the last thing to do is to remove the lines that reset the columns of the `eval_dataset` and `test_dataset` in the post processing ([here](https://github.com/huggingface/transformers/blob/1b0af5f4ed01c227179589722cd658d68f90be6a/examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py#L736) and [there](https://github.com/huggingface/transformers/blob/1b0af5f4ed01c227179589722cd658d68f90be6a/examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py#L794) in run_qa_beam_search_no_trainer, but they also are in run_qa_no_trainer)",
"Sure @sgugger,\r\nI forgot that, Thanks!",
"Thanks a lot for your work on this!",
"Thank you, @sgugger, for catching parts I missed in my review! Much appreciated!",
"Hi!\r\nThere is a typo in line 794 in run_qa_no_trainer.py : \r\n end_logits = accelerator.pad_across_processes(start_logits, dim=1, pad_index=-100)\r\nwhich should be:\r\n end_logits = accelerator.pad_across_processes(end_logits, dim=1, pad_index=-100)\r\n\r\nI'm not sure if it has been corrected in the latest version of transformers. I guess it's still there in https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa_no_trainer.py",
"Hi @JiaQiSJTU ,\n\nThanks for pointing out,\n\nLet me check on it.",
"Hi @sgugger,\r\n\r\nShall I fix this typo?\r\nhttps://github.com/huggingface/transformers/blob/ffd19ee1de36188c6208855160b5ff930caa00c0/examples/pytorch/question-answering/run_qa_no_trainer.py#L794",
"Yes please!"
] | 1,619 | 1,648 | 1,619 | CONTRIBUTOR | null | # What does this PR do?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] this was discuss in another [PR](https://github.com/huggingface/transformers/pull/11380#issuecomment-824930263)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @stas00 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11510/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11510",
"html_url": "https://github.com/huggingface/transformers/pull/11510",
"diff_url": "https://github.com/huggingface/transformers/pull/11510.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11510.patch",
"merged_at": 1619787770000
} |
https://api.github.com/repos/huggingface/transformers/issues/11509 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11509/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11509/comments | https://api.github.com/repos/huggingface/transformers/issues/11509/events | https://github.com/huggingface/transformers/issues/11509 | 871,054,857 | MDU6SXNzdWU4NzEwNTQ4NTc= | 11,509 | I-BERT: expected str, bytes or os.PathLike object, not NoneType | {
"login": "fdlci",
"id": 73292708,
"node_id": "MDQ6VXNlcjczMjkyNzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/73292708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fdlci",
"html_url": "https://github.com/fdlci",
"followers_url": "https://api.github.com/users/fdlci/followers",
"following_url": "https://api.github.com/users/fdlci/following{/other_user}",
"gists_url": "https://api.github.com/users/fdlci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fdlci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fdlci/subscriptions",
"organizations_url": "https://api.github.com/users/fdlci/orgs",
"repos_url": "https://api.github.com/users/fdlci/repos",
"events_url": "https://api.github.com/users/fdlci/events{/privacy}",
"received_events_url": "https://api.github.com/users/fdlci/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! I believe this checkpoint does not have a slow tokenizer, only a fast tokenizer. Can you try with:\r\n\r\n```py\r\nfrom transformers import RobertaTokenizerFast\r\ntokenizer = RobertaTokenizerFast.from_pretrained('kssteven/ibert-roberta-base')\r\n```",
"Thank you! It loaded the tokenizer without showing an error."
] | 1,619 | 1,620 | 1,620 | NONE | null | Hi, I have an issue when running this code provided by the HF documentation:
>>> from transformers import RobertaTokenizer, IBertForTokenClassification
>>> import torch
>>> tokenizer = RobertaTokenizer.from_pretrained('kssteven/ibert-roberta-base')
When running this I get the following error for the tokenizer:
TypeError: expected str, bytes or os.PathLike object, not NoneType | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11509/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11508 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11508/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11508/comments | https://api.github.com/repos/huggingface/transformers/issues/11508/events | https://github.com/huggingface/transformers/issues/11508 | 870,992,704 | MDU6SXNzdWU4NzA5OTI3MDQ= | 11,508 | Help understanding how to build a dataset for language as with the old TextDataset | {
"login": "danieldiezmallo",
"id": 46021411,
"node_id": "MDQ6VXNlcjQ2MDIxNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/46021411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danieldiezmallo",
"html_url": "https://github.com/danieldiezmallo",
"followers_url": "https://api.github.com/users/danieldiezmallo/followers",
"following_url": "https://api.github.com/users/danieldiezmallo/following{/other_user}",
"gists_url": "https://api.github.com/users/danieldiezmallo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danieldiezmallo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danieldiezmallo/subscriptions",
"organizations_url": "https://api.github.com/users/danieldiezmallo/orgs",
"repos_url": "https://api.github.com/users/danieldiezmallo/repos",
"events_url": "https://api.github.com/users/danieldiezmallo/events{/privacy}",
"received_events_url": "https://api.github.com/users/danieldiezmallo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,619 | 1,619 | NONE | null | I understand this issue should be on the Datasets library, so it's been created there https://github.com/huggingface/datasets/issues/
Hello,
I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file that has a whole document in each line, meaning that each line overpasses the normal 512 tokens limit of most tokenizers.
I would like to understand what is the process to build a text dataset that tokenizes each line, having previously split the documents in the dataset into lines of a "tokenizable" size, as the [old TextDataset](https://github.com/huggingface/transformers/blob/master/src/transformers/data/datasets/language_modeling.py) class would do, where you only had to do the following, and a tokenized dataset without text loss would be available to pass to a DataCollator:
```
model_checkpoint = 'distilbert-base-uncased'
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
from transformers import TextDataset
dataset = TextDataset(
tokenizer=tokenizer,
file_path="path/to/text_file.txt",
block_size=512,
)
```
For now, what I have is the following, which, of course, throws an error because each line is longer than the maximum block size in the tokenizer:
```
import datasets
dataset = datasets.load_dataset('path/to/text_file.txt')
model_checkpoint = 'distilbert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
def tokenize_function(examples):
return tokenizer(examples["text"])
tokenized_datasets = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])
tokenized_datasets
```
So what would be the "standard" way of creating a dataset in the way it was done before?
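For reference, a hedged sketch of one way to get fixed-size blocks back, following the `group_texts` pattern from the run_clm/run_mlm examples (the `block_size` of 512 mirrors the old `TextDataset` call above, and `tokenized_datasets` is the variable from the snippet before):
```python
block_size = 512

def group_texts(examples):
    # Concatenate all tokenized sequences, then cut them into block_size chunks,
    # dropping the small remainder at the end.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = (len(concatenated["input_ids"]) // block_size) * block_size
    return {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }

lm_dataset = tokenized_datasets.map(group_texts, batched=True, num_proc=4)
```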
Thank you very much for the help :))
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11508/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11507 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11507/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11507/comments | https://api.github.com/repos/huggingface/transformers/issues/11507/events | https://github.com/huggingface/transformers/issues/11507 | 870,935,455 | MDU6SXNzdWU4NzA5MzU0NTU= | 11,507 | Fine-Tuning TFGPT2LMHeadModel / What to pass to fit | {
"login": "demongolem-biz",
"id": 79917829,
"node_id": "MDQ6VXNlcjc5OTE3ODI5",
"avatar_url": "https://avatars.githubusercontent.com/u/79917829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/demongolem-biz",
"html_url": "https://github.com/demongolem-biz",
"followers_url": "https://api.github.com/users/demongolem-biz/followers",
"following_url": "https://api.github.com/users/demongolem-biz/following{/other_user}",
"gists_url": "https://api.github.com/users/demongolem-biz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/demongolem-biz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/demongolem-biz/subscriptions",
"organizations_url": "https://api.github.com/users/demongolem-biz/orgs",
"repos_url": "https://api.github.com/users/demongolem-biz/repos",
"events_url": "https://api.github.com/users/demongolem-biz/events{/privacy}",
"received_events_url": "https://api.github.com/users/demongolem-biz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
] | [
"When I run code supplied by another user from [another issue](https://github.com/huggingface/transformers/issues/2439), which supposedly worked at one point in time, I get a similar dimension mismatch. Is there a golden combination of tf and transformers I am supposed to be using?",
"Ah, the 16 above is the batch size, which must be 16. If I create a dataset with this batch size, then my model with train with the fit function. Like\r\n\r\n`dataset = tf.data.Dataset.from_tensor_slices((x, y)) ` \r\n`dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)`",
"Hi, Tensorflow maintainer here! Can you paste me a minimal example that reproduces the problem? You don't have to share your data or anything, just give me a few lines with a made-up tiny dataset that I can run here to recreate the problem - it'll make it much easier for me to track it down. Alternatively, if you're loading your data from HF datasets, that's fine too.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,623 | 1,623 | NONE | null | I am trying to fine-tune this pretrained model on my own data and I can't seem to get the format correct for what the model would like to see as input.
I am using TFGPT2LMHeadModel, GPT2Config and GPT2TokenizerFast.
When I do `model.fit(x,y, epochs=EPOCHS)`,
- If x and y are the outputs of tokenizing on GPT2TokenizerFast (i.e. `tokenized = tokenizer(data_list, return_tensors='tf', add_special_tokens = True, truncation=True, padding = 'longest')`), I get: `ValueError: Unsupported value type BatchEncoding returned by IteratorSpec, serialize`. I tried this because of what I saw on [this example code snippet](http://huggingface.co/transformers/model_doc/gpt2.html#tfgpt2lmheadmodel)
- If instead I choose x as `np.asarray(df['input_ids'].tolist()).astype('int32')` (and y as the corresponding array for my label data) I get `InvalidArgumentError: Incompatible shapes: [32,154] vs. [2,32,16,154]`, which looks a lot closer.
It seems like I have to choose the correct portions of the tokenizer output to feed to the fit function, but I am not choosing correctly. Could you please clarify this for me?
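Based on the fixes discussed in the comments above, a hedged sketch of preparing the inputs for `fit` (the texts and batch size are illustrative assumptions):
```python
import tensorflow as tf
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
texts = ["first training document", "second training document"]
tokenized = tokenizer(texts, return_tensors="np", truncation=True, padding="longest")

# BatchEncoding is not a plain dict; wrap it before handing it to tf.data,
# otherwise "Unsupported value type BatchEncoding" is raised.
x = dict(tokenized)
# For causal language modeling the labels are the input ids themselves.
y = tokenized["input_ids"]

dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.shuffle(100).batch(2, drop_remainder=True)  # batch size of 2 for this toy data
# dataset can now be passed to model.fit(dataset, epochs=EPOCHS)
```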
I am using tensorflow 2.4.1 and transformers 4.5.1. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11507/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11506 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11506/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11506/comments | https://api.github.com/repos/huggingface/transformers/issues/11506/events | https://github.com/huggingface/transformers/pull/11506 | 870,934,642 | MDExOlB1bGxSZXF1ZXN0NjI2MDQwNjAx | 11,506 | [WIP] Adding DETR | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the review, yes you've made it a lot more clear for me now:\r\n* it's only `rgb2id` and `id2rgb` which are used from `panopticapi`, and it's only about 20 lines of code I see now. So indeed, we can just copy that code into the library (and cite the authors).\r\n* however, I think it's still useful to have a `task` attribute at the init of `DetrFeatureExtractor`, because then we can do input type checking depending on the task, and it will also be useful regarding postprocessing the outputs of DETR (the task attribute is also done in `LukeTokenizer` for example). \r\n* regarding `config.return_intermediate_layers`, there's indeed to reason for it to be user-facing anymore (it is in the original repo - but for other reasons), so let's remove that from the config.",
"Addressed most comments. Once the draft is done, I will create a new branch, squash all commits and open up a new PR, with the remaining comments copied.\r\n\r\n@sgugger I've also added dummies for timm. But CI doesn't seem to be happy, as timm is not installed on it."
] | 1,619 | 1,620 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
I've made quite a lot of progress on implementing HuggingFace's version of [DETR](https://arxiv.org/abs/2005.12872). However, there are some remaining things to be discussed, mainly regarding `DetrFeatureExtractor`, which can be used to prepare images + annotations for the model.
## What I currently have
There are 3 models defined:
- `DetrModel`, which consists of a convolutional backbone + encoder-decoder Transformer, without any head on top.
- `DetrForObjectDetection`, which is `DetrModel` with 2 heads on top, namely a class labels classifier and a bounding box regressor.
- `DetrForSegmentation`, which is `DetrForObjectDetection` (yes you read that right, not `DetrModel`) with a mask head on top, for predicting segmentation masks.
Available notebooks:
- [inference notebook](https://colab.research.google.com/drive/1RWzoQHkGSfztcRcgTRcd3FJUDY4GVXVB?usp=sharing) of `DetrForObjectDetection`
- [fine-tuning notebook](https://drive.google.com/file/d/1NbG_DEPh2A87bpyQYvuutFXDvczkYAJ8/view?usp=sharing) - fine-tuning `DetrForObjectDetection` on a custom dataset (balloon dataset) - obtaining very good results!
- [inference notebook](https://colab.research.google.com/drive/1P-bz2ZBPNciT86gFQTl_qiD2LVPKqrSW?usp=sharing) of `DetrForSegmentation` (panoptic segmentation)
There's the feature extractor:
- `DetrFeatureExtractor`, which can be used to prepare images and annotations for the model. The API is similar to `ViTFeatureExtractor` and `DeiTFeatureExtractor`: the inputs are image(s) + annotation(s), and the output is `pixel_values` and `pixel_mask`.
- Currently, it only supports preparing data for object detection, not for panoptic segmentation. It is based on [this code](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/datasets/coco.py#L17) in the original implementation. Given an image and annotations in COCO format, it turns the annotations into the format expected by DETR, followed by normalization + resizing of the image and corresponding annotations.
## Questions
### 1: Supporting panoptic segmentation for DetrFeatureExtractor (done)
The problem is that if we also want to support panoptic segmentation, we rely on an external package named `panopticapi`, as it is used when preparing the annotations as can be seen [here](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/datasets/coco_panoptic.py#L9). I don't know how we can add this dependency, because I assume that people don't want to install this package if they want to use DetrFeatureExtractor for object detection. How can I handle an optional dependency?
I think there are 2 options here: either 1) add a `task` argument to the feature extractor and raise an error if task = "panoptic segmentation" and panopticapi is not available, or 2) create two different feature extractors (one for object detection, and one for panoptic segmentation). The first option would look something like
```
if task == "panoptic_segmentation":
if not is_panopticapi_available():
raise ImportError("Panopticapi is required for the feature extractor.")
```
This check could live either in the feature extractor's `__init__` or in its `__call__`.
### 2: DetrForPanopticSegmentation
`DetrForPanopticSegmentation` is a bit special in the sense that there are 2 ways to train this model, either 1) end-to-end, in which you train `DetrForObjectDetection` and the mask head altogether, or 2) in a 2-step process, in which you first train a `DetrForObjectDetection` model to predict bounding boxes + classes, and then in a second step you provide this model to DetrForPanopticSegmentation, freeze it and only train the mask head further for about 25 epochs. That's why there's a `box_model` argument in the `init` of `DetrForPanopticSegmentation`.
Also, `DetrForObjectDetection` itself only uses the last feature map of the convolutional backbone, but `DetrForPanopticSegmentation` does not, it also uses layers 2, 3 and 4 of a ResNet in the mask head. There's an attribute `return_intermediate_layers` of `DetrConfig`, which should be set to `False` for `DetrForObjectDetection` and `True` for `DetrForPanopticSegmentation`. Currently, I set `config.return_intermediate_layers` to `True` no matter what at the `init` of `DetrForPanopticSegmentation`, but I don't know if hard coding this value is allowed.
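As a pseudocode-level sketch of that hard-coded override (`DetrPreTrainedModel` and the attribute name follow this PR draft and are not final library code):
```python
class DetrForPanopticSegmentation(DetrPreTrainedModel):
    def __init__(self, config, box_model=None):
        # The mask head consumes intermediate ResNet feature maps, so force this on.
        config.return_intermediate_layers = True
        super().__init__(config)
```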
### 3: timm support
My implementation supports any convolutional backbone of the [timm](https://github.com/rwightman/pytorch-image-models) package. Should I add an `is_timm_available` check for the model (instead of `is_torch_available`)?
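A hedged sketch of such a check, mirroring how other optional backends are detected (the helper name and its eventual placement in file_utils.py are assumptions):
```python
import importlib.util

def is_timm_available():
    # True when the timm package can be imported in the current environment.
    return importlib.util.find_spec("timm") is not None
```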
Supporting only object detection would make life easier, but as DETR also obtains very good results on panoptic segmentation, it would be good to support that too.
Would love to hear your opinions @sgugger @patrickvonplaten @LysandreJik @patil-suraj
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11506/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11506",
"html_url": "https://github.com/huggingface/transformers/pull/11506",
"diff_url": "https://github.com/huggingface/transformers/pull/11506.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11506.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11505 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11505/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11505/comments | https://api.github.com/repos/huggingface/transformers/issues/11505/events | https://github.com/huggingface/transformers/issues/11505 | 870,934,374 | MDU6SXNzdWU4NzA5MzQzNzQ= | 11,505 | encoder decoder in transformers | {
"login": "lytum",
"id": 38668257,
"node_id": "MDQ6VXNlcjM4NjY4MjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/38668257?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lytum",
"html_url": "https://github.com/lytum",
"followers_url": "https://api.github.com/users/lytum/followers",
"following_url": "https://api.github.com/users/lytum/following{/other_user}",
"gists_url": "https://api.github.com/users/lytum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lytum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lytum/subscriptions",
"organizations_url": "https://api.github.com/users/lytum/orgs",
"repos_url": "https://api.github.com/users/lytum/repos",
"events_url": "https://api.github.com/users/lytum/events{/privacy}",
"received_events_url": "https://api.github.com/users/lytum/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there!\r\n\r\nIt would be nice if you ask such questions on the [forum](https://discuss.huggingface.co/). Use issues to report bugs and for feature requests or anything else that can't be discussed on the forum. Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,623 | 1,623 | NONE | null | Thanks for your contribution for EncoderDecoderModel
I want to ask a question about the pool_layer of encoder.
Generally the default 'add_pooling_layer=True' in encoder, while the output of encoder in EncoderDecoder is without pool_layer. Is my understanding correct?
Now i want to add a classification layer in encoder, how should i do now?
Thanks in advance | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11505/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11504 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11504/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11504/comments | https://api.github.com/repos/huggingface/transformers/issues/11504/events | https://github.com/huggingface/transformers/issues/11504 | 870,805,724 | MDU6SXNzdWU4NzA4MDU3MjQ= | 11,504 | Issue in checkpointing | {
"login": "yes1234man",
"id": 59166627,
"node_id": "MDQ6VXNlcjU5MTY2NjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/59166627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yes1234man",
"html_url": "https://github.com/yes1234man",
"followers_url": "https://api.github.com/users/yes1234man/followers",
"following_url": "https://api.github.com/users/yes1234man/following{/other_user}",
"gists_url": "https://api.github.com/users/yes1234man/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yes1234man/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yes1234man/subscriptions",
"organizations_url": "https://api.github.com/users/yes1234man/orgs",
"repos_url": "https://api.github.com/users/yes1234man/repos",
"events_url": "https://api.github.com/users/yes1234man/events{/privacy}",
"received_events_url": "https://api.github.com/users/yes1234man/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,620 | 1,620 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.0
- Platform: -
- Python version: 3.8
- PyTorch version (GPU?): 3.7
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger
## Information
Hi
I am observing that resuming from a checkpoint does not give the same results. I searched, and as mentioned here https://github.com/huggingface/transformers/issues/11323#issuecomment-822729525 , the Trainer currently does not save the random states so they can be restored, which is important. Could you add this info to self.state and also restore the random states in the Trainer on resume? That would be great.
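As a hedged sketch, the states such a resume would need to persist and restore might look like this (exactly what the Trainer should store is the proposal here, not current behavior):
```python
import random

import numpy as np
import torch

rng_state = {
    "python": random.getstate(),
    "numpy": np.random.get_state(),
    "torch": torch.get_rng_state(),
    "cuda": torch.cuda.get_rng_state_all() if torch.cuda.is_available() else None,
}
torch.save(rng_state, "rng_state.pth")
# On resume: random.setstate(...), np.random.set_state(...), torch.set_rng_state(...), etc.
```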
thanks
## Expected behavior
After resuming, one should get exactly the same results as training the model without interruption. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11504/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11503 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11503/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11503/comments | https://api.github.com/repos/huggingface/transformers/issues/11503/events | https://github.com/huggingface/transformers/pull/11503 | 870,775,203 | MDExOlB1bGxSZXF1ZXN0NjI1OTEyMTI3 | 11,503 | [Examples] Check key exists in datasets first | {
"login": "oToToT",
"id": 8341564,
"node_id": "MDQ6VXNlcjgzNDE1NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8341564?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oToToT",
"html_url": "https://github.com/oToToT",
"followers_url": "https://api.github.com/users/oToToT/followers",
"following_url": "https://api.github.com/users/oToToT/following{/other_user}",
"gists_url": "https://api.github.com/users/oToToT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oToToT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oToToT/subscriptions",
"organizations_url": "https://api.github.com/users/oToToT/orgs",
"repos_url": "https://api.github.com/users/oToToT/repos",
"events_url": "https://api.github.com/users/oToToT/events{/privacy}",
"received_events_url": "https://api.github.com/users/oToToT/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @oToToT,\r\nThanks for the PR, The changes seem valid to me. It should be as per changes in this PR.\r\n@sgugger what's your view on these changes?",
"Hi, @sgugger\r\nI'm not familiar with how huggingface do with pull requests, did I miss something to make it be merged?\r\nOr I just need to stay and wait it.\r\n\r\nThanks!",
"Thanks for the ping! I just forgot to click the merge button 🤦 "
] | 1,619 | 1,620 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
Correctly check that the key exists before accessing it in some example scripts. I suspect this was an oversight when the example scripts were written.
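For illustration, a hedged sketch of the guard pattern (variable names follow the example scripts but differ slightly per script):

```python
from datasets import load_dataset

datasets = load_dataset("glue", "mrpc")  # illustrative dataset with both splits
do_train, do_eval = True, True  # stand-ins for training_args.do_train/do_eval

# Check that the split exists before indexing into it.
if do_train:
    if "train" not in datasets:
        raise ValueError("--do_train requires a train dataset")
    train_dataset = datasets["train"]

if do_eval:
    if "validation" not in datasets:
        raise ValueError("--do_eval requires a validation dataset")
    eval_dataset = datasets["validation"]
```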
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
No new tests, since I didn't see any tests related to the examples. Maybe someone could point them out for me.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Probably @bhadreshpsavani would like to review this according to the log from `git blame`.
Or @sgugger, @patil-suraj as this is about examples.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11503/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11503",
"html_url": "https://github.com/huggingface/transformers/pull/11503",
"diff_url": "https://github.com/huggingface/transformers/pull/11503.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11503.patch",
"merged_at": 1620589358000
} |
https://api.github.com/repos/huggingface/transformers/issues/11502 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11502/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11502/comments | https://api.github.com/repos/huggingface/transformers/issues/11502/events | https://github.com/huggingface/transformers/pull/11502 | 870,769,716 | MDExOlB1bGxSZXF1ZXN0NjI1OTA3NTk4 | 11,502 | Pin HuggingFace Hub dependency | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,619 | 1,619 | MEMBER | null | There might be some breaking changes in the HuggingFace Hub library as development continues, so pin the dependency. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11502/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11502",
"html_url": "https://github.com/huggingface/transformers/pull/11502",
"diff_url": "https://github.com/huggingface/transformers/pull/11502.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11502.patch",
"merged_at": 1619765870000
} |
https://api.github.com/repos/huggingface/transformers/issues/11501 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11501/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11501/comments | https://api.github.com/repos/huggingface/transformers/issues/11501/events | https://github.com/huggingface/transformers/issues/11501 | 870,767,988 | MDU6SXNzdWU4NzA3Njc5ODg= | 11,501 | Penalise n-gram repetition in generated sequences | {
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @KMFODA \r\n\r\nThe `generate` method already supports penalizing n-gram repetition, this can be done by passing the argument `no_repeat_ngram_size` , if it's passed it will ensure that all n-grams of the given size will only occur once.\r\n\r\nThis however does not mean that the different `num_returned_sequences` sequences will be diverse since AFAIK in beam search usually the sequences are very close to each other. \r\n\r\nYou could try using beam sampling bypassing `do_sample=True` which will use sampling in each beam, which could introduce some diversity.\r\n\r\nThere's also an option of using [diverse beam search](https://arxiv.org/abs/1610.02424) which can be enabled using the option `diversity_penalty`.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,623 | 1,623 | CONTRIBUTOR | null | # 🚀 Feature request
As per this [forum post](https://discuss.huggingface.co/t/force-decoder-to-avoid-repetition-between-generated-sentences/625), it is sometimes helpful to have a parameter that can increase the diversity amongst the different generated sentences. This could be a penalty on the number of repeated n-grams between the generated sentences.
## Motivation
If a generation model is being used and `num_return_sequences` is greater than `1`, there are a number of use cases where it is beneficial to have a parameter that increases the diversity amongst the generated sentences, or at least avoids exact replicas, for example when trying to create different paraphrases of the same sentence or question.
Example:
Original question:
What is the expected close date of the opportunity?
Paraphrased questions generated by T5:
0: What will be the expected close date of the opportunity?
1: What is the expected closing date for the opportunity that you are considering?
2: What is the expected close date of the opportunity?
3: What is the expected close date on the opportunity?
4: When would be the expected close date of the opportunity?
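For context, a sketch of the kind of call that produces a list like the one above (the checkpoint and decoding values are illustrative, not the exact ones I used):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer(
    "paraphrase: What is the expected close date of the opportunity?",
    return_tensors="pt",
)
outputs = model.generate(
    **inputs,
    num_beams=10,
    num_return_sequences=5,
    # Penalises repeats *within* one sequence, but nothing penalises
    # repeats *across* the returned sequences, hence this request.
    no_repeat_ngram_size=2,
)
for sequence in outputs:
    print(tokenizer.decode(sequence, skip_special_tokens=True))
```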
## Your contribution
I'm happy to submit a PR to work on this if it makes sense to add this as a feature. I would just appreciate a steer on where the best place would be to add this penalty.
@patil-suraj @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11501/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11500 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11500/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11500/comments | https://api.github.com/repos/huggingface/transformers/issues/11500/events | https://github.com/huggingface/transformers/issues/11500 | 870,662,283 | MDU6SXNzdWU4NzA2NjIyODM= | 11,500 | not able load model from pipeline NotFound error | {
"login": "tiru1930",
"id": 12211287,
"node_id": "MDQ6VXNlcjEyMjExMjg3",
"avatar_url": "https://avatars.githubusercontent.com/u/12211287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tiru1930",
"html_url": "https://github.com/tiru1930",
"followers_url": "https://api.github.com/users/tiru1930/followers",
"following_url": "https://api.github.com/users/tiru1930/following{/other_user}",
"gists_url": "https://api.github.com/users/tiru1930/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tiru1930/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tiru1930/subscriptions",
"organizations_url": "https://api.github.com/users/tiru1930/orgs",
"repos_url": "https://api.github.com/users/tiru1930/repos",
"events_url": "https://api.github.com/users/tiru1930/events{/privacy}",
"received_events_url": "https://api.github.com/users/tiru1930/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you try with the full revision, i.e., `672273686ecedd6fbbd5c0593b17df082ab65e31`?",
"same issues @LysandreJik \r\n\r\n```\r\nmodel = pipeline('sentiment-analysis', model='textattack/bert-base-uncased-yelp-polarity',revision=\"672273686ecedd6fbbd5c0593b17df082ab65e31\")\r\n\r\n\r\n404 Client Error: Not Found for url: https://huggingface.co/textattack/bert-base-uncased-yelp-polarity/resolve/main/tf_model.h5\r\nTraceback (most recent call last):\r\n File \"/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_utils.py\", line 702, in from_pretrained\r\n local_files_only=local_files_only,\r\n File \"/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/file_utils.py\", line 1007, in cached_path\r\n local_files_only=local_files_only,\r\n File \"/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/file_utils.py\", line 1128, in get_from_cache\r\n r.raise_for_status()\r\n File \"/home/tiru/anaconda3/lib/python3.7/site-packages/requests/models.py\", line 940, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/textattack/bert-base-uncased-yelp-polarity/resolve/main/tf_model.h5\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py\", line 2936, in pipeline\r\n framework = framework or get_framework(model)\r\n File \"/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py\", line 108, in get_framework\r\n model = TFAutoModel.from_pretrained(model, revision=revision)\r\n File \"/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/models/auto/modeling_tf_auto.py\", line 561, in from_pretrained\r\n pretrained_model_name_or_path, *model_args, config=config, **kwargs\r\n File \"/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_utils.py\", line 711, in from_pretrained\r\n raise EnvironmentError(msg)\r\nOSError: Can't load weights for 'textattack/bert-base-uncased-yelp-polarity'. 
Make sure that:\r\n\r\n- 'textattack/bert-base-uncased-yelp-polarity' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or 'textattack/bert-base-uncased-yelp-polarity' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.\r\n\r\n```\r\n```\r\n>>> model = pipeline('sentiment-analysis', model='textattack/bert-base-uncased-yelp-polarity')\r\n\r\n\r\n404 Client Error: Not Found for url: https://huggingface.co/textattack/bert-base-uncased-yelp-polarity/resolve/main/tf_model.h5\r\nTraceback (most recent call last):\r\n File \"/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_utils.py\", line 702, in from_pretrained\r\n local_files_only=local_files_only,\r\n File \"/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/file_utils.py\", line 1007, in cached_path\r\n local_files_only=local_files_only,\r\n File \"/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/file_utils.py\", line 1128, in get_from_cache\r\n r.raise_for_status()\r\n File \"/home/tiru/anaconda3/lib/python3.7/site-packages/requests/models.py\", line 940, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/textattack/bert-base-uncased-yelp-polarity/resolve/main/tf_model.h5\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py\", line 2936, in pipeline\r\n framework = framework or get_framework(model)\r\n File \"/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py\", line 108, in get_framework\r\n model = TFAutoModel.from_pretrained(model, revision=revision)\r\n File \"/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/models/auto/modeling_tf_auto.py\", line 561, in from_pretrained\r\n pretrained_model_name_or_path, *model_args, config=config, **kwargs\r\n File \"/home/tiru/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_utils.py\", line 711, in from_pretrained\r\n raise EnvironmentError(msg)\r\nOSError: Can't load weights for 'textattack/bert-base-uncased-yelp-polarity'. Make sure that:\r\n\r\n- 'textattack/bert-base-uncased-yelp-polarity' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or 'textattack/bert-base-uncased-yelp-polarity' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.\r\n\r\n````",
"Ah, I think the error message could be clearer here; what I understand from this error is that you don't have PyTorch installed in your environment - only TensorFlow; however, that model does not have a TensorFlow checkpoint uploaded to the hub by `textattack`, only a PyTorch variant, so it fails at loading it.\r\n\r\nCould you install torch in your environment so as to benefit from the torch model? `pip install torch`\r\n\r\nRunning your code should work fine after this.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,623 | 1,623 | NONE | null | I am trying to load the textattack yelp model in a pipeline, but it raises the error below:
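A minimal repro, per the diagnosis in the thread above (the env only has TensorFlow installed, and this model id only ships a PyTorch checkpoint):

```python
from transformers import pipeline

# With only TensorFlow available, the pipeline looks for tf_model.h5, which
# 'textattack/bert-base-uncased-yelp-polarity' does not provide, hence the 404
# below; installing torch (pip install torch) resolves it.
classifier = pipeline(
    "sentiment-analysis",
    model="textattack/bert-base-uncased-yelp-polarity",
)
```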
```
/home/tiru/anaconda3/envs/td-solutions/bin/python /snap/pycharm-community/236/plugins/python-ce/helpers/pydev/pydevconsole.py --mode=client --port=40643
import sys; print('Python %s on %s' % (sys.version, sys.platform))
sys.path.extend(['/home/tiru/Desktop/td-solutions', '/home/tiru/Desktop/td-solutions/td-inference-only/text_sentiment_classification/transformers_bert'])
PyDev console: starting.
Python 3.8.8 (default, Feb 24 2021, 21:46:12)
[GCC 7.3.0] on linux
from transformers import pipeline
2021-04-29 12:00:40.847050: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-04-29 12:00:40.847070: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
model = pipeline('sentiment-analysis', model='textattack/bert-base-uncased-yelp-polarity',revision="6722736")
404 Client Error: Not Found for url: https://huggingface.co/textattack/bert-base-uncased-yelp-polarity/resolve/main/tf_model.h5
Traceback (most recent call last):
File "/home/tiru/anaconda3/envs/td-solutions/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 1191, in from_pretrained
resolved_archive_file = cached_path(
File "/home/tiru/anaconda3/envs/td-solutions/lib/python3.8/site-packages/transformers/file_utils.py", line 1036, in cached_path
output_path = get_from_cache(
File "/home/tiru/anaconda3/envs/td-solutions/lib/python3.8/site-packages/transformers/file_utils.py", line 1174, in get_from_cache
r.raise_for_status()
File "/home/tiru/anaconda3/envs/td-solutions/lib/python3.8/site-packages/requests/models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/textattack/bert-base-uncased-yelp-polarity/resolve/main/tf_model.h5
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/tiru/anaconda3/envs/td-solutions/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 340, in pipeline
framework = framework or get_framework(model)
File "/home/tiru/anaconda3/envs/td-solutions/lib/python3.8/site-packages/transformers/pipelines/base.py", line 68, in get_framework
model = TFAutoModel.from_pretrained(model, revision=revision)
File "/home/tiru/anaconda3/envs/td-solutions/lib/python3.8/site-packages/transformers/models/auto/modeling_tf_auto.py", line 602, in from_pretrained
return TF_MODEL_MAPPING[type(config)].from_pretrained(
File "/home/tiru/anaconda3/envs/td-solutions/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 1207, in from_pretrained
raise EnvironmentError(msg)
OSError: Can't load weights for 'textattack/bert-base-uncased-yelp-polarity'. Make sure that:
- 'textattack/bert-base-uncased-yelp-polarity' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'textattack/bert-base-uncased-yelp-polarity' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11500/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11499 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11499/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11499/comments | https://api.github.com/repos/huggingface/transformers/issues/11499/events | https://github.com/huggingface/transformers/pull/11499 | 870,458,725 | MDExOlB1bGxSZXF1ZXN0NjI1NjU3ODY3 | 11,499 | [DeepSpeed] fp32 support | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"That's correct. \r\n\r\nEarlier I tried to use `cur_version>pre_version` so it'd already work with master version, but then people were reporting bugs because they were on some older master version, so while this is less convenient, it avoids invalid bug reports and saves time to all ;)\r\n\r\n "
] | 1,619 | 1,619 | 1,619 | CONTRIBUTOR | null | Things we need to sync with the upcoming `deepspeed==0.3.16` release:
- `zero.Init` now takes a config as an argument (see the sketch after this list)
- fp32-support integration, plus doc and tests
- start troubleshooting section
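A hedged sketch of the synced `zero.Init` call (the config dict is a minimal illustration, not a full DeepSpeed config):

```python
import deepspeed
from transformers import AutoConfig, AutoModelForSeq2SeqLM

# Minimal illustrative stage-3 config; real runs pass the user's full ds_config.
ds_config = {
    "zero_optimization": {"stage": 3},
    "fp16": {"enabled": False},  # the fp32 mode this PR adds support for
}

config = AutoConfig.from_pretrained("t5-small")
with deepspeed.zero.Init(config=ds_config):
    model = AutoModelForSeq2SeqLM.from_config(config)
```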
### Future TODO
will probably do in the next PR:
- switch `from_config()` to perform the same `zero.Init` as `from_pretrained` + add test.
### Blocking events
PRs waiting to be merged before this PR can be merged:
- [x] https://github.com/microsoft/DeepSpeed/pull/1008 `zero.Init(config=ds_config)` new arg
- [x] https://github.com/microsoft/DeepSpeed/pull/1004 fp32 support
- [x] a new release is needed: 0.3.16
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11499/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11499",
"html_url": "https://github.com/huggingface/transformers/pull/11499",
"diff_url": "https://github.com/huggingface/transformers/pull/11499.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11499.patch",
"merged_at": 1619812308000
} |
https://api.github.com/repos/huggingface/transformers/issues/11498 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11498/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11498/comments | https://api.github.com/repos/huggingface/transformers/issues/11498/events | https://github.com/huggingface/transformers/pull/11498 | 870,286,838 | MDExOlB1bGxSZXF1ZXN0NjI1NTA3MjM0 | 11,498 | [Flax] Add docstrings & model outputs | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,619 | 1,619 | MEMBER | null | # What does this PR do?
This PR adds docstring examples & `ModelOutputs` to Flax. This includes `all_hidden_states`.
The code necessary for `all_attentions` is added as well, but it requires a change in the official Flax library.
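A hedged sketch of the usage this enables (the checkpoint is illustrative, and the kwarg names mirror the PyTorch API):

```python
from transformers import BertTokenizerFast, FlaxBertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = FlaxBertModel.from_pretrained("bert-base-cased")

inputs = tokenizer("Hello, Flax!", return_tensors="np")
outputs = model(**inputs, output_hidden_states=True)

# With the new ModelOutputs, fields are named instead of positional.
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
print(len(outputs.hidden_states))       # embeddings + one per layer
```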
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11498/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11498",
"html_url": "https://github.com/huggingface/transformers/pull/11498",
"diff_url": "https://github.com/huggingface/transformers/pull/11498.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11498.patch",
"merged_at": 1619690691000
} |
https://api.github.com/repos/huggingface/transformers/issues/11497 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11497/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11497/comments | https://api.github.com/repos/huggingface/transformers/issues/11497/events | https://github.com/huggingface/transformers/pull/11497 | 870,275,534 | MDExOlB1bGxSZXF1ZXN0NjI1NDk3MTcx | 11,497 | Reformat to make code clearer in tokenizer call | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"\"unuglify\" -> best branch name ever haha"
] | 1,619 | 1,619 | 1,619 | COLLABORATOR | null | # What does this PR do?
While reviewing a PR on the tokenizer call this morning, I had some trouble parsing what was happening, as black reformatted the tests in a way that is quite unreadable (IMO). This PR fixes that.
No logic is changed; it's just put in a more human-readable way (again, maybe this is just me).
"url": "https://api.github.com/repos/huggingface/transformers/issues/11497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11497/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11497",
"html_url": "https://github.com/huggingface/transformers/pull/11497",
"diff_url": "https://github.com/huggingface/transformers/pull/11497.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11497.patch",
"merged_at": 1619697070000
} |
https://api.github.com/repos/huggingface/transformers/issues/11496 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11496/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11496/comments | https://api.github.com/repos/huggingface/transformers/issues/11496/events | https://github.com/huggingface/transformers/pull/11496 | 870,241,533 | MDExOlB1bGxSZXF1ZXN0NjI1NDY5MDAw | 11,496 | Update TF text classification example | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,619 | 1,619 | MEMBER | null | This updates the TF text classification example with several refactors, as well as multi-GPU and TPU support.
It's late so I'd like to do one more pass over everything before merging tomorrow, but I'm opening for reviews before I head off for the evening! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11496/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11496",
"html_url": "https://github.com/huggingface/transformers/pull/11496",
"diff_url": "https://github.com/huggingface/transformers/pull/11496.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11496.patch",
"merged_at": 1619786733000
} |
https://api.github.com/repos/huggingface/transformers/issues/11495 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11495/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11495/comments | https://api.github.com/repos/huggingface/transformers/issues/11495/events | https://github.com/huggingface/transformers/issues/11495 | 870,161,899 | MDU6SXNzdWU4NzAxNjE4OTk= | 11,495 | mbart encoder decoder model | {
"login": "md975",
"id": 25549182,
"node_id": "MDQ6VXNlcjI1NTQ5MTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/25549182?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/md975",
"html_url": "https://github.com/md975",
"followers_url": "https://api.github.com/users/md975/followers",
"following_url": "https://api.github.com/users/md975/following{/other_user}",
"gists_url": "https://api.github.com/users/md975/gists{/gist_id}",
"starred_url": "https://api.github.com/users/md975/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/md975/subscriptions",
"organizations_url": "https://api.github.com/users/md975/orgs",
"repos_url": "https://api.github.com/users/md975/repos",
"events_url": "https://api.github.com/users/md975/events{/privacy}",
"received_events_url": "https://api.github.com/users/md975/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey,\r\n\r\nCould you leave more details:\r\n- Your environment\r\n- Your code(or edited so i can try it)\r\n\r\nLooking here:\r\nhttps://huggingface.co/transformers/model_doc/mbart.html?highlight=config#transformers.MBartConfig\r\n\r\nIt doesnt seem to be using hidden_states. Depending how you use the model, you may be grabbing its output incorrectly.",
"Thanks.\r\nI'm using python 3.7, torch 1.7.1 and installed transformers from the source (4.6.0.dev0).\r\nI'm following the exact implementations from [here](https://colab.research.google.com/drive/1WIk2bxglElfZewOHboPFNj8H44_VAyKE?usp=sharing), with minor edits:\r\n```\r\nmodel = MBartForConditionalGeneration.from_pretrained(\"facebook/mbart-large-50\")\r\ntokenizer = MBart50TokenizerFast.from_pretrained(\"facebook/mbart-large-50\", src_lang=\"cs_CZ\", tgt_lang=\"cs_CZ\")\r\n```\r\nchanged the function _process_data_to_model_inputs_\r\n```\r\ndef process_data_to_model_inputs(batch):\r\n # tokenize the inputs and labels\r\n inputs = tokenizer(batch['src'], padding=True, truncation=True, return_tensors=\"pt\")\r\n with tokenizer.as_target_tokenizer():\r\n outputs = tokenizer(batch['tgt'], return_tensors=\"pt\", padding=True, truncation=True)\r\n labels = outputs.input_ids.clone()\r\n data = TensorDataset(torch.tensor(inputs['input_ids']), torch.tensor(inputs['attention_mask']),\r\n torch.tensor(outputs['input_ids']), torch.tensor(outputs['attention_mask']),\r\n torch.tensor(labels))\r\n\r\n dataloader = DataLoader(data, batch_size=batch_size)\r\n return dataloader\r\n\r\n\r\n```\r\nand then training:\r\n```\r\nbart2bart = EncoderDecoderModel.from_encoder_decoder_pretrained(\"facebook/mbart-large-50\", \"facebook/mbart-large-50\")\r\n\r\nfor i in range(EPOCH):\r\n bart2bart.train()\r\n for step, batch in enumerate(train_data):\r\n batch = tuple(t.to(device) for t in batch)\r\n b_input_ids, b_attention_masks_enc, b_input_ids_de, b_attention_masks_de, b_labels= batch\r\n outputs = bart2bart(input_ids=b_input_ids, attention_mask=b_attention_masks_enc,\r\n labels=b_labels, decoder_input_ids=b_input_ids_de, decoder_attention_mask=b_attention_masks_de)\r\n loss, logits = outputs.loss, outputs.logits\r\n optimizer.zero_grad()\r\n bart2bart.zero_grad()\r\n loss.backward()\r\n optimizer.step()\r\n \r\n```\r\nI'm very new to this, so I'm probably not using the model correctly as you mentioned. But I'm not sure how to fix it.",
"Hey,\r\n\r\nUnfortunately i dont use torch, just tensorflow functional API. However i did note that for EncoderDecoder there can be a special configuration procedure. See here:\r\nhttps://huggingface.co/transformers/model_doc/encoderdecoder.html#transformers.EncoderDecoderConfig\r\n\r\nIt is possible that the default config doesn't behave well with MBart as it does with Bert(they are significantly different).\r\n\r\nTry passing in the configs for your encoder and decoder (both MBart) or load config from pretrained, there is example code in the above link. It certainly an error in what the decoder expects.",
"I tried this, thanks!\r\nThe issue still remains though...it's not working.\r\n@patrickvonplaten any tips for using mbart for an Encoder-Decoder Model based on your example notebook for bert?",
"Hey,\r\n\r\nFair enough, one last thing id note is:\r\n\"The EncoderDecoderModel can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the encoder and any pretrained autoregressive model as the decoder.\"\r\nfrom https://huggingface.co/transformers/model_doc/encoderdecoder.html\r\n\r\nI am not sure BART can be used for this. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,623 | 1,623 | NONE | null | Hi,
I've been following [this](https://colab.research.google.com/drive/1WIk2bxglElfZewOHboPFNj8H44_VAyKE?usp=sharing) to implement a bert2bert seq2seq model which works pretty well. Now I would like to change this to mbart (facebook/mbart-large-50) instead of bert.
I'm very new to this, but my assumption was that the same script should probably work for other transformers.
So I didn't change much, just initialized the tokenizer and also the model's encoder and decoder with mbart, however, I get the following error when passing the data to the bart2bart model during training:
> File "/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/python3.7/site-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py", line 442, in forward encoder_hidden_states=encoder_outputs.hidden_states, AttributeError: 'Seq2SeqModelOutput' object has no attribute 'hidden_states'
I'm probably making an obvious mistake but I'm not sure if I understand what the problem is and how I can fix it.
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11495/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11494 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11494/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11494/comments | https://api.github.com/repos/huggingface/transformers/issues/11494/events | https://github.com/huggingface/transformers/pull/11494 | 870,081,172 | MDExOlB1bGxSZXF1ZXN0NjI1MzM3MjMx | 11,494 | correct incorrect dimension comment in Longformer model | {
"login": "fredo838",
"id": 11276933,
"node_id": "MDQ6VXNlcjExMjc2OTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/11276933?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fredo838",
"html_url": "https://github.com/fredo838",
"followers_url": "https://api.github.com/users/fredo838/followers",
"following_url": "https://api.github.com/users/fredo838/following{/other_user}",
"gists_url": "https://api.github.com/users/fredo838/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fredo838/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fredo838/subscriptions",
"organizations_url": "https://api.github.com/users/fredo838/orgs",
"repos_url": "https://api.github.com/users/fredo838/repos",
"events_url": "https://api.github.com/users/fredo838/events{/privacy}",
"received_events_url": "https://api.github.com/users/fredo838/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,619 | 1,619 | CONTRIBUTOR | null | This PR fixes a comment that incorrectly states the dimensions of a certain tensor in the `Longformer` model, confusing any reader trying to understand the code. The comment for the corresponding `TFLongformerX` is correct.
- [x ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
Tagging @patrickvonplaten (Longformer)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11494/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11494",
"html_url": "https://github.com/huggingface/transformers/pull/11494",
"diff_url": "https://github.com/huggingface/transformers/pull/11494.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11494.patch",
"merged_at": 1619768533000
} |
https://api.github.com/repos/huggingface/transformers/issues/11493 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11493/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11493/comments | https://api.github.com/repos/huggingface/transformers/issues/11493/events | https://github.com/huggingface/transformers/pull/11493 | 870,003,406 | MDExOlB1bGxSZXF1ZXN0NjI1MjcyNjU3 | 11,493 | [Docs] remove paragraph on CI from installation instructions | {
"login": "hamelsmu",
"id": 1483922,
"node_id": "MDQ6VXNlcjE0ODM5MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1483922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hamelsmu",
"html_url": "https://github.com/hamelsmu",
"followers_url": "https://api.github.com/users/hamelsmu/followers",
"following_url": "https://api.github.com/users/hamelsmu/following{/other_user}",
"gists_url": "https://api.github.com/users/hamelsmu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hamelsmu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamelsmu/subscriptions",
"organizations_url": "https://api.github.com/users/hamelsmu/orgs",
"repos_url": "https://api.github.com/users/hamelsmu/repos",
"events_url": "https://api.github.com/users/hamelsmu/events{/privacy}",
"received_events_url": "https://api.github.com/users/hamelsmu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,619 | 1,619 | CONTRIBUTOR | null | Fixes #11479
@julien-c suggested to remove this paragraph in #11479
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11493/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11493",
"html_url": "https://github.com/huggingface/transformers/pull/11493",
"diff_url": "https://github.com/huggingface/transformers/pull/11493.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11493.patch",
"merged_at": 1619623002000
} |
https://api.github.com/repos/huggingface/transformers/issues/11492 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11492/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11492/comments | https://api.github.com/repos/huggingface/transformers/issues/11492/events | https://github.com/huggingface/transformers/pull/11492 | 869,949,607 | MDExOlB1bGxSZXF1ZXN0NjI1MjI3ODgz | 11,492 | Split checkpoint from model_name_or_path in examples | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi guys, just one comment/question from my side: instead of introducing new commandline options (allowing to resume from a checkpoint), what do you think about the following workflow: \r\n\r\na) load the model (with AutoModel)\r\nb) check if the loaded model is from type \"TokenClassification\"\r\n\r\n:thinking: ",
"> I'm also thinking - if we know we are going to resume from checkpoint for sure, then replace from_pretrained with from_config and save the double loading of the weights?\r\n\r\nI'm leaving this to your PoC on one of the seq2seq scripts if you don't mind? This PR is also kind of an urgent bug fix.",
"Oh, of course, I had no idea about the urgency. I will do it in another PR then."
] | 1,619 | 1,619 | 1,619 | COLLABORATOR | null | # What does this PR do?
There is currently a problem in the way we handle resuming from a checkpoint in our PyTorch example scripts with `Trainer`: if the user wants to resume from a specific checkpoint, they have to pass `--model_name_or_path checkpoint_folder`, which is a bit counter-intuitive, but it also poses the problem that sometimes the user is passing `--model_name_or_path local_pretrained_model`.
This has caused multiple issues that we tried to patch a bit as they came:
- in text classification or token classification, the `local_pretrained_model` might be pretrained with a different number of labels which made the loading fail (shape mismatch)
- more recently, since we're not using `from_pretrained` anymore when loading the checkpoint (to keep the model as provided by the user, with potentially frozen layers), the state dict of `local_pretrained_model` generally won't match: if the task is different, its head ends up with weights incompatible with the model.
This PR cleans things up by adding a new training argument called `--resume_from_checkpoint`. To resume training from an explicit checkpoint, the user now has to pass `--resume_from_checkpoint checkpoint_folder` as passing `--model_name_or_path some_local_folder` now only loads the model inside the local folder as a pretrained model and doesn't look for a checkpoint.
It's slightly breaking (not in the library, just the commands to run the examples) but cleaner IMO.
The training argument `resume_from_checkpoint` is used as a default for the argument of the same name in the `train` method.
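A sketch of the resulting API (output dir and checkpoint path are placeholders):

```python
from transformers import TrainingArguments

# New flag, decoupled from --model_name_or_path:
args = TrainingArguments(
    output_dir="out",
    resume_from_checkpoint="out/checkpoint-500",
)

# Inside the example scripts, train() then falls back to it:
# trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
```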
Might fix #11485 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11492/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11492/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11492",
"html_url": "https://github.com/huggingface/transformers/pull/11492",
"diff_url": "https://github.com/huggingface/transformers/pull/11492.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11492.patch",
"merged_at": 1619735628000
} |
https://api.github.com/repos/huggingface/transformers/issues/11491 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11491/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11491/comments | https://api.github.com/repos/huggingface/transformers/issues/11491/events | https://github.com/huggingface/transformers/issues/11491 | 869,934,434 | MDU6SXNzdWU4Njk5MzQ0MzQ= | 11,491 | Tensorflow “Index out of bound” error when trying to use the TF Longformer transformer in a custom TF network | {
"login": "beelzmon",
"id": 9759259,
"node_id": "MDQ6VXNlcjk3NTkyNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9759259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/beelzmon",
"html_url": "https://github.com/beelzmon",
"followers_url": "https://api.github.com/users/beelzmon/followers",
"following_url": "https://api.github.com/users/beelzmon/following{/other_user}",
"gists_url": "https://api.github.com/users/beelzmon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/beelzmon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/beelzmon/subscriptions",
"organizations_url": "https://api.github.com/users/beelzmon/orgs",
"repos_url": "https://api.github.com/users/beelzmon/repos",
"events_url": "https://api.github.com/users/beelzmon/events{/privacy}",
"received_events_url": "https://api.github.com/users/beelzmon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think you should replace `transformer_outputs = transformer[1]` with `transformer_outputs = transformer[0]`. ",
"> I think you should replace `transformer_outputs = transformer[1]` with `transformer_outputs = transformer[0]`.\r\n\r\n@fredo838 Thanks for the reply, yes i have tried this -- in this case the output shape is (None,4096,768) as per config. I tried taking 64 of these entries but i get the exact same error, in a slightly different format.",
"how about also changing `[transformer_outputs[i] for i in hiddes_states_ind]` to `[transformer_outputs[:, i] for i in hiddes_states_ind]` (that way you index the token dimension, not the batch dimension)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,622 | 1,622 | NONE | null | I am trying to adapt the longformer transformer TF model from huggingface into a bigger three class classification model, i have gotten the model to compile but i cannot run a test example on it. The model and attempted output is as below:
>Environment info
>transformers version: 2.4.1
>Platform: Windows 10
>Python version: python 3.8
>PyTorch version (GPU?): N/A
>Tensorflow version (GPU?): 2.4.1
>Using GPU in script?: yes
>Using distributed or parallel set-up in script?: no
Who can help
@Rocketknight1 (tensorflow)
@sgugger (examples)
@patrickvonplaten (Longformer)
@konstantin_doncov (very similar design in answer https://stackoverflow.com/questions/63201036/add-additional-layers-to-the-huggingface-transformers )
Information
Model I am using: Longformer
The problem arises when using:
my own modified scripts (details below)
```python
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Model
from tensorflow.keras.layers import GaussianNoise,LocallyConnected2D,LocallyConnected1D,Input,MaxPooling1D,Dense,Dropout,BatchNormalization,LSTM,GRU,ConvLSTM2D,Flatten,LayerNormalization,TimeDistributed,Conv1D,Reshape,Masking
from tensorflow.keras import backend as K
import pathlib
from tensorflow.keras.callbacks import Callback
from tensorflow.keras import regularizers,callbacks
import numpy as np
from tensorflow.keras.layers import Concatenate
from transformers import TFLongformerModel, LongformerTokenizer
if __name__ == "__main__":
    model_longformer = TFLongformerModel.from_pretrained('longformer-base-4096',output_hidden_states=True)
    print(model_longformer.summary())
    input_ids = tf.keras.Input(shape=(4096),dtype='int32')
    attention_mask = tf.keras.Input(shape=(4096), dtype='int32')
    opt=Adam()
    transformer = model_longformer([input_ids, attention_mask])
    transformer_outputs = transformer[1] #sequence output
    print("Transformer output shape:")
    print(transformer_outputs.shape)
    #Grab the last 64 sequence entries, out of allegedly (,768). This is the bit
    #that causes the error to mention the number '-63'
    hidden_states_size = 64
    hiddes_states_ind = list(range(-hidden_states_size, 0, 1))
    selected_hidden_states = tf.keras.layers.concatenate(tuple([transformer_outputs[i] for i in hiddes_states_ind]))
    print(selected_hidden_states.shape)
    #array_hidden = np.asarray(selected_hiddes_states)
    #flatter_longformer_1 = Flatten(array_hidden)
    reshape_longformer_1 = Reshape((1,1,),input_shape=(49152,))(selected_hidden_states) #49152 = 64*768
    rnn_cells = [tf.keras.layers.GRUCell(64,dropout=0.5,recurrent_dropout=0.25,kernel_regularizer=regularizers.l2(0.005)),tf.keras.layers.GRUCell(64,kernel_regularizer=regularizers.l2(0.005),dropout=0,recurrent_dropout=0)]
    stacked_gru = tf.keras.layers.StackedRNNCells(rnn_cells)
    gru_layer = tf.keras.layers.RNN(stacked_gru)(reshape_longformer_1)
    bn_merge = BatchNormalization()(gru_layer)
    drop_merge = Dropout(0.1)(bn_merge)
    dense_1 = Dense(25,kernel_regularizer=regularizers.l2(0.0))(drop_merge) #0.015
    bn_dense_1 = BatchNormalization()(dense_1)
    drop_dense_1 = Dropout(0.1)(bn_dense_1)
    dense_final = Dense(3, activation = "softmax")(drop_dense_1)
    model = Model(inputs=[input_ids, attention_mask], outputs=dense_final)
    model.compile(loss="categorical_crossentropy", optimizer=opt)
    print(model.summary())
    text_input = "Queensland detectives are investigating the death of a man after he died in hospital yesterday. 9News understands an altercation took place between the man - who lives at a unit complex in the Brisbane suburb of Stafford - and a group of friends while they were drinking last week. The altercation resulted in the man being stuck in the back of the head a number of times, with him then being rushed to hospital. The man died from the injuries in hospital yesterday."
    tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
    encoded_input = tokenizer(text_input, return_tensors='tf',padding='max_length',max_length=4096)
    model([encoded_input['input_ids'],encoded_input['attention_mask']])
```
Which outputs:
```
Model: "tf_longformer_model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
longformer (TFLongformerMain multiple 148659456
=================================================================
Total params: 148,659,456
Trainable params: 148,659,456
Non-trainable params: 0
_________________________________________________________________
None
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
WARNING:tensorflow:From C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\ops\array_ops.py:5041: calling gather (from tensorflow.python.ops.array_ops) with validate_indices is deprecated and will be removed in a future version.
Instructions for updating:
The `validate_indices` argument has no effect. Indices are always validated on CPU and never validated on GPU.
Transformer output shape:
(None, 768)
(49152,)
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 4096)] 0
__________________________________________________________________________________________________
input_2 (InputLayer) [(None, 4096)] 0
__________________________________________________________________________________________________
tf_longformer_model (TFLongform TFLongformerBaseMode 148659456 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
tf.__operators__.getitem (Slici (768,) 0 tf_longformer_model[0][14]
__________________________________________________________________________________________________
tf.__operators__.getitem_1 (Sli (768,) 0 tf_longformer_model[0][14]
__________________________________________________________________________________________________
EDITED OUT ANOTHER 62 SIMILAR LAYERS
__________________________________________________________________________________________________
tf.__operators__.getitem_63 (Sl (768,) 0 tf_longformer_model[0][14]
__________________________________________________________________________________________________
concatenate (Concatenate) (49152,) 0 tf.__operators__.getitem[0][0]
tf.__operators__.getitem_1[0][0]
EDITED ANOTHER 62 SIMILAR LINES
tf.__operators__.getitem_63[0][0]
__________________________________________________________________________________________________
reshape (Reshape) (49152, 1, 1) 0 concatenate[0][0]
__________________________________________________________________________________________________
rnn (RNN) (49152, 64) 37824 reshape[0][0]
__________________________________________________________________________________________________
batch_normalization (BatchNorma (49152, 64) 256 rnn[0][0]
__________________________________________________________________________________________________
dropout_49 (Dropout) (49152, 64) 0 batch_normalization[0][0]
__________________________________________________________________________________________________
dense (Dense) (49152, 25) 1625 dropout_49[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (49152, 25) 100 dense[0][0]
__________________________________________________________________________________________________
dropout_50 (Dropout) (49152, 25) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (49152, 3) 78 dropout_50[0][0]
==================================================================================================
Total params: 148,699,339
Trainable params: 148,699,161
Non-trainable params: 178
__________________________________________________________________________________________________
None
2021-04-29 08:53:45.368311: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at strided_slice_op.cc:108 : Invalid argument: slice index -63 of dimension 0 out of bounds.
Traceback (most recent call last):
File "c:\Automator_alpha\Just_longformer.py", line 60, in <module>
model([encoded_input['input_ids'],encoded_input['attention_mask']])
File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1014, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 426, in call
return self._run_internal_graph(
File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 562, in _run_internal_graph
outputs = node.layer(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1014, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\keras\layers\core.py", line 1520, in _call_wrapper
return original_call(*new_args, **new_kwargs)
File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\keras\layers\core.py", line 1326, in _call_wrapper
return self._call_wrapper(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\keras\layers\core.py", line 1358, in _call_wrapper
result = self.function(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1037, in _slice_helper
return strided_slice(
File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1210, in strided_slice
op = gen_array_ops.strided_slice(
File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 10484, in strided_slice
_ops.raise_from_not_ok_status(e, name)
File "C:\ProgramData\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\framework\ops.py", line 6868, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: slice index -63 of dimension 0 out of bounds. [Op:StridedSlice] name: model/tf.__operators__.getitem/strided_slice/
```
I am using 4096 for the input layers as that was the input length specified in the Longformer paper. I have tried using different values, not just 64, and I have tried iterating through the values without specifying an index (with a for statement, in which case the error says it cannot iterate without knowing the first dimension).
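Following the suggestions in the comments above, here is a minimal sketch of the slicing fix; it assumes the intent is to keep the batch dimension and take the last 64 token positions of the sequence output (element 0 of the model output, not element 1):
```python
# Sketch of the fix from the comments: take the sequence output, then slice
# along the token axis (axis 1) instead of the batch axis (axis 0).
transformer_outputs = transformer[0]  # shape: (None, 4096, 768)

hidden_states_size = 64
hiddes_states_ind = list(range(-hidden_states_size, 0, 1))
selected_hidden_states = tf.keras.layers.concatenate(
    [transformer_outputs[:, i] for i in hiddes_states_ind]
)  # shape: (None, 64 * 768)
```
With this change the concatenated tensor keeps its leading batch dimension, so the downstream `Reshape` and RNN layers receive `(None, 49152)` rather than `(49152,)`.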
I am new to this and feel like I am missing something basic. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11491/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11490 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11490/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11490/comments | https://api.github.com/repos/huggingface/transformers/issues/11490/events | https://github.com/huggingface/transformers/pull/11490 | 869,900,824 | MDExOlB1bGxSZXF1ZXN0NjI1MTg3Mzk5 | 11,490 | add importlib_metadata as dependency as it is required for py<3.8 | {
"login": "cdeepali",
"id": 70963368,
"node_id": "MDQ6VXNlcjcwOTYzMzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/70963368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cdeepali",
"html_url": "https://github.com/cdeepali",
"followers_url": "https://api.github.com/users/cdeepali/followers",
"following_url": "https://api.github.com/users/cdeepali/following{/other_user}",
"gists_url": "https://api.github.com/users/cdeepali/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cdeepali/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cdeepali/subscriptions",
"organizations_url": "https://api.github.com/users/cdeepali/orgs",
"repos_url": "https://api.github.com/users/cdeepali/repos",
"events_url": "https://api.github.com/users/cdeepali/events{/privacy}",
"received_events_url": "https://api.github.com/users/cdeepali/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Shouldn't this only be installed for python versions inferior to 3.8?",
"@LysandreJik, thanks for reviewing my PR.\r\n\r\nI think at the moment, we cannot use selectors with `noarch` python packages. Please see warning at - https://docs.conda.io/projects/conda-build/en/latest/resources/define-metadata.html?highlight=preprocess-selectors#architecture-independent-packages. ",
"I see! Let's go with it then. Thanks for the fix!",
"Could you open a PR against the `master` branch so that we can also apply that fix for future conda releases?",
"Also, are you savvy about `conda`? We're having an issue with our recent version releases: [failing suite](https://github.com/huggingface/transformers/runs/2334821335?check_suite_focus=true), do you have an idea of what might be the conflict happening here?",
"Thanks @LysandreJik for merging this one. Yes I will open a PR against master too. ",
"Added PR https://github.com/huggingface/transformers/pull/11591 for master branch.",
"> Also, are you savvy about `conda`? We're having an issue with our recent version releases: [failing suite](https://github.com/huggingface/transformers/runs/2334821335?check_suite_focus=true), do you have an idea of what might be the conflict happening here?\r\n\r\nI think this is failing because `python` version in the build env is `3.9` and we do not have `tokenizers` for `py39` on `HuggingFace` channel. \r\n```\r\n$ conda search tokenizers=0.10.2 -c HuggingFace\r\nLoading channels: done\r\n# Name Version Build Channel\r\ntokenizers 0.10.2 py35_0 HuggingFace\r\ntokenizers 0.10.2 py36_0 HuggingFace\r\ntokenizers 0.10.2 py37_0 HuggingFace\r\ntokenizers 0.10.2 py38_0 HuggingFace\r\n```\r\nLooks like anaconda upload failed for tokenizers 3.9 - https://github.com/huggingface/tokenizers/runs/2272898351#step:8:17\r\n\r\nI was able to build transformers locally with py38. ",
"Ah, I thought we were already building on 3.8, that's my bad. Thanks for your help!"
] | 1,619 | 1,620 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/11399
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11490/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11490",
"html_url": "https://github.com/huggingface/transformers/pull/11490",
"diff_url": "https://github.com/huggingface/transformers/pull/11490.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11490.patch",
"merged_at": 1620114313000
} |
https://api.github.com/repos/huggingface/transformers/issues/11489 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11489/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11489/comments | https://api.github.com/repos/huggingface/transformers/issues/11489/events | https://github.com/huggingface/transformers/pull/11489 | 869,837,653 | MDExOlB1bGxSZXF1ZXN0NjI1MTM1NjQx | 11,489 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,619 | 1,619 | CONTRIBUTOR | null | Add link to code
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11489/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11489",
"html_url": "https://github.com/huggingface/transformers/pull/11489",
"diff_url": "https://github.com/huggingface/transformers/pull/11489.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11489.patch",
"merged_at": 1619771399000
} |
https://api.github.com/repos/huggingface/transformers/issues/11488 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11488/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11488/comments | https://api.github.com/repos/huggingface/transformers/issues/11488/events | https://github.com/huggingface/transformers/issues/11488 | 869,793,715 | MDU6SXNzdWU4Njk3OTM3MTU= | 11,488 | TFLongformerForMaskedMLM example throws ValueError "shapes are incompatible" | {
"login": "fredo838",
"id": 11276933,
"node_id": "MDQ6VXNlcjExMjc2OTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/11276933?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fredo838",
"html_url": "https://github.com/fredo838",
"followers_url": "https://api.github.com/users/fredo838/followers",
"following_url": "https://api.github.com/users/fredo838/following{/other_user}",
"gists_url": "https://api.github.com/users/fredo838/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fredo838/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fredo838/subscriptions",
"organizations_url": "https://api.github.com/users/fredo838/orgs",
"repos_url": "https://api.github.com/users/fredo838/repos",
"events_url": "https://api.github.com/users/fredo838/events{/privacy}",
"received_events_url": "https://api.github.com/users/fredo838/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! The model is working fine here, but the problem is that \"[MASK]\" and \"Paris\" are being tokenized as different numbers of tokens, which is where your shape error is coming from. Can you link me to the exact script you got this example from?",
"It's under this headline, here's the permalink: https://huggingface.co/transformers/model_doc/longformer.html#tflongformerformaskedlm",
"ah so it's probably just updating `inputs[\"labels\"] = tokenizer(\"The capital of France is Paris.\", return_tensors=\"tf\")[\"input_ids\"]\r\n` to `inputs[\"labels\"] = tokenizer(\"The capital of [MASK] is Paris.\", return_tensors=\"tf\")[\"input_ids\"]`, no?",
"I checked and you're absolutely right, the example as written does not work. I did some digging and the problem is that the mask sequence for this model is actually '\\<mask\\>' and not '[MASK]'. Therefore, 'Paris' actually does get correctly tokenized as one token but '[MASK]' does not get recognized as a special character and is 'spelled out' with three word-piece tokens instead. (You can see what splits the tokenizer chose by using `tokenizer.convert_ids_to_tokens()` on the tokenized inputs).\r\n\r\nThe example should work if you replace '[MASK]' with '\\<mask\\>'. Can you try that and let me know? If it works, we can make a PR to fix this example!",
"So now the following example:\r\n\r\n```from transformers import LongformerTokenizer, TFLongformerForMaskedLM\r\nimport tensorflow as tf\r\ntokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')\r\nmodel = TFLongformerForMaskedLM.from_pretrained('allenai/longformer-base-4096')\r\ninputs = tokenizer(\"The capital of France is <mask>.\", return_tensors=\"tf\")\r\ninputs[\"labels\"] = tokenizer(\"The capital of France is Paris.\", return_tensors=\"tf\")[\"input_ids\"]\r\noutputs = model(inputs)\r\nloss = outputs.loss\r\nlogits = outputs.logits\r\npreds = tf.argmax(logits, axis=2)\r\npredicted_tokens = tokenizer.convert_ids_to_tokens(tf.squeeze(preds))\r\nprint(\"predicted_tokens: \", predicted_tokens)\r\n```\r\n\r\nyields:\r\n\r\n`['<s>', 'The', 'Ġcapital', 'Ġof', 'ĠFrance', 'Ġis', 'ĠParis', '.', '</s>']`\r\n\r\nSo at least we're doing something right, but there's still this weird `Ġ` character on every non-first token.",
"Ah, yes! The Ġ character is used to indicate word breaks. If you want to see the pure string output without it, try using the `decode()` method instead of `convert_ids_to_tokens()`.\r\n\r\nOther than that, though, your example looks good! I talked with people on the team and we can't use it directly, annoyingly - the examples are all built from the same template, so we can't easily change just one. Still, we can pass some arguments to make sure our example works for Longformer in future.\r\n\r\nThe relevant bit is [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/longformer/modeling_tf_longformer.py#L2080). If you'd like to try it yourself, you can submit a PR to add the argument `mask='<mask>'` to the `add_code_sample_docstrings` decorator. If that sounds like a lot of work, just let me know and I'll make the PR and credit you for spotting it!",
"@Rocketknight1 I added a PR (https://github.com/huggingface/transformers/pull/11559)",
"Closing this because we have the PR now!"
] | 1,619 | 1,620 | 1,620 | CONTRIBUTOR | null | An official example of the `TFLongFormerX` page does not work.
## Environment info
- `transformers` version: 2.4.1
- Platform: ubuntu 20.04
- Python version: python3.8
- PyTorch version (GPU?): N/A
- Tensorflow version (GPU?): 2.4.1
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten (Longformer)
@Rocketknight1 (tensorflow)
@sgugger (maintained examples )
## Information
Model I am using: Longformer
The problem arises when using:
* [x ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. `docker run -it --rm python:3.8 bash` (no gpus attached)
2. `python3 -m pip install pip --upgrade`
3. `python3 -m pip install transformers tensorflow`
4. `python3` -> launch interactive shell
5. run following lines:
```
from transformers import LongformerTokenizer, TFLongformerForMaskedLM
import tensorflow as tf
tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
model = TFLongformerForMaskedLM.from_pretrained('allenai/longformer-base-4096')
inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf")
inputs["labels"] = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
outputs = model(inputs)
# loss = outputs.loss
# logits = outputs.logits
```
This throws following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1012, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/transformers/models/longformer/modeling_tf_longformer.py", line 2140, in call
loss = None if inputs["labels"] is None else self.compute_loss(inputs["labels"], prediction_scores)
File "/usr/local/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 158, in compute_loss
reduced_logits = tf.boolean_mask(tf.reshape(logits, (-1, shape_list(logits)[2])), active_loss)
File "/usr/local/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py", line 1831, in boolean_mask_v2
return boolean_mask(tensor, mask, name, axis)
File "/usr/local/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py", line 1751, in boolean_mask
shape_tensor[axis:axis + ndims_mask].assert_is_compatible_with(shape_mask)
File "/usr/local/lib/python3.8/site-packages/tensorflow/python/framework/tensor_shape.py", line 1134, in assert_is_compatible_with
raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (11,) and (9,) are incompatible
```
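For reference, a small diagnostic sketch based on the resolution in the comments above: Longformer's mask token is `<mask>`, so `[MASK]` is not recognized as a special token and gets split into several word pieces, which is why the two inputs end up with different lengths.
```python
# Diagnostic sketch: compare how the two strings tokenize. "[MASK]" is spelled
# out as multiple word pieces, while "<mask>" maps to a single token id.
from transformers import LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
for text in ("The capital of France is [MASK].", "The capital of France is <mask>."):
    ids = tokenizer(text)["input_ids"]
    print(len(ids), tokenizer.convert_ids_to_tokens(ids))
```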
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11488/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11487 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11487/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11487/comments | https://api.github.com/repos/huggingface/transformers/issues/11487/events | https://github.com/huggingface/transformers/issues/11487 | 869,688,541 | MDU6SXNzdWU4Njk2ODg1NDE= | 11,487 | Importing problem | {
"login": "abetatos",
"id": 76526314,
"node_id": "MDQ6VXNlcjc2NTI2MzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/76526314?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abetatos",
"html_url": "https://github.com/abetatos",
"followers_url": "https://api.github.com/users/abetatos/followers",
"following_url": "https://api.github.com/users/abetatos/following{/other_user}",
"gists_url": "https://api.github.com/users/abetatos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abetatos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abetatos/subscriptions",
"organizations_url": "https://api.github.com/users/abetatos/orgs",
"repos_url": "https://api.github.com/users/abetatos/repos",
"events_url": "https://api.github.com/users/abetatos/events{/privacy}",
"received_events_url": "https://api.github.com/users/abetatos/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"try the newest version 4.6.0.dev0\r\n\r\n\r\n\r\n",
"Could you install `sentencepiece` and try again? The `PegasusTokenizer` is based on the `sentencepiece` library.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Iam also facing the same issue"
] | 1,619 | 1,639 | 1,622 | NONE | null | - `transformers` version: 4.5.1
- The import simply fails: `cannot import name 'PegasusTokenizer' from 'transformers'`, Python 3.8 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11487/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11486 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11486/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11486/comments | https://api.github.com/repos/huggingface/transformers/issues/11486/events | https://github.com/huggingface/transformers/pull/11486 | 869,617,597 | MDExOlB1bGxSZXF1ZXN0NjI0OTQ4NjI4 | 11,486 | Update `PreTrainedTokenizerBase` to check/handle batch length for `text_pair` parameter | {
"login": "hamelsmu",
"id": 1483922,
"node_id": "MDQ6VXNlcjE0ODM5MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1483922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hamelsmu",
"html_url": "https://github.com/hamelsmu",
"followers_url": "https://api.github.com/users/hamelsmu/followers",
"following_url": "https://api.github.com/users/hamelsmu/following{/other_user}",
"gists_url": "https://api.github.com/users/hamelsmu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hamelsmu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamelsmu/subscriptions",
"organizations_url": "https://api.github.com/users/hamelsmu/orgs",
"repos_url": "https://api.github.com/users/hamelsmu/repos",
"events_url": "https://api.github.com/users/hamelsmu/events{/privacy}",
"received_events_url": "https://api.github.com/users/hamelsmu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks! Failure in the tests is unrelated (some connection problem), so merging."
] | 1,619 | 1,619 | 1,619 | CONTRIBUTOR | null | Consider the following example:
```py
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
text = r"""
🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural
Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
TensorFlow 2.0 and PyTorch.
"""
questions = [
"How many pretrained models are available in 🤗 Transformers?",
"What does 🤗 Transformers provide?",
"🤗 Transformers provides interoperability between which frameworks?"
]
inp = tokenizer(text=questions,
text_pair=text,
add_special_tokens=True,
padding=True,
truncation=True,
return_tensors="pt")
print(inp.input_ids.shape)
```
**The error in the above example is that the parameter `text_pair` is a string, but is supposed to be a `List[str]` to match the batch size of `text`.**
Currently, this silently fails because when `text_pair` is a string it is treated as an iterable, causing `zip(text, text_pair)` to erroneously build the wrong inputs to the model. This PR adds the following:
1. If `text_pair` is a string but the user passes in a batch of `text`, we convert the input for them automatically (for example, when you want to ask multiple questions about the same passage).
2. Error checking to see whether the batch length of `text` matches the batch length of `text_pair`, ONLY when a batch of inputs is used.
@LysandreJik @sgugger
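For comparison, here is a minimal sketch of the explicit workaround this PR makes unnecessary, broadcasting the single passage by hand (the `questions`, `text`, and `tokenizer` variables are the ones from the example above):
```python
# Workaround prior to this PR: repeat the passage explicitly so that `text`
# and `text_pair` have matching batch lengths (one passage copy per question).
inp = tokenizer(
    text=questions,
    text_pair=[text] * len(questions),
    add_special_tokens=True,
    padding=True,
    truncation=True,
    return_tensors="pt",
)
print(inp.input_ids.shape)  # expected: (3, seq_len), one row per question
```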
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11486/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11486",
"html_url": "https://github.com/huggingface/transformers/pull/11486",
"diff_url": "https://github.com/huggingface/transformers/pull/11486.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11486.patch",
"merged_at": 1619619078000
} |
https://api.github.com/repos/huggingface/transformers/issues/11485 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11485/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11485/comments | https://api.github.com/repos/huggingface/transformers/issues/11485/events | https://github.com/huggingface/transformers/issues/11485 | 869,594,228 | MDU6SXNzdWU4Njk1OTQyMjg= | 11,485 | run_mlm.py : Missing key(s) in state_dict & Unexpected key(s) in state_dict | {
"login": "TingNLP",
"id": 54096137,
"node_id": "MDQ6VXNlcjU0MDk2MTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/54096137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TingNLP",
"html_url": "https://github.com/TingNLP",
"followers_url": "https://api.github.com/users/TingNLP/followers",
"following_url": "https://api.github.com/users/TingNLP/following{/other_user}",
"gists_url": "https://api.github.com/users/TingNLP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TingNLP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TingNLP/subscriptions",
"organizations_url": "https://api.github.com/users/TingNLP/orgs",
"repos_url": "https://api.github.com/users/TingNLP/repos",
"events_url": "https://api.github.com/users/TingNLP/events{/privacy}",
"received_events_url": "https://api.github.com/users/TingNLP/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The command runs for me and according to your logs, the `Trainer` is loading a local checkpoint named `roberta-base`. Do you have a local folder named `roberta-base`? It looks like it contains a checkpoint different from the actual `roberta-base` model, which messes up and creates the error. Could you move that folder and try again?",
"@sgugger \r\nYes, I create a local folder named `roberta-base`, but the `roberta-base` folder contents is download from `huggingface` (https://huggingface.co/roberta-base/tree/main)\r\n\r\nthe `language-modeling` folder screenshot as shown below:\r\n\r\n\r\nthe `roberta-base` folder screenshot as shown below:\r\n\r\n\r\nso i am confused...",
"I think it's linked to the bug #11492 is fixing. Should be merged today and then you can try on a source install!"
] | 1,619 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.6.0.dev0
- Platform: Ubuntu 16.04.3 LTS
- Python version: Python 3.6.13 :: Anaconda, Inc.
- PyTorch version (GPU?): 1.8.1+cu102
- Tensorflow version (GPU?):
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: YES
### Who can help
@sgugger
## Information
Model I am using roberta:
The problem arises when using:
- [x] the official example scripts: run_mlm.py
The tasks I am working on is:
- [x] my own task or dataset: wikitext-2-raw-txt
(https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/)
## To reproduce
Steps to reproduce the behavior:
I follow the example
https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling
When I run
```
python run_mlm.py \
--output_dir tmp/test-mlm \
--model_name_or_path roberta-base \
--do_train \
--train_file wikitext-2-raw-txt/wiki.train.txt \
--do_eval \
--validation_file wikitext-2-raw-txt/wiki.valid.txt \
--line_by_line
```
and the error occurs
```
2021-04-28 16:18:24.068938: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
04/28/2021 16:18:25 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 4distributed training: False, 16-bits training: False
04/28/2021 16:18:25 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=tmp/test-mlm, overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=IntervalStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_ratio=0.0, warmup_steps=0, logging_dir=runs/Apr28_16-18-25_Devbox4, logging_strategy=IntervalStrategy.STEPS, logging_first_step=False, logging_steps=500, save_strategy=IntervalStrategy.STEPS, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=tmp/test-mlm, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, length_column_name=length, report_to=['tensorboard', 'wandb'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, use_legacy_prediction_loop=False, push_to_hub=False, _n_gpu=4, mp_parameters=)
04/28/2021 16:18:26 - WARNING - datasets.builder - Using custom data configuration default-b1467a68ec9fe52f
04/28/2021 16:18:27 - WARNING - datasets.builder - Reusing dataset text (/home/A50442/.cache/huggingface/datasets/text/default-b1467a68ec9fe52f/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5)
[INFO|configuration_utils.py:498] 2021-04-28 16:18:27,029 >> loading configuration file roberta-base/config.json
[INFO|configuration_utils.py:536] 2021-04-28 16:18:27,029 >> Model config RobertaConfig {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.6.0.dev0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
[INFO|configuration_utils.py:498] 2021-04-28 16:18:27,030 >> loading configuration file roberta-base/config.json
[INFO|configuration_utils.py:536] 2021-04-28 16:18:27,030 >> Model config RobertaConfig {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.6.0.dev0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
[INFO|tokenization_utils_base.py:1649] 2021-04-28 16:18:27,030 >> Didn't find file roberta-base/added_tokens.json. We won't load it.
[INFO|tokenization_utils_base.py:1649] 2021-04-28 16:18:27,030 >> Didn't find file roberta-base/special_tokens_map.json. We won't load it.
[INFO|tokenization_utils_base.py:1649] 2021-04-28 16:18:27,030 >> Didn't find file roberta-base/tokenizer_config.json. We won't load it.
[INFO|tokenization_utils_base.py:1713] 2021-04-28 16:18:27,030 >> loading file roberta-base/vocab.json
[INFO|tokenization_utils_base.py:1713] 2021-04-28 16:18:27,030 >> loading file roberta-base/merges.txt
[INFO|tokenization_utils_base.py:1713] 2021-04-28 16:18:27,031 >> loading file roberta-base/tokenizer.json
[INFO|tokenization_utils_base.py:1713] 2021-04-28 16:18:27,031 >> loading file None
[INFO|tokenization_utils_base.py:1713] 2021-04-28 16:18:27,031 >> loading file None
[INFO|tokenization_utils_base.py:1713] 2021-04-28 16:18:27,031 >> loading file None
[INFO|modeling_utils.py:1111] 2021-04-28 16:18:27,103 >> loading weights file roberta-base/pytorch_model.bin
[INFO|modeling_utils.py:1257] 2021-04-28 16:18:30,300 >> All model checkpoint weights were used when initializing RobertaForMaskedLM.
[INFO|modeling_utils.py:1266] 2021-04-28 16:18:30,300 >> All the weights of RobertaForMaskedLM were initialized from the model checkpoint at roberta-base.
If your task is similar to the task the model of the checkpoint was trained on, you can already use RobertaForMaskedLM for predictions without further training.
100%|██████████████████████████████████████████████████████████████████████████████████████| 37/37 [00:01<00:00, 18.82ba/s]
100%|████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 20.73ba/s]
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
[INFO|trainer.py:1027] 2021-04-28 16:18:34,809 >> Loading model from roberta-base).
Traceback (most recent call last):
File "run_mlm.py", line 496, in <module>
main()
File "run_mlm.py", line 459, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/A50442/anaconda3/envs/transformer/lib/python3.6/site-packages/transformers/trainer.py", line 1046, in train
self.model.load_state_dict(state_dict)
File "/home/A50442/anaconda3/envs/transformer/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1224, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for RobertaForMaskedLM:
Missing key(s) in state_dict: "roberta.embeddings.position_ids", "lm_head.decoder.bias".
Unexpected key(s) in state_dict: "roberta.pooler.dense.weight", "roberta.pooler.dense.bias".
```
## Expected behavior
The expected behavior is that I will get a new pretrained language model based on my dataset.
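For anyone hitting this before upgrading: the comments above point at the cause, and a minimal stopgap sketch follows. It assumes the problem is a local `roberta-base` folder shadowing the hub model id (the underlying `Trainer` bug was fixed by #11492); `roberta-base-local` is just an illustrative name.
```python
# Stopgap sketch: rename the local folder so that "roberta-base" resolves to
# the hub checkpoint instead of being picked up as a resume checkpoint.
import os

if os.path.isdir("roberta-base"):
    os.rename("roberta-base", "roberta-base-local")
```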
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11485/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11484 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11484/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11484/comments | https://api.github.com/repos/huggingface/transformers/issues/11484/events | https://github.com/huggingface/transformers/issues/11484 | 869,467,578 | MDU6SXNzdWU4Njk0Njc1Nzg= | 11,484 | MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50") Not working | {
"login": "RocKeTG",
"id": 24287627,
"node_id": "MDQ6VXNlcjI0Mjg3NjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/24287627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RocKeTG",
"html_url": "https://github.com/RocKeTG",
"followers_url": "https://api.github.com/users/RocKeTG/followers",
"following_url": "https://api.github.com/users/RocKeTG/following{/other_user}",
"gists_url": "https://api.github.com/users/RocKeTG/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RocKeTG/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RocKeTG/subscriptions",
"organizations_url": "https://api.github.com/users/RocKeTG/orgs",
"repos_url": "https://api.github.com/users/RocKeTG/repos",
"events_url": "https://api.github.com/users/RocKeTG/events{/privacy}",
"received_events_url": "https://api.github.com/users/RocKeTG/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I can load the model without any issues, I think the issue here is the `from_pretrained` call is hitting the cache and the model is not cached properly. You could force the download by passing `force_download=True`\r\n\r\n```python\r\nmodel = MBartForConditionalGeneration.from_pretrained(\"facebook/mbart-large-50\", force_download=True)\r\n```",
"I am also still facing the problem, can you please mention if there is verson specific",
"but is is successfully loading for BART-large model\r\n ",
"You could try deleting the cache in that case.",
"Not happening aging, same error is coming\r\n",
"See this [colab](https://colab.research.google.com/drive/1ENrFbZIxmK0ZtrtUduCADEZDZtg_QKZT?usp=sharing) it uses 4.5.0 and can load mbart. ",
"> I can load the model without any issues, I think the issue here is the `from_pretrained` call is hitting the cache and the model is not cached properly. You could force the download by passing `force_download=True`\r\n> \r\n> ```python\r\n> model = MBartForConditionalGeneration.from_pretrained(\"facebook/mbart-large-50\", force_download=True)\r\n> ```\r\n\r\nHi, do you know how to run transformer model like t5-small, facebook/bart-large-cnn without loading pre-trained weights? When using run_summarization.py, I only want to train their original model architecture without pre-trained model. Thank you very much!\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,622 | 1,622 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.0
- Platform: anaconda
- Python version: 3.7
- PyTorch version (GPU?): 1.1.0
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
Models:
MBart50
## Information
Model I am using (Bert, XLNet ...):
MBart50
The problem arises when using:
Official script as in https://huggingface.co/transformers/master/model_doc/mbart.html#transformers.MBart50Tokenizer
The tasks I am working on is:
Official summarization task
## To reproduce
Steps to reproduce the behavior:
1. from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
2. model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
Error:
File "/home/aniruddha/anaconda3/envs/rupak_qg/lib/python3.6/site-packages/transformers/modeling_utils.py", line 1066, in from_pretrained f"Unable to load weights from pytorch checkpoint file for '{pretrained_model_name_or_path}' " OSError: Unable to load weights from pytorch checkpoint file for 'facebook/mbart-large-50' at '/home/aniruddha/.cache/huggingface/transformers/66cec75cd01a09243232a4dbb6e99525d2571fd2c73870343ad4573df28f5924.e61a75127adcaf4f5c0903618b64b779413423b5f661ece62a4839582b2b850a'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
## Expected behavior
The model should load correctly.
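Following the "delete the cache" suggestion from the comments above, a minimal sketch; it assumes the default cache location used by transformers 4.x (adjust the path if `TRANSFORMERS_CACHE` is set):
```python
# Remove cached checkpoints so the next from_pretrained call re-downloads
# them; a corrupted or partial cache entry can trigger the OSError above.
import shutil
from pathlib import Path

cache_dir = Path.home() / ".cache" / "huggingface" / "transformers"
shutil.rmtree(cache_dir, ignore_errors=True)
```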
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11484/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11483 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11483/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11483/comments | https://api.github.com/repos/huggingface/transformers/issues/11483/events | https://github.com/huggingface/transformers/issues/11483 | 869,432,931 | MDU6SXNzdWU4Njk0MzI5MzE= | 11,483 | The performance of the huggingface QA model depend on the order in which it loads | {
"login": "kaka-42",
"id": 63441709,
"node_id": "MDQ6VXNlcjYzNDQxNzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/63441709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaka-42",
"html_url": "https://github.com/kaka-42",
"followers_url": "https://api.github.com/users/kaka-42/followers",
"following_url": "https://api.github.com/users/kaka-42/following{/other_user}",
"gists_url": "https://api.github.com/users/kaka-42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaka-42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaka-42/subscriptions",
"organizations_url": "https://api.github.com/users/kaka-42/orgs",
"repos_url": "https://api.github.com/users/kaka-42/repos",
"events_url": "https://api.github.com/users/kaka-42/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaka-42/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there,\r\n\r\nPlease use the [forum](https://discuss.huggingface.co/) to ask these types of questions, use issues to report bugs or feature requests.\r\n\r\nThanks!",
"> Hi there,\r\n> \r\n> Please use the [forum](https://discuss.huggingface.co/) to ask these types of questions, use issues to report bugs or feature requests.\r\n> \r\n> Thanks!\r\n\r\nOk I will. Thank you!"
] | 1,619 | 1,619 | 1,619 | NONE | null | - `transformers` version: 4.4.2
- Python version: 3.7
I am implementing a paper that I read, based on the Question Answering code "run_qa.py" from huggingface.
I added a few layers to ELECTRA, and I trained and saved only the parameters for the added layers.
When I evaluate, I load those parameters, and the rest are initialized from the parameters of the pre-trained ELECTRA model.
```
def load_cda_qa_model(args, phase, checkpoint=None):
    # assert phase == 'train' or phase == 'eval'
    config = CONFIG_CLASSES[args.model_type].from_pretrained(args.model_name_or_path)
    model = MODEL_FOR_QUESTION_ANSWERING[args.model_type].from_pretrained(checkpoint)
    tmp_electra = MODEL_FOR_QUESTION_ANSWERING['electra'].from_pretrained(args.model_name_or_path, config=config)
    electra_state_dict = tmp_electra.state_dict()
    model_state_dict = model.state_dict()
    for electra_key, electra_value in electra_state_dict.items():
        model_state_dict[electra_key] = electra_value
    model.load_state_dict(model_state_dict)
    return model
```
The results for the two cases are:
## case 1

## case 2

What I want to ask here is why the results change when the order of the statements in the red and yellow parts (see the screenshots) is swapped, even though there seems to be no difference in the code flow.
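For reproducibility, one thing worth checking (my assumption, not a confirmed diagnosis) is whether reseeding before each `from_pretrained` call removes the order dependence, so that any random initialization inside it consumes the RNG from the same state (using the same names as the snippet above):
```python
import torch

torch.manual_seed(42)
model = MODEL_FOR_QUESTION_ANSWERING[args.model_type].from_pretrained(checkpoint)

torch.manual_seed(42)
tmp_electra = MODEL_FOR_QUESTION_ANSWERING['electra'].from_pretrained(
    args.model_name_or_path, config=config
)
```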
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11483/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11482 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11482/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11482/comments | https://api.github.com/repos/huggingface/transformers/issues/11482/events | https://github.com/huggingface/transformers/issues/11482 | 869,393,396 | MDU6SXNzdWU4NjkzOTMzOTY= | 11,482 | [Docs] Clarify Subphrase classification? | {
"login": "hamelsmu",
"id": 1483922,
"node_id": "MDQ6VXNlcjE0ODM5MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1483922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hamelsmu",
"html_url": "https://github.com/hamelsmu",
"followers_url": "https://api.github.com/users/hamelsmu/followers",
"following_url": "https://api.github.com/users/hamelsmu/following{/other_user}",
"gists_url": "https://api.github.com/users/hamelsmu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hamelsmu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamelsmu/subscriptions",
"organizations_url": "https://api.github.com/users/hamelsmu/orgs",
"repos_url": "https://api.github.com/users/hamelsmu/repos",
"events_url": "https://api.github.com/users/hamelsmu/events{/privacy}",
"received_events_url": "https://api.github.com/users/hamelsmu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The problem lies mostly on the model page in that case, it should show the text classification widget, not the masked LM widget. As for adding a model card or expanding its capabilities, the feedback should go on the [forums](https://discuss.huggingface.co/), this is not really an issue with transformers per se.\r\n\r\nThis is a great example of where pull requests on the model hub would be useful! (cc @julien-c )",
"Ok thanks I'll close this issue and open an appropriate PR/Issue in other places. I'll try to find where to update the filter and I'll put the model card on my todo list. \n\nThanks for the pointers",
"For anyone that finds this issue:\r\n\r\n - [here is the forum post that describes how to suggest model cards](https://discuss.huggingface.co/t/about-the-model-cards-category/2777)\r\n\r\n- For clarification `bert-base-cased-finetuned-mrpc` does indeed do masked LM but it additionally also does text classification (I tried doing both). But, it looks like models on the hub can only associated with one widget at a time? so It could be the case that models have hidden functionality, or does this particular model violate some kind of norm? ",
"- @hamelsmu The model card for this model should a minima reference the `mrpc` dataset, though we don't have it as a standalone dataset so the way to go would be to link to `glue` instead. (right @lhoestq?)\r\n- you can also add a `tags: - paraphrase-classification ` to the YAML (tags are pretty much open)\r\n- read the doc about the model hub here http://huggingface.co/docs (should it be linked more prominently from the transformers doc?)\r\n- As this is a \"legacy\" model (not inside an organization), it's hard to remember who trained it and therefore could answer more questions (the original BERT authors? Someone from HF?)\r\n- To your last question, most models only have one head so one task – for simplicity we enforce this constraint of only having one widget or Inference API endpoint per model",
"Yes for now we have to link to `glue`.\r\nThough I've noticed that many models use the `mrpc` tag that doesn't link to glue\r\n\r\nMaybe we can define a syntax that mentions both glue (to link to the glue dataset page) and MRPC (to mention which config if the glue dataset was used). Maybe `glue/mrpc` or `glue:mrpc`. Shall I open an issue on the website repo about this @julien-c ?"
] | 1,619 | 1,619 | 1,619 | CONTRIBUTOR | null | I am going through the docs linearly and am reading [Summary of The tasks](https://huggingface.co/transformers/task_summary.html).
The second section of [Sequence Classification](https://huggingface.co/transformers/task_summary.html) uses `bert-base-cased-finetuned-mrpc` to do paraphrase classification.
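For context, the snippet in question looks roughly like this (my paraphrase of the docs example, with illustrative input sentences):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc")

sequence_0 = "The company HuggingFace is based in New York City"
sequence_1 = "HuggingFace's headquarters are situated in Manhattan"

inputs = tokenizer(sequence_0, sequence_1, return_tensors="pt")
logits = model(**inputs).logits
# for this checkpoint, index 1 is the "is a paraphrase" class
paraphrase_probs = torch.softmax(logits, dim=1)
```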
This is a bit opaque to me: when I go to the [model page](https://huggingface.co/bert-base-cased-finetuned-mrpc) for that particular model, it doesn't really mention this capability.
How could I discover other models that have this capability? How do I verify what this model was fine-tuned on if I was searching for this information from the model hub? Is there some other documentation about this that I am missing?
Just trying to understand so I can help clarify the docs. Thanks!
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11482/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11481 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11481/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11481/comments | https://api.github.com/repos/huggingface/transformers/issues/11481/events | https://github.com/huggingface/transformers/pull/11481 | 869,365,108 | MDExOlB1bGxSZXF1ZXN0NjI0NzQzMjI5 | 11,481 | Fix checkpointing in SageMaker MP | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,620 | 1,620 | COLLABORATOR | null | # What does this PR do?
The merge of the two Trainers removed the guard that made the call to `optimizer.state_dict()` only on `dp_rank` 0 processes. This PR adds it back. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11481/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11481",
"html_url": "https://github.com/huggingface/transformers/pull/11481",
"diff_url": "https://github.com/huggingface/transformers/pull/11481.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11481.patch",
"merged_at": 1620062307000
} |
https://api.github.com/repos/huggingface/transformers/issues/11480 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11480/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11480/comments | https://api.github.com/repos/huggingface/transformers/issues/11480/events | https://github.com/huggingface/transformers/issues/11480 | 869,337,241 | MDU6SXNzdWU4NjkzMzcyNDE= | 11,480 | Error In Running Predictions for run_text_classification.py | {
"login": "rajesh-dhiman",
"id": 18427643,
"node_id": "MDQ6VXNlcjE4NDI3NjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/18427643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajesh-dhiman",
"html_url": "https://github.com/rajesh-dhiman",
"followers_url": "https://api.github.com/users/rajesh-dhiman/followers",
"following_url": "https://api.github.com/users/rajesh-dhiman/following{/other_user}",
"gists_url": "https://api.github.com/users/rajesh-dhiman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajesh-dhiman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajesh-dhiman/subscriptions",
"organizations_url": "https://api.github.com/users/rajesh-dhiman/orgs",
"repos_url": "https://api.github.com/users/rajesh-dhiman/repos",
"events_url": "https://api.github.com/users/rajesh-dhiman/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajesh-dhiman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, thank you for the report! I'm examining it now and I'm hoping to push some updates to that script today.",
"I haven't been able to reproduce this issue, but I have a PR open to make modifications to the script. I'll let you know as soon as the updated version is available - would you be willing to check if the issue is still there once it is?",
"The script has been updated! Please let me know if you encounter the same problems.",
"@Rocketknight1 Thank you i am able to run predictions and it gives correct prediction for trained data.. \r\n\r\nHere is my Wish List if you can provide:\r\nI am trying to integrate run_text_classification.py to my program where i will provide it a sentence and it gives the prediction as per labelling. and my program uses that label for something useful.\r\n\r\n1. It writes to a file the prediction result, will that be possible if I import run_text_classification.py in my program and you exposed a function which takes input list of strings for sentences to classify with all other parameters required to run. And returns list of strings with predictions in same order. That way i do not have to read a file always for the result.\r\n\r\n2. Default fallback label: Meaning if i passed a sentence for classification, if model is not able to classify as per trained labelled data, it returns the Default fallback label set by user. Tat way i can know for which sentences i have to retrain the model\r\n\r\n@Rocketknight1 thanks in advance\r\n\r\nRajesh Dhiman\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"Hm! Number 2 in particular is a fairly advanced ML topic - getting models to know which inputs they can and can't classify accurately is surprisingly hard. This is a fairly fundamental problem that people are still writing papers about, and not one we can really tackle well in an introductory example.\r\n\r\nYour suggestion for 1) is certainly possible, though, and we'll think about it! Our intention isn't to support every possible use case with the examples, though! We really just want to show one working example that shows off a lot of the features of the library, and we expect that users will have to modify the code themselves in a lot of cases.",
"@Rocketknight1 \r\nHi \r\nHow I can get the Confidence score in the prediction results.. i need to have that, Is there any option i can set in settings ",
"@Rocketknight1\r\ni got it.. Sorry it was a dumb Question.. I am a nerd..",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,626 | 1,626 | NONE | null | ## Environment info
- `transformers` version: 4.5.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.8.0
- PyTorch version (GPU?): 1.7.1+cpu (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@Rocketknight1
## Information
I ran training successfully with the command below:
```
run_text_classification.py \
--model_name_or_path microsoft/xtremedistil-l6-h384-uncased \
--output_dir classificationoutput \
--do_train \
--train_file PreparedData.csv \
--do_eval \
--validation_file PreparedData.csv \
--num_train_epochs 100 \
--test_file PreparedData.csv
```
Train file format:
```
label,data
l1, my sentence 1
l1, my sentence 2
l2, my sentence 3
l2, my sentence 4
...
```
## To reproduce
Now, after training, I want to run some predictions, so I created `PredictionData.csv` with a single column as below:
```
data
my sentence 1
my sentence 2
my sentence 3
...
```
Then I ran the prediction as below, using the model and config from the training output:
```
%run run_text_classification.py \
  --model_name_or_path C:\Users\xxxxxxxxx\classificationoutput\tf_model.h5 \
  --config_name C:\Users\xxxxxxxxx\classificationoutput\config.json \
  --output_dir classificationoutput \
  --do_predict \
  --test_file PredictionData.csv
```
## Got the following error
```
INFO:__main__:Checkpoint detected, resuming training from checkpoint in classificationoutput. To avoid this behavior, change the `--output_dir` or add `--overwrite_output_dir` to train from scratch.
INFO:__main__:Training/evaluation parameters TrainingArguments(output_dir=classificationoutput, overwrite_output_dir=False, do_train=False, do_eval=None, do_predict=True, evaluation_strategy=IntervalStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_ratio=0.0, warmup_steps=0, logging_dir=runs\Apr27_15-55-43_GC8SQLQ2E, logging_strategy=IntervalStrategy.STEPS, logging_first_step=False, logging_steps=500, save_strategy=IntervalStrategy.STEPS, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=classificationoutput, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, length_column_name=length, report_to=['tensorboard'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, _n_gpu=0, mp_parameters=)
INFO:__main__:Loading a local file for test: PredictionData.csv
WARNING:datasets.builder:Using custom data configuration default-5a3e83535773f703
Downloading and preparing dataset csv/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to C:\Users\xxxxxxxxx\.cache\huggingface\datasets\csv\default-5a3e83535773f703\0.0.0\2dc6629a9ff6b5697d82c25b73731dd440507a69cbce8b425db50b751e8fcfd0...
Dataset csv downloaded and prepared to C:\Users\xxxxxxxxx\.cache\huggingface\datasets\csv\default-5a3e83535773f703\0.0.0\2dc6629a9ff6b5697d82c25b73731dd440507a69cbce8b425db50b751e8fcfd0. Subsequent calls will reuse this data.
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
~\run_text_classification.py in <module>
535
536 if __name__ == "__main__":
--> 537 main()
~\run_text_classification.py in main()
350 use_auth_token=True if model_args.use_auth_token else None,
351 )
--> 352 tokenizer = AutoTokenizer.from_pretrained(
353 model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
354 cache_dir=model_args.cache_dir,
c:\python38\lib\site-packages\transformers\models\auto\tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
388 kwargs["_from_auto"] = True
389 if not isinstance(config, PretrainedConfig):
--> 390 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
391
392 use_fast = kwargs.pop("use_fast", True)
c:\python38\lib\site-packages\transformers\models\auto\configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
396 """
397 kwargs["_from_auto"] = True
--> 398 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
399 if "model_type" in config_dict:
400 config_class = CONFIG_MAPPING[config_dict["model_type"]]
c:\python38\lib\site-packages\transformers\configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
466 )
467 # Load config dict
--> 468 config_dict = cls._dict_from_json_file(resolved_config_file)
469
470 except EnvironmentError as err:
c:\python38\lib\site-packages\transformers\configuration_utils.py in _dict_from_json_file(cls, json_file)
549 def _dict_from_json_file(cls, json_file: Union[str, os.PathLike]):
550 with open(json_file, "r", encoding="utf-8") as reader:
--> 551 text = reader.read()
552 return json.loads(text)
553
c:\python38\lib\codecs.py in decode(self, input, final)
320 # decode input (taking the buffer into account)
321 data = self.buffer + input
--> 322 (result, consumed) = self._buffer_decode(data, self.errors, final)
323 # keep undecoded input until the next call
324 self.buffer = data[consumed:]
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte
```
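For what it's worth, `0x89` is the first byte of the HDF5 file signature, so it looks like the config loader is reading `tf_model.h5` as JSON. Pointing `--model_name_or_path` at the checkpoint directory instead of the `.h5` file should avoid that; an untested sketch:
```
%run run_text_classification.py \
  --model_name_or_path C:\Users\xxxxxxxxx\classificationoutput \
  --output_dir classificationoutput \
  --do_predict \
  --test_file PredictionData.csv
```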
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11480/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11479 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11479/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11479/comments | https://api.github.com/repos/huggingface/transformers/issues/11479/events | https://github.com/huggingface/transformers/issues/11479 | 869,315,391 | MDU6SXNzdWU4NjkzMTUzOTE= | 11,479 | [Docs] Add Caching Example For CI? | {
"login": "hamelsmu",
"id": 1483922,
"node_id": "MDQ6VXNlcjE0ODM5MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1483922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hamelsmu",
"html_url": "https://github.com/hamelsmu",
"followers_url": "https://api.github.com/users/hamelsmu/followers",
"following_url": "https://api.github.com/users/hamelsmu/following{/other_user}",
"gists_url": "https://api.github.com/users/hamelsmu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hamelsmu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamelsmu/subscriptions",
"organizations_url": "https://api.github.com/users/hamelsmu/orgs",
"repos_url": "https://api.github.com/users/hamelsmu/repos",
"events_url": "https://api.github.com/users/hamelsmu/events{/privacy}",
"received_events_url": "https://api.github.com/users/hamelsmu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This was written by @julien-c so I'll let him reply on this :-)",
"This was written before we moved from S3 to our own Cloudfront-served repositories so we could also probably just remove that paragraph.",
"Interesting. Maybe it could _still_ be a good reminder as many folks would forget to do this (I know I might have and have wasted so much of my own compute!)?\r\n\r\nHowever, I'll open a PR to remove the paragraph if that's preferred 🙇🏽 "
] | 1,619 | 1,619 | 1,619 | CONTRIBUTOR | null | From the [installation instructions](https://huggingface.co/transformers/installation.html#caching-models):
> If you expect to be downloading large volumes of models (more than 10,000) from huggingface.co (for instance through your CI setup, or a large-scale production deployment), please cache the model files on your end. It will be way faster, and cheaper. Feel free to contact us privately, we’d love to help with this.
I'm happy to write an example of how to cache with GitHub Actions. Shall I contribute this to the docs? If so, please assign the issue to me and I'll do it. If this is not a good idea, please feel free to close the issue.
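Whatever the CI system, the knob underneath is the cache directory; e.g. in Python (a sketch, with a made-up path; a GitHub Actions example would just persist this directory between runs):
```python
from transformers import AutoModel

# point the download cache at a directory the CI caches between runs
model = AutoModel.from_pretrained("bert-base-uncased", cache_dir="/path/to/ci/cache")
```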
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11479/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11478 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11478/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11478/comments | https://api.github.com/repos/huggingface/transformers/issues/11478/events | https://github.com/huggingface/transformers/issues/11478 | 869,045,753 | MDU6SXNzdWU4NjkwNDU3NTM= | 11,478 | [Flax] Add FlaxBart model | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Wow this sounds awesome! Let me know if you need help! (I think we will need to look at the `generate()` function) together :-)",
"Hi @patrickvonplaten, as I indicated I've started working on `FlaxBart` which can be found on this branch https://github.com/stancld/transformers/tree/FlaxBart . So far, I've implemented `FlaxBartModel` and `FlaxBartForConditionalGeneration` with some remaining pieces to do, but it is possible to run them. \r\n\r\nAs there is no official template for Flax encoder-decoder models, I've tried to follow Torch implementation of Bart and Flax implementation of Bert. Before diving deeper and finishing all the components, tests etc, could I, please, ask you to provide me with short feedback if this structure seems ok to you? Thanks a lot in advance! :)\r\n(I guess I left some redundant code there but I'm gonna polish it soon)",
"This sounds great :-) Thanks a lot for tackling this! Could you maybe make a [WIP] PR from your branch and ping me - this would make it a bit easier to review the code",
"Hey @stancld, \r\n\r\nI looked quickly and in general the PR already looks great :-) \r\n\r\nA couple of things:\r\n- we don't allow `labels` as an input argument to Flax models (and actually probably even won't do this in the future). Flax/Jax is inherently functional which means that a loss function should wrap the model forward function and not the other way around\r\n- Weight tying is done a bit differently as in PyTorch, thus we don't need ` def get_input_embeddings(self):` for now\r\n- `return_dict` is now also implemented in `FlaxBert` -> so this can be copied from there :-) \r\n\r\n=> In short the design already looks great :-) I think you can open a PR & we'll discuss everything on the PR ",
"@patrickvonplaten Thanks a lot for the feedback! I will create [WIP] PR later today :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"ping"
] | 1,619 | 1,623 | 1,623 | CONTRIBUTOR | null | # 🚀 Feature request
It would be nice to implement a Flax version of the BART model.
## Motivation
Narrow the gap in support between encoder transformers (BERT, RoBERTa, ...) and encoder-decoder models (BART,...).
## Your contribution
I've been working on this, so I hope to send a PR soon.
@patrickvonplaten @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11478/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11478/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11477 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11477/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11477/comments | https://api.github.com/repos/huggingface/transformers/issues/11477/events | https://github.com/huggingface/transformers/pull/11477 | 869,016,026 | MDExOlB1bGxSZXF1ZXN0NjI0NDQ2ODM0 | 11,477 | Move integrations imports before any ML framework imports | {
"login": "dsblank",
"id": 168568,
"node_id": "MDQ6VXNlcjE2ODU2OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/168568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dsblank",
"html_url": "https://github.com/dsblank",
"followers_url": "https://api.github.com/users/dsblank/followers",
"following_url": "https://api.github.com/users/dsblank/following{/other_user}",
"gists_url": "https://api.github.com/users/dsblank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dsblank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dsblank/subscriptions",
"organizations_url": "https://api.github.com/users/dsblank/orgs",
"repos_url": "https://api.github.com/users/dsblank/repos",
"events_url": "https://api.github.com/users/dsblank/events{/privacy}",
"received_events_url": "https://api.github.com/users/dsblank/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,622 | 1,622 | CONTRIBUTOR | null | ## Fixes
Current transformers breaks compatibility with comet_ml, because comet_ml needs to be imported before any ML framework (such as torch). This PR simply moves the integration imports earlier in the flow.
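The constraint in miniature (this restates comet_ml's requirement from above; it is not something specific to transformers):
```python
import comet_ml  # must be imported before any ML framework for auto-logging to work

import torch  # importing torch (or tensorflow) first breaks comet_ml's instrumentation
```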
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- integrations and imports: @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11477/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11477",
"html_url": "https://github.com/huggingface/transformers/pull/11477",
"diff_url": "https://github.com/huggingface/transformers/pull/11477.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11477.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11476 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11476/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11476/comments | https://api.github.com/repos/huggingface/transformers/issues/11476/events | https://github.com/huggingface/transformers/pull/11476 | 869,010,636 | MDExOlB1bGxSZXF1ZXN0NjI0NDQyNDI1 | 11,476 | Adding new argument `max_new_tokens` for generate. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@LysandreJik @patrickvonplaten\r\n\r\nForgot to add you for review, my bad."
] | 1,619 | 1,622 | 1,622 | CONTRIBUTOR | null | # What does this PR do?
This is a proposal to add a new argument `max_new_tokens` to `generate`.
It includes a `MaxNewTokensCriteria` that enables callers who don't know the token length ahead of time (like pipeline callers) to manage the length of their generated output more easily.
`max_length` is a hard-to-use argument for `generate`:
- It means different things in the `encoder-decoder` and `decoder-only` contexts:
  - `encoder-decoder`: max_length = max_new_tokens - 1 (in case of bos)
  - `decoder-only`: max_length = input_ids.shape[-1] + max_new_tokens
- It is hard to understand from a pipeline point of view, where `tokens` do not exist yet.
To restate, the new `MaxNewTokensCriteria` is a bit redundant with respect to `MaxLengthCriteria`; it is kept as a consistency concern for now, but that is debatable.
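A sketch of the intended call for a decoder-only model (the argument name is the one this PR adds; everything else is standard `generate` usage):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
# generate up to 20 *new* tokens, independent of the prompt length
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```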
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik @patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11476/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11476/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11476",
"html_url": "https://github.com/huggingface/transformers/pull/11476",
"diff_url": "https://github.com/huggingface/transformers/pull/11476.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11476.patch",
"merged_at": 1622118178000
} |
https://api.github.com/repos/huggingface/transformers/issues/11475 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11475/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11475/comments | https://api.github.com/repos/huggingface/transformers/issues/11475/events | https://github.com/huggingface/transformers/pull/11475 | 869,009,087 | MDExOlB1bGxSZXF1ZXN0NjI0NDQxMTU3 | 11,475 | Experimental symbolic tracing feature with torch.fx for BERT, ELECTRA and T5 | {
"login": "michaelbenayoun",
"id": 25418079,
"node_id": "MDQ6VXNlcjI1NDE4MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelbenayoun",
"html_url": "https://github.com/michaelbenayoun",
"followers_url": "https://api.github.com/users/michaelbenayoun/followers",
"following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions",
"organizations_url": "https://api.github.com/users/michaelbenayoun/orgs",
"repos_url": "https://api.github.com/users/michaelbenayoun/repos",
"events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelbenayoun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Let's also:\r\n1. add some basic usage doc - we can start with just docstring\r\n2. add one test for one of the models, polish it and then see how to replicate it for other models.",
"My understanding was that this is experimental and as we start using this side of library we will generalize and improve things. Hence the more slack approach.\r\n\r\nSame for tests, I thought it was good to start with unique tests because the workarounds are unique and then over time as more models are ported to come up with common tests. \r\n\r\n@michaelbenayoun, one way to approach this puzzle is to create common tests for what's the same in all of them, and if something is unique to a given model then have just that tested in that model's test file. If you need help with that, please don't hesitate to ask.\r\n",
"Even for experimental features like model parallelism, we are using common tests. This should not be different IMO.",
"@sgugger, Michael merged the custom tests into common_tests and significantly simplified the mods to the models - yay!\r\n\r\nSo it looks ready for your review whenever you have a chance. \r\n\r\nThank you!",
"Sorry for jumping in. \r\nOut of curiosity, what is the scenario to use this symbolic tracing feature? Didn't find any example/doc...\r\nThanks.",
"Well, I initially wanted this in order to be able to try https://github.com/flexflow/FlexFlow, which requires symbolic tracing - but I haven't had a chance to do so yet.",
"Got it, thanks for the explanation.",
"> Sorry for jumping in.\r\n> Out of curiosity, what is the scenario to use this symbolic tracing feature? Didn't find any example/doc...\r\n> Thanks.\r\n\r\nThis would be also be helpful to quantize models using [ FX Graph Mode Quantization](https://pytorch.org/docs/stable/quantization.html?highlight=quantization) which automate the quantization process in Pytorch. ",
"Are these updates still functional currently? As no modeling_fx_utils.py can be seen in the source code directory. "
] | 1,619 | 1,635 | 1,621 | MEMBER | null | # What does this PR do?
This PR provides a function called `symbolic_trace`, which enables symbolic tracing for models of the library using the new and still experimental `torch.fx` feature. Our models can't be symbolically traced directly with `torch.fx`, so this is a wrapper function that overcomes various issues.
This new feature allows to perform [many kinds of transformations to the graph](https://pytorch.org/docs/stable/fx.html).
It's also needed for projects like https://github.com/flexflow/FlexFlow/
As an experiment currently only three models are supported: BERT, ELECTRA and T5 (support for other models will follow soon).
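A minimal sketch of the intended entry point (the module path is an assumption, since it has moved between releases, and the exact signature may differ):
```python
import torch
from transformers import BertConfig, BertModel
from transformers.utils.fx import symbolic_trace  # module path is an assumption

model = BertModel(BertConfig())
# wrap torch.fx tracing so the dynamic parts of the model are handled
traced = symbolic_trace(model, input_names=["input_ids", "attention_mask"])

inputs = {
    "input_ids": torch.zeros(1, 8, dtype=torch.long),
    "attention_mask": torch.ones(1, 8, dtype=torch.long),
}
outputs = traced(**inputs)  # the resulting torch.fx.GraphModule runs like the model
```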
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11475/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11475/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11475",
"html_url": "https://github.com/huggingface/transformers/pull/11475",
"diff_url": "https://github.com/huggingface/transformers/pull/11475.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11475.patch",
"merged_at": 1621018650000
} |
https://api.github.com/repos/huggingface/transformers/issues/11474 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11474/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11474/comments | https://api.github.com/repos/huggingface/transformers/issues/11474/events | https://github.com/huggingface/transformers/issues/11474 | 868,929,696 | MDU6SXNzdWU4Njg5Mjk2OTY= | 11,474 | can not import mbart and mT5 modeling file | {
"login": "Aniruddha-JU",
"id": 36475622,
"node_id": "MDQ6VXNlcjM2NDc1NjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/36475622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aniruddha-JU",
"html_url": "https://github.com/Aniruddha-JU",
"followers_url": "https://api.github.com/users/Aniruddha-JU/followers",
"following_url": "https://api.github.com/users/Aniruddha-JU/following{/other_user}",
"gists_url": "https://api.github.com/users/Aniruddha-JU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aniruddha-JU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aniruddha-JU/subscriptions",
"organizations_url": "https://api.github.com/users/Aniruddha-JU/orgs",
"repos_url": "https://api.github.com/users/Aniruddha-JU/repos",
"events_url": "https://api.github.com/users/Aniruddha-JU/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aniruddha-JU/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you please post more details and a code snippet? \r\n\r\nTo import a modeling use\r\n```python\r\nfrom transformers.models.mbart import modeling_mbart\r\n```"
] | 1,619 | 1,619 | 1,619 | NONE | null | @patrickvonplaten
`ImportError: cannot import name 'modeling_mbart'`
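For reference, the import path that works (quoting the maintainer's reply on this issue):
```python
from transformers.models.mbart import modeling_mbart
```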
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11474/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11473 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11473/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11473/comments | https://api.github.com/repos/huggingface/transformers/issues/11473/events | https://github.com/huggingface/transformers/issues/11473 | 868,926,721 | MDU6SXNzdWU4Njg5MjY3MjE= | 11,473 | Can not import modeling_mbart | {
"login": "Aniruddha-JU",
"id": 36475622,
"node_id": "MDQ6VXNlcjM2NDc1NjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/36475622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aniruddha-JU",
"html_url": "https://github.com/Aniruddha-JU",
"followers_url": "https://api.github.com/users/Aniruddha-JU/followers",
"following_url": "https://api.github.com/users/Aniruddha-JU/following{/other_user}",
"gists_url": "https://api.github.com/users/Aniruddha-JU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aniruddha-JU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aniruddha-JU/subscriptions",
"organizations_url": "https://api.github.com/users/Aniruddha-JU/orgs",
"repos_url": "https://api.github.com/users/Aniruddha-JU/repos",
"events_url": "https://api.github.com/users/Aniruddha-JU/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aniruddha-JU/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please follow the issue template."
] | 1,619 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
## Expected behavior
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11473/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11472 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11472/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11472/comments | https://api.github.com/repos/huggingface/transformers/issues/11472/events | https://github.com/huggingface/transformers/pull/11472 | 868,880,559 | MDExOlB1bGxSZXF1ZXN0NjI0MzMyMTY2 | 11,472 | Update min versions in README and add Flax | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,619 | 1,619 | COLLABORATOR | null | # What does this PR do?
This PR adapts the minimum versions of each backend (PyTorch was still at 1.0 and TensorFlow at 2.0), removes mention of TensorFlow 2.0 to just say TensorFlow (I think it's safe now!) and adds Jax as an official backend since we have worked the API a bit more.
Fixes #11422 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11472/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11472/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11472",
"html_url": "https://github.com/huggingface/transformers/pull/11472",
"diff_url": "https://github.com/huggingface/transformers/pull/11472.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11472.patch",
"merged_at": 1619615406000
} |
https://api.github.com/repos/huggingface/transformers/issues/11471 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11471/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11471/comments | https://api.github.com/repos/huggingface/transformers/issues/11471/events | https://github.com/huggingface/transformers/pull/11471 | 868,844,150 | MDExOlB1bGxSZXF1ZXN0NjI0MzAxNTE2 | 11,471 | Pytorch - Lazy initialization of models | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Not to impede this PR, just wanted to add that this comment might be of importance for future work in this area: https://github.com/pytorch/pytorch/issues/29523#issuecomment-831390099\r\n\r\nAnd the whole issue: https://github.com/pytorch/pytorch/issues/29523\r\n\r\nI wonder if our side needs/should support that `reset_parameters` feature in our models. Given the last comment https://github.com/pytorch/pytorch/issues/29523#issuecomment-831435863 it's unclear where it's standing. So perhaps this is something to revisit later when the dust settles on the pytorch side.",
"FYI, pytorch has just added `torch.nn.utils.skip_init()` to handle similar situations:\r\nhttps://pytorch.org/tutorials/prototype/skip_param_init.html\r\nit should appear probably around pt-1.9.1.\r\n\r\nNote that `torch.nn.utils.skip_init()` is even more efficient as it doesn't allocate any storage at all! So there is not even an overhead of creating any weights until they are loaded from state_dict. https://pytorch.org/tutorials/prototype/skip_param_init.html#implementation-details\r\n",
"Hey all - this introduced a pretty nasty bug for us, that took a while to figure out. Here's a case where this initialization of the `missing_keys`, which normally shouldn't matter, broke our model after version 4.6. We have a custom model that optionally initializes some of its weights from a separate module. Calling `from_pretrained` and passing this separate module used to \"work\", but now those weights are overwritten. See MWE:\r\n```\r\nfrom torch.nn import Linear\r\nfrom transformers import BertModel\r\n\r\n\r\nclass MyCustomModel(BertModel):\r\n def __init__(self, config, custom_layer=None):\r\n super().__init__(config)\r\n if custom_layer is not None:\r\n self.custom_layer = custom_layer\r\n else:\r\n self.custom_layer = Linear(1024, 1024)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n import transformers\r\n print(transformers.__version__)\r\n layer = Linear(1024, 1024)\r\n print(layer.weight.sum())\r\n custom_model = MyCustomModel.from_pretrained('bert-base-uncased', custom_layer=layer)\r\n # used to be the same as the layer above, but it is \"re-initialized\" in the from_pretrained method\r\n print(custom_model.custom_layer.weight.sum())\r\n```\r\nResult:\r\n```\r\n4.11.3\r\ntensor(5.9874, grad_fn=<SumBackward0>)\r\nSome weights of the model checkpoint at bert-base-uncased were not used when initializing MyCustomModel: ['cls.seq_relationship.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.decoder.weight', 'cls.seq_relationship.bias']\r\n- This IS expected if you are initializing MyCustomModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing MyCustomModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of MyCustomModel were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['bert.custom_layer.bias', 'bert.custom_layer.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\ntensor(17.4651, grad_fn=<SumBackward0>)\r\n```\r\n\r\nThis used to \"just work\" in version < 4.6. Perhaps we were relying on an unintended \"feature\". Setting `_fast_init=False` *does* fix things, but it's a bit hacky as it only applies to the initialization of this custom module that is called upstream in our service. Additionally, we don't know what happens if we'll need to rely on this feature in the future, but it goes away.\r\n\r\nCan you comment on this? Thanks!",
"Hey @john-heyer,\r\n\r\nThanks for the feedback. We indeed didn't take account the effect this would have on custom models that inherit from `transformers` models like BERT. Just to understand better, the problem was that before 4.6, the `custom_layer` was not (re-)initialized when calling `MyCustomModel.from_pretrained(...)` - however after 4.6 it was initialized twice once before calling `from_pretrained(...)` and once after calling it? ",
"@john-heyer - I answered in-detail here: https://github.com/huggingface/transformers/issues/17370",
"thanks @patrickvonplaten - yes, that is correct - before 4.6 `custom_layer` was not re-initialized, and now it is! Thanks for opening the other thread. I'll follow there.",
"Can this be used to load pretrained params directly to the GPU models without keeping the full copy of params on CPU?\r\n\r\nE.g. https://github.com/huggingface/diffusers has `unet = UNetModel.from_pretrained(\"fusing/ddpm-lsun-church\").to(torch_device)` while theoretically one could have unet = UNetModel.from_pretrained(\"fusing/ddpm-lsun-church\"m device = torch_device)`",
"Hey @vadimkantorov, could you maybe open an issue under `diffusers` instead? :-)",
"Is transfomers following a different design? I assumed diffusers just copied the original design from transformers"
] | 1,619 | 1,655 | 1,620 | MEMBER | null | 🚨🚨🚨 **Breaking seeded model initialization** 🚨🚨🚨
As explained below this PR breaks seeded model initialization by default. To ensure the exact same model initialization as before, use:
```python
torch.manual_seed(seed)
model = BertForSequenceClassification.from_pretrained("bert-base-cased", _fast_init=False)
```
# What does this PR do?
This PR implements fast initializing by only initializing weights that need to be initialized. For every model two aggressive tests are added to make sure that the new "fast" initialization initializes the weights according to the exact same distributions as the previous init scheme.
IMO, it is not possible to ensure that:
```python
from transformers import BertForSequenceClassification
import torch
torch.manual_seed(0)
model = BertForSequenceClassification.from_pretrained("bert-base-cased", _fast_init=False) # this randomely inits the lm_head layer
```
yields the same results as the new "fast" init
```python
torch.manual_seed(0)
model = BertForSequenceClassification.from_pretrained("bert-base-cased") # this randomely inits the lm_head layer
```
since in the first case all layers are initialized, so the random number generator is called many more times, making it impossible to ensure identical weight initialization => compare to [this](https://discuss.pytorch.org/t/does-pytorch-change-its-internal-seed-during-training/46505/4) post to better understand why this is probably not possible.
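To make the RNG bookkeeping concrete, here is a minimal sketch in plain `torch` (no model involved) of why the two cases cannot draw the same numbers:
```python
import torch

torch.manual_seed(0)
_ = torch.empty(100).normal_()   # init of layers that will be overwritten anyway
a = torch.empty(100).normal_()   # init of the head afterwards

torch.manual_seed(0)
b = torch.empty(100).normal_()   # "fast" init: only the head consumes the RNG

print(torch.allclose(a, b))      # False - same seed, different position in the stream
```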
This became obvious in this PR since I had to change the random_seed for running the `run_ner.py` examples test to make it pass.
I guess it is therefore better to stick to "initializing all weights" for now and only make this breaking change when moving to version 5.0. => Guess we should discuss this @sgugger @LysandreJik @stas00
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Fixes: #9205 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11471/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11471/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11471",
"html_url": "https://github.com/huggingface/transformers/pull/11471",
"diff_url": "https://github.com/huggingface/transformers/pull/11471.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11471.patch",
"merged_at": 1620228141000
} |
https://api.github.com/repos/huggingface/transformers/issues/11470 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11470/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11470/comments | https://api.github.com/repos/huggingface/transformers/issues/11470/events | https://github.com/huggingface/transformers/pull/11470 | 868,774,517 | MDExOlB1bGxSZXF1ZXN0NjI0MjQzODE5 | 11,470 | [FlaxRoberta] Add FlaxRobertaModels & adapt run_mlm_flax.py | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,620 | 1,620 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds all FlaxRobertaModels and adapts `run_mlm_flax.py` to be trainable with FlaxRoberta as well.
This [roberta-base](https://huggingface.co/patrickvonplaten/norwegian-roberta-base) was pretrained as an example.
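As a quick taste of the new classes, usage should mirror the existing Flax models (a hedged sketch, not taken from the PR itself):
```python
from transformers import FlaxRobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = FlaxRobertaModel.from_pretrained("roberta-base")

# tokenize and run a forward pass; return_tensors="jax" assumed available
outputs = model(**tokenizer("Hello world", return_tensors="jax"))
print(outputs.last_hidden_state.shape)
```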
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11470/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11470",
"html_url": "https://github.com/huggingface/transformers/pull/11470",
"diff_url": "https://github.com/huggingface/transformers/pull/11470.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11470.patch",
"merged_at": 1620151079000
} |
https://api.github.com/repos/huggingface/transformers/issues/11469 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11469/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11469/comments | https://api.github.com/repos/huggingface/transformers/issues/11469/events | https://github.com/huggingface/transformers/issues/11469 | 868,761,536 | MDU6SXNzdWU4Njg3NjE1MzY= | 11,469 | Train GPT2 with Trainer & TrainingArguments using/specifying attention_mask | {
"login": "alexol91",
"id": 1785722,
"node_id": "MDQ6VXNlcjE3ODU3MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1785722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexol91",
"html_url": "https://github.com/alexol91",
"followers_url": "https://api.github.com/users/alexol91/followers",
"following_url": "https://api.github.com/users/alexol91/following{/other_user}",
"gists_url": "https://api.github.com/users/alexol91/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexol91/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexol91/subscriptions",
"organizations_url": "https://api.github.com/users/alexol91/orgs",
"repos_url": "https://api.github.com/users/alexol91/repos",
"events_url": "https://api.github.com/users/alexol91/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexol91/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there,\r\n\r\nIf your dataset or collator returns the `attention_mask`, then you don't need to pass it separately. With `Trainer`, all the input that you want to pass to model's `forward` should be returned by the dataset/collator and it will be passed to `model.forward` by `Trainer`.",
"Thank you very much for your quick reply @patil-suraj \r\n\r\nIs it possible that the problem is that more than 50% of the input is padding? Could this be too much? Do you think that training more would solve it? It currently returns a minimal loss. Having such a low loss is what made me think that I was not considering the `attention_mask` and that is why the loss was low (obviously it is easy to predict 600 padding tokens xD)\r\n\r\nWhat I am doing now is changing the size of the input, instead of 1024 (default value) I am testing with 400 (size of the longest text in my dataset).\r\n\r\nBest regards!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,622 | 1,622 | NONE | null | Hi, I'm using Trainer & TrainingArguments to train a GPT2 model, but it seems that this does not work well.
My datasets have the ids of the tokens of my corpus and the mask of each text, to indicate where to apply the attention:
```
Dataset({
features: ['attention_mask', 'input_ids', 'labels'],
num_rows: 2012860
}))
```
I am doing the training with Trainer & TrainingArguments, passing my model and my previous dataset as follows. But nowhere do I specify anything about the attention_mask:
```
training_args = TrainingArguments(
output_dir=path_save_checkpoints,
overwrite_output_dir=True,
num_train_epochs=1,
per_device_train_batch_size = 4,
gradient_accumulation_steps = 4,
logging_steps = 5_000, save_steps=5_000,
fp16=True,
deepspeed="ds_config.json",
remove_unused_columns = True,
debug = True
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset,
tokenizer=tokenizer,
)
trainer.train()
```
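For reference, one way to see what will actually reach the model is to inspect a collated batch directly (a hedged sketch; HF data collators are callable on a list of dataset rows, and `Trainer` passes every key of the resulting dict into `model.forward(**batch)`):
```python
# sketch: whatever keys show up here are forwarded to the model by Trainer
batch = data_collator([dataset[i] for i in range(2)])
print(batch.keys())  # expect: input_ids, attention_mask, labels
```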
How should I tell the Trainer to use this feature (attention_mask)?
If you take a look at the file /transformers/trainer.py there is no reference to "attention" or "mask".
Thanks in advance! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11469/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11468 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11468/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11468/comments | https://api.github.com/repos/huggingface/transformers/issues/11468/events | https://github.com/huggingface/transformers/issues/11468 | 868,680,007 | MDU6SXNzdWU4Njg2ODAwMDc= | 11,468 | binary classification does not work with a large amount of data | {
"login": "saeedrafieyan",
"id": 61290778,
"node_id": "MDQ6VXNlcjYxMjkwNzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/61290778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saeedrafieyan",
"html_url": "https://github.com/saeedrafieyan",
"followers_url": "https://api.github.com/users/saeedrafieyan/followers",
"following_url": "https://api.github.com/users/saeedrafieyan/following{/other_user}",
"gists_url": "https://api.github.com/users/saeedrafieyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saeedrafieyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saeedrafieyan/subscriptions",
"organizations_url": "https://api.github.com/users/saeedrafieyan/orgs",
"repos_url": "https://api.github.com/users/saeedrafieyan/repos",
"events_url": "https://api.github.com/users/saeedrafieyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/saeedrafieyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It won't be possible for us to answer issues with another library. Also please use the forum to such [questions](https://discuss.huggingface.co/). Thank you!",
"Thank you for your answer.\r\n\r\nI asked this question here because the simple transformers are working on the transformers library. It is just an interface of it, so I thought my question relevant to this repo!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,622 | 1,622 | NONE | null | I'm trying to do binary classification, using code based on the Minimal Start example from simpletransformers.ai. I can get f1, tp > 0 with a small sample of my data (around 200K rows), but surprisingly, when I apply the model to the whole dataset (2.6M rows) or a smaller subset (500K rows), the evaluation does not work well: it returns mcc=0, tp=0, f1=0, even though the model trained on less data works properly and predicts correctly.
My code is here:
```
from simpletransformers.classification import ClassificationModel, ClassificationArgs
import pandas as pd
import logging
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score as f1
import torch
logging.basicConfig(level=logging.INFO)
transformers_logger = logging.getLogger("transformers")
transformers_logger.setLevel(logging.WARNING)
dataset = pd.read_csv(r"C:\Users\**.csv", encoding="utf-8")#, header=None)
dataset['labels'] = dataset['labels'].astype(int)  # fixed: the original referenced an undefined `def_dataset`
train, test = train_test_split(dataset, train_size=0.8)
model_args = ClassificationArgs(num_train_epochs=1, train_batch_size=1, save_eval_checkpoints=False,
save_steps=2000000, overwrite_output_dir=True,
output_dir=r'C:\Users\***\test\output',
save_model_every_epoch=True,
)
cuda_available = torch.cuda.is_available()
# Create a ClassificationModel
model = ClassificationModel(
"bert", "HooshvareLab/bert-fa-base-uncased", args=model_args, use_cuda=cuda_available
)
# Train the model
model.train_model(train)
# Evaluate the model
result, model_outputs, wrong_predictions = model.eval_model(test, f1=f1)
```
These are the results obtained with a semi-large amount of data (>=500K):
```
{'mcc': 0.0,
'tp': 0,
'tn': 77052,
'fp': 0,
'fn': 22948,
'auroc': 0.5,
'auprc': 0.22948,
'f1': 0.0,
'eval_loss': 1.0871533093261718}
```
and this is what I get with fewer data(200K):
```
{'mcc': 0.6321070718937202,
'tp': 6193,
'tn': 28748,
'fp': 1925,
'fn': 3134,
'auroc': 0.9218176063608271,
'auprc': 0.7718176609516296,
'f1': 0.7100028661507596,
'eval_loss': 0.31948030271530153}
```
The only difference between these two results is the size of the dataset.
I'm using Windows 10 and an Nvidia Quadro RTX 5000.
I also tried on Google Colab, but the problem persisted (a quick class-balance check is sketched below).
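Before anything else it is worth checking whether the larger split is simply letting the model collapse to the majority class - tp=0 with a large tn points that way. A hedged sketch (the `weight` argument is simpletransformers' class-weighting hook; double-check it against your installed version):
```python
# sketch: inspect label balance, stratify the split, and weight the loss
print(dataset['labels'].value_counts(normalize=True))

train, test = train_test_split(dataset, train_size=0.8, stratify=dataset['labels'])

counts = dataset['labels'].value_counts()
weights = [len(dataset) / counts[0], len(dataset) / counts[1]]  # inverse-frequency weights
model = ClassificationModel(
    "bert", "HooshvareLab/bert-fa-base-uncased",
    args=model_args, use_cuda=cuda_available, weight=weights,
)
```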
How can I solve this problem? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11468/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11467 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11467/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11467/comments | https://api.github.com/repos/huggingface/transformers/issues/11467/events | https://github.com/huggingface/transformers/pull/11467 | 868,474,972 | MDExOlB1bGxSZXF1ZXN0NjIzOTk1MjE4 | 11,467 | Finish Making Quick Tour respect the model object | {
"login": "hamelsmu",
"id": 1483922,
"node_id": "MDQ6VXNlcjE0ODM5MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1483922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hamelsmu",
"html_url": "https://github.com/hamelsmu",
"followers_url": "https://api.github.com/users/hamelsmu/followers",
"following_url": "https://api.github.com/users/hamelsmu/following{/other_user}",
"gists_url": "https://api.github.com/users/hamelsmu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hamelsmu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamelsmu/subscriptions",
"organizations_url": "https://api.github.com/users/hamelsmu/orgs",
"repos_url": "https://api.github.com/users/hamelsmu/repos",
"events_url": "https://api.github.com/users/hamelsmu/events{/privacy}",
"received_events_url": "https://api.github.com/users/hamelsmu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,619 | 1,619 | CONTRIBUTOR | null | This PR makes the following changes
1. As a follow up to #11462, finish correcting places where the tuple is mentioned instead of the model object.
2. You must import `AutoModel` as well as `TFAutoModel` for the tutorial to run correctly (the import line is sketched after this list).
3. Cleaned up some language for readability.
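The import line in question, for the record (a one-line sketch of what the fixed tutorial needs):
```python
from transformers import AutoModel, TFAutoModel  # the PT and TF variants are separate classes
```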
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11467/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11467",
"html_url": "https://github.com/huggingface/transformers/pull/11467",
"diff_url": "https://github.com/huggingface/transformers/pull/11467.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11467.patch",
"merged_at": 1619532252000
} |
https://api.github.com/repos/huggingface/transformers/issues/11466 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11466/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11466/comments | https://api.github.com/repos/huggingface/transformers/issues/11466/events | https://github.com/huggingface/transformers/pull/11466 | 868,468,508 | MDExOlB1bGxSZXF1ZXN0NjIzOTg5Njc5 | 11,466 | fix docs for decoder_input_ids | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot for catching this Patrick! I corrected this for BART and mBART."
] | 1,619 | 1,619 | 1,619 | MEMBER | null | # What does this PR do?
Few doc fixes for `decoder_input_ids` in s2s models.
Fixes #11357
Thanks, @shyrma for spotting this! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11466/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11466",
"html_url": "https://github.com/huggingface/transformers/pull/11466",
"diff_url": "https://github.com/huggingface/transformers/pull/11466.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11466.patch",
"merged_at": 1619532397000
} |
https://api.github.com/repos/huggingface/transformers/issues/11465 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11465/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11465/comments | https://api.github.com/repos/huggingface/transformers/issues/11465/events | https://github.com/huggingface/transformers/issues/11465 | 868,363,301 | MDU6SXNzdWU4NjgzNjMzMDE= | 11,465 | [resume optimization] skip loading pretrained weights on resume | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"This can be achieved by just doing `model = AutoModelForSeq2SeqLM.from_config(config)` when the checkpoint is not None. I don't believe it will be much faster however as your analysis in #9205 pointed out to the random initialization as being the bottleneck.",
"> This can be achieved by just doing model = AutoModelForSeq2SeqLM.from_config(config) when the checkpoint is not None. \r\n\r\nFrom here, right?\r\nhttps://github.com/huggingface/transformers/blob/88ac60f7b5f6d4b62245dc21653ea3d5db7d4935/src/transformers/models/auto/auto_factory.py#L362\r\n\r\nGreat idea!\r\n\r\nThen this important part would be missed:\r\n```\r\n with deepspeed.zero.Init():\r\n model = cls(config, *model_args, **model_kwargs)\r\n```\r\nI guess I need to add it to `from_config` anyway, which would solve this part\r\n\r\nand also this won't be done:\r\n```\r\n model.eval()\r\n```\r\nbut the latter is probably redundant anyway.\r\n\r\n> I don't believe it will be much faster however as your analysis in #9205 pointed out to the random initialization as being the bottleneck.\r\n\r\nFor huge models every saving counts! once you start working with models like t5-11b it's excruciatingly slow to wait for things to start.\r\n\r\nShould I try one example and re-shuffle the order of the code?",
"Yes, we should try on one example first! Though the first step is to fix the `from_config` method of `AutoModel` :-)"
] | 1,619 | 1,622 | null | CONTRIBUTOR | null | This is similar to what was discussed in https://github.com/huggingface/transformers/issues/9205, which proposed not to randomly init weights on `from_pretrained`, but this time it's about resume - currently we load pretrained weights and immediately drop them when resuming from a checkpoint in Trainer.
To solve this we could, for example, change the examples:
1. to figure out the checkpoint immediately after we init `TrainingArguments` and just before the model is created.
2. then change the `from_pretrained()` API to keep everything as is, except loading the weights from `state_dict`, if say `skip_weights_load=True` is passed:
So the code becomes:
```
if training_args.do_train:
if last_checkpoint is not None:
checkpoint = last_checkpoint
elif os.path.isdir(model_args.model_name_or_path):
checkpoint = model_args.model_name_or_path
else:
checkpoint = None
model = AutoModelForSeq2SeqLM.from_pretrained(
model_args.model_name_or_path,
[...],
skip_weights_load=checkpoint is not None,
)
if training_args.do_train:
train_result = trainer.train(resume_from_checkpoint=checkpoint)
```
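For comparison, the config-only path suggested in the comments above would look roughly like this (a hedged sketch; per the comments, `from_config` on the Auto classes may need fixing first):
```python
# sketch: on resume, skip loading pretrained weights entirely -
# Trainer will overwrite them from the checkpoint anyway
if checkpoint is not None:
    config = AutoConfig.from_pretrained(model_args.model_name_or_path)
    model = AutoModelForSeq2SeqLM.from_config(config)
else:
    model = AutoModelForSeq2SeqLM.from_pretrained(model_args.model_name_or_path)
```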
Any flaws in my thinking?
@patrickvonplaten, @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11465/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11464 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11464/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11464/comments | https://api.github.com/repos/huggingface/transformers/issues/11464/events | https://github.com/huggingface/transformers/issues/11464 | 868,350,690 | MDU6SXNzdWU4NjgzNTA2OTA= | 11,464 | [DeepSpeed] ZeRO-Infinity integration: getting started and issues | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @stas00, is it normal for zero3 training to take a while to get started?\r\n\r\nI haven't put in any time to investigating yet, but I updated transformers and deepspeed to the latest masters just to see if I could get them working. My simple training script (derived from the summarization example) works fine with deepspeed and the default zero2 config, but when I run the same script with the default zero3 config, training begins but hangs with the progress bar at step 0. I let it run for about half an hour before I killed the process. The quick test zero3 in your post above seems to run fine, however. \r\n\r\nIs there some initial zero3 overhead I just need to be more patient with, or do I possibly have some deeper problem?\r\n",
"Something is wrong then, deepspeed takes a bit longer to start than normal as it pre-allocates some memory, and extra so the first time if it needs to compile some cuda extensions, but once started it should work at the normal speed.\r\n\r\nHanging on zero3 could indicate that you're on multi-gpu and doing some code that blocks on trying to sync with other gpus. Anything involving forward calls must be performed on all gpus participating in the process. If one of them is skipped all other gpus will block waiting for that gpu.\r\n\r\nFor example, if you're doing some code that performs `if trainer.is_world_process_zero()` it could block - depending on the code. For example, saving checkpoints has to happen on all processes and not just rank0.\r\n\r\nCould you please open a separate issue and help me to reproduce the problem and then we can look at it together.\r\n\r\nTo help diagnose, you can add this anywhere to your code:\r\n```\r\nimport faulthandler\r\nfaulthandler.dump_traceback_later(20, repeat=True)\r\n```\r\n\r\nand it'll dump bt for all threads every 20 secs. So you will be able to see where it's hanging.",
"Hello! I was trying out the command pasted above, but replacing the zero_optimization part from tests/deepspeed/ds_config_zero3.json with the configuration from the NVMe offload example (see link above). The error I get is:\r\n```AssertionError: num_elems 7563520> buffer 7563328```.\r\nI got this error before as well with the Megatron example from Deepspeed, but was able to solve it by increasing the aio block_size, however this time it did not work out. I should add that I used a SSD disk, in case that's important. ",
"Thank you for trying this new feature.\r\n\r\nThis looks like a potential bug in Deepspeed. I asked @tjruwase to have a look.\r\n\r\nMay be it's worthwhile to file an Issue at https://github.com/microsoft/DeepSpeed/issues if you have a few minutes? As this is definitely not an integration issue.\r\n\r\nIf you do please paste the full config you were using.\r\n\r\nthank you, @thies1006 ",
"@thies1006, thanks for reporting this issue. As @stas00 suggested, could please report this as a deepspeed issue? It would be great if you included the exact ds_config.json in the issue report. Thanks so much!",
"Just now there appeared this [issue](https://github.com/microsoft/DeepSpeed/issues/1033) which I guess is exactly the same case. Sorry for not posting the exact config right away. Thank you very much!\r\n\r\nEdit: Lowering \"sub_group_size\" from 1e14 to 1e3 solved the issue (however another one comes up, filed another issue at Deepspeed). ",
"@thies1006, there is now a [PR ](https://github.com/microsoft/DeepSpeed/pull/1036) for the assert: ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@stas00 I am not sure if this is the right forum to ask. Feel free to direct me to somewhere else\r\nIs there a standard way of cloning a partitioned parameter? The examples I have seen are usually using gather to reconstructing it into a pytorch parameter and then cloning it. ",
"indeed, but you have to do it before you called `deepspeed.initialize` - if you do after it - Deepspeed won't know about those new parameters and all kinds of undefined behaviors/breakages will occur.\r\n\r\nYou can still add/remove params after `zero.Init` context was run (if it's used), but the model needs to be complete wrt all params being in place before it's passed to `deepspeed.initialize`\r\n\r\n",
"@stas00 Thank you for your prompt response. so before `deepspeed.initialize` would this be a correct way of cloning a ds_module?\r\n \r\n```\r\nimport deepspeed\r\n# ds_module is already partitioned\r\nwith deepspeed.zero.GatheredParameters(list(ds_module.parameters())):\r\n new_module = copy.deepcopy(ds_module)\r\n\r\n# at this point new_module is pytorch paramter\r\n# to convert to ds module\r\nnew_module = deepspeed.zero.Init(new_module)\r\n```",
"I don't think this example can work, since deepspeed installs special attributes into the tensor which would be copied and point to the wrong place. You'd have to create a normal torch param and copy the data from another param, bu perhaps you can simply ask deepspeed for adding a new util that will do the right thing for you.\r\n\r\nBut let's stop this discussion here as this is offtopic to this thread and not really related to `transformers` - I propose for you to start a new issue at https://github.com/microsoft/DeepSpeed and discuss it there, where the Deepspeed team will be able to answer your needs better."
] | 1,619 | 1,690 | 1,622 | CONTRIBUTOR | null | [DeepSpeed ZeRO-Infinity](https://arxiv.org/abs/2104.07857) HF Integration is now available in the master branch of `transformers`. Here is a quick getting started/what's new post.
ZeRO-Infinity extends ZeRO-3 by adding NVMe offload on top of CPU offload, enabling training of even bigger models. And it adds various other optimizations and improvements.
## Getting started
Install the latest `deepspeed` version:
```
pip install git+https://github.com/microsoft/DeepSpeed
```
You will want to be on the `transformers` master branch if you want to run a quick test:
```
git clone https://github.com/huggingface/transformers
cd transformers
BS=4; PYTHONPATH=src USE_TF=0 deepspeed examples/pytorch/translation/run_translation.py \
--model_name_or_path t5-small --output_dir /tmp/zero3 --overwrite_output_dir --max_train_samples 64 \
--max_eval_samples 64 --max_source_length 128 --max_target_length 128 --val_max_target_length 128 \
--do_train --num_train_epochs 1 --per_device_train_batch_size $BS --per_device_eval_batch_size $BS \
--learning_rate 3e-3 --warmup_steps 500 --predict_with_generate --logging_steps 0 --save_steps 0 \
--eval_steps 1 --group_by_length --dataset_name wmt16 --dataset_config ro-en --source_lang en \
--target_lang ro --source_prefix "translate English to Romanian: " \
--deepspeed tests/deepspeed/ds_config_zero3.json
```
You will find a very detailed documentation here: https://huggingface.co/transformers/master/main_classes/trainer.html#deepspeed
Your new config file will look like this (for ZeRO-3 as an example):
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e14,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_fp16_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
If you want to experiment with NVMe offload, please see: https://huggingface.co/transformers/master/main_classes/trainer.html#nvme-support
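As a taste of what the NVMe variant changes, the offload sections switch the device to `nvme` and point at a fast local disk (a hedged sketch in the same JSON config format; exact key names and tuned values are in the linked doc, and `/local_nvme` is a placeholder path):
```json
"offload_optimizer": {
    "device": "nvme",
    "nvme_path": "/local_nvme",
    "pin_memory": true
},
"offload_param": {
    "device": "nvme",
    "nvme_path": "/local_nvme",
    "pin_memory": true
}
```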
## Deepspeed currently runs only fp16-mixed precision
While deepspeed devs [are working on the fp32 mode](https://github.com/microsoft/DeepSpeed/pull/1004), at this moment only fp16-amp-like train/eval is available. So if your model struggles under fp16/amp it will have the same struggles under deepspeed.
Moreover, because deepspeed does `model.half()` forcing all weights to fp16, some models might not be ready for this (under AMP things are switched dynamically to fp16 where needed). If you run into this please post a new issue and we will try to find a solution/workaround for those special cases.
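One rough way to pre-check whether a given checkpoint's weights even fit the fp16 range (a hedged sketch, not an official tool - and note that activations can still overflow even when all weights fit):
```python
import torch

fp16_max = torch.finfo(torch.float16).max  # ~65504
suspects = [name for name, p in model.named_parameters()
            if p.detach().float().abs().max() > fp16_max]
print(suspects)  # non-empty means model.half() alone will overflow these weights
```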
## Must use the latest `transformers` master
If you get deepspeed errors such as it not knowing what an `auto` value is, you aren't on the latest `transformers` master branch: `git pull` if you already have a clone, and update your install if you installed it.
## For those who already use DeepSpeed HF integration
As the integration part is evolving it has gone through a major revamp and various improvements.
There are 2 important changes that you need to be aware of if you're already using DeepSpeed integration in `transformers`:
1. After this release only config params that are set to `auto` will get automatically overridden/set to the correct/recommended values, everything else is left as is. This is to avoid the previously confusing behavior of never being quite sure what got overridden and what not, despite the logger reporting what it did override. The new behavior is completely unambiguous.
See examples
* [zero2](https://github.com/huggingface/transformers/blob/0f221d2cce751182c455295ef2c03a2c1bd3d66b/tests/deepspeed/ds_config_zero2.json)
* [zero3](https://github.com/huggingface/transformers/blob/0f221d2cce751182c455295ef2c03a2c1bd3d66b/tests/deepspeed/ds_config_zero3.json)
Full doc: https://huggingface.co/transformers/master/main_classes/trainer.html#shared-configuration
2. If you are using massive models and aren't using example scripts, make sure to read:
Full doc: https://huggingface.co/transformers/master/main_classes/trainer.html#constructing-massive-models
Everything else should work as before or better.
The docs were revamped a lot too - if you find anything unclear or lacking please let me know.
If you encounter any problems please post an Issue and tag `@stas00` to it.
Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11464/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11463 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11463/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11463/comments | https://api.github.com/repos/huggingface/transformers/issues/11463/events | https://github.com/huggingface/transformers/pull/11463 | 868,339,269 | MDExOlB1bGxSZXF1ZXN0NjIzODgyMjc2 | 11,463 | [model loading] don't init weights for pretrained models | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> This won't work sadly, for two reasons.\r\n> \r\n> 1. First, everything in the `__dict__` of the config gets serialized when one uses `config.self_pretrained()` (which is called by `model.from_pretrained`) so any other model downloaded from the hub with a checkpoint saved after this is merged will get this attribute in the config. Then if a user instantiates a randomly-initialized model using the config, with the following code:\r\n> \r\n> \r\n> ```python\r\n> config = AutoConfig.from_pretrained(\"new_checkpoint_after_this_is_merge\")\r\n> model = AutoModel.from_config(config)\r\n> ```\r\n> \r\n> then the model won't be randomly initalized (at least not with `_init_weights`) since the config will have this `use_pretrained_weights`.\r\n\r\nSo if I find another way to do it that doesn't taint the config then it's OK, right? (as far as config correctness goes)\r\n\r\ne.g. what if I unset this config value as soon as `model = cls()` is done? So this is sort of a \"context\" operation then.\r\n\r\n> 2. Then come the problem that pretrained model instantiated with `from_pretrained` does not necessarily have all weights initialized (if you discard the head to put another task-specific head) and this PR will break the way those weights are randomly initialized.\r\n> \r\n> \r\n> I sadly don't see a way around passing around a list of not-initialized weights from pretrained to the `_init_weights` function\r\n\r\nI appreciate that you could think of the edge cases.\r\n\r\nClearly, we don't have any tests that somehow verify that the init is done correctly. I was hoping that there would be some, but these would be hard to conjure.\r\n\r\nIf you feel this is a worthwhile effort, perhaps let's start coming up with examples, write tests if possible and solve those? You can throw the edge-cases at me and I will try to overcome those.\r\n\r\nOr alternatively, we provide a very easy way for users to either force the init, or if it's safer to force no-init? e.g. the staple examples could all enforce no-init and explain how to change that if the user wants to modify the example to have the original behavior?\r\n\r\nSo what I'm suggesting is that instead of `from_pretrained` automatically forcing no init as I proposed in this PR, we instead have a way for a user to choose whether they want init_weights or not explicitly?",
"> I appreciate that you could think of the edge cases.\r\n\r\nThat is not the edge case but the overwhelming majority ;-) You are mostly working with seq2seq models that don't throw away any weights when doing transfer learning, but all the basic examples fine-tuning BERT on a classification task encounter this :-)\r\n\r\nTesting the init is done properly is very difficult as those are all random weights. Testing those weights follow this distribution instead of that one is not something easily achievable.\r\n\r\nI don't think the `no_init` option is the right one: it will only work for a certain class of problems and not others, so it's not general enough. We shouldn't go for it just before it's easier to implement than the other solutions on the table.",
"@stas00 @sgugger that's how I would approach the problem: https://github.com/huggingface/transformers/pull/11471",
"OK, let's move the effort to Patrick's PR https://github.com/huggingface/transformers/pull/11471\r\n\r\n> [...] You are mostly working with seq2seq models [...]\r\n\r\nGuilty as charged. I'm glad you guys have a much wider view than I. Thank you!\r\n"
] | 1,619 | 1,619 | 1,619 | CONTRIBUTOR | null | Skip `_init_weights` for pretrained models, since the randomly initialized weights get immediately replaced by the pretrained ones. This leads to a much faster startup for huge models.
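A minimal sketch of the idea (the flag name is taken from the review discussion above, which also explains why a blanket skip breaks task-specific heads that are absent from the checkpoint):
```python
# hedged sketch, not the merged solution: short-circuit random init
# when pretrained weights are about to overwrite everything anyway
def _init_weights(self, module):
    if getattr(self.config, "use_pretrained_weights", False):
        return  # weights will be replaced by the loaded state_dict
    ...  # normal random init path
```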
Fixes: https://github.com/huggingface/transformers/issues/9205
@sgugger, @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11463/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11463",
"html_url": "https://github.com/huggingface/transformers/pull/11463",
"diff_url": "https://github.com/huggingface/transformers/pull/11463.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11463.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11462 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11462/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11462/comments | https://api.github.com/repos/huggingface/transformers/issues/11462/events | https://github.com/huggingface/transformers/pull/11462 | 868,309,116 | MDExOlB1bGxSZXF1ZXN0NjIzODU0NjE2 | 11,462 | update QuickTour docs to reflect model output object | {
"login": "hamelsmu",
"id": 1483922,
"node_id": "MDQ6VXNlcjE0ODM5MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1483922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hamelsmu",
"html_url": "https://github.com/hamelsmu",
"followers_url": "https://api.github.com/users/hamelsmu/followers",
"following_url": "https://api.github.com/users/hamelsmu/following{/other_user}",
"gists_url": "https://api.github.com/users/hamelsmu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hamelsmu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamelsmu/subscriptions",
"organizations_url": "https://api.github.com/users/hamelsmu/orgs",
"repos_url": "https://api.github.com/users/hamelsmu/repos",
"events_url": "https://api.github.com/users/hamelsmu/events{/privacy}",
"received_events_url": "https://api.github.com/users/hamelsmu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I might need some help or tips with figuring out why styling check CI is failing. I tried to debug but it is not clear to me what is wrong",
"Thanks, @sgugger that worked",
"Thank *you* for the fixes :-)"
] | 1,619 | 1,619 | 1,619 | CONTRIBUTOR | null | Currently, the [Quick tour](https://huggingface.co/transformers/quicktour.html#) docs show model output as tuples when you print them out. In the current version of 🤗, the user sees an object that inherits from the `ModelOutput` class. Yes, you can still access this object as a tuple, but this might be confusing for many readers, especially since this is the very first document that many people see when using 🤗.
This PR does the following things:
1. Changes code examples in the Quick Tour to show the output object, not the tuple.
2. Minor modification to the `Model Output` doc, as _both_ PyTorch and Tensorflow models return an object that is an instance of a subclass of `ModelOutput` (a short sketch follows this list).
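Concretely, the behavior the updated docs now show (a hedged sketch; the model name is assumed to be the sentiment checkpoint the Quick tour's pipeline uses by default):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

outputs = model(**tokenizer("Great library!", return_tensors="pt"))
print(type(outputs).__name__)  # SequenceClassifierOutput, not a bare tuple
print(outputs.logits)          # attribute access...
print(outputs[0])              # ...while tuple-style indexing still works
```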
@sgugger
P.S. I am planning to go through all of the documentation very carefully like this, please let me know if there is anything along these lines that needs more attention.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11462/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11462",
"html_url": "https://github.com/huggingface/transformers/pull/11462",
"diff_url": "https://github.com/huggingface/transformers/pull/11462.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11462.patch",
"merged_at": 1619489918000
} |
https://api.github.com/repos/huggingface/transformers/issues/11461 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11461/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11461/comments | https://api.github.com/repos/huggingface/transformers/issues/11461/events | https://github.com/huggingface/transformers/issues/11461 | 868,295,660 | MDU6SXNzdWU4NjgyOTU2NjA= | 11,461 | T5-large FP16 produces nan in loss | {
"login": "raviskolli",
"id": 48601275,
"node_id": "MDQ6VXNlcjQ4NjAxMjc1",
"avatar_url": "https://avatars.githubusercontent.com/u/48601275?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raviskolli",
"html_url": "https://github.com/raviskolli",
"followers_url": "https://api.github.com/users/raviskolli/followers",
"following_url": "https://api.github.com/users/raviskolli/following{/other_user}",
"gists_url": "https://api.github.com/users/raviskolli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raviskolli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raviskolli/subscriptions",
"organizations_url": "https://api.github.com/users/raviskolli/orgs",
"repos_url": "https://api.github.com/users/raviskolli/repos",
"events_url": "https://api.github.com/users/raviskolli/events{/privacy}",
"received_events_url": "https://api.github.com/users/raviskolli/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I see nans creeping in at the T5Attention in decoder. I didn't find any inf or nan in either hidden_states or key_value_states but the computed values of both key_states and value_states have nan's",
"> FP16 mode shouldn't produce nan in loss.\r\n\r\nWhy do you believe this to be the case? This model was trained in bf16, which has a totally different numerical range from fp16. So it shouldn't produce NaNs under bf16 or fp32, but under fp16 it's almost guaranteed to not work. Please see: https://discuss.huggingface.co/t/mixed-precision-for-bfloat16-pretrained-models/5315\r\n\r\nThat's said, please try this branch https://github.com/huggingface/transformers/pull/10956 that tries to use a workaround for AMP. Some users reported success. One user reported problems.\r\n\r\nAnd you can also try the new over/underflow detector: https://github.com/huggingface/transformers/pull/11274 if you want to get more precise info on where the problem emerges first. Just add `--debug activation_overflow` to the trainer command line and it will bail with the traces of the last frames as soon as nan or inf is encountered. I am reworking this tool to provide more info, and need to revamp the interface, but it's mostly done.\r\n\r\n",
"Thank you for the pointers to the discussion. Is it just finetuning or do you expect inference to be unstable as well in fp16 mode?\r\n\r\ndebug_activation_overflow looks like a great tool that can be useful in identifying the source of nans. I'll give [#10956 ](url) a try and see if it helps with my runs.",
"> Is it just finetuning or do you expect inference to be unstable as well in fp16 mode?\r\n\r\nThere are less moving parts during inference. But more or less expect the same problems.\r\n\r\nSo the workaround is to identify where under/overflow happens and force the model to perform those ops in fp32 and then convert back to fp16.\r\n\r\nIn fact with finetuning if you don't have the problem happening right away like it does with mt5, you could try to stir the model into the fp16 range by punishing large activations. Please see the proposed `loss` calculation extra: https://github.com/huggingface/transformers/pull/10956#issuecomment-820712267 (it in fact comes from the original t5 implementation but for some reason wasn't implemented in that ported model in `transformers`). \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,622 | 1,622 | NONE | null | ## Environment info
- `transformers` version: 4.6.0.dev0, commit hash: 5e04d7086803ae4a3892f4082f2835a756592c2c
- Platform: Linux-4.15.0-1071-azure-x86_64-with-debian-buster-sid
- Python version: 3.7.3
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
t5: @patrickvonplaten, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): t5-large
The problem arises when using:
* [ ] the official example scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
## To reproduce
Steps to reproduce the behavior:
```shell
cd examples/seq2seq
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=../../src USE_TF=0 ./run_translation.py \
--model_name_or_path t5-large \
--do_train --source_lang en --target_lang ro \
--source_prefix "translate English to Romanian: " \
--dataset_name wmt16 --dataset_config "ro-en" \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size 4 \
--overwrite_output_dir \
--predict_with_generate \
--num_train_epochs 1 --fp16
```
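(For context, one mitigation proposed in the discussion above is penalizing large activations with the extra loss term from the original T5 implementation; a rough sketch in plain PyTorch, with an assumed penalty weight:)
```python
import torch

def loss_with_z_penalty(lm_loss, lm_logits, z_weight=1e-4):
    # Penalize the log-partition term so activations are pushed back into
    # the fp16 range; z_weight is an assumed value, not taken from the repro above.
    log_z = torch.logsumexp(lm_logits, dim=-1)
    return lm_loss + z_weight * (log_z ** 2).mean()
```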
## Expected behavior
FP16 mode shouldn't produce nan in loss. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11461/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11460 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11460/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11460/comments | https://api.github.com/repos/huggingface/transformers/issues/11460/events | https://github.com/huggingface/transformers/issues/11460 | 868,211,301 | MDU6SXNzdWU4NjgyMTEzMDE= | 11,460 | support batch-sampler in trainer | {
"login": "dorooddorood606",
"id": 79288051,
"node_id": "MDQ6VXNlcjc5Mjg4MDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/79288051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorooddorood606",
"html_url": "https://github.com/dorooddorood606",
"followers_url": "https://api.github.com/users/dorooddorood606/followers",
"following_url": "https://api.github.com/users/dorooddorood606/following{/other_user}",
"gists_url": "https://api.github.com/users/dorooddorood606/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorooddorood606/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorooddorood606/subscriptions",
"organizations_url": "https://api.github.com/users/dorooddorood606/orgs",
"repos_url": "https://api.github.com/users/dorooddorood606/repos",
"events_url": "https://api.github.com/users/dorooddorood606/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorooddorood606/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,622 | 1,622 | NONE | null | # 🚀 Feature request
Hi
Currently the Trainer class only supports samplers, but not batch samplers (the sampling strategies based on batches, which are a group of samplers in torch). If a user wants to use a batch sampler, the Trainer class currently introduces unwanted bugs by not setting the epoch for this type of sampler. Even a careful user still needs to overwrite the whole ```train``` function, which is a big chunk of code. Could you make this line a function, so the user can easily override it for the batch-sampler case? This part of the trainer also introduces bugs, especially if a user uses user-defined samplers and is not careful to set the epoch for them.
```
if isinstance(train_dataloader.sampler, DistributedSampler):
train_dataloader.sampler.set_epoch(epoch)
```
so the user might need to set it to:
```
train_dataloader.batch_sampler.set_epoch(epoch)
```
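For illustration, the kind of epoch-setting helper I have in mind might look like this (a hypothetical sketch, not part of the Trainer API):
```python
def set_sampler_epoch(dataloader, epoch):
    # Hypothetical helper: cover both sampler- and batch-sampler-based loaders.
    for candidate in (getattr(dataloader, "sampler", None),
                      getattr(dataloader, "batch_sampler", None)):
        if candidate is not None and hasattr(candidate, "set_epoch"):
            candidate.set_epoch(epoch)
```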
thanks
## Motivation
Supporting all types of samplers in the Trainer, or making `set_epoch` a function so the user can override it cleanly.
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11460/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11459 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11459/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11459/comments | https://api.github.com/repos/huggingface/transformers/issues/11459/events | https://github.com/huggingface/transformers/issues/11459 | 868,191,568 | MDU6SXNzdWU4NjgxOTE1Njg= | 11,459 | extending metric_for_best_model to a list of strings | {
"login": "dorooddorood606",
"id": 79288051,
"node_id": "MDQ6VXNlcjc5Mjg4MDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/79288051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorooddorood606",
"html_url": "https://github.com/dorooddorood606",
"followers_url": "https://api.github.com/users/dorooddorood606/followers",
"following_url": "https://api.github.com/users/dorooddorood606/following{/other_user}",
"gists_url": "https://api.github.com/users/dorooddorood606/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorooddorood606/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorooddorood606/subscriptions",
"organizations_url": "https://api.github.com/users/dorooddorood606/orgs",
"repos_url": "https://api.github.com/users/dorooddorood606/repos",
"events_url": "https://api.github.com/users/dorooddorood606/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorooddorood606/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This could be added indeed. Note that there is already a workaround: when defining your `compute_metrics` function, you can add a new field with this average:\r\n```\r\ndef compute_metrics(eval_preds):\r\n # Your previous metric computation\r\n metrics[\"combined\"] = (metrics[\"accuracy\"] + metrics[\"f1\"]) / 2\r\n return metrics\r\n```\r\nand then you can pass `--metric_for_best_model combined`. This approach is also more flexible as you can completely define the way the combination is done (so you can pick weights for your mean or do something else than the mean).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,690 | 1,622 | NONE | null | # 🚀 Feature request
Hi
Currently ``metric_for_best_model (:obj:`str`, `optional`)`` only covers one metric. For some datasets, like MRPC, there are several metrics (accuracy/F1), and for STSB both Pearson and Spearman; a user might need to choose the best model based on the average over all metrics, so it could be helpful to have this option.
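Concretely, the selection score I have in mind for STSB would be the mean of the two correlations; today this can only be emulated inside `compute_metrics` (a minimal sketch, where `my_stsb_metrics` is an assumed helper returning the individual scores):
```python
def compute_metrics(eval_pred):
    # Compute the individual metrics, then expose their mean so
    # --metric_for_best_model combined can select on it.
    metrics = my_stsb_metrics(eval_pred)  # assumed: returns pearson/spearmanr
    metrics["combined"] = (metrics["pearson"] + metrics["spearmanr"]) / 2
    return metrics
```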
This is related to the trainer @sgugger
thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11459/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11458 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11458/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11458/comments | https://api.github.com/repos/huggingface/transformers/issues/11458/events | https://github.com/huggingface/transformers/issues/11458 | 868,153,133 | MDU6SXNzdWU4NjgxNTMxMzM= | 11,458 | "Is next sentence" pre-training task availability for Language Modeling scripts | {
"login": "shabie",
"id": 30535146,
"node_id": "MDQ6VXNlcjMwNTM1MTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/30535146?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shabie",
"html_url": "https://github.com/shabie",
"followers_url": "https://api.github.com/users/shabie/followers",
"following_url": "https://api.github.com/users/shabie/following{/other_user}",
"gists_url": "https://api.github.com/users/shabie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shabie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shabie/subscriptions",
"organizations_url": "https://api.github.com/users/shabie/orgs",
"repos_url": "https://api.github.com/users/shabie/repos",
"events_url": "https://api.github.com/users/shabie/events{/privacy}",
"received_events_url": "https://api.github.com/users/shabie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Related issues: https://github.com/huggingface/transformers/issues/1622, https://github.com/huggingface/transformers/issues/2898 and https://github.com/huggingface/transformers/issues/2166\r\n\r\nHowever, example scripts are made to be very understandable and very easy to tweak, so modifying them to include the next sentence prediction objective for BERT shouldn't be complicated! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Same issue! Anybody done it before?\r\nHave been trying using Hugging face library for more than a month now. Still running into issues"
] | 1,619 | 1,674 | 1,622 | CONTRIBUTOR | null | # 🚀 Feature request
BERT was trained on 2 pretraining tasks: masked language modeling and next sentence prediction. The scripts [here](https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling) in the repo for language modeling only cover masked language modeling.
Is there any specific reason for this?
I was hoping those scripts could be extended to include the "next sentence prediction" pretraining task, to remain faithful to the pretraining methodology used by BERT, in case I choose to further pretrain on some corpus.
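For reference, `BertForPreTraining` already exposes both heads, so a combined step could look roughly like this (a minimal sketch with a toy sentence pair; real MLM training would also mask the labels):
```python
import torch
from transformers import BertForPreTraining, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")

# Toy pair where the second sentence really does follow the first.
enc = tokenizer("The cat sat on the mat.", "Then it fell asleep.", return_tensors="pt")
outputs = model(
    **enc,
    labels=enc["input_ids"],                # MLM labels (unmasked here for brevity)
    next_sentence_label=torch.tensor([0]),  # 0 = IsNext, 1 = NotNext
)
loss = outputs.loss  # sum of the masked-LM and next-sentence losses
```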
## Motivation
BERT uses 2 pre-training tasks and the scripts provide only one of them.
## Your contribution
I am not sure if I can extend the scripts myself. I will gladly look into them if I know there aren't any good reasons for this not being provided by HF in the first place.
Thanks a lot! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11458/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11457 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11457/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11457/comments | https://api.github.com/repos/huggingface/transformers/issues/11457/events | https://github.com/huggingface/transformers/issues/11457 | 868,097,181 | MDU6SXNzdWU4NjgwOTcxODE= | 11,457 | Can this `@slow` annotation be removed at barthez tokenizer test? | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No it is intended as slow. The Barthez tokenizer fast tokenizer takes forever to load from the slow one sadly, which is why we marked those tests are slow. To be able to remove that, we must first create a small sentencepiece model that is compatible with Barthez (it is using the real tokenizer checkpoint right now) and then the tests can be marked as unslow.",
"Ok thanks. Closing this then.",
"@sgugger When do you execute the slow tests? Do you do it manually before you do a release?",
"They run once every day!",
"Ahh ok thanks."
] | 1,619 | 1,620 | 1,619 | CONTRIBUTOR | null | Here is a `@slow` annotation that can be removed IMO.
Otherwise, maybe add a comment explaining why it is tagged as slow?
https://github.com/huggingface/transformers/blob/bc2571e61c985ec82819cf01ad038342771c94d0/tests/test_tokenization_barthez.py#L27
I can provide a PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11457/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11456 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11456/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11456/comments | https://api.github.com/repos/huggingface/transformers/issues/11456/events | https://github.com/huggingface/transformers/issues/11456 | 867,961,856 | MDU6SXNzdWU4Njc5NjE4NTY= | 11,456 | Perturb Hidden-State in Encoder-Decoder Models | {
"login": "vin-nag",
"id": 32803965,
"node_id": "MDQ6VXNlcjMyODAzOTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/32803965?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vin-nag",
"html_url": "https://github.com/vin-nag",
"followers_url": "https://api.github.com/users/vin-nag/followers",
"following_url": "https://api.github.com/users/vin-nag/following{/other_user}",
"gists_url": "https://api.github.com/users/vin-nag/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vin-nag/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vin-nag/subscriptions",
"organizations_url": "https://api.github.com/users/vin-nag/orgs",
"repos_url": "https://api.github.com/users/vin-nag/repos",
"events_url": "https://api.github.com/users/vin-nag/events{/privacy}",
"received_events_url": "https://api.github.com/users/vin-nag/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @vin-nag \r\n\r\nIf you want to directly pass `hidden_states` then you could do it this way\r\n\r\n```python\r\nmodel = EncoderDecoderModel.from_pretrained(\"google/roberta2roberta_L-24_gigaword\")\r\ntok = AutoTokenizer.from_pretrained(\"google/roberta2roberta_L-24_gigaword\")\r\n\r\narticle = \"\"\"australian shares closed down #.# percent monday\r\nfollowing a weak lead from the united states and\r\nlower commodity prices , dealers said .\"\"\"\r\n\r\nenc = tok(article, return_tensors=\"pt\")\r\nhidden_states = model.encoder(**enc, return_dict=True)\r\n\r\n# perturb the last_hidden_state\r\nhidden_states.last_hidden_state = perturb(hidden_states.last_hidden_state)\r\n\r\ngen_ids = model.generate(input_ids=None, encoder_outputs=hidden_states, attention_mask=enc[\"attention_mask\"])\r\ntok.batch_decode(gen_ids)\r\n```",
"@patil-suraj Thank you so much!",
"Hi @patil-suraj, thanks for the script. How can I access the `loss` after perturbation?"
] | 1,619 | 1,660 | 1,619 | NONE | null | Hi All,
I'm fairly new to using huggingface, so I apologize if this is answered in the documentation. I've looked around and don't think this has been asked before. I'm trying to perturb and use the hidden-state of an encoder-decoder model on the summarization task.
More specifically, I'd like to
1. Get a fixed-length hidden state for a given input after passing it to an encoder model,
2. perturb it, and
3. pass it to the decoder model to get an output.
1 and 2 seem straightforward. To get the hidden state I use:
`hidden_states = model.base_model.encoder(inputs).last_hidden_state`
and to get a fixed-length embedding I'm taking the last element of this list, as per this [discussion](https://github.com/huggingface/transformers/issues/1950). Perturbing this would be as simple as adding noise to a tensor.
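Concretely, the perturbation I have in mind is just additive noise (a minimal sketch):
```python
import torch

def perturb(hidden_state, scale=0.01):
    # Add small Gaussian noise to the encoder hidden-state
    # (illustrative; the scale is an arbitrary choice).
    return hidden_state + scale * torch.randn_like(hidden_state)
```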
As for the third part, I'm having difficulty using this perturbed hidden-state in the decoder to generate an output. I've looked through the code and it seems a lot of the steps are abstracted out to accommodate many kinds of models.
From what I understand, we can modify the model.generate() method so that the perturbation is added. However, this doesn't necessarily work for me since I wanted to use this hidden-state for other purposes before passing it to the decoder.
The other approach would be to create a separate function that takes as input the hidden-state and uses the second part of the code from model.generate() to produce outputs.
Before I proceed to implement this, I was wondering if there is a simpler way using existing code to do this.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11456/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11455 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11455/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11455/comments | https://api.github.com/repos/huggingface/transformers/issues/11455/events | https://github.com/huggingface/transformers/issues/11455 | 867,890,495 | MDU6SXNzdWU4Njc4OTA0OTU= | 11,455 | Unable to use custom dataset: AttributeError: 'list' object has no attribute 'keys' | {
"login": "tommasodelorenzo",
"id": 57231812,
"node_id": "MDQ6VXNlcjU3MjMxODEy",
"avatar_url": "https://avatars.githubusercontent.com/u/57231812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tommasodelorenzo",
"html_url": "https://github.com/tommasodelorenzo",
"followers_url": "https://api.github.com/users/tommasodelorenzo/followers",
"following_url": "https://api.github.com/users/tommasodelorenzo/following{/other_user}",
"gists_url": "https://api.github.com/users/tommasodelorenzo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tommasodelorenzo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tommasodelorenzo/subscriptions",
"organizations_url": "https://api.github.com/users/tommasodelorenzo/orgs",
"repos_url": "https://api.github.com/users/tommasodelorenzo/repos",
"events_url": "https://api.github.com/users/tommasodelorenzo/events{/privacy}",
"received_events_url": "https://api.github.com/users/tommasodelorenzo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is really weird. Could you print a few items of your Dataset? The error means that they are not dictionaries containing `\"input_ids\"` but they certainly seem to be.\r\n\r\nAlso note that since you already have applied padding in your preprocessing, you can use the `default_data_collator`, but the code should work nonetheless.",
"> Also note that since you already have applied padding in your preprocessing, you can use the `default_data_collator`, but the code should work nonetheless.\r\n\r\nYeah, I did try commenting the line about the data_collator as well, but I got the same error.\r\n\r\n> This is really weird. Could you print a few items of your Dataset? The error means that they are not dictionaries containing `\"input_ids\"` but they certainly seem to be.\r\n\r\nFor instance, `dataset_train.__getitem__(1)` gives me\r\n```\r\n{'input_ids': tensor([ 102, 2719, 10118, 19614, 784, 366, 119, 142, 17586, 113,\r\n 10885, 4019, 5129, 143, 10885, 119, 4019, 14633, 1354, 137,\r\n 917, 1621, 9048, 360, 151, 143, 784, 366, 113, 213,\r\n 7809, 985, 1941, 1702, 9580, 749, 12993, 135, 9272, 119,\r\n 1202, 1328, 2909, 7427, 2909, 483, 15079, 6766, 2201, 5754,\r\n 4213, 1266, 642, 119, 1968, 115, 7584, 7124, 2899, 9654,\r\n 151, 143, 3684, 137, 17586, 113, 3151, 113, 193, 4283,\r\n 165, 1035, 1354, 4913, 1621, 9048, 360, 137, 17586, 113,\r\n 119, 7809, 985, 1941, 1702, 1621, 9048, 360, 4913, 16829,\r\n 913, 272, 3694, 2909, 7427, 145, 1723, 20957, 15016, 213,\r\n 11171, 119, 7809, 642, 3761, 188, 164, 4706, 119, 3684,\r\n 8941, 119, 6330, 8076, 2199, 642, 23829, 22462, 30934, 4213,\r\n 1354, 2759, 311, 7809, 5434, 137, 1031, 510, 2603, 5569,\r\n 5434, 137, 1031, 510, 3732, 5569, 5434, 137, 1031, 510,\r\n 3627, 14715, 30951, 4543, 8823, 5066, 3625, 3627, 1701, 7900,\r\n 153, 5066, 3625, 3732, 7559, 127, 3732, 13703, 133, 176,\r\n 11576, 2909, 13703, 133, 1621, 9048, 360, 1723, 5230, 9580,\r\n 749, 12993, 114, 1031, 510, 387, 11993, 189, 22264, 8823,\r\n 143, 6766, 3462, 5622, 27082, 113, 7809, 3132, 1011, 189,\r\n 7825, 8823, 143, 6766, 111, 341, 7124, 2899, 18482, 103]),\r\n 'token_type_ids': tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0]),\r\n 'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1]),\r\n 'labels': tensor(5)}\r\n```\r\nInput texts are emails in italian.\r\n\r\n(the issue appears also with transformers 4.5.1)",
"I am unable to reproduce your bug. Are you sure your data frames don't contain a list of text in one of the line instead of just texts?",
"I found the mistake! I was doing something slightly different from what I wrote, namely\r\n``` \r\nfrom transformers import AutoConfig, TrainingArguments, DataCollatorWithPadding, Trainer\r\n\r\ntrain_dataset=dataset_train,\r\neval_dataset = dataset_val\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir='/trial',\r\n learning_rate=1e-6,\r\n do_train=True,\r\n do_eval=True,\r\n evaluation_strategy='epoch',\r\n num_train_epochs=10,\r\n per_device_train_batch_size=8,\r\n per_device_eval_batch_size=8,\r\n warmup_steps=0,\r\n weight_decay=0.2,\r\n logging_dir=\"./logs\",\r\n)\r\n\r\nnum_labels = len(label_dict)\r\nmodel = AutoModelForSequenceClassification.from_pretrained(model_name,num_labels = num_labels)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=DataCollatorWithPadding(tokenizer),\r\n tokenizer= tokenizer,\r\n train_dataset=train_dataset,\r\n eval_dataset=eval_dataset,\r\n)\r\n```\r\nThe difference is in line 3 and 4, and consequently last two lines. The mistake is the comma at the end of line 3. My bad I did not run the example code I published in the question exactly as it was. I am so sorry, and so upset to have spent a week for a stupid comma.\r\nThanks for the help",
"Oh that's a nasty little bug indeed! Glad you found the problem!"
] | 1,619 | 1,620 | 1,620 | NONE | null | What am I doing wrong?
I encode data with
```
model_name = "dbmdz/bert-base-italian-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case = True)
def encode_data(texts):
return tokenizer.batch_encode_plus(
texts,
add_special_tokens=True,
return_attention_mask=True,
padding = True,
truncation=True,
max_length=200,
return_tensors='pt'
)
```
Then I create my datasets with
```
import torch
class my_Dataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = torch.tensor(labels)
def __getitem__(self, idx):
item = {key: val[idx] for key, val in self.encodings.items()}
item['labels'] = self.labels[idx]
print(item)
return item
def __len__(self):
return len(self.labels)
```
So I have
```
encoded_data_train = encode_data(df_train['text'].tolist())
encoded_data_val = encode_data(df_val['text'].tolist())
encoded_data_test = encode_data(df_test['text'].tolist())
dataset_train = my_Dataset(encoded_data_train, df_train['labels'].tolist())
dataset_val = my_Dataset(encoded_data_val, df_val['labels'].tolist())
dataset_test = my_Dataset(encoded_data_test, df_test['labels'].tolist())
```
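(As a sanity check, individual items can be inspected to confirm they are dictionaries with the expected keys:)
```python
item = dataset_train[0]
print(type(item), list(item.keys()))
# expected: <class 'dict'> ['input_ids', 'token_type_ids', 'attention_mask', 'labels']
```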
Then I initiate my Trainer with
```
from transformers import AutoConfig, TrainingArguments, DataCollatorWithPadding, Trainer
training_args = TrainingArguments(
output_dir='/trial',
learning_rate=1e-6,
do_train=True,
do_eval=True,
evaluation_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
warmup_steps=0,
weight_decay=0.2,
logging_dir="./logs",
)
num_labels = len(label_dict)
model = AutoModelForSequenceClassification.from_pretrained(model_name,num_labels = num_labels)
trainer = Trainer(
model=model,
args=training_args,
data_collator=DataCollatorWithPadding(tokenizer),
tokenizer= tokenizer,
train_dataset=dataset_train,
eval_dataset=dataset_val,
)
```
and finally I train
```
trainer.train()
```
Here is the error I get
```
AttributeErrorTraceback (most recent call last)
<ipython-input-22-5d018b4b061d> in <module>
----> 1 trainer.train()
/opt/conda/lib/python3.8/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs)
1032 self.control = self.callback_handler.on_epoch_begin(self.args, self.state, self.control)
1033
-> 1034 for step, inputs in enumerate(epoch_iterator):
1035
1036 # Skip past any already trained steps if resuming training
/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py in __next__(self)
433 if self._sampler_iter is None:
434 self._reset()
--> 435 data = self._next_data()
436 self._num_yielded += 1
437 if self._dataset_kind == _DatasetKind.Iterable and \
/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py in _next_data(self)
473 def _next_data(self):
474 index = self._next_index() # may raise StopIteration
--> 475 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
476 if self._pin_memory:
477 data = _utils.pin_memory.pin_memory(data)
/opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
45 else:
46 data = self.dataset[possibly_batched_index]
---> 47 return self.collate_fn(data)
/opt/conda/lib/python3.8/site-packages/transformers/data/data_collator.py in __call__(self, features)
116
117 def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
--> 118 batch = self.tokenizer.pad(
119 features,
120 padding=self.padding,
/opt/conda/lib/python3.8/site-packages/transformers/tokenization_utils_base.py in pad(self, encoded_inputs, padding, max_length, pad_to_multiple_of, return_attention_mask, return_tensors, verbose)
2558 if self.model_input_names[0] not in encoded_inputs:
2559 raise ValueError(
-> 2560 "You should supply an encoding or a list of encodings to this method"
2561 f"that includes {self.model_input_names[0]}, but you provided {list(encoded_inputs.keys())}"
2562 )
AttributeError: 'list' object has no attribute 'keys'
```
What am I doing wrong?
I also tried using
```
import torch
from torch.utils.data import TensorDataset
dataset_train = TensorDataset(encoded_data_train['input_ids'], encoded_data_train['attention_mask'], torch.tensor(df_train['labels'].tolist()))
dataset_test = TensorDataset(encoded_data_test['input_ids'], encoded_data_test['attention_mask'], torch.tensor(df_test['labels'].tolist()))
dataset_val = TensorDataset(encoded_data_val['input_ids'], encoded_data_val['attention_mask'], torch.tensor(df_val['labels'].tolist()))
```
getting the same error.
Using:
torch == 1.7.1
transformers == 4.4.2
Thank you!
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11455/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11454 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11454/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11454/comments | https://api.github.com/repos/huggingface/transformers/issues/11454/events | https://github.com/huggingface/transformers/issues/11454 | 867,880,128 | MDU6SXNzdWU4Njc4ODAxMjg= | 11,454 | cannot import name 'set_seed' from 'transformers' | {
"login": "andy311p",
"id": 68938613,
"node_id": "MDQ6VXNlcjY4OTM4NjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/68938613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andy311p",
"html_url": "https://github.com/andy311p",
"followers_url": "https://api.github.com/users/andy311p/followers",
"following_url": "https://api.github.com/users/andy311p/following{/other_user}",
"gists_url": "https://api.github.com/users/andy311p/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andy311p/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andy311p/subscriptions",
"organizations_url": "https://api.github.com/users/andy311p/orgs",
"repos_url": "https://api.github.com/users/andy311p/repos",
"events_url": "https://api.github.com/users/andy311p/events{/privacy}",
"received_events_url": "https://api.github.com/users/andy311p/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Try the following code:\r\n`from transformers.trainer_utils import set_seed`\r\nLet me know if it works!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,622 | 1,622 | NONE | null | i run with transformers==4.5.1 and get the following error:

do you know how to resolve this issue?
thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11454/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11453 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11453/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11453/comments | https://api.github.com/repos/huggingface/transformers/issues/11453/events | https://github.com/huggingface/transformers/pull/11453 | 867,839,513 | MDExOlB1bGxSZXF1ZXN0NjIzNDQ0MDQ0 | 11,453 | Give each hub test a different repo name | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,619 | 1,619 | COLLABORATOR | null | # What does this PR do?
To reduce flakiness in the tests using the hub and to be able to investigate failures more closely, this PR gives each of them a different namespace. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11453/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11453",
"html_url": "https://github.com/huggingface/transformers/pull/11453",
"diff_url": "https://github.com/huggingface/transformers/pull/11453.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11453.patch",
"merged_at": 1619452343000
} |
https://api.github.com/repos/huggingface/transformers/issues/11452 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11452/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11452/comments | https://api.github.com/repos/huggingface/transformers/issues/11452/events | https://github.com/huggingface/transformers/issues/11452 | 867,663,561 | MDU6SXNzdWU4Njc2NjM1NjE= | 11,452 | wav2vec2 doesn't work with torch.distributed.launch & multi GPU | {
"login": "qqpann",
"id": 17402261,
"node_id": "MDQ6VXNlcjE3NDAyMjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/17402261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqpann",
"html_url": "https://github.com/qqpann",
"followers_url": "https://api.github.com/users/qqpann/followers",
"following_url": "https://api.github.com/users/qqpann/following{/other_user}",
"gists_url": "https://api.github.com/users/qqpann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqpann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqpann/subscriptions",
"organizations_url": "https://api.github.com/users/qqpann/orgs",
"repos_url": "https://api.github.com/users/qqpann/repos",
"events_url": "https://api.github.com/users/qqpann/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqpann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"## Error messages in short\r\n### Warning\r\n```\r\nUserWarning: Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior.\r\n warnings.warn(\"Using a non-full backward hook when the forward contains multiple autograd Nodes \"\r\n```\r\n\r\nThis warning may not seem to be the direct reason for the crush, but I encounter this warning in my own scripts as well and ends up the training freeze. \r\n\r\n### Unproceedable Error\r\n```\r\nRuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).\r\n```\r\n\r\nAnd I have no idea how to solve this error.",
"## Non-wav2vec2 case\r\nI also looked up other examples.\r\nhttps://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering\r\nAnd it worked. \r\nSo it seems to me a wav2vec2 model's problem with multi GPU training.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Easiest solution at the moment will be to follow @stas00 PR at #11638 ",
"This error most likely has to do with randomly skipping the layers in LayerDrop - so one gpu skips and another continues -and they get out of sync. \r\n\r\nTry to see if the error goes away if you disable its skipping logic and let all layers run, You can see how I did it \r\n\r\nhttps://github.com/huggingface/transformers/blob/c8acf9219febf534232f01ecc253034e6d3b68c3/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L643-L667\r\n\r\nI don't think LayerDrop they way it's used can be used with more than one GPU.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"FYI, I have the same issue when setting mask_time_prob to 0",
"Hey @voidful - could you add a reproducible code snippet here? :-)",
"Hi, I would love to understand if this issue is being fixed. I am having the same issue with wavlm"
] | 1,619 | 1,705 | 1,625 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.6.0.dev0
- Platform: Linux-4.15.0-140-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@patil-suraj @elgeish @patrickvonplaten
I see the readme is written by @patil-suraj and @elgeish , so any help would be appreciated.
## Information
Model I am using (Bert, XLNet ...): wav2vec2
The problem arises when using:
* [x] the official example scripts: (give details below)
Although the fine-tuning week is over, the example is pretty useful.
I am working on a speech recognition problem and want to train using distributed data parallelism.
I refer to huggingface's official example here:
<https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/FINE_TUNE_XLSR_WAV2VEC2.md>
## To reproduce
Steps to reproduce the behavior:
1. On a clean environment, install the requirements and clone the transformers repository.
2. Run the multi-GPU training command as written in the readme.
3. The bug reproduces.
The code is
```shell
git clone https://github.com/huggingface/transformers.git
cd transformers/examples/research_projects/wav2vec2/
mkdir outputs
python -m torch.distributed.launch \
--nproc_per_node=4 run_common_voice.py \
--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
--dataset_config_name="tr" \
--output_dir=./outputs \
--overwrite_output_dir \
--num_train_epochs="5" \
--per_device_train_batch_size="16" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--save_steps="400" \
--eval_steps="400" \
--logging_steps="400" \
--save_total_limit="3" \
--freeze_feature_extractor \
--feat_proj_dropout="0.0" \
--layerdrop="0.1" \
--gradient_checkpointing \
--fp16 \
--group_by_length \
--do_train --do_eval
```
## Error
The following error occurs.
```text
0%| | 0/275 [00:00<?, ?it/s]/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/torch/nn/modules/module.py:760: UserWarning: Using non-full backward hooks on a Module that does not return a single Tensor or a tuple of Tensors is deprecated and will be removed in future versions. This hook will be missing some of the grad_output. Please use register_full_backward_hook to get the documented behavior.
warnings.warn("Using non-full backward hooks on a Module that does not return a "
/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/torch/nn/modules/module.py:795: UserWarning: Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior.
warnings.warn("Using a non-full backward hook when the forward contains multiple autograd Nodes "
Traceback (most recent call last):
File "run_common_voice.py", line 512, in <module>
main()
File "run_common_voice.py", line 484, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/transformers/trainer.py", line 1118, in train
tr_loss += self.training_step(model, inputs)
File "run_common_voice.py", line 230, in training_step
loss = self.compute_loss(model, inputs)
File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/transformers/trainer.py", line 1548, in compute_loss
outputs = model(**inputs)
File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 692, in forward
if self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Traceback (most recent call last):
File "run_common_voice.py", line 512, in <module>
main()
File "run_common_voice.py", line 484, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/transformers/trainer.py", line 1118, in train
tr_loss += self.training_step(model, inputs)
File "run_common_voice.py", line 230, in training_step
loss = self.compute_loss(model, inputs)
File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/transformers/trainer.py", line 1548, in compute_loss
outputs = model(**inputs)
File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 692, in forward
if self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Killing subprocess 25001
Killing subprocess 25002
Killing subprocess 25003
Killing subprocess 25004
Traceback (most recent call last):
File "/home/aidealab/.conda/envs/hf/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/aidealab/.conda/envs/hf/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/torch/distributed/launch.py", line 340, in <module>
main()
File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/torch/distributed/launch.py", line 326, in main
sigkill_handler(signal.SIGTERM, None) # not coming back
File "/home/aidealab/.conda/envs/hf/lib/python3.7/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/aidealab/.conda/envs/hf/bin/python', '-u', 'run_common_voice.py', '--local_rank=3', '--model_name_or_path=facebook/wav2vec2-large-xlsr-53', '--dataset_config_name=tr', '--output_dir=/home/aidealab/workspace/transformers/examples/research_projects/wav2vec2/outputs', '--overwrite_output_dir', '--num_train_epochs=5', '--per_device_train_batch_size=16', '--learning_rate=3e-4', '--warmup_steps=500', '--evaluation_strategy=steps', '--save_steps=400', '--eval_steps=400', '--logging_steps=400', '--save_total_limit=3', '--freeze_feature_extractor', '--feat_proj_dropout=0.0', '--layerdrop=0.1', '--gradient_checkpointing', '--fp16', '--group_by_length', '--do_train', '--do_eval']' returned non-zero exit status 1.
```
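For reference, the first suggestion in the traceback maps onto the `ddp_find_unused_parameters` training argument; a minimal sketch of passing it is below (whether it actually resolves this crash is unverified — disabling layerdrop with `--layerdrop 0.0` is another variable worth isolating, since each GPU may otherwise skip different layers):
```python
from transformers import TrainingArguments

# Sketch only: surface DDP's unused-parameter detection through the Trainer.
training_args = TrainingArguments(
    output_dir="./outputs",
    ddp_find_unused_parameters=True,  # maps to DDP's find_unused_parameters
)
```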
## Expected behavior
It is expected the script runs without error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11452/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11452/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11451 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11451/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11451/comments | https://api.github.com/repos/huggingface/transformers/issues/11451/events | https://github.com/huggingface/transformers/issues/11451 | 867,615,392 | MDU6SXNzdWU4Njc2MTUzOTI= | 11,451 | mBART and DataCollatorForLanguageModeling: index -1 is out of bounds for dimension 1 with size N | {
"login": "AdrianNunez",
"id": 2635121,
"node_id": "MDQ6VXNlcjI2MzUxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2635121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AdrianNunez",
"html_url": "https://github.com/AdrianNunez",
"followers_url": "https://api.github.com/users/AdrianNunez/followers",
"following_url": "https://api.github.com/users/AdrianNunez/following{/other_user}",
"gists_url": "https://api.github.com/users/AdrianNunez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AdrianNunez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AdrianNunez/subscriptions",
"organizations_url": "https://api.github.com/users/AdrianNunez/orgs",
"repos_url": "https://api.github.com/users/AdrianNunez/repos",
"events_url": "https://api.github.com/users/AdrianNunez/events{/privacy}",
"received_events_url": "https://api.github.com/users/AdrianNunez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @AdrianNunez \r\n\r\nCould you post the for of the `input_ids` and `labels`. As explained in the [docs](https://huggingface.co/transformers/model_doc/mbart.html#training-of-mbart), mBART expects `input_ids` and `labels` in a certain format. \r\n\r\n`labels` are prepared with the format `ids [eos, tgt_lang_code]` and then the `shift_tokens_right` function prepares `decoder_input_ids` by shifting the `labels` to right so `decoder_input_ids` become `[tgt_lang_code] ids [eos]`\r\n\r\nso from the error, it seem that there is either `eos` or `tgt_lang_code` code missing in the labels. But if this is how you want to use it then you should provide the `deocder_input_ids` manually.\r\n\r\nAlso `DataCollatorForLanguageModeling` is not really meant to be used with mBART, it's intended MLM and auto-regressive models like BERT and GPT, so it might not work with mBART which is expected. ",
"Hi @patil-suraj, thank you for your answer. This is an input example:\r\n\r\n```\r\n{'input_ids': tensor([ 33424, 6, 95866, 216479, 104, 3934, 10, 5744, 41,\r\n 22, 6, 4, 10, 23182, 6, 5, 2, 250004,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1]), \r\n'special_tokens_mask': tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), \r\n'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), \r\n'labels': tensor([ 3786, 11281, 293, 13173, 90929, 23, 6, 4, 293,\r\n 19190, 59486, 6, 5, 2, 250019, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 
1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1])}\r\n```\r\n\r\nAnd decoding the input and labels:\r\n\r\n```\r\n['Beat', '', 'rix', 'evolve', 'd', 'into', 'a', 'modern', 'que', 'en', '', ',', 'a', 'professional', '', '.', '</s>', 'en_XX', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', 
'<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>']\r\n['Ze', 'werd', 'een', 'moderne', 'koning', 'in', '', ',', 'een', 'vak', 'vrouw', '', '.', '</s>', 'nl_XX', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>']\r\n```\r\n\r\n> Also DataCollatorForLanguageModeling is not really meant to be used 
with mBART; it's intended for MLM and auto-regressive models like BERT and GPT, so it might not work with mBART, which is expected.\r\n\r\nThank you for the advice. Is there a specific data collator for the BART model family?\r\n\r\nThank you in advance.",
"Thanks, I will take a look.\r\n\r\n> Thank you for the advice. Is there an specific data collator for the BART model family?\r\n\r\nIf you want to train mBART for [translation](https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation) or [summrization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) you could take a look at these examples ",
"> If you want to train mBART for translation or summrization you could take a look at these examples\r\n\r\nThank you for the links and the help. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,619 | 1,622 | 1,622 | NONE | null | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-3.10.0-1062.9.1.el7.x86_64-x86_64-with-centos-7.7.1908-Core
- Python version: 3.6.8
- PyTorch version (GPU?): 1.7.1+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): mBART
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
I would like to finetune mBART with a parallel corpus (custom dataset).
Steps to reproduce the behavior:
1. Load an MBartForConditionalGeneration and an MBartTokenizer with pre-trained weights.
2. Use DataCollatorForLanguageModeling as the data collator.
3. Use a Trainer and a custom torch Dataset class (a minimal sketch of this setup follows).
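A minimal sketch of the setup (the dataset below is a dummy stand-in for my custom parallel corpus, and the checkpoint/output names are placeholders, so this only mirrors the structure of the failing combination):
```python
from torch.utils.data import Dataset
from transformers import (
    DataCollatorForLanguageModeling,
    MBartForConditionalGeneration,
    MBartTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")


class DummyParallelDataset(Dataset):
    """Stand-in for my custom torch Dataset over the parallel corpus."""

    def __init__(self, tokenizer, texts):
        self.encodings = [tokenizer(text, truncation=True) for text in texts]

    def __len__(self):
        return len(self.encodings)

    def __getitem__(self, idx):
        return self.encodings[idx]


train_dataset = DummyParallelDataset(tokenizer, ["A source sentence.", "Another one."])

# Combining this collator with mBART is the setup that triggers the error.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mbart-out"),
    train_dataset=train_dataset,
    data_collator=data_collator,
)
trainer.train()  # this is the call that fails in my setup (stack below)
```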
I noticed the problem arises when combining mBART and DataCollatorForLanguageModeling. I found a similar issue without a specific solution: https://github.com/huggingface/transformers/issues/9417. Here is my error stack:
```
File "main.py", line 99, in <module>
main()
File "main.py", line 85, in main
resume_from_checkpoint=LANG_MODEL_PATH + 'last_model' if os.path.exists(LANG_MODEL_PATH + 'last_model') else None
File "/var/python3envs/transformers-4.5.1/lib/python3.6/site-packages/transformers/trainer.py", line 1120, in train
tr_loss += self.training_step(model, inputs)
File "/var/python3envs/transformers-4.5.1/lib/python3.6/site-packages/transformers/trainer.py", line 1524, in training_step
loss = self.compute_loss(model, inputs)
File "/var/python3envs/transformers-4.5.1/lib/python3.6/site-packages/transformers/trainer.py", line 1556, in compute_loss
outputs = model(**inputs)
File "/var/python3envs/transformers-4.5.1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/var/python3envs/transformers-4.5.1/lib/python3.6/site-packages/transformers/models/mbart/modeling_mbart.py", line 1287, in forward
decoder_input_ids = shift_tokens_right(labels, self.config.pad_token_id)
File "/var/python3envs/transformers-4.5.1/lib/python3.6/site-packages/transformers/models/mbart/modeling_mbart.py", line 74, in shift_tokens_right
decoder_start_tokens = prev_output_tokens.gather(1, index_of_eos).squeeze()
RuntimeError: index -1 is out of bounds for dimension 1 with size 309
33%|███████████████ | 1/3 [00:29<00:58, 29.07s/it]
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11451/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11450 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11450/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11450/comments | https://api.github.com/repos/huggingface/transformers/issues/11450/events | https://github.com/huggingface/transformers/pull/11450 | 867,609,482 | MDExOlB1bGxSZXF1ZXN0NjIzMjU0NzQw | 11,450 | [Black] Pin Version | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"After discussion with @lhoestq & @sgugger, upgrading is the better option => so merge https://github.com/huggingface/transformers/pull/11442",
"Shouldn't we close this ?"
] | 1,619 | 1,619 | 1,619 | MEMBER | null | Pin black until the repo is re-styled with black 21.4b0. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11450/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11450",
"html_url": "https://github.com/huggingface/transformers/pull/11450",
"diff_url": "https://github.com/huggingface/transformers/pull/11450.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11450.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11449 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11449/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11449/comments | https://api.github.com/repos/huggingface/transformers/issues/11449/events | https://github.com/huggingface/transformers/pull/11449 | 867,550,398 | MDExOlB1bGxSZXF1ZXN0NjIzMjA0MTYz | 11,449 | Clarify description of the is_split_into_words argument | {
"login": "kstathou",
"id": 9084998,
"node_id": "MDQ6VXNlcjkwODQ5OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9084998?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kstathou",
"html_url": "https://github.com/kstathou",
"followers_url": "https://api.github.com/users/kstathou/followers",
"following_url": "https://api.github.com/users/kstathou/following{/other_user}",
"gists_url": "https://api.github.com/users/kstathou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kstathou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kstathou/subscriptions",
"organizations_url": "https://api.github.com/users/kstathou/orgs",
"repos_url": "https://api.github.com/users/kstathou/repos",
"events_url": "https://api.github.com/users/kstathou/events{/privacy}",
"received_events_url": "https://api.github.com/users/kstathou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you for the feedback! "
] | 1,619 | 1,619 | 1,619 | CONTRIBUTOR | null | # What does this PR do?
Clarifies the description of the `is_split_into_words` argument, which is used by the tokenizers.
Closes #11333
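For context, here is a minimal illustration of what the argument means (not part of this PR's diff; the model name is just an example):
```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# The input is already split into words, but each word can still be broken
# into several wordpieces, so "split into words" does not mean "tokenized".
encoding = tokenizer(["Hello", "huggingface"], is_split_into_words=True)
print(encoding.tokens())  # e.g. ['[CLS]', 'hello', 'hugging', '##face', '[SEP]']
```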
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Issue: #11333
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? No additional tests were needed, I only clarified docs.
## Who can review?
I initially discussed this with @LysandreJik in #11333. I think @sgugger can review it too!
Thank you for taking the time to review this!
NB the failed CI test seems unrelated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11449/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11449",
"html_url": "https://github.com/huggingface/transformers/pull/11449",
"diff_url": "https://github.com/huggingface/transformers/pull/11449.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11449.patch",
"merged_at": 1619450976000
} |
https://api.github.com/repos/huggingface/transformers/issues/11448 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11448/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11448/comments | https://api.github.com/repos/huggingface/transformers/issues/11448/events | https://github.com/huggingface/transformers/issues/11448 | 867,504,930 | MDU6SXNzdWU4Njc1MDQ5MzA= | 11,448 | Activating gradient checkpointing | {
"login": "ShivanshuPurohit",
"id": 42869065,
"node_id": "MDQ6VXNlcjQyODY5MDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/42869065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShivanshuPurohit",
"html_url": "https://github.com/ShivanshuPurohit",
"followers_url": "https://api.github.com/users/ShivanshuPurohit/followers",
"following_url": "https://api.github.com/users/ShivanshuPurohit/following{/other_user}",
"gists_url": "https://api.github.com/users/ShivanshuPurohit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShivanshuPurohit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShivanshuPurohit/subscriptions",
"organizations_url": "https://api.github.com/users/ShivanshuPurohit/orgs",
"repos_url": "https://api.github.com/users/ShivanshuPurohit/repos",
"events_url": "https://api.github.com/users/ShivanshuPurohit/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShivanshuPurohit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"In some places I think \"activation checkpointing\" is referred to as \"gradient checkpointing\". The former sounds more logical as it's activations that aren't being saved. So it's https://pytorch.org/docs/stable/checkpoint.html.\r\n\r\nIt should be already there, as you can see the `GPTNeoModel` has it setup:\r\n\r\nhttps://github.com/huggingface/transformers/blob/bc2571e61c985ec82819cf01ad038342771c94d0/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L834",
"Ah. Thanks for the clarification."
] | 1,619 | 1,619 | 1,619 | NONE | null | Working on a script which uses `GPTNeoForCausalLM` as the model object. As I understand it, gradient checkpointing requires checkpointing every layer. How do I change `GPTNeoForCausalLM` to incorporate gradient checkpointing, given that it doesn't show the layers explicitly but rather uses
```
self.transformer = GPTNeoModel(config)
self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
```
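For reference, the comments above suggest it is already wired through the config; a minimal sketch of what I understand that to mean (assuming the 4.5-era config flag that `GPTNeoModel.forward` checks — no layer-level changes to `GPTNeoForCausalLM` itself should be needed):
```python
from transformers import GPTNeoForCausalLM

model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
model.config.gradient_checkpointing = True
model.config.use_cache = False  # cached past key/values conflict with recomputation
```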
P.S. I'm working on [visual-grounding](https://github.com/EleutherAI/visual-grounding/tree/main), on which @stas00 has already been a huge help and knows the issues. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11448/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11447 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11447/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11447/comments | https://api.github.com/repos/huggingface/transformers/issues/11447/events | https://github.com/huggingface/transformers/issues/11447 | 867,499,449 | MDU6SXNzdWU4Njc0OTk0NDk= | 11,447 | Google Colab TypeError: expected str, bytes or os.PathLike object, not NoneType | {
"login": "TatProg",
"id": 43710369,
"node_id": "MDQ6VXNlcjQzNzEwMzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/43710369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TatProg",
"html_url": "https://github.com/TatProg",
"followers_url": "https://api.github.com/users/TatProg/followers",
"following_url": "https://api.github.com/users/TatProg/following{/other_user}",
"gists_url": "https://api.github.com/users/TatProg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TatProg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TatProg/subscriptions",
"organizations_url": "https://api.github.com/users/TatProg/orgs",
"repos_url": "https://api.github.com/users/TatProg/repos",
"events_url": "https://api.github.com/users/TatProg/events{/privacy}",
"received_events_url": "https://api.github.com/users/TatProg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"My issue is quite similar to #10756 ",
"Hi! I believe you're using XLM models/tokenizers with XLM-R checkpoints. Have you tried using `XLMRobertaTokenizer` and `XLMRobertaLMHeadModel` instead?\r\n\r\nYou're also trying to load a BERT checkpoint in an XLM tokenizer, this won't work. If you want to load any checkpoint without worrying about the tokenizer/model architecture, I would recommend you use the `Auto*` instead:\r\n\r\n```py\r\nimport torch\r\nfrom transformers import pipeline, AutoTokenizer, AutoModelWithLMHead\r\nmodel_bert = 'bert-base-multilingual-cased'\r\nmodel_roberta = 'xlm-roberta-large'\r\ntokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')\r\nmodel = AutoModelWithLMHead.from_pretrained('xlm-roberta-large')\r\n```",
"Thank you very much. Everything works well now! ",
"> Hi! I believe you're using XLM models/tokenizers with XLM-R checkpoints. Have you tried using `XLMRobertaTokenizer` and `XLMRobertaLMHeadModel` instead?\r\n> \r\n> You're also trying to load a BERT checkpoint in an XLM tokenizer, this won't work. If you want to load any checkpoint without worrying about the tokenizer/model architecture, I would recommend you use the `Auto*` instead:\r\n> \r\n> ```python\r\n> import torch\r\n> from transformers import pipeline, AutoTokenizer, AutoModelWithLMHead\r\n> model_bert = 'bert-base-multilingual-cased'\r\n> model_roberta = 'xlm-roberta-large'\r\n> tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')\r\n> model = AutoModelWithLMHead.from_pretrained('xlm-roberta-large')\r\n> ```\r\n\r\nthank you a lot\r\n"
] | 1,619 | 1,655 | 1,619 | NONE | null | - `transformers` version: 4.5.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
Models I am using are **RoBERTa** (xlm-roberta-large) and **BERT** (bert-base-multilingual-cased)
### The problem arises when using:
In the beginning of April I started getting this error without any changes on my side. I just loaded my old Colab notebook (that worked well few months before that). Now I still getting this error and don't know what to do.
### The tasks I am working on is:
Just playing with models
### Steps to reproduce the behavior:
1. Open Google Colab
2. Run code below
3. Enjoy Error Message
```
!pip3 install transformers
import torch
from transformers import pipeline, XLMTokenizer, XLMWithLMHeadModel
model_bert = 'bert-base-multilingual-cased'
model_roberta = 'xlm-roberta-large'
tokenizer = XLMTokenizer.from_pretrained('xlm-roberta-large')
model = XLMWithLMHeadModel.from_pretrained('xlm-roberta-large')
```
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-5-09548189c94e> in <module>()
4 model_bert = 'bert-base-multilingual-cased'
5 model_roberta = 'xlm-roberta-large'
----> 6 tokenizer = XLMTokenizer.from_pretrained('xlm-roberta-large')
7 model = XLMWithLMHeadModel.from_pretrained('xlm-roberta-large')
2 frames
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1708
1709 return cls._from_pretrained(
-> 1710 resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs
1711 )
1712
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs)
1779 # Instantiate tokenizer.
1780 try:
-> 1781 tokenizer = cls(*init_inputs, **init_kwargs)
1782 except OSError:
1783 raise OSError(
/usr/local/lib/python3.7/dist-packages/transformers/models/xlm/tokenization_xlm.py in __init__(self, vocab_file, merges_file, unk_token, bos_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens, lang2id, id2lang, do_lowercase_and_remove_accent, **kwargs)
642 self.zh_word_tokenizer = None
643
--> 644 with open(vocab_file, encoding="utf-8") as vocab_handle:
645 self.encoder = json.load(vocab_handle)
646 self.decoder = {v: k for k, v in self.encoder.items()}
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
Also, I tried to change
from `tokenizer = XLMTokenizer.from_pretrained('xlm-roberta-large')`
to `tokenizer = XLMTokenizer.from_pretrained(model_roberta)`
or using another model `tokenizer = XLMTokenizer.from_pretrained('bert-base-multilingual-cased')` but got the same error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11447/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11446 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11446/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11446/comments | https://api.github.com/repos/huggingface/transformers/issues/11446/events | https://github.com/huggingface/transformers/issues/11446 | 867,498,123 | MDU6SXNzdWU4Njc0OTgxMjM= | 11,446 | [wav2vec] deepspeed eval bug in the case of >1 gpus | {
"login": "tommy19970714",
"id": 14125841,
"node_id": "MDQ6VXNlcjE0MTI1ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/14125841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tommy19970714",
"html_url": "https://github.com/tommy19970714",
"followers_url": "https://api.github.com/users/tommy19970714/followers",
"following_url": "https://api.github.com/users/tommy19970714/following{/other_user}",
"gists_url": "https://api.github.com/users/tommy19970714/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tommy19970714/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tommy19970714/subscriptions",
"organizations_url": "https://api.github.com/users/tommy19970714/orgs",
"repos_url": "https://api.github.com/users/tommy19970714/repos",
"events_url": "https://api.github.com/users/tommy19970714/events{/privacy}",
"received_events_url": "https://api.github.com/users/tommy19970714/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"deepspeed doesn't work with `autocast`, it has its own way of dealing with mixed precision, if you look in the `trainer.py` it's carefully bypassed. \r\n\r\ndoes the problem go away if you remove `autocast`?",
"@stas00 Thank you for your reply!\r\nWhen I deleted `autocast` and ran it, I got the error `RuntimeError (Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same)`.\r\nI ran it with autocast to eliminate this error.\r\nFYI, when I do not do_eval or use only 1 GPU, the code run fine with autocast and deepspeed.\r\n\r\nThe full text of the error is below.\r\n\r\n```\r\nFile \"run_common_voice.py\", line 512, in <module>\r\n main()\r\n File \"run_common_voice.py\", line 484, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/trainer.py\", line 1240, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"run_common_voice.py\", line 232, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/trainer.py\", line 1667, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/deepspeed/runtime/engine.py\", line 928, in forward\r\n loss = self.module(*inputs, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py\", line 1050, in forward\r\n return_dict=return_dict,\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py\", line 828, in forward\r\n hidden_states = self.feature_extractor(input_values)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py\", line 253, in forward\r\n hidden_states = conv_layer(hidden_states)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_wav2vec2.py\", line 156, in forward\r\n hidden_states = self.conv(hidden_states)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py\", line 263, in forward\r\n return self._conv_forward(input, self.weight, self.bias)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py\", line 260, in _conv_forward\r\n self.padding, self.dilation, self.groups)\r\nRuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same\r\n```\r\n\r\nYou can check this error in the last cell of the following colab.\r\nhttps://colab.research.google.com/drive/1VRCGcnhBlrMFYQ5aaNebucZuja-WB2I2?usp=sharing",
"Thank you for making this reproducible, @tommy19970714 - I haven't worked yet with wav2vec, so I will have a look and get back to you.",
"OK, this is a new type of model that requires a special type of handling.\r\n\r\nThe NLP models get `long` inputs which get converted to the same dtype as the embedding weights, which under deepspeed/fp16 are `float16`. Currently deepspeed does `model.half`.\r\n\r\nThis model however receives inputs that are `float32` and it doesn't check whether the model weights are fp16 or not. Hence the error.\r\n\r\nSo this is one way to fix it:\r\n```\r\ndiff --git a/src/transformers/models/wav2vec2/modeling_wav2vec2.py b/src/transformers/models/wav2vec2/modeling_wav2vec2.py\r\nindex 98123bdd3..639c2bc13 100755\r\n--- a/src/transformers/models/wav2vec2/modeling_wav2vec2.py\r\n+++ b/src/transformers/models/wav2vec2/modeling_wav2vec2.py\r\n@@ -153,7 +153,7 @@ class Wav2Vec2LayerNormConvLayer(nn.Module):\r\n self.activation = ACT2FN[config.feat_extract_activation]\r\n\r\n def forward(self, hidden_states):\r\n- hidden_states = self.conv(hidden_states)\r\n+ hidden_states = self.conv(hidden_states.to(dtype=self.conv.weight.dtype))\r\n\r\n hidden_states = hidden_states.transpose(-2, -1)\r\n hidden_states = self.layer_norm(hidden_states)\r\n```\r\n\r\nThe test I was using is:\r\n```\r\nCUDA_VISIBLE_DEVICES=0 deepspeed --num_gpus=1 \\\r\nexamples/research_projects/wav2vec2/run_common_voice.py \\\r\n--model_name_or_path=\"facebook/wav2vec2-large-xlsr-53\" --dataset_config_name=\"tr\" \\\r\n--output_dir=./wav2vec2-large-xlsr-turkish-demo --overwrite_output_dir --num_train_epochs=\"5\" \\\r\n--per_device_train_batch_size=\"16\" --learning_rate=\"3e-4\" --warmup_steps=\"500\" \\\r\n--evaluation_strategy=\"steps\" --save_steps=\"5\" --eval_steps=\"5\" --logging_steps=\"5\" \\\r\n--save_total_limit=\"3\" --freeze_feature_extractor --feat_proj_dropout=\"0.0\" --layerdrop=\"0.1\" \\\r\n--gradient_checkpointing --fp16 --group_by_length --do_train --do_eval --deepspeed \\\r\ntests/deepspeed/ds_config_zero2.json\r\n```\r\n\r\nCould probably move it to the top-level layer so it'd work in all cases, if this exact path isn't always taken.\r\n\r\nSo this overcomes:\r\n```\r\nRuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same\r\n```\r\n\r\nbut now running into:\r\n```\r\n File \"examples/research_projects/wav2vec2/run_common_voice.py\", line 512, in <module>\r\n main()\r\n File \"examples/research_projects/wav2vec2/run_common_voice.py\", line 484, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py\", line 1240, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"examples/research_projects/wav2vec2/run_common_voice.py\", line 232, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py\", line 1667, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1015, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/mnt/nvme1/code/github/00optimize/deepspeed/deepspeed/runtime/engine.py\", line 942, in forward\r\n loss = self.module(*inputs, **kwargs)\r\n File \"/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1015, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/wav2vec2/modeling_wav2vec2.py\", line 
1076, in forward\r\n loss = F.ctc_loss(\r\n File \"/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/functional.py\", line 2436, in ctc_loss\r\n return torch.ctc_loss(\r\nRuntimeError: \"ctc_loss_cuda\" not implemented for 'Half'\r\n```\r\nso need to look more to see what to do there, probably need to switch to float32 just for that op.\r\n\r\nHowever, it appears that may be this model can't be trained/eval'ed in fp16/mixed precision? \r\n\r\nWhen I run:\r\n```\r\nCUDA_VISIBLE_DEVICES=0 python examples/research_projects/wav2vec2/run_common_voice.py \\\r\n--model_name_or_path=\"facebook/wav2vec2-large-xlsr-53\" --dataset_config_name=\"tr\" \\\r\n--output_dir=./wav2vec2-large-xlsr-turkish-demo --overwrite_output_dir --num_train_epochs=\"5\" \\\r\n--per_device_train_batch_size=\"16\" --learning_rate=\"3e-4\" --warmup_steps=\"500\" \\\r\n--evaluation_strategy=\"steps\" --save_steps=\"5\" --eval_steps=\"5\" --logging_steps=\"5\" \\\r\n--save_total_limit=\"3\" --freeze_feature_extractor --feat_proj_dropout=\"0.0\" --layerdrop=\"0.1\" \\\r\n--gradient_checkpointing --fp16 --group_by_length --do_train --do_eval\r\n```\r\nI see:\r\n```\r\n{'loss': nan, 'learning_rate': 4.2e-06, 'epoch': 0.05} \r\n```\r\n\r\nWe have multiple models that won't train under `fp16`-mixed precision, because they were pretrained in `bfloat16` which doesn't lend to `fp16` numerical range.\r\n\r\nDeepspeed devs are working on adding the fp32 mode (next release hopefully). https://github.com/microsoft/DeepSpeed/pull/1004\r\n\r\np.s. please don't mix `amp` with running modes that don't use `amp` (deepspeed is one of them) ",
"Hi, @stas00 \r\nThanks for your help!\r\n(I am working together with @tommy19970714 )\r\n\r\nI saw your tweet about the new release of version 0.3.16.\r\nhttps://github.com/microsoft/DeepSpeed/releases/tag/v0.3.16\r\nhttps://huggingface.co/transformers/master/main_classes/trainer.html#fp32-precision\r\n\r\nI set the `deepspeed.json` config to `auto`, referring to the article.\r\n\r\n```JSON\r\n{\r\n \"fp16\": {\r\n \"enabled\": \"auto\",\r\n \"loss_scale\": 0,\r\n \"loss_scale_window\": 1000,\r\n \"hysteresis\": 2,\r\n \"min_loss_scale\": 1,\r\n \"opt_level\": \"O3\"\r\n },\r\n \"steps_per_print\": 100,\r\n \"wall_clock_breakdown\": \"false\"\r\n}\r\n```\r\n\r\nIn addition to your suggestion, I made some changes to the model file to convert `log_probs` to float32:\r\n\r\n```python\r\ndiff --git a/src/transformers/models/wav2vec2/modeling_wav2vec2.py b/src/transformers/models/wav2vec2/modeling_wav2vec2.py\r\nindex ba548dc3d..ce2ecdbe3 100755\r\n--- a/src/transformers/models/wav2vec2/modeling_wav2vec2.py\r\n+++ b/src/transformers/models/wav2vec2/modeling_wav2vec2.py\r\n@@ -153,7 +153,7 @@ class Wav2Vec2LayerNormConvLayer(nn.Module):\r\n self.activation = ACT2FN[config.feat_extract_activation]\r\n\r\n def forward(self, hidden_states):\r\n- hidden_states = self.conv(hidden_states)\r\n+ hidden_states = self.conv(hidden_states.to(dtype=self.conv.weight.dtype))\r\n\r\n hidden_states = hidden_states.transpose(-2, -1)\r\n hidden_states = self.layer_norm(hidden_states)\r\n@@ -1071,10 +1071,15 @@ class Wav2Vec2ForCTC(Wav2Vec2PreTrainedModel):\r\n flattened_targets = labels.masked_select(labels_mask)\r\n\r\n log_probs = F.log_softmax(logits, dim=-1).transpose(0, 1)\r\n+ # log_probs = log_probs.to(dtype=torch.float32), # doesn't work here\r\n\r\n with torch.backends.cudnn.flags(enabled=False):\r\n loss = F.ctc_loss(\r\n- log_probs,\r\n+ log_probs.to(dtype=torch.float32),\r\n+ # log_probs.to(dtype=torch.bfloat16),\r\n+ # log_probs,\r\n flattened_targets,\r\n input_lengths,\r\n target_lengths,\r\n```\r\n\r\nThen it somehow worked! \r\nDoes this seem to be a proper fix?\r\nI might not be fully understanding the type differences tho.\r\n\r\nAlso, what else can I do to have it merged into the main branch?\r\nI am willing to contribute, but I am not sure if the code is good enough. \r\nI assume it is missing config handling. ",
"Thank you for suggested adjustments, @qqhann \r\n\r\nFor the proper solution we shouldn't mess with the model ;)\r\n\r\nThe inputs `dtype` change is normally done inside the training loop, because it knows the context of the training. We just didn't need to do it until now, since as I mentioned earlier for NLP models we get the inputs adjusted to the right type through embedding lookup, so this is different.\r\n\r\nOne of the important parts here is to add tests for each of these situations.\r\n\r\nWhat would be really useful is if you could help with creating a tiny wav2vec2 random model, to enable quick functional tests.\r\n\r\nHere are some examples of such scripts:\r\n- https://huggingface.co/stas/mt5-tiny-random/blob/main/mt5-make-tiny-model.py\r\n- https://huggingface.co/stas/t5-very-small-random/blob/main/t5-make-very-small-model.py\r\n\r\nIn both cases it takes a normal model and reshapes it to a much smaller size. Usually the hard part is to figure out the non-model parts - dicts, tokenizers, etc. I don't know yet anything about wav2vec2 so it'd help if you had the know-how to create it.\r\n\r\nThe idea behind a tiny model is that it runs just like a normal model, but its weights are random, it's very small ~5-10MB or even smaller, it loads fast, and of course it produces random results. This is perfect for functional testing.\r\n\r\nIf you're not sure how to approach it, that's alright too. We will figure it out.",
"To update: @patrickvonplaten is kindly going to create a few tiny models and using his tiny `--dataset_name=patrickvonplaten/librispeech_asr_dummy` it should be possible to use `examples/research_projects/wav2vec2/run_asr.py` as the dev and test bench, so when this happens I should be able to complete this work. \r\n\r\nUntil then your workaround is probably good enough if it's working for you.",
"You're welcome to follow my progress at fixing this issue at https://github.com/huggingface/transformers/pull/11638\r\n\r\nZeRO-2 works fully. ZeRO-3 still has one issue, but fp32 works. \r\n\r\nDo try and let me know if you run into any problems.\r\n\r\n",
"@stas00 Thanks for letting me know! I'll keep an eye on it!",
"Update: with deepspeed master both zero-2 and zero-3 now work https://github.com/huggingface/transformers/pull/11638\r\n\r\nIt's ready to be merged.\r\n\r\nPlease give it a try."
] | 1,619 | 1,623 | 1,623 | NONE | null | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-4.15.0-140-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <2,4>
- Using distributed or parallel set-up in script?: <distributed>
### Who can help
@stas00
@patrickvonplaten
@patil-suraj
## Information
I'm working on wav2vec2.0 using the following official script of huggingface.
https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py
I am trying to fine-tune a huggingface model with multiple GPUs using deepspeed.
```
deepspeed --num_gpus=1 run_common_voice.py --deepspeed ds_config.json --do_train --do_eval
```
works, but
```
deepspeed --num_gpus=2 run_common_voice.py --deepspeed ds_config.json --do_train --do_eval
```
freezes at the end of evaluation: the progress bar reaches 100%, but the eval result is never returned.
## To reproduce
This is how to reproduce!
https://colab.research.google.com/drive/1VRCGcnhBlrMFYQ5aaNebucZuja-WB2I2?usp=sharing
Steps to reproduce the behavior:
1. Install deepspeed
2. Add `with autocast():` after line 481 in run_common_voice.py (a sketch of this change follows the steps)
3. Set param: `--deepspeed ds_config.json --do_train --do_eval`
4. Run run_common_voice.py using deepspeed with >1 GPUs
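A hypothetical sketch of the step-2 change (the exact code around line 481 in run_common_voice.py may differ; I added autocast because otherwise the fp16 model weights and the float32 audio inputs clash):
```python
from torch.cuda.amp import autocast
from transformers import Trainer


class CTCTrainerWithAutocast(Trainer):
    # Hypothetical variant of the script's trainer with the forward pass
    # wrapped in autocast; the surrounding logic approximates Trainer 4.5.
    def training_step(self, model, inputs):
        model.train()
        inputs = self._prepare_inputs(inputs)

        with autocast():
            loss = self.compute_loss(model, inputs)

        if self.args.gradient_accumulation_steps > 1 and not self.deepspeed:
            loss = loss / self.args.gradient_accumulation_steps

        if self.deepspeed:
            self.deepspeed.backward(loss)
        else:
            loss.backward()

        return loss.detach()
```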
ds_config has the following parameters.
```ds_config.json
{
"fp16": {
"enabled": "true",
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1,
"opt_level": "O3"
},
"steps_per_print": 100,
"wall_clock_breakdown": "false"
}
```
## Expected behavior
The finetuning eval should be executed without freezing.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11446/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11446/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11445 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11445/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11445/comments | https://api.github.com/repos/huggingface/transformers/issues/11445/events | https://github.com/huggingface/transformers/pull/11445 | 867,460,627 | MDExOlB1bGxSZXF1ZXN0NjIzMTI3NTYz | 11,445 | CLIP | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2669577093,
"node_id": "MDU6TGFiZWwyNjY5NTc3MDkz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PR%20for%20Model%20Addition",
"name": "PR for Model Addition",
"color": "5319e7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"@sgugger \r\n\r\n> is it possible to add a fast version of the tokenizer?\r\n\r\nYes, will add the fast version as well.\r\n\r\n\r\n> why are there two classes for the vision model and the text model? They have the exact same forward, so we should only have one class IMO.\r\n\r\nAdded two versions so that one could just load `CLIPTextModel` or `CLIPVisionModle` directly from `CLIPModel`'s weights. If we just keep single modules then it's not possible to load the weights from the full model because then the keys won't match. The `CLIPModel` has these extra keys `text_model` and `vision_model`, hence the two extra modules with the same keys.\r\n\r\nThis would allow one to use the vision model in some other downstream tasks like adding a liner layer on top or using it as an image encoder in some other settings. Not sure if users will actually want this, but this does not really add much complexity to the code IMO.",
"All green!! \r\nI've addressed most of the suggestions, notably\r\n\r\n- new processor API => as discussed with @patrickvonplaten and @LysandreJik processor's `__call__` now accepts both the text and/or images and returns a single encoding dict. `as_target_processor` is now removed. The API is as follows\r\n\r\n```python3\r\nmodel = CLIPModel.from_pretrained(checkpoint)\r\ninputs = CLIPProcessor(texts=..., images=..., some_other_kwargs)\r\noutputs = model(**inputs)\r\n```\r\n- the `encode_text` and `encode_image` methods are renamed to `get_text_features` and `get_image_features`\r\n- Added fast tokenizer.\r\n\r\nReady for second review @LysandreJik @sgugger @patrickvonplaten ",
"> All green!!\r\n> I've addressed most of the suggestions, notably\r\n> \r\n> * new processor API => as discussed with @patrickvonplaten and @LysandreJik processor's `__call__` now accepts both the text and/or images and returns a single encoding dict. `as_target_processor` is now removed. The API is as follows\r\n> \r\n> ```python\r\n> model = CLIPModel.from_pretrained(checkpoint)\r\n> inputs = CLIPProcessor(texts=..., images=..., some_other_kwargs)\r\n> outputs = model(**inputs)\r\n> ```\r\n> \r\n> * the `encode_text` and `encode_image` methods are renamed to `get_text_features` and `get_image_features`\r\n> * Added fast tokenizer.\r\n> \r\n> Ready for second review @LysandreJik @sgugger @patrickvonplaten\r\n\r\n\r\nHow to use processor in __getitem()__? I got an error\"RuntimeError: stack expects each tensor to be equal size, but got [1, 11] at entry 0 and [1, 13] at entry 1\" ,as follow:\r\ndef __getitem__(self, idx):\r\n img_id = self.img_ids[idx]\r\n # randomly pick one caption from the image captions\r\n text = random.choice(self.img_id_to_captions[img_id])\r\n img_filename = self.img_id_to_filename[img_id]\r\n img_path = op.join(self.img_dir, img_filename)\r\n img = Image.open(img_path) \r\n input = self.processor(text = text, images = img, return_tensors = \"pt\", padding = True)\r\n return input\r\nI thought processor might need other args, inherited from pretraintokenizerbase,such as padding.But I couldn't find it at processor's __call__ in doc.",
"Hi @lycfight could you please open an issue with a minimal code snippet so we could take a look. Thanks :) ",
"> Hi @lycfight could you please open an issue with a minimal code snippet so we could take a look. Thanks :)\r\n\r\nof course"
] | 1,619 | 1,630 | 1,620 | MEMBER | null | # What does this PR do?
This PR adds the [CLIP](https://github.com/openai/CLIP) model.
CLIP is a multi-modal vision+language model which uses a transformer model for encoding both the images and text.
- The model here is designed such that both `CLIPTextModel` and `CLIPVisionModel` can be loaded independently, and composed together to get the `CLIPModel`.
- Both `CLIPTextModel` and `CLIPVisionModel` use the shared encoder class `CLIPEncoder`.
- The config classes are also kept separate, i.e. `CLIPTextConfig` and `CLIPVisionConfig`. These could be merged into one config class, but then we would have to add two arguments for each config value, i.e. `text_hidden_size` for the text model, `vision_hidden_size` for the vision model, etc.
One issue here is that when we load an individual model, like `CLIPTextModel`, using the weights of the whole `CLIPModel`, the config ends up containing both the text and vision config dicts. This does not cause any issue, but it could be confusing to look at.
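For illustration, the loading pattern this design enables looks like this (the checkpoint id is a placeholder, since this PR predates any published hub checkpoints):
```python
from transformers import CLIPModel, CLIPTextModel, CLIPVisionModel

# "openai/clip-vit-base-patch32" is a placeholder checkpoint id.
full_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

# The sub-models can be loaded independently from the same full-model weights.
text_model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")
vision_model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
```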
One important thing to note here is that CLIP's tokenizer does not have a pad token defined for it; the original implementation uses 0 as the `pad_token_id` to pad the text, but the token associated with id 0 is not a pad token. So here, to be able to do padding, I've added `pad_token_id` as a `property` which returns 0. I would be happy to hear if there is some other way to achieve this.
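Concretely, the workaround is just the following (shown on a dummy class so the snippet stands alone; the real property lives on the CLIP tokenizer):
```python
class _PadWorkaroundDemo:
    # Illustrates the shape of the workaround only; not the actual tokenizer.
    @property
    def pad_token_id(self):
        # CLIP pads with id 0 even though the token at id 0 is not a pad token.
        return 0


assert _PadWorkaroundDemo().pad_token_id == 0
```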
Also, I've added a processor class here, but I'm not sure if we really need it for this model. We could easily use the feature extractor for the vision model and the tokenizer for the text model.
Would love your review of the design @LysandreJik, @patrickvonplaten, @sgugger. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11445/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11445",
"html_url": "https://github.com/huggingface/transformers/pull/11445",
"diff_url": "https://github.com/huggingface/transformers/pull/11445.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11445.patch",
"merged_at": 1620807496000
} |
https://api.github.com/repos/huggingface/transformers/issues/11444 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11444/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11444/comments | https://api.github.com/repos/huggingface/transformers/issues/11444/events | https://github.com/huggingface/transformers/pull/11444 | 867,435,956 | MDExOlB1bGxSZXF1ZXN0NjIzMTA3MDMw | 11,444 | Variable Correction for Consistency in Distillation Example | {
"login": "jaimeenahn",
"id": 32367255,
"node_id": "MDQ6VXNlcjMyMzY3MjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/32367255?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaimeenahn",
"html_url": "https://github.com/jaimeenahn",
"followers_url": "https://api.github.com/users/jaimeenahn/followers",
"following_url": "https://api.github.com/users/jaimeenahn/following{/other_user}",
"gists_url": "https://api.github.com/users/jaimeenahn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaimeenahn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaimeenahn/subscriptions",
"organizations_url": "https://api.github.com/users/jaimeenahn/orgs",
"repos_url": "https://api.github.com/users/jaimeenahn/repos",
"events_url": "https://api.github.com/users/jaimeenahn/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaimeenahn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,619 | 1,619 | CONTRIBUTOR | null | As the error comes from the inconsistency between the variable for the number of GPUs in the parser ('gpus') and its actual usage in the train.py script ('n_gpu'), this correction makes the example work. | {
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue) #11441
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->@VictorSanh
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11444/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11444",
"html_url": "https://github.com/huggingface/transformers/pull/11444",
"diff_url": "https://github.com/huggingface/transformers/pull/11444.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11444.patch",
"merged_at": 1619458248000
} |
https://api.github.com/repos/huggingface/transformers/issues/11443 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11443/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11443/comments | https://api.github.com/repos/huggingface/transformers/issues/11443/events | https://github.com/huggingface/transformers/issues/11443 | 867,408,685 | MDU6SXNzdWU4Njc0MDg2ODU= | 11,443 | BERT model gets fairly random results | {
"login": "dorooddorood606",
"id": 79288051,
"node_id": "MDQ6VXNlcjc5Mjg4MDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/79288051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorooddorood606",
"html_url": "https://github.com/dorooddorood606",
"followers_url": "https://api.github.com/users/dorooddorood606/followers",
"following_url": "https://api.github.com/users/dorooddorood606/following{/other_user}",
"gists_url": "https://api.github.com/users/dorooddorood606/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorooddorood606/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorooddorood606/subscriptions",
"organizations_url": "https://api.github.com/users/dorooddorood606/orgs",
"repos_url": "https://api.github.com/users/dorooddorood606/repos",
"events_url": "https://api.github.com/users/dorooddorood606/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorooddorood606/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please give us the whole command you are running as we can't reproduce without it.\r\nAre you properly setting the seed? Depending on the seed used, the results differ a lot on MRPC, since it's a tiny dataset. This is known and there have been [published papers](https://arxiv.org/pdf/2002.06305.pdf) on this.",
"Hi,\r\nthank you the issue resolved with moving the codes to version 4.6.dev\r\nthanks "
] | 1,619 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.0
- Platform: linux
- Python version: 3.8
- PyTorch version (GPU?): 1.8
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@LysandreJik, @sgugger
## Information
Model I am using: BERT-base-uncased with run_glue.py on the MRPC dataset. There are substantial differences in the results each time I run the code, sometimes as large as 5 percent. So much variation makes the results unreliable, which is quite a big issue. Thanks for your help on this. This might be a bug in the trainer or in the model itself.
first run:
```
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> epoch = 3.0
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> eval_average_metrics = 0.8300182784978398
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> eval_mem_cpu_alloc_delta = 0MB
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> eval_mem_cpu_peaked_delta = 2MB
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> eval_mem_gpu_alloc_delta = 0MB
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> eval_mem_gpu_peaked_delta = 264MB
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> mrpc_eval_accuracy = 0.799
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> mrpc_eval_combined_score = 0.83
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> mrpc_eval_f1 = 0.861
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> mrpc_eval_loss = 0.4643
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> mrpc_eval_runtime = 0:00:00.38
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:03:22,711 >> mrpc_eval_samples_per_second = 529.617
```
second run:
```
[INFO|trainer_pt_utils.py:722] 2021-04-26 10:02:59,294 >> ***** test metrics *****
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> epoch = 3.0
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> eval_average_metrics = 0.8090236094437775
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> eval_mem_cpu_alloc_delta = 0MB
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> eval_mem_cpu_peaked_delta = 2MB
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> eval_mem_gpu_alloc_delta = 0MB
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> eval_mem_gpu_peaked_delta = 264MB
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> mrpc_eval_accuracy = 0.7745
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> mrpc_eval_combined_score = 0.809
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> mrpc_eval_f1 = 0.8435
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,294 >> mrpc_eval_loss = 0.4631
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,295 >> mrpc_eval_runtime = 0:00:00.35
[INFO|trainer_pt_utils.py:727] 2021-04-26 10:02:59,295 >> mrpc_eval_samples_per_second = 567.515
```
## To reproduce
Steps to reproduce the behavior:
Please run the default run_glue.py script on MRPC (a seed-pinning sketch is shown below).
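For reference, a minimal sketch of pinning the random sources before fine-tuning; `set_seed` is the library helper the example scripts already call, and the surrounding training code is assumed:

```python
from transformers import set_seed

# Seeds Python's random module, NumPy, and PyTorch (CPU and CUDA) in one call.
set_seed(42)
# ... then build the model and Trainer exactly as run_glue.py does. With the
# same seed, hardware, and library versions, metrics should be repeatable;
# MRPC is tiny, so *different* seeds can still vary by several points.
```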
## Expected behavior
The model should produce the same results each time it runs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11443/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11442 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11442/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11442/comments | https://api.github.com/repos/huggingface/transformers/issues/11442/events | https://github.com/huggingface/transformers/pull/11442 | 867,345,187 | MDExOlB1bGxSZXF1ZXN0NjIzMDMwNDMw | 11,442 | Upgrade Black to version 21.4b0 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing this PR in favor of https://github.com/huggingface/transformers/pull/11450"
] | 1,619 | 1,619 | 1,619 | MEMBER | null | This PR reformats all files with Black's newest version 21.4b0. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11442/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11442",
"html_url": "https://github.com/huggingface/transformers/pull/11442",
"diff_url": "https://github.com/huggingface/transformers/pull/11442.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11442.patch",
"merged_at": 1619437834000
} |
https://api.github.com/repos/huggingface/transformers/issues/11441 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11441/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11441/comments | https://api.github.com/repos/huggingface/transformers/issues/11441/events | https://github.com/huggingface/transformers/issues/11441 | 867,329,902 | MDU6SXNzdWU4NjczMjk5MDI= | 11,441 | Minor error on example distillation script | {
"login": "jaimeenahn",
"id": 32367255,
"node_id": "MDQ6VXNlcjMyMzY3MjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/32367255?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaimeenahn",
"html_url": "https://github.com/jaimeenahn",
"followers_url": "https://api.github.com/users/jaimeenahn/followers",
"following_url": "https://api.github.com/users/jaimeenahn/following{/other_user}",
"gists_url": "https://api.github.com/users/jaimeenahn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaimeenahn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaimeenahn/subscriptions",
"organizations_url": "https://api.github.com/users/jaimeenahn/orgs",
"repos_url": "https://api.github.com/users/jaimeenahn/repos",
"events_url": "https://api.github.com/users/jaimeenahn/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaimeenahn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,619 | 1,619 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: Linux-4.18.0-147.el8.x86_64-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.7.10
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: True
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> I think @VictorSanh might help since it's about a minor bug in distillation.
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
examples/research_projects/distillation
The tasks I am working on is:
* [] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
It's not a GLUE/SQuAD task; I'm using the official BookCorpus and Wikipedia datasets from `datasets`.
## To reproduce
Steps to reproduce the behavior:
1. Convert the concatenation of the BookCorpus and Wikipedia text from `datasets` into a `txt` file.
2. Separate the documents with `\n`.
3. Run the scripts following *A. Preparing the data*.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
Traceback (most recent call last):
File "train.py", line 322, in <module>
main()
File "train.py", line 223, in main
init_gpu_params(args)
File "/volume/compression_and_distillation/transformers/examples/distillation/utils.py", line 55, in init_gpu_params
if params.n_gpu <= 0:
AttributeError: 'Namespace' object has no attribute 'n_gpu'
Traceback (most recent call last):
File "train.py", line 322, in <module>
main()
File "train.py", line 223, in main
init_gpu_params(args)
File "/volume/compression_and_distillation/transformers/examples/distillation/utils.py", line 55, in init_gpu_params
if params.n_gpu <= 0:
AttributeError: 'Namespace' object has no attribute 'n_gpu'
Traceback (most recent call last):
File "train.py", line 322, in <module>
main()
File "train.py", line 223, in main
init_gpu_params(args)
File "/volume/compression_and_distillation/transformers/examples/distillation/utils.py", line 55, in init_gpu_params
if params.n_gpu <= 0:
AttributeError: 'Namespace' object has no attribute 'n_gpu'
Traceback (most recent call last):
File "train.py", line 322, in <module>
main()
File "train.py", line 223, in main
init_gpu_params(args)
File "/volume/compression_and_distillation/transformers/examples/distillation/utils.py", line 55, in init_gpu_params
if params.n_gpu <= 0:
AttributeError: 'Namespace' object has no attribute 'n_gpu'
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/venv/distill/lib/python3.7/site-packages/torch/distributed/launch.py", line 261, in <module>
main()
File "/home/venv/distill/lib/python3.7/site-packages/torch/distributed/launch.py", line 257, in main
cmd=cmd)
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The error occurs because of an inconsistent variable name: `utils.py` reads `params.n_gpu`, while the argument parser in `train.py` registers the flag as `gpus`. A minimal sketch of the fix is shown below.
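For illustration, a hedged sketch of the rename in the example's argument parser; the help text is illustrative and `init_gpu_params` is the helper from the distillation example's `utils.py`:

```python
import argparse

from utils import init_gpu_params  # distillation example helper

parser = argparse.ArgumentParser()
# Register the attribute name that utils.py actually reads (was "--gpus").
parser.add_argument("--n_gpu", type=int, default=1,
                    help="Number of GPUs in the node; 0 means CPU.")
args = parser.parse_args()
init_gpu_params(args)  # utils.py checks params.n_gpu, so the names now match
```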
Renaming `gpus` to `n_gpu` when parsing easily solves it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11441/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11440 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11440/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11440/comments | https://api.github.com/repos/huggingface/transformers/issues/11440/events | https://github.com/huggingface/transformers/issues/11440 | 867,321,669 | MDU6SXNzdWU4NjczMjE2Njk= | 11,440 | Feedback whilst resuming | {
"login": "david-waterworth",
"id": 5028974,
"node_id": "MDQ6VXNlcjUwMjg5NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/david-waterworth",
"html_url": "https://github.com/david-waterworth",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user}",
"gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions",
"organizations_url": "https://api.github.com/users/david-waterworth/orgs",
"repos_url": "https://api.github.com/users/david-waterworth/repos",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"received_events_url": "https://api.github.com/users/david-waterworth/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This has been fixed by #11324. If you use a source install, you will be able to use (or in this case see) this feature :-)"
] | 1,619 | 1,619 | 1,619 | NONE | null | # 🚀 Feature request
Add some sort of progress bar during resumption of training from a checkpoint
## Motivation
I resumed training from a checkpoint midway through a 100-epoch run. The progress bar sat on zero for quite some time, and I could see CPU activity but no GPUs were active. Eventually the progress bar jumped to 50% and training resumed; I assume it was doing some sort of initialisation.
It would be nice if there were some indication that progress is being made, as it's not obvious what's occurring.
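For context, a hedged sketch of what the resume call looks like; the model, dataset, and checkpoint path are illustrative and assumed to be defined as in the original run:

```python
from transformers import Trainer, TrainingArguments

# model and train_dataset come from the original training script (assumed here)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="output", num_train_epochs=100),
    train_dataset=train_dataset,
)
# The silent, CPU-only phase happens inside train(): the dataloader is
# fast-forwarded past the batches already consumed before the checkpoint,
# and only then does GPU work (and the visible progress bar) resume.
trainer.train(resume_from_checkpoint="output/checkpoint-50000")
```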
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11440/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11439 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11439/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11439/comments | https://api.github.com/repos/huggingface/transformers/issues/11439/events | https://github.com/huggingface/transformers/pull/11439 | 867,264,526 | MDExOlB1bGxSZXF1ZXN0NjIyOTYwOTk5 | 11,439 | [BigBird] enable BigBirdForQuestionAnswering to return pooler output | {
"login": "thevasudevgupta",
"id": 53136577,
"node_id": "MDQ6VXNlcjUzMTM2NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thevasudevgupta",
"html_url": "https://github.com/thevasudevgupta",
"followers_url": "https://api.github.com/users/thevasudevgupta/followers",
"following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}",
"gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions",
"organizations_url": "https://api.github.com/users/thevasudevgupta/orgs",
"repos_url": "https://api.github.com/users/thevasudevgupta/repos",
"events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/thevasudevgupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,619 | 1,619 | 1,619 | CONTRIBUTOR | null | # What does this PR do?
This PR will enable `BigBirdForQuestionAnswering` to return the pooler output. This can be useful for tasks that involve predicting a category along with the answer, e.g. the [Natural Questions dataset](https://huggingface.co/datasets/natural_questions).
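For illustration, a hedged usage sketch; the checkpoint is the public BigBird base model (so the QA head is randomly initialized), and `pooler_output` is the attribute this PR adds to the question-answering output:

```python
import torch
from transformers import BigBirdTokenizer, BigBirdForQuestionAnswering

tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")
model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-roberta-base")

inputs = tokenizer(
    "Who wrote Hamlet?",
    "Hamlet is a tragedy written by William Shakespeare.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)

start = outputs.start_logits.argmax(-1)  # answer-span prediction, as before
end = outputs.end_logits.argmax(-1)
pooled = outputs.pooler_output           # new: [CLS] summary usable by an
                                         # answer-category head (e.g. for NQ)
```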
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11439/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11439",
"html_url": "https://github.com/huggingface/transformers/pull/11439",
"diff_url": "https://github.com/huggingface/transformers/pull/11439.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11439.patch",
"merged_at": 1619420753000
} |