repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 11,419 | closed | Parameter in `DebertaV2Tokenizer.__init__()` without documentation: `split_by_punct` | The `split_by_punct` parameter in `DebertaV2Tokenizer.__init__()` should be documented:
https://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/deberta_v2/tokenization_deberta_v2.py#L89
@BigBird01 : could you please check this? | 04-24-2021 21:33:12 | 04-24-2021 21:33:12 | @BigBird01 : could you please help with this?<|||||>I believe this parameter additionally splits the text on punctuation, as can be seen from the method it's calling:
https://github.com/huggingface/transformers/blob/afe479adb5474250215438fe27db9dc9dbbbde09/src/transformers/models/deberta_v2/tokenization_deberta_v2.py#L446-L464
I think this docstring should help out:
https://github.com/huggingface/transformers/blob/afe479adb5474250215438fe27db9dc9dbbbde09/src/transformers/models/deberta_v2/tokenization_deberta_v2.py#L500-L512
To see it in practice, you can try it with:
```py
>>> tok = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
>>> tok._tokenizer._run_split_on_punc("Hey, how's he doing?")
['Hey', ',', ' how', "'", 's he doing', '?']
```<|||||>Thanks @LysandreJik for the explanation. Yes, it will split input sentences by punctuation and then tokenize the segments with the SPM tokenizer. We found this can help performance on the **SQuAD** task, but not on other tasks, e.g. MNLI. So we set it to false by default.
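For reference, enabling it looks something like this (a rough sketch; the exact output tokens depend on the SPM vocabulary, so they are not shown here):
```py
from transformers import DebertaV2Tokenizer

# `split_by_punct` is forwarded by `from_pretrained` to `DebertaV2Tokenizer.__init__`
tok_default = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
tok_punct = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v2-xlarge", split_by_punct=True)

text = "Hey, how's he doing?"
print(tok_default.tokenize(text))
print(tok_punct.tokenize(text))  # punctuation is split off before SPM tokenization
```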
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,418 | closed | [Deepspeed] ZeRO-Infinity integration plus config revamp | This PR:
- [x] integrates ZeRO-Infinity
- [x] revamps the configuration process: instead of the behavior that was confusing to users (sometimes-we-override-values, sometimes-we-don't), all values are now explicit unless they are set to `auto`; then and only then will the Trainer set them to the correct or recommended values.
- [x] massively revamps the way the configuration is done, now splitting the config parsing into 2 phases. Phase 1 happens at the very end of `TrainingArguments`, and then a weak ref global module var is created which can then be queried by various `transformers` components w/o needing to change any APIs. The global object cleanly goes away when `TrainingArguments` goes out of scope. Users no longer need to make any special calls - they just need to ensure the `TrainingArguments` object is created before `model.from_pretrained()` is called (like we do in all examples; see the usage sketch after this list). Phase 2 happens during `train`, where we get a few variables that weren't there during `TrainingArguments`, so the config gets completed there.
- [x] ds_config is now passed to `zero.Init` in `from_pretrained` under ZeRO-3 since it now needs several configuration values - this is in preparation for fp32 and other important features.
- [x] adds new tests for ZeRO-Inf and configuration.
- [x] adds a minor fix in `get_regression_trainer`
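As a usage sketch of the ordering requirement mentioned above (the model name and config path are just placeholders):
```py
from transformers import AutoModelForSeq2SeqLM, Trainer, TrainingArguments

# 1. create TrainingArguments first - this runs phase 1 of the ds_config processing
#    and sets up the weak ref global config object
training_args = TrainingArguments(output_dir="output_dir", deepspeed="tests/deepspeed/ds_config_zero3.json")

# 2. only now call from_pretrained() so that under ZeRO-3 the ds_config can be passed to zero.Init
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# 3. phase 2 of the config processing completes inside train()
trainer = Trainer(model=model, args=training_args)
```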
If you're testing this PR please make sure you install deepspeed master branch:
```
git clone https://github.com/microsoft/DeepSpeed
cd DeepSpeed
pip install -e .
```
## Important changes
Please note a major change is that now only params that are set to `auto` will get automatically overridden/set to the correct/recommended values; everything else is left as is. This is to avoid the previously confusing behavior of never being quite sure what gets overridden and what doesn't, despite the logger reporting what it did override. The new behavior is completely unambiguous.
See: examples
- [zero2](https://github.com/huggingface/transformers/blob/0f221d2cce751182c455295ef2c03a2c1bd3d66b/tests/deepspeed/ds_config_zero2.json)
- [zero3](https://github.com/huggingface/transformers/blob/0f221d2cce751182c455295ef2c03a2c1bd3d66b/tests/deepspeed/ds_config_zero3.json)
It's ready to release now. 0.3.15 just has a debug print that is loud, fixed in their master.
<!--
TODO:
The following is probably best saved for the next PR as it'd probably require waiting for deepspeed==0.3.16
- [ ] may be revamp the resume to avoid first loading the model weights. can do it in another PR.
PRs waiting to be integrated before this PR can be merged:
- [ ] zero.init() ds_config arg - not yet created
- [ ] new release is needed 0.3.16
-->
@sgugger | 04-24-2021 21:07:34 | 04-24-2021 21:07:34 | > Great job @stas00! I like the solution you picked to be able to start initializing part of deepspeed inside the training arguments. Does it fully solve the chicken and egg problem you had?
Thank you!
For the current needs of `zero.Init()`, yes! As long as the user separates creating `TrainingArguments` from creating the `Trainer` and calling `from_pretrained` in between, which is what all examples do. It took quite a lot of trial and error, but I think it's pretty clean now.
Splitting the configuration processing in 2 stages helped a lot too.
I hope that using a weak ref global object is a good solution, since w/o it we would somehow have to make the framework aware of deepspeed in multiple places and somehow pass the config to it - most likely by sticking the DS object into the model object. The neat thing is that, being a weak ref, it goes away automatically as soon as the `TrainingArguments` object is `gc`'ed. If down the road we discover a better way, nothing prevents us from switching to it.
I will clean up all the XXX's, I was originally planning to wait for 0.3.16 and include all the fixes there, but last time it took more than 10 days for them to make a new release, so I decided it'd be better for users to be able to use this code already, and will make another PR with extra changes next for deepspeed==0.3.16.
<|||||>The weakref is okay by me. The only other way to achieve this would be to have some "singleton" class where all instances share the same state, but the weakref is actually more adapted in this case. |
transformers | 11,417 | closed | Enable option for subword regularization in more tokenizers. | see https://github.com/huggingface/transformers/pull/11149#pullrequestreview-643686428
## To-do
### `AlbertTokenizer`
- [x] add `sp_model_kwargs` param with test
- [x] add pickle support with test
- [x] remove obscure function argument called `sample`
- [x] check
- [x] refactor test to follow DRY
### `BarthezTokenizer`
- [x] add `sp_model_kwargs` param with test
- [x] add pickle support with test
- [x] check
- [x] refactor test to follow DRY
- <s>remove obscure function argument called `sample`</s>
### `BertGenerationTokenizer`
- [x] add `sp_model_kwargs` param with test
- [x] add pickle support with test
- [x] remove obscure function argument called `sample`
- [x] check
- [x] refactor test to follow DRY
### `BigBirdTokenizer`
- [x] add `sp_model_kwargs` param with test
- [x] add pickle support with test
- [x] remove obscure function argument called `sample`
- [x] check
- [x] refactor test to follow DRY
### `CamembertTokenizer`
- [x] add `sp_model_kwargs` param with test
- [x] add pickle support with test
- [x] check
- [x] refactor test to follow DRY
- <s>remove obscure function argument called `sample`</s>
### `DebertaV2Tokenizer`
- [x] add `sp_model_kwargs` param with test
- [x] add pickle support with test
- [x] check
- [x] refactor test to follow DRY
- <s>remove obscure function argument called `sample`</s>
### `M2M100Tokenizer`
- [x] add `sp_model_kwargs` param with test
- [x] add pickle support with test
- [x] check
- [x] refactor test to follow DRY
- <s>remove obscure function argument called `sample`</s>
### `MarianTokenizer` - has src and target tokenizer
- [x] add `sp_model_kwargs` param with test
- [x] add pickle support with test
- [x] check
- [x] refactor test to follow DRY
- <s>remove obscure function argument called `sample`</s>
### `MBart50Tokenizer`
- [x] add `sp_model_kwargs` param with test
- [x] add pickle support with test
- [x] check
- [x] refactor test to follow DRY
- <s>remove obscure function argument called `sample`</s>
### `PegasusTokenizer`
- [x] add `sp_model_kwargs` param with test
- [x] add pickle support with test
- [x] remove obscure function argument called `sample`
- [x] check
- [x] refactor test to follow DRY
### `ReformerTokenizer`
- [x] add `sp_model_kwargs` param with test
- [x] add pickle support with test
- [x] remove obscure function argument called `sample`
- [x] check
- [x] refactor test to follow DRY
### `Speech2TextTokenizer`
- [x] add `sp_model_kwargs` param with test
- [x] add pickle support with test
- [x] check
- [x] refactor test to follow DRY
- <s>remove obscure function argument called `sample`</s>
### `T5Tokenizer`
- [x] add `sp_model_kwargs` param with test
- [x] add pickle support with test
- [x] remove obscure function argument called `sample`
- [x] check
- [x] refactor test to follow DRY
### `XLMProphetNetTokenizer`
- [x] add `sp_model_kwargs` param with test
- [x] add pickle support with test
- [x] check
- [x] refactor test to follow DRY
- <s>remove obscure function argument called `sample`</s>
### `XLNetTokenizer`
- [x] add `sp_model_kwargs` param with test
- [x] add pickle support with test
- [x] remove obscure function argument called `sample`
- [x] check
- [x] refactor test to follow DRY
### `XML RoBERTa`
- [x] refactor test to follow DRY
### General
- [x] check if we changed all tokenizers
- [x] add typing
- [x] check if tok. is used in other functions
- [x] also add changes to XLM RoBERTa tokenizer
### After review
- [x] fix type comments with default `None`
- [x] possibly remove `test_sentencepiece_skip_back_convert_check` | 04-24-2021 19:43:14 | 04-24-2021 19:43:14 | I found this somewhat obscure function argument called `sample` in `AlbertTokenizer`:
https://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/albert/tokenization_albert.py#L189
It seems to enable subword regularization but with fixed parameters for `nbest_size` and `alpha`.
https://github.com/google/sentencepiece/blob/351600c2971401f4e849147579aa1b5d42f614e1/python/src/sentencepiece/__init__.py#L110-L111
I would remove that `sample` parameter and replace it with my solution, which is more flexible (see the sketch below). But that would mean a breaking change. As an alternative I could add my solution but keep the `sample` argument, though that would add more complexity to the code.
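For reference, enabling subword regularization would then look something like this (the parameter values are just an example):
```py
from transformers import AlbertTokenizer

# explicit SentencePiece sampling parameters via the new `sp_model_kwargs`
# instead of the fixed values hidden behind `sample=True`
tokenizer = AlbertTokenizer.from_pretrained(
    "albert-base-v2",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)
print(tokenizer.tokenize("New York"))  # tokenization is now stochastic
```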
What do you think? @sgugger @LysandreJik @stefan-it
PS: Same here:
https://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/bert_generation/tokenization_bert_generation.py#L113
https://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/big_bird/tokenization_big_bird.py#L143
https://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/pegasus/tokenization_pegasus.py#L169
https://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/reformer/tokenization_reformer.py#L109
https://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/t5/tokenization_t5.py#L237
https://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/xlnet/tokenization_xlnet.py#L191<|||||>This argument is not called from anywhere so it's only accessible if users somehow rewrote the tokenize method to pass it along to the private method `_tokenize`. Therefore I think it's fine to do the breaking change and clean up the code using `sample=True`, but let's see what @patrickvonplaten and @LysandreJik think before going forward (note that Lysandre is on vacation until this Wednesday so he'll reply at the end of the week :-) ).<|||||>Yes, removing the `sample` and cleaning up the `_tokenize()` method sounds good to me. As @sgugger said, it is private and nowhere is a `sample` or a `**kwargs` passed to that method.<|||||>> Yes, removing the `sample` and cleaning up the `_tokenize()` method sounds good to me. As @sgugger said, it is private and nowhere is a `sample` or a `**kwargs` passed to that method.
Agree!<|||||>rebase upstream/master done<|||||>> Yes, LGTM! Thanks a lot.
Hey @LysandreJik - this is not done yet. Please do not merge now. ;-)<|||||>Oh, I was misled! There are indeed a few tokenizers remaining. Thank you for letting me know!<|||||>This is ready to be merged from my point of view.<|||||>Can you take care of the merge conflicts? Will review tomorrow :-)<|||||>> Can you take care of the merge conflicts? Will review tomorrow :-)
@sgugger All conflicts resolved & green CI
<|||||>> Great work on the tests, this is great. The tests could indeed be refactored in a common test if you feel like it.
I will refactor the tests the next days. Shame on me that I criticized the lack of DRY in the tokenizers but did not follow the DRY principle in the tests.<|||||>This is strange:
`FAILED tests/test_hf_api.py::HfApiEndpointsTest::test_list_repos_objs - reque...`
See here: https://app.circleci.com/pipelines/github/huggingface/transformers/23276/workflows/bf1ad505-efdc-4394-8852-a07702b9f5be/jobs/209965/parallel-runs/0/steps/0-108
Will trigger CI again...<|||||>@LysandreJik @sgugger Tests are refactored and DRY now. CI is green again.
IMO ready for merge.
Maybe you want to investigate the flaky test (see my comment above). |
transformers | 11,416 | closed | Transformers Pegasus - how do I fine tune another language? | How do I fine-tune it for another language? Can anyone advise? | 04-24-2021 18:18:06 | 04-24-2021 18:18:06 | Hi @seregadgl20-oss
It would be nice if you use the forum (https://discuss.huggingface.co/) to ask such general questions. Issues are for bugs and feature requests. Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,415 | closed | Roberta Tokenizer cannot handle inputs with `<mask>` token | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: macOS-11.1-arm64-arm-64bit
- Python version: 3.9.1
- PyTorch version (GPU?): 1.8.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Description
It's a bizarre bug. When I encode a string with a `<mask>` token and decode it immediately, the space in front of the `<mask>` token disappears.
```
>>> from transformers import AutoTokenizer
>>> tokenizer=AutoTokenizer.from_pretrained('roberta-base')
>>> tokenized_inputs=tokenizer('I <mask> you')['input_ids']
>>> tokenizer.decode(tokenized_inputs)
'<s>I<mask> you</s>'
>>>
```
It leads to many problems. For example, `pipeline('fill-mask')` cannot provide valid results.
```
>>> from transformers import pipeline
>>> nlp=pipeline('fill-mask')
>>> nlp.tokenizer
PreTrainedTokenizerFast(name_or_path='distilroberta-base', vocab_size=50265, model_max_len=512, is_fast=True, padding_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>', 'sep_token': '</s>', 'pad_token': '<pad>', 'cls_token': '<s>', 'mask_token': AddedToken("<mask>", rstrip=False, lstrip=True, single_word=False, normalized=False)})
>>> nlp('app <mask>')
{'score': 0.09366267919540405,
'sequence': 'appeal',
'token': 18696,
'token_str': 'eal'}
```
It seems that the mask-filling process omits the space, which is not what we expected (we expect the token filled in at the mask to be a separate word rather than a sub-word, since there is a space as a separator).
Does anyone notice the same issue? | 04-24-2021 16:21:14 | 04-24-2021 16:21:14 | Hi, the tokenizer of RoBERTa is a byte-level BPE tokenizer. It is a subword tokenizer.
In your example with `app ` it completes the word, but there also exist full words within its dictionary, as you can see when the input sequence is appropriate:
```py
>>> nlp('Hello, how are you <mask> sir?')
[
{'sequence': 'Hello, how are you doing sir?', 'score': 0.6784416437149048, 'token': 608, 'token_str': ' doing'},
{'sequence': 'Hello, how are you feeling sir?', 'score': 0.08236288279294968, 'token': 2157, 'token_str': ' feeling'},
{'sequence': 'Hello, how are you, sir?', 'score': 0.06469670683145523, 'token': 6, 'token_str': ','},
{'sequence': 'Hello, how are you looking sir?', 'score': 0.04527667537331581, 'token': 546, 'token_str': ' looking'},
{'sequence': 'Hello, how are you going sir?', 'score': 0.02970985323190689, 'token': 164, 'token_str': ' going'}]
```<|||||>@LysandreJik Thanks for your reply! I think this behavior is proper during the pretraining process: roberta completes or predicts the next token when we give the following input `app<mask>`. But I believe this is not expected when we call `fill-mask` pipeline.
Take `app <mask>` as an example: there is a space in `app <mask>`, which means the `app` token is a complete token and RoBERTa should predict the next token. However, after tokenizing, the space vanishes. To achieve what I described, maybe the only way is to rewrite the predict function to find those tokens with a space in front of them (rough sketch below).
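Something along these lines is what I have in mind as a workaround (just a rough sketch; it relies on the fact that whole-word candidates returned by the fast tokenizer start with a space):
```py
from transformers import pipeline

nlp = pipeline("fill-mask", model="distilroberta-base")
results = nlp("app <mask>")
# keep only whole-word candidates, i.e. predicted tokens that start with a space
whole_words = [r for r in results if r["token_str"].startswith(" ")]
print(whole_words)
```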
P.S. Adding an extra space (`app  <mask>`) does not bring the expected results either.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,414 | closed | checkpointing is not still covering all cases | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.0
- Platform: linux
- Python version: 3.8
- PyTorch version (GPU?): 1.8
- Tensorflow version (GPU?): 1.8
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger
## Information
Hi
Some time ago you improved the checkpointing in the huggingface repo, but it does not cover all cases. Here is one example: let's assume a user wraps a model in a class as below:
```
model = .... // one of huggingface model here
# lets wrap it
model = intrinsic_dimension_said(model, intrinsic_dim, training_args.output_dir, set())
```
I include the wrapper class for completeness, but feel free to ignore it; this can be any class:
```
class IntrinsicDimensionLight:
def __init__(self, module: nn.Module, intrinsic_dimension: int, output_dir, str_filter: Set[str] = set(), said=False, random_seed=1997):
torch.manual_seed(random_seed)
np.random.seed(random_seed)
self.initial_value_path = os.path.join(output_dir, "initial_value")
self.fastfood_params_path = os.path.join(output_dir, "fastfood_params")
self.name_base_localname = []
self.initial_value = dict()
self.fastfood_params = {}
self.said = said
self.said_size = len(list(module.named_parameters()))
if self.said:
assert intrinsic_dimension > self.said_size
intrinsic_dimension -= self.said_size
self.intrinsic_parameter = nn.Parameter(
torch.zeros((intrinsic_dimension)).cpu())
module.register_parameter(
"intrinsic_parameter", self.intrinsic_parameter)
setattr(module, "intrinsic_parameter", self.intrinsic_parameter)
length = 0
for name, param in module.named_parameters():
if param.requires_grad and all([x not in name for x in str_filter]):
length += 1
self.initial_value[name] = v0 = (
param.clone().detach().requires_grad_(False).to(self.intrinsic_parameter.device)
)
DD = np.prod(v0.size())
self.fastfood_params[name] = fastfood_vars(
DD, self.intrinsic_parameter.device)
base, localname = module, name
while "." in localname:
prefix, localname = localname.split(".", 1)
base = base.__getattr__(prefix)
self.name_base_localname.append((name, base, localname))
if "intrinsic_parameter" not in name:
param.requires_grad_(False)
if said:
self.intrinsic_parameter_said = nn.Parameter(
torch.ones((length)).cpu())
module.register_parameter(
"intrinsic_parameter_said", self.intrinsic_parameter_said)
setattr(module, "intrinsic_parameter_said",
self.intrinsic_parameter_said)
# If this is created before, here we save it and here it loads it.
if not self.is_projection_params_saved():
self.save_required_params()
self.load_required_params()
def is_projection_params_saved(self):
return os.path.isfile(self.fastfood_params_path) and\
os.path.isfile(self.initial_value_path)
def load_required_params(self):
# check and if intrinsic porjection mats exists load them.
if self.is_projection_params_saved():
self.fastfood_params = torch.load(self.fastfood_params_path)
self.initial_value = torch.load(self.initial_value_path)
def save_required_params(self):
# Saves the generates projection params.
torch.save(self.initial_value, self.initial_value_path)
torch.save(self.fastfood_params, self.fastfood_params_path)
def move_to(self, x_tuple, target):
if isinstance(x_tuple, torch.Tensor):
return x_tuple.to(target)
a = []
for x in x_tuple:
if isinstance(x, torch.Tensor):
a.append(x.to(target))
else:
a.append(x)
return tuple(a)
def requires_to(self, x_tuple, target):
if isinstance(x_tuple, torch.Tensor):
x_tuple.requires_grad_(target)
for x in x_tuple:
if isinstance(x, torch.Tensor):
x.requires_grad_(target)
def fastfood_vars_requires_grad_(self, requires_grad):
for item in self.fastfood_params.items():
self.requires_to(item, requires_grad)
def __call__(self, module, inputs):
index = 0
with torch.enable_grad():
for name, base, localname in self.name_base_localname:
if localname == "intrinsic_parameter":
continue
self.initial_value[name] = self.initial_value[name].to(
getattr(base, localname))
device_dtype = getattr(base, localname).dtype
init_shape = self.initial_value[name].size()
DD = np.prod(init_shape)
self.fastfood_params[name] = self.move_to(
self.fastfood_params[name], module.intrinsic_parameter.device)
# Fastfood transform te replace dence P
ray = fastfood_torched(module.intrinsic_parameter, DD, self.fastfood_params[name]).view(
init_shape
)
if self.said:
ray = ray * self.intrinsic_parameter_said[index]
param = (self.initial_value[name] + ray).to(device_dtype)
delattr(base, localname)
setattr(base, localname, param)
index += 1
@staticmethod
def apply(module, intrinsic_dimension, output_dir, str_filter=set(), said=False):
for k, hook in module._forward_pre_hooks.items():
if isinstance(hook, IntrinsicDimensionLight) and hook.name == name:
raise RuntimeError("Cannot register two intrinsic dimension hooks on "
"the same parameter {}".format(name))
fn = IntrinsicDimensionLight(
module, intrinsic_dimension, output_dir, str_filter, said)
module.register_forward_pre_hook(fn)
return fn
@staticmethod
def apply_with_tensor(module, intrinsic_vector, str_filter=set()):
assert isinstance(intrinsic_vector,
torch.Tensor) and intrinsic_vector.ndim == 1
for k, hook in module._forward_pre_hooks.items():
if isinstance(hook, IntrinsicDimensionLight) and hook.name == name:
raise RuntimeError("Cannot register two intrinsic dimension hooks on "
"the same parameter {}".format(name))
fn = IntrinsicDimensionLight(
module, intrinsic_vector.size(0), str_filter, False)
fn.intrinsic_parameter = intrinsic_vector
module.register_forward_pre_hook(fn)
return fn
def intrinsic_dimension(module, intrinsic_dimension, output_dir, str_filter):
IntrinsicDimensionLight.apply(
module, intrinsic_dimension, output_dir, str_filter, False)
return module
def intrinsic_dimension_said(module, intrinsic_dimension, output_dir, str_filter):
IntrinsicDimensionLight.apply(
module, intrinsic_dimension, output_dir, str_filter, True)
return module
```
Now, if you look at the model after it is reloaded in the Trainer, it is reloaded without this wrapper class. Could you also cover the case where a user wraps a model in a class?
thanks
## Expected behavior
The wrapped model needs to be reloaded correctly when resuming from a checkpoint. | 04-24-2021 15:04:18 | 04-24-2021 15:04:18 |
transformers | 11,413 | closed | Allow adding custom logits processors in the `generate` method | # 🚀 Feature request
Hello,
I'd like to request a new feature in the `generate` method of the `GenerationMixin` class from `generation_utils`. Specifically, I'd like a feature that allows a user to pass custom LogitsProcessors by adding a new argument `logit_processors: Optional[LogitsProcessorList] = None` to the `generate` method.
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
I'd like to run generation on a pre-trained model, and I'd like to modify its output logits according to my custom function before the search or sampling or whatever is used. I think that this could be a common use case for controlled natural generation because one often wants to implement some trivial restrictions over generated logits.
Here is an example of how this could be used:
```
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer, LogitsProcessor, LogitsProcessorList
class MyLogitsProcessor(LogitsProcessor):
    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor):
        # `something_useful()` is a placeholder for the custom logic; the processor
        # is expected to return the (possibly modified) scores tensor
        scores = something_useful(scores)
        return scores
model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
logit_processors = LogitsProcessorList([MyLogitsProcessor()])
input_ids = tokenizer('This dog is cute', return_tensors='pt').input_ids
model.generate(input_ids=input_ids, logit_processors=logit_processors)
```
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
I have no experience in open source, but I can try to help if you need a hand. I think that the general approach to implementing this is to do the following:
1) Add the `logit_processors: Optional[LogitsProcessorList] = None` argument to the `generate` method,
2) Add the same argument to the `_get_logits_processor` method of `GenerationMixin` and append the custom logits processors after all the other logits processors are in place (rough sketch after this list).
3) Pass the custom logits processors to every call of `_get_logits_processor` in the `generate` method.
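To make point 2 a bit more concrete, here is a hypothetical helper (the name is made up, just for illustration):
```py
from transformers import LogitsProcessorList


def merge_logits_processors(default_processors, custom_processors=None):
    # append user-supplied processors after the built-in ones that
    # `_get_logits_processor` already creates
    processors = LogitsProcessorList(default_processors)
    if custom_processors is not None:
        processors.extend(custom_processors)
    return processors
```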
What do you think?
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| 04-24-2021 12:07:11 | 04-24-2021 12:07:11 | I think I could submit a pull request to this, if I had
1) feedback on the idea (do you think it makes sense to do that?)
2) a little help changing existing tests and/or implementing new tests to reflect the change.
Also, maybe one would need the new argument to be `Optional[LogitsProcessor]` instead of `Optional[LogitsProcessorList]`. Because `LogitsProcessotList` is a subclass of `LogitsProcessor`, this would allow adding both a list of logits processors and a single logits processor.
What do you folks think? Would you accept this pull request (after maybe giving me some tips related to the tests)? <|||||>Hey @wadimiusz,
Sorry to only come back to you now! I think in general, I'm fine with such an extension. The only problem I see is that a user could add a custom logits processor that already exists (*e.g.* a user would create his own `LengthPenaltyLogitsProcessor`) and also pass `length_penalty=...` . But even in this case I guess we could just apply both processors and there shouldn't be a big problem.
=> So I'm ok with this extension. Interested in hearing your thoughts about this @patil-suraj @Narsil<|||||>I think it's a very nice idea !.
The problem you mention @patrickvonplaten I think will be relevant mostly for power users (that want to add a LogitsProcessor) so they should be careful in terms of how they use this tool. I guess we could emphasis this in the documentation for the `generate` function, that the simpler arguments are preferred for non advanced usage.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@wadimiusz Is there any update on this? I think it would be a great addition.<|||||>Hi @ScientiaEtVeritas, the feature seems not hard to implement and I think I already have the code somewhere, but it would require nice and thorough tests that I don't have the time to write right now. If you could help me with the tests, we could submit a pull request together :)<|||||>There used to be a PR that might be used as a starting point:
https://github.com/huggingface/transformers/pull/12219
Thanks if you can work on this ! |
transformers | 11,412 | closed | Small bug while converting wav2vec2 model trained using fairseq to huggingface | Hi,
I was trying to convert a wav2vec2 model trained using fairseq to have support with HuggingFace but there is a small error in inference.
When I use the code below:
```
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# load pretrained model
processor = Wav2Vec2Processor.from_pretrained('hf/output')
model = Wav2Vec2ForCTC.from_pretrained("hf/output")
# load audio
audio_input, sample_rate = sf.read('004-M-23_001.wav')
# pad input values and return pt tensor
input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
# INFERENCE
# retrieve logits & take argmax
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe
transcription = processor.decode(predicted_ids[0])
```
I get this as the output:
```
<s>ह<s>ॉ<s>ट<s>ल<s> <s>र<s>ॉ<s>य<s>ल<s> <s>ह<s>े<s>र<s>ि<s>ट<s>े<s>ज<s> क<s>े<s> <s>च<s>ी<s>ज<s> <s>क<s>े<s> <s>ए<s>क<s> <s>ब<s>ह<s>ु<s>त<s> <s>अ<s>च्<s>छ<s>ा<s> <s>ह<s>ै<s> <s>क<s>्य<s>ा<s> <s>
```
but, I should ideally get this:
```
हॉटल रॉयल हेरिटेज के चीज के एक बहुत अच्छा है क्या
```
I can easily solve this by using:
```
print(transcription.replace('<s>', ''))
```
But if I deploy the model, the output of inference will contain ```<s>``` as I cannot change the output of the deployed model.
Can you please let me know if I am making any mistake in the conversion process.
My vocab.json looks like this:
```
{"<s>": 0, "<pad>": 1, "</s>": 2, "<unk>": 3, "|": 4, "0": 5, "1": 6, "2": 7, "3": 8, "4": 9, "5": 10, "6": 11, "7": 12, "8": 13, "9": 14, "ँ": 15, "ं": 16, "ः": 17, "अ": 18, "आ": 19, "इ": 20, "ई": 21, "उ": 22, "ऊ": 23, "ऋ": 24, "ए": 25, "ऐ": 26, "ऑ": 27, "ओ": 28, "औ": 29, "क": 30, "ख": 31, "ग": 32, "घ": 33, "ङ": 34, "च": 35, "छ": 36, "ज": 37, "झ": 38, "ञ": 39, "ट": 40, "ठ": 41, "ड": 42, "ढ": 43, "ण": 44, "त": 45, "थ": 46, "द": 47, "ध": 48, "न": 49, "प": 50, "फ": 51, "ब": 52, "भ": 53, "म": 54, "य": 55, "र": 56, "ल": 57, "व": 58, "श": 59, "ष": 60, "स": 61, "ह": 62, "ा": 63, "ि": 64, "ी": 65, "ु": 66, "ू": 67, "ृ": 68, "ॅ": 69, "े": 70, "ै": 71, "ॉ": 72, "ो": 73, "ौ": 74, "्": 75, "क़": 76, "ख़": 77, "ग़": 78, "ज़": 79, "ड़": 80, "ढ़": 81, "फ़": 82, "य़": 83}
``` | 04-24-2021 07:11:56 | 04-24-2021 07:11:56 | Hi @harveenchadha
if you pass `skip_special_tokens=True` to the `decode` method, it will skip all the special tokens.<|||||>Hi Suraj,
Thanks! That works. But in the inference service that is deployed to test in browser, how will this change?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,411 | closed | What do these model parameters mean? |
"params_classifier.dense.weight"
"params_classifier.dense.bias"
"params_classifier.out_proj.weight"
"params_classifier.out_proj.bias"
Could someone please briefly explain these parameters to me? Using the DeBERTa model | 04-24-2021 05:58:45 | 04-24-2021 05:58:45 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,410 | closed | wrong parentclass in documentation | # What does this PR do?
The documentation linked to `PreTrainedTokenizerFast` as the parent class, but it should be the slow tokenizer class.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 04-24-2021 00:04:35 | 04-24-2021 00:04:35 | Merged #11410<https://github.com/huggingface/transformers/pull/11410> into master. |
transformers | 11,409 | closed | How to use GPU when running run_summarization.py | When I run run_summarization.py on my computer and my computer has 2 GPUs. But the code didn't run with gpu cuda and is very very slow. Can anyone tell me how to use transformers with GPU? Thank you very much! | 04-23-2021 23:53:35 | 04-23-2021 23:53:35 | Hi @liubest
Please make sure that your torch installation can detect the GPU. All scripts will run on GPU if it's available.
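For example, a quick check:
```py
import torch

print(torch.cuda.is_available())  # should print True
print(torch.cuda.device_count())  # should print 2 on your machine
```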
If you want to run on multiple GPUs, follow the docs [here](https://github.com/huggingface/transformers/tree/master/examples/pytorch#distributed-training-and-mixed-precision).
And if you just want to use a single GPU, you could select the device by setting `CUDA_VISIBLE_DEVICES=0`, which will select the first GPU.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,408 | open | [CI] solving the pytest crashing and hanging CI job | So as of recent we have the `run_tests_torch` CI job randomly and frequently failing.
We couldn't find any fault with any tests because there is never a traceback, just hanging `pytest` that sends no output.
This usually is a symptom that the process used more resources than it was allowed and it was killed - of course the python interpreted doesn't get a chance to make a peep - so no traceback. e.g. on colab processes get killed in the same way.
## Diagnostics
1. Go to the CI report and "rerun job with SSH"
it then enables SSH and gives you the cmd to access the CI instance. Use the instructions it shows you under `Enable SSH` to ssh to the instance.
when done remember to exit the ssh shells and `Cancel Job`, since otherwise the instance will continue running at $$$.
2. CI doesn't run docker with the `--privileged` flag, so most normal system tools are disabled and it's almost impossible to debug anything. Things like `dmesg` or `/var/sys/log` are not there; you can `sudo`, but you can hardly do anything with it.
Ideally in such situations it'd be a good idea to switch from `docker` back to `machine` where we would have full root access.
3. Resource limit
```
resource_class: xlarge
```
as of this writing gives you 16GB RAM.
This is very confusing since when you log into the instance there are 70GB of memory reported in `top`. And if you try to monitor %MEM you get a very misleading low usage: it reports usage out of 70GB, not out of the cgroups memory limit of 16GB.
How do we know the real limit:
```
$ cat /sys/fs/cgroup/memory/memory.limit_in_bytes | perl -ne 'print $_ / 2**30'
16
```
Yup, 16GB
4. Now it's very difficult to measure how much memory several forked processes use together, you can't use `top` for that.
I had 2 consoles open, one with `top` and another running `pytest -n 8`, which I started manually.
I did notice that once all 8 worker processes were around 2-2.5GB RSS, after a while one of the workers crashed.
Then I found this handy tool thanks to https://unix.stackexchange.com/a/169129/291728
```
apt install smem
```
```
circleci@fc02c746bf66:~$ smem -t
PID User Command Swap USS PSS RSS
6 circleci /bin/sh 0 88 88 92
1 circleci /sbin/docker-init -- /bin/s 0 48 123 740
17567 circleci /usr/bin/time -v python -m 0 96 145 1216
17568 circleci tee tests_output.txt 0 140 225 1828
495 circleci /bin/bash -eo pipefail -c w 0 292 526 1692
1511 circleci -bash 0 608 1066 3140
476 circleci -bash 0 620 1079 3148
18170 circleci /usr/bin/python /usr/bin/sm 0 13160 13286 15424
7 circleci /bin/circleci-agent --confi 0 29424 29424 29428
17569 circleci python -m pytest -n 8 --dis 0 151172 163118 254684
17588 circleci /usr/local/bin/python -u -c 0 348860 371932 526452
17594 circleci /usr/local/bin/python -u -c 0 1863416 1887735 2048128
17579 circleci /usr/local/bin/python -u -c 0 2028784 2052674 2210400
17591 circleci /usr/local/bin/python -u -c 0 2031872 2056217 2214712
17574 circleci /usr/local/bin/python -u -c 0 2098124 2122054 2282392
17585 circleci /usr/local/bin/python -u -c 0 2226080 2247464 2401880
17582 circleci /usr/local/bin/python -u -c 0 2226864 2249367 2404832
17597 circleci /usr/local/bin/python -u -c 0 2643552 2665199 2818968
-------------------------------------------------------------------------------
18 1 0 15663200 15861722 17219156
```
The PSS column seems to be the one to do correct totals on, so I did:
```
watch -n 1 'smem -t | tail -1'
```
and indeed, once the total PSS hit ~16GB pytest crashed.
The failure we get is intermittent because the tests are run randomly and sometimes we get 4 "fatter" tests run concurrently, and at all other times when it succeeds we are lucky not to hit the bad combination.
I tried to switch to:
```
resource_class: 2xlarge
```
which would give us 32GB, but apparently we aren't allowed to do so and need to ask for a special permission from CircleCI admins.
5. What happens to the hanging processes? Clearly `pytest` doesn't recover from the crash. I think it can recover from other failures of its workers, but not when the kernel nukes one of its workers.
When the resource limit gets hit, all but one of the workers were hanging in some strange place:
```
Thread 0x00007f65d91bb700 (most recent call first):
File "/home/circleci/.local/lib/python3.7/site-packages/execnet/gateway_base.py", line 400 in read
File "/home/circleci/.local/lib/python3.7/site-packages/execnet/gateway_base.py", line 432 in from_io
File "/home/circleci/.local/lib/python3.7/site-packages/execnet/gateway_base.py", line 967 in _thread_receiver
File "/home/circleci/.local/lib/python3.7/site-packages/execnet/gateway_base.py", line 220 in run
File "/home/circleci/.local/lib/python3.7/site-packages/execnet/gateway_base.py", line 285 in _perform_spawn
```
If I look in `top`, all but 1 of the pytest workers stop working, blocking on the above.
I figured that out by adding to `tests/conftest.py`:
```
import faulthandler
faulthandler.dump_traceback_later(20, repeat=True)
```
So now every 20 secs I was getting tb reports on where things were hanging...
But I'm not 100% sure it's why they are hanging, I will have to spend more time with it if we really want to understand why the other workers stop processing. So please don't take it as a truth, it's just one of the possibilities to check. But since it doesn't help our situation to understand why they can't recover I'm not going to waste time on it.
## Summary
1. we probably have a very small leak that grows over hundreds of tests as the memory usage slowly, but consistently goes up
2. 16GB is just enough for our `pytest -n 4` - probably 75% of the time, until we add more tests
3. so we either need to ask for the 2xlarge instance, or use `-n 3`
4. ~probably it'd be a good idea to add~ (see next comment)
```
apt install time
```
and run `pytest` with:
```
/usr/bin/time -v python -m pytest ...
```
which will give us an indepth resource usage report - so overtime we should see if our test suite consumes more and more resources.
@LysandreJik, @sgugger
| 04-23-2021 22:32:39 | 04-23-2021 22:32:39 | `/usr/bin/time -v`'s output for `-n3`:
```
Command being timed: "python -m pytest -n 3 --dist=loadfile -s --make-reports=tests_torch ./tests/"
User time (seconds): 1507.88
System time (seconds): 56.49
Percent of CPU this job got: 261%
Elapsed (wall clock) time (h:mm:ss or m:ss): 9:59.30
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 7038804
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 1
Minor (reclaiming a frame) page faults: 11559434
Voluntary context switches: 3084008
Involuntary context switches: 171112
Swaps: 0
File system inputs: 16456
File system outputs: 3261440
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
```
with `-n 4`:
```
Command being timed: "python -m pytest -n 4 --dist=loadfile -s --make-reports=tests_torch ./tests/"
User time (seconds): 1533.02
System time (seconds): 56.00
Percent of CPU this job got: 306%
Elapsed (wall clock) time (h:mm:ss or m:ss): 8:37.98
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 5797344
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 1231
Minor (reclaiming a frame) page faults: 11090301
Voluntary context switches: 2680563
Involuntary context switches: 433387
Swaps: 0
File system inputs: 277920
File system outputs: 3261200
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
```
So clearly this is not right: max RSS is smaller for `-n 4` than for `-n 3`, so it appears not to include the `pytest` workers. The online information has very conflicting statements about whether forked processes are accounted for or not.
So we can't use this one.
<|||||>Thank you for this very in-depth analysis of the situation. It would probably be helpful to have a visualization of each test and how much memory it takes, it could help in singling out memory outliers; and it could also help to detect whether we actually have a memory leak.<|||||>Yes, this is all a big project. Just little time to do it.
I think the low-hanging fruit is to use `flake-finder` on some tests and see if the memory grows, to first identify whether we have a leak. Normally unittest refuses to run the same test more than once.
https://huggingface.co/transformers/testing.html#repeat-tests
So maybe even an exhaustive search (a rough sketch of the measurement part is below):
for each test, record mem usage while we:
- run test once
- run test 10 times
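Something along these lines for the per-test measurement (a rough `psutil`-based sketch, e.g. dropped into `tests/conftest.py`):
```py
import os

import psutil
import pytest


@pytest.fixture(autouse=True)
def report_rss_delta(request):
    proc = psutil.Process(os.getpid())
    rss_before = proc.memory_info().rss
    yield
    rss_delta = proc.memory_info().rss - rss_before
    print(f"{request.node.nodeid}: RSS delta {rss_delta / 2**20:.1f} MiB")
```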
I will see if I find some resources to try that.<|||||>This pytest plugin looked promising: https://github.com/CFMTech/pytest-monitor but I can't get it to work.
According to docs you just:
```
pip install pytest-monitor
```
and then run `pytest` normally, and it should create a sqlite db with all the data in it, but when I open it I get no test records in it:
```
pytest tests/test_logging.py
sqlite3 .pymon
sqlite> select * from TEST_METRICS;
```
It should print the resource stats here, but it doesn't.
I do get the sessions recorded, but it's not what we want:
```
sqlite> select * from EXECUTION_CONTEXTS;
266c14dea4f9f8a6dae5e46be30e70b3|12|4367.12125|x86_64|Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz|128696|hope|x86_64|64bit|Linux - 5.4.0-70-generic|3.8.8 (default, Feb 24 2021, 21:46:12)
[GCC 7.3.0]
faa034f4c783dc951159d07212d3a200|12|4300.12775|x86_64|Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz|128696|hope|x86_64|64bit|Linux - 5.4.0-70-generic|3.8.8 (default, Feb 24 2021, 21:46:12)
[GCC 7.3.0]
```
Perhaps I'm missing something - read through the whole long docs at https://pytest-monitor.readthedocs.io/en/latest/index.html but I don't see that I'm doing anything wrong.
**edit**: it doesn't work with unittest - too bad it doesn't mention that fact on their website. I created a normal test and then it records data.
<|||||>A little bit at a time I've been trying to work on this issue. At the moment trying to find a reliable way to take the measurements.
I have thought of at least 3 main ways the leak could be occurring.
1. leak in some API
2. leak in badly written test that doesn't clean up after itself - so some object is created in the test and somehow it doesn't get destroyed (see also 3)
3. "functional leak" as a side-effect of loading extra libraries - say we have 10 tests each loading 10 different libraries - each test will then make `pytest` grow just because it loaded something new - which is a variation on (2) - but how could a test unload the libraries it loaded. It'd be very inefficient practically.
Detection:
1) should be easy to detect by re-running the same test and noticing the memory grow. My current algorithm is to run the test once, ignore the memory usage because it could be loading a new module/lib, and run it a second time to notice any difference.
2 and 3) these are difficult to make sense of and thus much harder to catch (2), because by just looking at the numbers one doesn't know whether it was just a new library being loaded, or some object not being cleaned up after the test.
transformers | 11,407 | closed | Add basic support for FP16 in SageMaker model parallelism | # What does this PR do?
**Note:** This is not full support yet as SageMaker Model Parallelism does not support gradient clipping, so a user has to change the default of `max_grad_norm` to 0 if they want to use it.
Otherwise, this adds support for mixed precision training in SageMaker Model Parallelism mode. The script has been tested on `run_glue.py` without error (as long as the caveat above is respected).
A defensive check is added so that the user gets an obvious error message if they don't change the default value of `max_grad_norm`. | 04-23-2021 21:44:45 | 04-23-2021 21:44:45 | |
transformers | 11,406 | closed | Pass along seed to DistributedSampler | # What does this PR do?
This PR passes along the seed to `DistributedSampler` otherwise it always uses 0 for setting its RNG. See #11389 for more context.
Fixes #11389 | 04-23-2021 21:35:35 | 04-23-2021 21:35:35 | Should also be passed to the constructor for `DistributedLengthGroupedSampler`?<|||||>Good point, added it! |
transformers | 11,405 | closed | Default to accuracy metric in run_glue_no_trainer | # What does this PR do?
In `run_glue_no_trainer`, the metric is not properly initialized when no task name is passed; this PR fixes that.
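The gist of the fix, as a hypothetical helper (the actual diff may differ slightly):
```py
from datasets import load_metric


def get_metric(task_name=None):
    if task_name is not None:
        return load_metric("glue", task_name)
    # default to plain accuracy when the user passes their own csv/json files;
    # previously `metric` was simply left undefined in that case
    return load_metric("accuracy")
```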
Fixes #11403 | 04-23-2021 17:57:26 | 04-23-2021 17:57:26 | |
transformers | 11,404 | closed | Documentation 404 error | For this page: https://huggingface.co/transformers/examples.html
several (all?) of the links in the "The Big Table of Tasks" section are getting a 404 error.
Ex: https://github.com/huggingface/transformers/tree/master/examples/token-classification
| 04-23-2021 17:48:06 | 04-23-2021 17:48:06 | thanks a lot <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>[https://huggingface.co/transformers/examples.html](https://huggingface.co/transformers/examples.html) now points towards [https://huggingface.co/docs/transformers/main/en/examples](https://huggingface.co/docs/transformers/main/en/examples), which is a 404. |
transformers | 11,403 | closed | metric is uninitialized when csv data is supplied to example/pytorch/text-classification/run_glue_no_trainer.py | ## Environment info
- `transformers` version: 4.5.1
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.0
- PyTorch version (GPU?): 1.8.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: distributed
- OS type and version: Mac OSX 10.14.6
### Who can help
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): distilbert-base-uncased
The problem arises when using:
* [ x] the official example scripts: (give details below)
Running the script: transformers/examples/pytorch/text-classification/run_glue_no_trainer.py
With parameters: --model_name_or_path distilbert-base-uncased --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --train_file piracy_train.csv --validation_file piracy_validation.csv --output_dir /data/output/distilbert-base-uncased-piracy-no-trainer
Yields the error:
`
Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForSequenceClassification: ['vocab_transform.weight', 'vocab_transform.bias', 'vocab_layer_norm.weight', 'vocab_layer_norm.bias', 'vocab_projector.weight', 'vocab_projector.bias']
- This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['pre_classifier.weight', 'pre_classifier.bias', 'classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
100%|██████████| 2/2 [00:00<00:00, 12.75ba/s]
100%|██████████| 1/1 [00:00<00:00, 45.35ba/s]
04/22/2021 14:47:33 - INFO - __main__ - Sample 598 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'input_ids': [101, 6084, 1997, 16298, 1024, 2006, 5641, 2233, 2418, 2012, 5511, 19961, 11396, 1999, 2597, 2410, 1011, 4720, 2078, 1011, 28714, 1011, 2322, 2063, 2019, 19842, 2988, 2048, 10756, 19801, 2018, 7333, 2176, 8301, 21807, 2008, 5411, 1996, 19842, 2000, 2306, 1015, 5830, 1012, 27120, 4273, 3036, 2136, 3662, 4255, 1998, 8301, 21807, 11672, 1012, 6258, 2003, 3647, 1012, 102], 'labels': 1}.
04/22/2021 14:47:33 - INFO - __main__ - Sample 65 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'input_ids': [101, 2105, 2260, 2078, 2213, 22064, 1997, 11937, 9148, 1011, 11937, 9148, 2479, 5137, 1012, 2048, 10027, 2152, 1011, 3177, 6242, 5411, 1037, 9625, 6839, 14128, 1012, 3040, 2992, 1996, 8598, 3626, 21900, 1998, 8878, 2811, 28405, 2543, 21290, 2015, 1012, 5137, 3212, 2038, 2042, 11925, 2011, 27527, 2557, 1012, 1996, 10027, 6242, 5411, 2000, 1037, 3292, 1997, 3156, 5563, 2013, 1996, 2911, 1998, 2333, 2185, 1012, 1996, 2911, 7943, 2014, 6019, 2000, 1996, 2279, 3417, 1997, 7688, 1012, 102], 'labels': 1}.
04/22/2021 14:47:33 - INFO - __main__ - Sample 877 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'input_ids': [101, 7387, 1024, 2006, 2570, 2254, 1037, 9625, 6839, 2988, 2108, 2628, 2379, 2597, 5709, 1011, 2321, 2078, 4002, 2509, 1011, 2410, 2063, 3155, 3963, 13221, 2148, 1997, 16738, 1012, 1996, 2911, 2001, 7283, 2628, 2012, 1037, 3292, 1997, 1021, 2661, 1998, 2439, 1996, 10027, 6258, 2044, 1037, 3177, 3623, 1998, 2607, 2689, 1012, 102], 'labels': 1}.
04/22/2021 14:47:33 - INFO - __main__ - ***** Running training *****
04/22/2021 14:47:33 - INFO - __main__ - Num examples = 1173
04/22/2021 14:47:33 - INFO - __main__ - Num Epochs = 3
04/22/2021 14:47:33 - INFO - __main__ - Instantaneous batch size per device = 32
04/22/2021 14:47:33 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 32
04/22/2021 14:47:33 - INFO - __main__ - Gradient Accumulation steps = 1
04/22/2021 14:47:33 - INFO - __main__ - Total optimization steps = 111
33%|███▎ | 37/111 [07:48<14:24, 11.69s/it]Traceback (most recent call last):
File "/Users/daraghhartnett/Projects/D3M/neural_text/code/transformers/examples/pytorch/text-classification/run_glue_no_trainer.py", line 441, in <module>
main()
File "/Users/daraghhartnett/Projects/D3M/neural_text/code/transformers/examples/pytorch/text-classification/run_glue_no_trainer.py", line 406, in main
metric.add_batch(
UnboundLocalError: local variable 'metric' referenced before assignment
33%|███▎ | 37/111 [07:49<15:39, 12.69s/it]
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x ] my own task or dataset: (give details below)
Simple single sentence text classification
## To reproduce
Steps to reproduce the behavior:
1. Pick any CSV dataset with train and validation files and run the transformers/examples/pytorch/text-classification/run_glue_no_trainer.py script using the following parameters:
2. --model_name_or_path distilbert-base-uncased --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --train_file piracy_train.csv --validation_file piracy_validation.csv --output_dir /data/output/distilbert-base-uncased-piracy-no-trainer
3. This will yield an error as the metric variable is not initialized when an optional args.task_name is not specified.
```
Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForSequenceClassification: ['vocab_transform.weight', 'vocab_transform.bias', 'vocab_layer_norm.weight', 'vocab_layer_norm.bias', 'vocab_projector.weight', 'vocab_projector.bias']
- This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['pre_classifier.weight', 'pre_classifier.bias', 'classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
100%|██████████| 2/2 [00:00<00:00, 12.75ba/s]
100%|██████████| 1/1 [00:00<00:00, 45.35ba/s]
04/22/2021 14:47:33 - INFO - __main__ - Sample 598 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'input_ids': [101, 6084, 1997, 16298, 1024, 2006, 5641, 2233, 2418, 2012, 5511, 19961, 11396, 1999, 2597, 2410, 1011, 4720, 2078, 1011, 28714, 1011, 2322, 2063, 2019, 19842, 2988, 2048, 10756, 19801, 2018, 7333, 2176, 8301, 21807, 2008, 5411, 1996, 19842, 2000, 2306, 1015, 5830, 1012, 27120, 4273, 3036, 2136, 3662, 4255, 1998, 8301, 21807, 11672, 1012, 6258, 2003, 3647, 1012, 102], 'labels': 1}.
04/22/2021 14:47:33 - INFO - __main__ - Sample 65 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'input_ids': [101, 2105, 2260, 2078, 2213, 22064, 1997, 11937, 9148, 1011, 11937, 9148, 2479, 5137, 1012, 2048, 10027, 2152, 1011, 3177, 6242, 5411, 1037, 9625, 6839, 14128, 1012, 3040, 2992, 1996, 8598, 3626, 21900, 1998, 8878, 2811, 28405, 2543, 21290, 2015, 1012, 5137, 3212, 2038, 2042, 11925, 2011, 27527, 2557, 1012, 1996, 10027, 6242, 5411, 2000, 1037, 3292, 1997, 3156, 5563, 2013, 1996, 2911, 1998, 2333, 2185, 1012, 1996, 2911, 7943, 2014, 6019, 2000, 1996, 2279, 3417, 1997, 7688, 1012, 102], 'labels': 1}.
04/22/2021 14:47:33 - INFO - __main__ - Sample 877 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'input_ids': [101, 7387, 1024, 2006, 2570, 2254, 1037, 9625, 6839, 2988, 2108, 2628, 2379, 2597, 5709, 1011, 2321, 2078, 4002, 2509, 1011, 2410, 2063, 3155, 3963, 13221, 2148, 1997, 16738, 1012, 1996, 2911, 2001, 7283, 2628, 2012, 1037, 3292, 1997, 1021, 2661, 1998, 2439, 1996, 10027, 6258, 2044, 1037, 3177, 3623, 1998, 2607, 2689, 1012, 102], 'labels': 1}.
04/22/2021 14:47:33 - INFO - __main__ - ***** Running training *****
04/22/2021 14:47:33 - INFO - __main__ - Num examples = 1173
04/22/2021 14:47:33 - INFO - __main__ - Num Epochs = 3
04/22/2021 14:47:33 - INFO - __main__ - Instantaneous batch size per device = 32
04/22/2021 14:47:33 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 32
04/22/2021 14:47:33 - INFO - __main__ - Gradient Accumulation steps = 1
04/22/2021 14:47:33 - INFO - __main__ - Total optimization steps = 111
33%|███▎ | 37/111 [07:48<14:24, 11.69s/it]Traceback (most recent call last):
File "/Users/daraghhartnett/Projects/D3M/neural_text/code/transformers/examples/pytorch/text-classification/run_glue_no_trainer.py", line 441, in <module>
main()
File "/Users/daraghhartnett/Projects/D3M/neural_text/code/transformers/examples/pytorch/text-classification/run_glue_no_trainer.py", line 406, in main
metric.add_batch(
UnboundLocalError: local variable 'metric' referenced before assignment
33%|███▎ | 37/111 [07:49<15:39, 12.69s/it]
```
## Expected behavior
Since providing your own csv files is allowed, the metric object should be initialized when no task_name is provided.
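For reference, a minimal sketch of the kind of fallback this needs (`get_metric` is a hypothetical helper; the accuracy default mirrors what the fix discussed below ended up doing):
```python
from datasets import load_metric


def get_metric(task_name=None):
    # Only load a GLUE metric when a GLUE task name was actually passed;
    # otherwise fall back to plain accuracy for custom CSV datasets.
    if task_name is not None:
        return load_metric("glue", task_name)
    return load_metric("accuracy")
```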
| 04-23-2021 17:09:35 | 04-23-2021 17:09:35 | Thanks for flagging. The PR mentioned above will make it default to accuracy. Of course, you're free to change it to whatever you need!<|||||>Glad to be of help!
Ah, I did not see that - I only looked in the Issues to see if it had already been reported. I will check the PR's as well the next time.
Thanks very much!<|||||>The PR did not exist before you flagged the issue ;-) I opened it to fix it!<|||||>Ah! Excellent! I am using the PR you proposed locally so I am back in business :) |
transformers | 11,402 | closed | Positional embeddings are not applied when input embeddings are passed in for Pytorch DistilBert model | ## Environment info
- `transformers` version: 4.1.1
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.6.6
- PyTorch version (GPU?): 1.8.0+cpu (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@julien-c
Models:
- albert, bert, xlm: @LysandreJik
## Information
Model I am using (DistilBertModel):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load pre-trained models for DistilBertForSequenceClassification and DistilBertTokenizer
2. Encode input text
3. Find input embeddings by passing in input ids through pre-trained model's word emebedding layer
4. Forward pass model with encoded input ids
5. Forward pass the model with input embeddings found in step 3
6. Compare the logits of steps 4 and 5
```python
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification
input_text = '''This is some sample text. But I would like a model prediction on this.'''
pre_trained_model = 'distilbert-base-uncased'
model = DistilBertForSequenceClassification.from_pretrained(pre_trained_model)
tokenizer = DistilBertTokenizer.from_pretrained(pre_trained_model)
encoded_tokens = tokenizer.encode_plus(input_text, add_special_tokens=True, return_token_type_ids=True,
return_tensors='pt')
input_embeds = model.distilbert.embeddings.word_embeddings(encoded_tokens['input_ids'])
scores_for_input_ids = model(input_ids=encoded_tokens['input_ids'], attention_mask=encoded_tokens["attention_mask"])
scores_for_input_embeds = model(inputs_embeds=input_embeds, attention_mask=encoded_tokens["attention_mask"])
print('Logits for input ids', scores_for_input_ids.logits)
print('Logits for input embeds', scores_for_input_embeds.logits)
```
Output
Logits for input ids tensor([[-0.0721, 0.0499]], grad_fn=<AddmmBackward>)
Logits for input embeds tensor([[ 0.0675, -0.0452]], grad_fn=<AddmmBackward>)
## Expected behavior
The logits returned for steps 4 and 5 above should be the same. For other pytorch models such as Bert and Roberta as well as Tensorflow implementation of DistilBert (TFDistilBert) the logits returned for steps 4 and 5 are the same.
```python
from transformers import DistilBertTokenizer, BertForSequenceClassification
input_text = '''This is some sample text. But I would like a model prediction on this.'''
pre_trained_model = 'bert-base-uncased'
model = BertForSequenceClassification.from_pretrained(pre_trained_model)
tokenizer = DistilBertTokenizer.from_pretrained(pre_trained_model)
encoded_tokens = tokenizer.encode_plus(input_text, add_special_tokens=True, return_token_type_ids=True,
return_tensors='pt')
input_embeds = model.bert.embeddings.word_embeddings(encoded_tokens['input_ids'])
scores_for_input_ids = model(input_ids=encoded_tokens['input_ids'], attention_mask=encoded_tokens["attention_mask"])
scores_for_input_embeds = model(inputs_embeds=input_embeds, attention_mask=encoded_tokens["attention_mask"])
print('Logits for input ids', scores_for_input_ids.logits)
print('Logits for input embeds', scores_for_input_embeds.logits)
```
Output
Logits for input ids tensor([[-0.1336, 0.1173]], grad_fn=<AddmmBackward>)
Logits for input embeds tensor([[-0.1336, 0.1173]], grad_fn=<AddmmBackward>)
I dug deeper into the transformers library and found that positional embeddings are not applied when input embeddings are passed to the forward pass, specifically in the DistilBert model. On the other hand, if input ids are passed in, input embeddings are computed from the input ids and positional embeddings are applied on top of them before being passed into the underlying transformer. See https://github.com/huggingface/transformers/blob/master/src/transformers/models/distilbert/modeling_distilbert.py lines 479-482.
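For reference, a minimal sketch of the missing step (illustrative only, not the merged patch): whatever is passed as `inputs_embeds` should still get the position-embedding, LayerNorm and dropout treatment that the `Embeddings` module applies in the `input_ids` path. The helper below assumes it is handed the corresponding submodules of a DistilBert `Embeddings` instance.
```python
import torch
import torch.nn as nn


def apply_embedding_steps(
    word_embeds: torch.Tensor,          # (batch, seq_len, dim), e.g. word_embeddings(input_ids)
    position_embeddings: nn.Embedding,  # embeddings.position_embeddings
    layer_norm: nn.LayerNorm,           # embeddings.LayerNorm
    dropout: nn.Dropout,                # embeddings.dropout
) -> torch.Tensor:
    """Add position embeddings, then LayerNorm and dropout, mirroring the input_ids path."""
    seq_length = word_embeds.size(1)
    position_ids = torch.arange(seq_length, dtype=torch.long, device=word_embeds.device)
    position_ids = position_ids.unsqueeze(0).expand(word_embeds.size()[:-1])
    embeddings = word_embeds + position_embeddings(position_ids)
    return dropout(layer_norm(embeddings))
```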
| 04-23-2021 16:23:17 | 04-23-2021 16:23:17 | That's correct indeed, and seems like a bug to me. Would you like to open a PR to fix the issue?<|||||>Thanks for the confirmation. Yes, I should be able to do that.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,401 | closed | Download offline HuggingFace Models in a format other than the ".bin" format | # 🚀 Feature request
### Hi,
### Just wondering if there is any other way of downloading Hugging Face models in a restricted environment?
For example :
**a.)** our network doesn't allow downloading the model over the internet (so the auto-download feature in transformers won't work),
**b.)** we can't download .bin files (so we cannot download the files from the "Models" page on huggingface.co),
**c.)** using the git clone option gives a proxy error.
So is there any other way which can be used to download the Huggingface Models ?
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| 04-23-2021 15:31:07 | 04-23-2021 15:31:07 | Not sure how to help here... can you download the files from another machine and sync them to the restricted env?<|||||>No, that's the only problem that all the devices within organization are same network and have same policies(as mentioned above).. :)
I see Hugging Face has already provided many methods, but the challenge is that none of them works for me.
But the network does support downloading other formats (e.g., I recently downloaded EasyOCR models in .pth format and spaCy language models), so just a query whether any solution can be provided around that which would let me use Hugging Face models in such a restricted environment.
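If the files can be brought onto the machine by any out-of-band means, a small sketch of loading them from a local directory with no network access (the path is a placeholder):
```python
from transformers import AutoModel, AutoTokenizer

# Assumes config.json, the tokenizer files and the model weights were copied
# into this directory beforehand; from_pretrained then reads them locally.
local_dir = "/path/to/copied/model"
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModel.from_pretrained(local_dir)
```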
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,400 | closed | [Wav2Vec2] Correct conversion script | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-23-2021 13:20:58 | 04-23-2021 13:20:58 | |
transformers | 11,399 | closed | unable to import transformers in Python <3.8 | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.2
- Python version: 3.7
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## To reproduce
Steps to reproduce the behavior:
1. Install transformers
```
conda create -y -n py37-trans python=3.7 transformers -c HuggingFace
conda activate py37-trans
```
2. import transformers throws the following error:
`ModuleNotFoundError: No module named 'importlib_metadata'`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Import should be successful.
<!-- A clear and concise description of what you would expect to happen. -->
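For reference, the usual compatibility pattern for this situation (a sketch of the kind of guard a fix would need, not necessarily the exact patch that was merged):
```python
import sys

# Python 3.8+ ships importlib.metadata in the standard library; older
# interpreters need the importlib_metadata backport package instead.
if sys.version_info >= (3, 8):
    import importlib.metadata as importlib_metadata
else:
    import importlib_metadata  # provided by the `importlib_metadata` backport
```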
| 04-23-2021 12:55:36 | 04-23-2021 12:55:36 | I can add a PR to fix this. |
transformers | 11,398 | closed | RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 237414383616 bytes. Error code 12 (Cannot allocate memory) | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): 1.4
- Tensorflow version (GPU?):
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
Models:
- albert, bert, xlm: @LysandreJik, @sgugger
Model I am using (FlauBert):
The problem arises when trying to produce features with the model: the output that is generated causes the system to run out of memory.
* [ ] the official example scripts: (I did not change much , pretty close to the original)
```
import torch
from transformers import FlaubertModel, FlaubertTokenizer
# Choose among ['flaubert/flaubert_small_cased', 'flaubert/flaubert_base_uncased',
# 'flaubert/flaubert_base_cased', 'flaubert/flaubert_large_cased']
modelname = 'flaubert/flaubert_base_cased'
# Load pretrained model and tokenizer
flaubert, log = FlaubertModel.from_pretrained(modelname, output_loading_info=True)
flaubert_tokenizer = FlaubertTokenizer.from_pretrained(modelname, do_lowercase=False)
# do_lowercase=False if using cased models, True if using uncased ones
sentence = "Le chat mange une pomme."
token_ids = torch.tensor([flaubert_tokenizer.encode(sentence)])
last_layer = flaubert(token_ids)[0]
print(last_layer.shape)
# torch.Size([1, 8, 768]) -> (batch size x number of tokens x embedding dimension)
# The BERT [CLS] token correspond to the first hidden state of the last layer
cls_embedding = last_layer[:, 0, :]
```
* [ ] My own modified scripts: (give details below)
```
import numpy as np
import torch
from transformers import FlaubertModel, FlaubertTokenizer

def get_flaubert_layer(texte):
    modelname = "flaubert-base-uncased"
    path = './flau/flaubert-base-unc/'
    flaubert = FlaubertModel.from_pretrained(path)
    flaubert_tokenizer = FlaubertTokenizer.from_pretrained(path)
    tokenized = texte.apply(lambda x: flaubert_tokenizer.encode(x, add_special_tokens=True, max_length=512))
    max_len = 0
    for i in tokenized.values:
        if len(i) > max_len:
            max_len = len(i)
    padded = np.array([i + [0] * (max_len - len(i)) for i in tokenized.values])
    token_ids = torch.tensor(padded)
    with torch.no_grad():
        last_layer = flaubert(token_ids)[0][:, 0, :].numpy()
    return last_layer, modelname
```
The tasks I am working on is:
* [ ] Producing vectors/features from a language model and pass it to others classifiers
## To reproduce
Steps to reproduce the behavior:
1. Get transformers library and scikit-learn, pandas and numpy, pytorch
2. Last lines of code
```
# Reading the file
filename = "corpus"
sentences = pd.read_excel(os.path.join(root, filename + ".xlsx"), sheet_name= 0)
data_id = sentences.identifiant
print("Total phrases: ", len(data_id))
data = sentences.sent
label = sentences.etiquette
emb, mdlname = get_flaubert_layer(data) # corpus is dataframe of approximately 40 000 lines
```
Apparently this line produces something huge which takes a lot of memory:
last_layer = flaubert(token_ids)[0][:,0,:].numpy()
I would have expected it to run, but I think the fact that I pass the whole dataset to the model is causing the system to break, so I wanted to know if it is possible to tell the model to process the dataset maybe 500 or 1000 lines at a time instead of passing it all at once. I know there is a batch_size parameter that can be used, but I am not training a model, merely using it to produce embeddings as input for other classifiers.
Do you perhaps know how to process the data in batches so the whole dataset is not treated at once? I am not really familiar with this type of architecture. In the example, they just put one single sentence, but in my case I load a whole dataset (dataframe).
My expectation is to make the model treat all the sentences and then produced the vectors I need for the task of classification. | 04-23-2021 12:30:04 | 04-23-2021 12:30:04 | You should pass along small batches to the model to avoid this error: you should create a loop that goes over the I in `range(0, len(padded), batch_size)` and passes along the `padded[i: i+batch_size]` to your model, then concatenates the predictions back together.
Also note that this is not a bug in Transformers or a feature request so I invite you to continue the discussion on the [forums](https://discuss.huggingface.co/) if you need further assistance.<|||||>I experienced this same error when using the sentiment analysis pipeline on a list of strings. I set the model argument in the pipeline to `"nlptown/bert-base-multilingual-uncased-sentiment"`.<|||||>I am experiencing the same error even on a batch size of 2. <|||||>same problem |
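A minimal sketch of the batching loop suggested above (names follow the `get_flaubert_layer` snippet; the batch size is an arbitrary assumption):
```python
import numpy as np
import torch


def embed_in_batches(flaubert, token_ids: torch.Tensor, batch_size: int = 64) -> np.ndarray:
    """Run the model on slices of the padded batch and concatenate the [CLS] vectors."""
    outputs = []
    with torch.no_grad():
        for i in range(0, len(token_ids), batch_size):
            batch = token_ids[i : i + batch_size]
            outputs.append(flaubert(batch)[0][:, 0, :].numpy())
    return np.concatenate(outputs, axis=0)
```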
transformers | 11,397 | closed | PreTrainedTokenizerFast.save_pretrained() ERROR | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1
- Platform: macOS-11.1-arm64-arm-64bit
- Python version: 3.9.1
- PyTorch version (GPU?): 1.8.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
## Description
It's a bizarre error. When I run
```
from transformers import AutoTokenizer
t=AutoTokenizer.from_pretrained('distilroberta-base')
t.save_pretrained('vocab/')
```
It gives me the following errors:
```
>>> t.save_pretrained('vocab/')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/liyucheng/miniforge3/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2005, in save_pretrained
return self._save_pretrained(
File "/Users/liyucheng/miniforge3/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py", line 528, in _save_pretrained
vocab_files = self.save_vocabulary(save_directory, filename_prefix=filename_prefix)
File "/Users/liyucheng/miniforge3/lib/python3.9/site-packages/transformers/models/gpt2/tokenization_gpt2_fast.py", line 172, in save_vocabulary
files = self._tokenizer.model.save(save_directory, name=filename_prefix)
TypeError: PyModel.save() got an unexpected keyword argument: name
```
It looks like the saving process called the `save_vocabulary` function in `gpt2/tokenization_gpt2_fast.py`, but the tokenizer I wanted to save is `distilroberta-base`.
Does anyone have any ideas? | 04-23-2021 11:31:47 | 04-23-2021 11:31:47 | It is because the version of the `tokenizers` lib and the version of the `transformers` lib do not match.
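A quick way to confirm the mismatch mentioned above (reinstalling `transformers` normally pulls in the `tokenizers` version it is pinned against):
```python
import tokenizers
import transformers

# A tokenizers build that does not match this transformers release can hit
# signature differences like the `name` keyword error shown above.
print("transformers:", transformers.__version__)
print("tokenizers:", tokenizers.__version__)
```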
transformers | 11,396 | closed | Fix small typo in text | # What does this PR do?
Fixes small typo
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
Documentation + maintained examples: @sgugger
-->
| 04-23-2021 11:26:27 | 04-23-2021 11:26:27 | |
transformers | 11,395 | closed | [Blenderbot] Integration Test should be slow | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The test takes more than 30 seconds on every commit; let's mark it as slow.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-23-2021 10:00:08 | 04-23-2021 10:00:08 | Thanks! |
transformers | 11,394 | closed | [Flax] Correct Flax <=> PyTorch conversion | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Original BERT weights were saved in a weird format where `gamma` was used as the parameter name for the LayerNorm weight. This should be taken into account when converting to Flax.
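Illustrative sketch of the kind of key renaming this implies (not the actual conversion code; the companion `beta` to `bias` mapping is an assumption about those old checkpoints):
```python
def rename_old_bert_key(key: str) -> str:
    # Old BERT checkpoints store LayerNorm parameters under "gamma"/"beta"
    # rather than "weight"/"bias", so conversion has to map the names explicitly.
    return key.replace("gamma", "weight").replace("beta", "bias")


print(rename_old_bert_key("encoder.layer.0.output.LayerNorm.gamma"))
# -> encoder.layer.0.output.LayerNorm.weight
```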
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-23-2021 09:24:37 | 04-23-2021 09:24:37 | |
transformers | 11,393 | closed | [Flax] Typo | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes a typo in the examples `run_mlm_flax.py` script.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-23-2021 09:21:43 | 04-23-2021 09:21:43 | |
transformers | 11,392 | closed | Maybe there is a bug with class DebertaV2PredictionHeadTransform | I got an **error** like this:
`RuntimeError: mat1 dim 1 must match mat2 dim 0` when using the official DebertaV2 MLM.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:4.5.0.dev0
- Platform: Ubuntu 18.04.3 LTS
- Python version: 3.6
- PyTorch version (GPU?): 1.7.1+cu101
- Tensorflow version (GPU?):
- Using GPU in script?:yes
- Using distributed or parallel set-up in script?: NO
### Who can help
@LysandreJik @sgugger @patrickvonplaten
## Information
Model I am using: DebertaV2 for MLM
The problem arises when using:
* [ ] the official example scripts: [transformers/models/deberta_v2/modeling_deberta_v2.py](https://github.com/huggingface/transformers/blob/9f72e8f4e1e767c5f608dd135199e592255b8a69/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L1178) line at 1178 to 1213
* [x] official scripts like This:
```python
# copied from transformers.models.bert.BertPredictionHeadTransform with bert -> deberta
class DebertaV2PredictionHeadTransform(nn.Module):
def __init__(self, config):
super().__init__()
self.dense = nn.Linear(config.hidden_size, config.hidden_size)
if isinstance(config.hidden_act, str):
self.transform_act_fn = ACT2FN[config.hidden_act]
else:
self.transform_act_fn = config.hidden_act
self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
def forward(self, hidden_states):
hidden_states = self.dense(hidden_states)
hidden_states = self.transform_act_fn(hidden_states)
hidden_states = self.LayerNorm(hidden_states)
return hidden_states
# copied from transformers.models.bert.BertLMPredictionHead with bert -> deberta
class DebertaV2LMPredictionHead(nn.Module):
def __init__(self, config):
super().__init__()
self.transform = DebertaV2PredictionHeadTransform(config)
# The output weights are the same as the input embeddings, but there is
# an output-only bias for each token.
self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
self.bias = nn.Parameter(torch.zeros(config.vocab_size))
# Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings`
self.decoder.bias = self.bias
def forward(self, hidden_states):
hidden_states = self.transform(hidden_states)
hidden_states = self.decoder(hidden_states)
return hidden_states
```
I got an **error** like this:
`RuntimeError: mat1 dim 1 must match mat2 dim`
> Traceback (most recent call last):
File "train_pre_model.py", line 257, in <module>
trainer.train(resume_from_checkpoint=last_checkpoint)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1120, in train
tr_loss += self.training_step(model, inputs)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1524, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1556, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 161, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 171, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/usr/local/lib/python3.6/dist-packages/torch/_utils.py", line 428, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1159, in forward
prediction_scores = self.cls(sequence_output)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1224, in forward
prediction_scores = self.predictions(sequence_output)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1213, in forward
hidden_states = self.decoder(hidden_states)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/linear.py", line 93, in forward
return F.linear(input, self.weight, self.bias)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1692, in linear
output = input.matmul(weight.t())
RuntimeError: mat1 dim 1 must match mat2 dim 0
* [ ] my own modified scripts:
* When I referred to https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/deberta/mlm.py and changed [transformers/models/deberta_v2/modeling_deberta_v2.py] to:
```python
class DebertaV2PredictionHeadTransform(nn.Module):
def __init__(self, config):
super().__init__()
self.dense = nn.Linear(config.hidden_size, config.embedding_size)
if isinstance(config.hidden_act, str):
self.transform_act_fn = ACT2FN[config.hidden_act]
else:
self.transform_act_fn = config.hidden_act
self.LayerNorm = nn.LayerNorm(config.embedding_size, eps=config.layer_norm_eps)
def forward(self, hidden_states):
hidden_states = self.dense(hidden_states)
hidden_states = self.transform_act_fn(hidden_states)
hidden_states = self.LayerNorm(hidden_states)
return hidden_states
# copied from transformers.models.bert.BertLMPredictionHead with bert -> deberta
class DebertaV2LMPredictionHead(nn.Module):
def __init__(self, config):
super().__init__()
self.transform = DebertaV2PredictionHeadTransform(config)
# The output weights are the same as the input embeddings, but there is
# an output-only bias for each token.
self.decoder = nn.Linear(config.embedding_size, config.vocab_size, bias=False)
self.bias = nn.Parameter(torch.zeros(config.vocab_size))
# Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings`
self.decoder.bias = self.bias
def forward(self, hidden_states):
hidden_states = self.transform(hidden_states)
#print(hidden_states.size())
#print(self.decoder)
hidden_states = self.decoder(hidden_states)
return hidden_states
```
This code works well, but I can't understand why it works when the official version does not.
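A small sketch of the shape mismatch being described (sizes are made up, and the weight-tying step is an assumption about why the official head fails once `embedding_size != hidden_size`):
```python
import torch
import torch.nn as nn

hidden_size, embedding_size, vocab_size = 768, 128, 1000  # illustrative values
word_embeddings = nn.Embedding(vocab_size, embedding_size)

decoder = nn.Linear(embedding_size, vocab_size, bias=False)
decoder.weight = word_embeddings.weight  # tie the decoder to the word embeddings

hidden_states = torch.randn(2, 8, hidden_size)
project = nn.Linear(hidden_size, embedding_size)  # the extra projection the modified head adds

print(decoder(project(hidden_states)).shape)  # torch.Size([2, 8, 1000])
# decoder(hidden_states) would raise "mat1 dim 1 must match mat2 dim 0",
# because the tied weight expects embedding_size features, not hidden_size.
```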
| 04-23-2021 07:27:00 | 04-23-2021 07:27:00 | Hey @startnew,
Sorry could you add some code that we could copy-paste into a terminal to reproduce the error? :-) I don't quite follow here - is it the official code (in `src/transformes`) that doesn't work or specific adapted code?<|||||>Thank you for your reply, @patrickvonplaten , you can reproduce the error I encountered by opening the colab link below
[https://colab.research.google.com/drive/1DiMkU0lEeZqj2AT9DrafP_X-PDZMzxuK#scrollTo=GeOz-1Ix-5HE](https://colab.research.google.com/drive/1DiMkU0lEeZqj2AT9DrafP_X-PDZMzxuK#scrollTo=GeOz-1Ix-5HE)<|||||>refer from offical config ,the difference is I give an custum "embedding_size" and the "embeddding_size" is not equal to "hidden_size",the official use "hidden_size” as "embedding_size", I guess this is the cause of the error<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @startnew,
It's sadly a bit too time consuming to dive into another repo - could you maybe post a short code snippet that shows your error without using any external github repos or code?
Usually, you should be able to customize the `"embedding_size"` configuration parameter |
transformers | 11,391 | closed | Fix typos in README for text-classification | # What does this PR do?
`transformers/examples/pytorch/text-classification/run_glue_no_trainer.py` has `max_length` instead of `max_seq_length`, while the README uses `--max_seq_length` in its example commands for `run_glue_no_trainer.py`. This PR fixes the typos.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger, @patil-suraj | 04-23-2021 07:07:54 | 04-23-2021 07:07:54 | |
transformers | 11,390 | closed | S3 checkpoints not working with distributed training on sagemaker | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.0
- Platform: AWS Sagemaker
- Python version: 3.6
- PyTorch version (GPU?): 1.7.1
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger
## Information
Model I am using (Bert, XLNet ...): gpt-neo
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Use the run_clm.py example script to finetune gpt-neo in Sagemaker with either torch.distributed.launch, or using Sagemaker distributed model parallel (say on a p4d.24xlarge with 8 gpus)
2. Only the first checkpoint is synced to the checkpoint_s3_uri location. Subsequent checkpoints do not appear in S3
3. Also, at the end of the training job, it spends around 1 hour in the "Uploading" state and ends with the error below.
InternalServerError: We encountered an internal error. Please try again.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I expected the training to work normally, and all the checkpoints and final model to get synced to the S3 location.
NB: training is working when I don't use the checkpoint_s3_uri (with both torch.distributed.launch and sagemaker distributed model parallel).
Also with a single gpu (on a p3.2xlarge), training with checkpoint_s3_uri is working, all the checkpoints and final model are synced to S3.
| 04-23-2021 06:13:59 | 04-23-2021 06:13:59 | cc @philschmid <|||||>Hey @laphang,
Could you please share your `estimator` configuration? that would help debug and reproduce your problem. Thanks!
<|||||>@laphang I tried to reproduce your error and for me it works using the following `HuggingFace` estimator.
```python
from sagemaker.huggingface import HuggingFace  # HuggingFace estimator from the SageMaker Python SDK

# estimator
huggingface_estimator = HuggingFace(entry_point='run_glue.py',
source_dir='./scripts',
                        metric_definitions=metric_definitions,
instance_type=instance_type,
instance_count=instance_count,
volume_size=volume_size,
role=role,
transformers_version='4.4.2',
pytorch_version='1.6.0',
checkpoint_s3_uri=f's3://{sess.default_bucket()}/checkpoints',
py_version='py36',
distribution= distribution,
hyperparameters = hyperparameters,
debugger_hook_config=False)
```
This estimator just extends the estimator from our [04_distributed_training_model_parallelism](https://github.com/huggingface/notebooks/blob/master/sagemaker/04_distributed_training_model_parallelism/sagemaker-notebook.ipynb) and includes the `checkpoint_s3_uri`.

> ## Environment info
> * `transformers` version: 4.5.0
> * Platform: AWS Sagemaker
> * Python version: 3.6
> * PyTorch version (GPU?): 1.7.1
> * Tensorflow version (GPU?):
> * Using GPU in script?: yes
> * Using distributed or parallel set-up in script?: yes
Reading your **environment** it seems that you are not yet using the new Hugging Face Deep Learning Container for Amazon SageMaker. Is that true? or have you update them? <|||||>@philschmid
Ah yes, I'm still using the PyTorchEstimator and installing transformers via requirements.txt. I'll try again with the HuggingFace Estimator and get back to you guys. Thanks for the quick response.<|||||>@philschmid yeah, made the changes below from using the PyTorch estimator to the HuggingFace one, and now distributed training with s3 checkpoints is working properly now (training job completes successfully, and all the checkpoints are synced to s3). It's working both using Sagemaker distributed model parallel, and also using torch.distributed.launch
Also just wanted to say that I was pleasantly surprised with how seamlessly Transformers is working with SageMaker model parallel. Great work guys!
```
before:
estimator = PyTorch(base_job_name=job_name,
entry_point = 'run_clm.py',
source_dir=source_dir,
code_location=output_path,
role=role,
framework_version='1.7.1',
py_version='py3',
hyperparameters=hyperparameters,
tags=tags,
output_path=output_path,
checkpoint_s3_uri=checkpoint_path,
instance_count=1,
instance_type='ml.p4d.24xlarge',
distribution= distribution,
use_spot_instances=train_use_spot_instances,
max_run=train_max_run,
max_wait=train_max_wait,
metric_definitions=metric_definition
)
after:
estimator = HuggingFace(base_job_name=job_name,
entry_point = 'run_clm.py',
source_dir=source_dir,
code_location=output_path,
role=role,
transformers_version='4.4.2',
pytorch_version='1.6.0',
py_version='py36',
hyperparameters=hyperparameters,
tags=tags,
output_path=output_path,
checkpoint_s3_uri=checkpoint_s3_uri,
debugger_hook_config=False,
instance_count=1,
instance_type='ml.p4d.24xlarge',
distribution= distribution,
use_spot_instances=train_use_spot_instances,
max_run=train_max_run,
max_wait=train_max_wait,
metric_definitions=metric_definition
)
```
<|||||>@laphang that's great news, and thank you for the kind words! 🤗
Should you have any questions or problems in the future, feel free to tag me directly in the issue.<|||||>@philschmid I am getting the same error as @laphang was getting, even with the Hugging Face estimator. Only the first checkpoint is getting saved in the checkpoint_s3_uri location and the rest don't appear in S3. At the end of the training job, it spends about an hour in the "Uploading" state and then fails with the error "InternalServerError: We encountered an internal error. Please try again".
It started when I added SageMaker distributed data parallel to the Hugging Face estimator. It has become a blocker for our model training, so any help would be really appreciated.
`distribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}
huggingface_estimator = HuggingFace(
entry_point='train.py',
source_dir='./scripts',
sagemaker_sess=sess,
instance_type='ml.p4d.24xlarge',
instance_count=1,
volume_size=60,
code_location=output_path,
output_path=output_path,
checkpoint_s3_uri=checkpoint_s3_uri,
tensorboard_output_config=tensorboard_output_config,
role=role,
transformers_version='4.6.1',
pytorch_version='1.7.1',
py_version='py36',
hyperparameters = hyperparameters,
distribution=distribution
)`<|||||>Hey @Harshitcmd,
could maybe share your training script? which `TrainingArguments` are you using?
For
> After the end of training job, it is taking an hour showing uploading and ends with an error "InternalServerError: We encountered an internal error. Please try again".
It might be possible that you are saving your checkpoint in `/opt/ml/model` (which will be uploaded to s3 after training) and it gets through saving the checkpoints.
<|||||>Hey @philschmid thanks for replying.
I have been saving my checkpoints into "check_dir": "/opt/ml/checkpoints". Before integrating data parallelism with p4d.24xlarge I was using p3.2xlarge with the same training arguments and there all the checkpoints were getting saved into s3 on the go itself.
Plz have a look into my training arguments.
`
training_args = TrainingArguments(
output_dir=args.check_dir,
num_train_epochs=args.epochs,
per_device_train_batch_size=args.train_batch_size,
per_device_eval_batch_size=args.eval_batch_size,
eval_accumulation_steps=1,
warmup_ratio=args.warmup_steps,
evaluation_strategy="no",
logging_dir=f"/opt/ml/output/tensorboard/",
learning_rate=float(args.learning_rate),
save_total_limit=10,
save_steps = 200,
logging_steps = 20,
)
# create Trainer instance
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=test_dataset,
tokenizer=tokenizer,
data_collator=data_collator,
callbacks=[TensorBoardCallback]
)`
<|||||>You might need to add `overwrite_output_dir` to your `TrainingArguments`
> overwrite_output_dir (bool, optional, defaults to False) – If True, overwrite the content of the output directory. Use this to continue training if output_dir points to a checkpoint directory.
For example, I added it like this:
```python
overwrite_output_dir=True if get_last_checkpoint(args.output_dir) is not None else False,
```
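Putting it together, a rough sketch (assuming checkpoints go to `/opt/ml/checkpoints` and the final model to `/opt/ml/model`):
```python
import os

from transformers import TrainingArguments
from transformers.trainer_utils import get_last_checkpoint

checkpoint_dir = "/opt/ml/checkpoints"  # streamed to checkpoint_s3_uri while the job runs
last_checkpoint = get_last_checkpoint(checkpoint_dir) if os.path.isdir(checkpoint_dir) else None

training_args = TrainingArguments(
    output_dir=checkpoint_dir,
    overwrite_output_dir=last_checkpoint is not None,  # allow reusing a non-empty checkpoint dir
    save_steps=200,
)
# later, with your Trainer instance:
# trainer.train(resume_from_checkpoint=last_checkpoint)
# trainer.save_model("/opt/ml/model")  # final artifacts, uploaded once when the job ends
```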
and to solve your upload issue you should save the model into `/opt/ml/model`. <|||||>Hey @philschmid,
I tried adding overwrite_output_dir=True and it partially solved my issue. Now the checkpoints are in sync with S3 (all the checkpoints and model artifacts are getting saved at the desired location). But even though all the checkpoints got uploaded to S3, the job still showed the status as **Uploading** for an hour and ended with an internal error (weird).
PS: When I didn't integrate the data parallelism with the same instance type (p4d.24xlarge) everything worked seamlessly. |
transformers | 11,389 | closed | Distributed DataSampler has fixed data order despite random seeds. | When using a distributed data loader with `shuffle = True` in the Hugging Face trainer, it calls the underlying torch data loader. If `shuffle` is set to True, the data loader seeds the generator with `seed + epoch` ([here](https://github.com/pytorch/pytorch/blob/f84a50109f794d4feab922056b77d7c358076776/torch/utils/data/distributed.py#L100)).
When calling the data loader in HF trainer ([here](https://github.com/huggingface/transformers/blob/3ed5e97ba04ce9b24b4a7161ea74572598a4c480/src/transformers/trainer.py#L553)), the seed is _not_ passed to the torch data loader and thereby gets set to the default seed of 0. This means the data loader generator will always gets initialized to the epoch, despite a different seed to HF.
I would think we'd want the data order to be random, too.
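A minimal standalone sketch of the sampler behaviour (plain PyTorch, not the Trainer code):
```python
import torch
from torch.utils.data import DistributedSampler, TensorDataset

dataset = TensorDataset(torch.arange(100))

# seed defaults to 0, so shuffling is driven by 0 + epoch regardless of the seed given to HF
default_sampler = DistributedSampler(dataset, num_replicas=1, rank=0, shuffle=True)
seeded_sampler = DistributedSampler(dataset, num_replicas=1, rank=0, shuffle=True, seed=42)
default_sampler.set_epoch(0)
seeded_sampler.set_epoch(0)
print(list(default_sampler)[:5])  # identical on every run with the defaults
print(list(seeded_sampler)[:5])   # changes once a real seed is forwarded
```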
## Environment info
- `transformers` version: 4.5.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes (with DeepSpeed)
### Who can help
@sgugger (trainer)
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* The hugging face trainer with a distributed data sampler
The tasks I am working on is:
* Training GPT2 from scratch using DDP with DeepSpeed
## To reproduce
Steps to reproduce the behavior:
Using a different seed with distributed data loader does not change the data order.
## Expected behavior
The random seed should be passed to the data loader so that the data order is randomized as the seed changes.
| 04-23-2021 05:14:38 | 04-23-2021 05:14:38 | Follow-up note (part of @lorr1's team that encountered this). This is particularly insidious for any sort of code that tries training with multiple random seeds; there's an assumption that across seeds, weight initialization (for pre-training, fine-tuning weights), dropout, AND data order are all different (and all do have significant bearing on results).
Consistent data order (as in the existing code) runs counter to that expectation.<|||||>I'm guessing we want a new argument to control that seed though, not the current `args.seed` that is set at the beginning of training, what do you think? <|||||>I think the seed set at the beginning of training would be fine -- that would be the expected behavior (weights get randomly initialized, then data order is random _conditioned on a single seed_.
Adding a separate seed just for data order means it's just one more thing you need to keep track of.
There's a backwards compatibility issue here possibly (if folks doing multiple random seeds worth of runs have been relying on/reporting those results), but this feels like the simplest solution?<|||||>It's definitely the easiest solution. For the backward compatibility issue, I hope users save their version of Transformers and PyTorch along the seeds for reproducibility. PyTorch does not guarantee the same results across versions (we had an issue with multinomial that changed behavior for instance). So I think it's fine to change the behavior, especially as it's a bug fix.
Will make a PR with the change. |
transformers | 11,388 | closed | CUDA OOM in the middle of training when the training data is large | ## Environment info
- `transformers` version: 4.6.0.dev0
- Platform: Linux-3.10.0-1160.24.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
- Python version: 3.7.9
- PyTorch version (GPU?): 1.8.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger @patrickvonplaten, @patil-suraj
## Information
I am using https://github.com/huggingface/transformers/blob/9e147d31f67a03ea4f5b11a5c7c3b7f8d252bfb7/examples/seq2seq/run_seq2seq.py to train MT5/base on custom parallel data. The code works well when the training data is <=100K but throws CUDA out of memory error in the middle of training when I train with 200K (or beyond) data.
The error message is here:
loading configuration file https://huggingface.co/google/mt5-base/resolve/main/config.json from cache at /scratch/st-amuham01-1/ganeshjw/projects/mt5_beluga/cache_dir/5ebfd830555547194403d6803baa127970de59b443c04b7a1a60b16a97ed3958.3950cd4aaa701cb6f55a976ff996001a5fb09bbbe7ba9084619949d9016f519e
Model config MT5Config {
"_name_or_path": "/home/patrick/hugging_face/t5/mt5-base",
"architectures": [q
"T5ForConditionalGeneration"
],
"d_ff": 2048,
"d_kv": 64,
"d_model": 768,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"eos_token_id": 1,
"feed_forward_proj": "gated-gelu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"model_type": "mt5",
"num_decoder_layers": 12,
"num_heads": 12,
"num_layers": 12,
"output_past": true,
"pad_token_id": 0,
"relative_attention_num_buckets": 32,
"tie_word_embeddings": false,
"tokenizer_class": "T5Tokenizer",
"transformers_version": "4.6.0.dev0",
"use_cache": true,
"vocab_size": 250112
}
loading configuration file https://huggingface.co/google/mt5-base/resolve/main/config.json from cache at /scratch/st-amuham01-1/ganeshjw/projects/mt5_beluga/cache_dir/5ebfd830555547194403d6803baa127970de59b443c04b7a1a60b16a97ed3958.3950cd4aaa701cb6f55a976ff996001a5fb09bbbe7ba9084619949d9016f519e
Model config MT5Config {
"_name_or_path": "/home/patrick/hugging_face/t5/mt5-base",
"architectures": [
"T5ForConditionalGeneration"
],
"d_ff": 2048,
"d_kv": 64,
"d_model": 768,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"eos_token_id": 1,
"feed_forward_proj": "gated-gelu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"model_type": "mt5",
"num_decoder_layers": 12,
"num_heads": 12,
"num_layers": 12,
"output_past": true,
"pad_token_id": 0,
"relative_attention_num_buckets": 32,
"tie_word_embeddings": false,
"tokenizer_class": "T5Tokenizer",
"transformers_version": "4.6.0.dev0",
"use_cache": true,
"vocab_size": 250112
}
Can't load following files from cache: ['added_tokens_file', 'tokenizer_file'] and cannot check if these files are necessary for the tokenizer to operate.
loading file https://huggingface.co/google/mt5-base/resolve/main/spiece.model from cache at /scratch/st-amuham01-1/ganeshjw/projects/mt5_beluga/cache_dir/4764ec347af4d2d6286acbe1d9d630ac0afd8554a4c4a64170e0b663fd2e2412.84ea7af2df68dc8db434d3160aab65cce8ac63ce5b6f7743f8c9a4a14b4f77e2
loading file https://huggingface.co/google/mt5-base/resolve/main/special_tokens_map.json from cache at /scratch/st-amuham01-1/ganeshjw/projects/mt5_beluga/cache_dir/0d7d5b3fc19bf58d4b274990c8bcf5e307726bc18d95f40a1436dfb6a0892f85.294ebaa4cd17bb284635004c92d2c4d522ec488c828dcce0c2471b6f28e3fe82
loading file https://huggingface.co/google/mt5-base/resolve/main/tokenizer_config.json from cache at /scratch/st-amuham01-1/ganeshjw/projects/mt5_beluga/cache_dir/afba33be693521ccefbde6d03b93b5c517d7108ba31f6c08000ed52c2cea45c9.28bbf90ae7962b1b7211c0ce8b2006f968c82439ec9c47e0847ba63642f9435a
loading weights file https://huggingface.co/google/mt5-base/resolve/main/pytorch_model.bin from cache at /scratch/st-amuham01-1/ganeshjw/projects/mt5_beluga/cache_dir/3b7e8056d4ed71d8d7ac2dea78627c4be77ed136399c05b563d4116abfcd9418.1afec9001b62cd5a347e7fd4b664e503ca2377606e11b9ddb8ec1d7b79bc3952
All model checkpoint weights were used when initializing MT5ForConditionalGeneration.
All the weights of MT5ForConditionalGeneration were initialized from the model checkpoint at google/mt5-base.
If your task is similar to the task the model of the checkpoint was trained on, you can already use MT5ForConditionalGeneration for predictions without further training.
100%|██████████| 196/196 [00:46<00:00, 4.22ba/s]
100%|██████████| 1/1 [00:00<00:00, 11.27ba/s]
100%|██████████| 1/1 [00:00<00:00, 6.31ba/s]
***** Running training *****
Num examples = 195996
Num Epochs = 5
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 32
Gradient Accumulation steps = 1
Total optimization steps = 30625
0%| | 0/30625 [00:00<?, ?it/s]/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:65: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
19%|█▉ | 5906/30625 [55:18<4:09:46, 1.65it/s]Traceback (most recent call last):
File "/scratch/st-amuham01-1/ganeshjw/projects/mt5_beluga/run_seq2seq_general.py", line 879, in <module>
main()
File "/scratch/st-amuham01-1/ganeshjw/projects/mt5_beluga/run_seq2seq_general.py", line 625, in main
train_result = trainer.train(resume_from_checkpoint=None)
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/trainer.py", line 1192, in train
tr_loss += self.training_step(model, inputs)
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/trainer.py", line 1590, in training_step
loss = self.compute_loss(model, inputs)
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/trainer.py", line 1622, in compute_loss
outputs = model(**inputs)
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 167, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 177, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/_utils.py", line 429, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/models/t5/modeling_t5.py", line 1505, in forward
return_dict=return_dict,
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/models/t5/modeling_t5.py", line 959, in forward
output_attentions=output_attentions,
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/models/t5/modeling_t5.py", line 638, in forward
output_attentions=output_attentions,
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/models/t5/modeling_t5.py", line 545, in forward
output_attentions=output_attentions,
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/models/t5/modeling_t5.py", line 502, in forward
attn_weights, p=self.dropout, training=self.training
File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/functional.py", line 1076, in dropout
return _VF.dropout_(input, p, training) if inplace else _VF.dropout(input, p, training)
RuntimeError: CUDA out of memory. Tried to allocate 96.00 MiB (GPU 0; 31.75 GiB total capacity; 29.60 GiB already allocated; 2.00 MiB free; 30.11 GiB reserved in total by PyTorch)
19%|█▉ | 5906/30625 [55:20<3:51:35, 1.78it/s]
Any help would be highly appreciated. Thanks.
| 04-23-2021 03:55:54 | 04-23-2021 03:55:54 | Hi
I also have observed the same issue with t5-base model and mt5-small model <|||||>Could you maybe make use of the `group_by_length` training argument? This will put the largest batch first to make sure OOM are detected in the very beginning (best feature ever by @sgugger )<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,387 | closed | Implement Fast Tokenization for Deberta | # What does this PR do?
Fixes #10498
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik | 04-23-2021 03:40:41 | 04-23-2021 03:40:41 | @LysandreJik , most of it was easy to figure out by looking at other tokenizers. The setup and testing guidelines were very easy to follow, I was up and running very quickly.
For the fast tokenizers, a few things that might help someone like me who is new to the transformers library:
1. A top-level difference between the fast and slow tokenizers. At first, I did not know there was a tokenization library and took some time to figure that out.
2. Overview of how we implement tokenizers. Things like what do the vocab files do and what does the merges_file do. (Although this could just be me.)
3. In some fast tokenizers, there are files listed which are not being used. That confused me initially as I thought we needed those files for a fast tokenizer.
Hope this helps.
<|||||>@LysandreJik,
#10498 mentioned implementing a tokenizer for deberta v2 as well. I have created a new feature request #11529 for that.<|||||>Thank you @ShubhamSanghvi, this is all very helpful. We'll take care of including that in the documentation so that it's clearer from now on.
Will take a look at #11529! |
transformers | 11,386 | closed | [Seq2seq] Add Support for TensorFlow | # What does this PR do?
Adds seq2seq support for TensorFlow and its corresponding summarization training and evaluation script.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger, @patrickvonplaten, @LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-22-2021 21:25:00 | 04-22-2021 21:25:00 | Hi! I'm the TF maintainer for 🤗 Transformers right now. Thanks for this, the code quality looks really good! There's one issue, though - we're currently trying to move away from `TFTrainer` and use more native, idiomatic TF code based on Keras. You can see an example of the kind of TFTrainer-free approach we're working on [here](https://github.com/huggingface/transformers/blob/master/examples/tensorflow/text-classification/run_text_classification.py).
As a result, we probably can't accept `Seq2SeqTFTrainer` in the main library right now, but we're definitely planning on adding a Seq2Seq TF example soon. If you'd like, you can try to convert this PR to a TFTrainer-free Seq2Seq example script and put it in /examples/tensorflow, but I understand if that's a lot of work and you don't want to bother right now! <|||||>Hi @Rocketknight1! Thanks for your feedback, I understand. This actually emerged as a side product from a project that I'm currently working on so I thought I'd share this. But it's good to know the direction you're heading. If you're planning to add some seq2seq examples for Keras in the next few weeks then it's fine to close this PR I guess. Otherwise I will probably rewrite this code in order to be aligned with the Huggingface library and its overall direction. Do you have any date on your mind for the seq2seq TF examples?<|||||>Hi, sorry for the delay! We are indeed planning to include those models, hopefully in a month or so. I don't have an exact date, but it's on my To Do list.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,385 | closed | [docs]Incorrect way of input encoding for "multiple choice" models in documentation? | In the documentation about "xxForMultipleChoice" models like BERT, ALBERT, RoBERTa, the examples goes like [this](https://huggingface.co/transformers/model_doc/bert.html#bertformultiplechoice):
```
>>> from transformers import BertTokenizer, BertForMultipleChoice
>>> import torch
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
>>> model = BertForMultipleChoice.from_pretrained('bert-base-uncased')
>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
>>> encoding = tokenizer([[prompt, prompt], [choice0, choice1]], return_tensors='pt', padding=True)
>>> outputs = model(**{k: v.unsqueeze(0) for k,v in encoding.items()}, labels=labels) # batch size is 1
...
```
In the current version (4.5.1), the `encoding` actually consists of 2 sentences: `[prompt + prompt]` and `[choice0 + choice1]`, which to my knowledge is incorrect, as each encoded sentence should include one prompt and one choice. I think the `encoding` is supposed to be like:
```
tokenizer([[prompt, choice0], [prompt, choice1]], return_tensors='pt', padding=True)
```
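Decoding the resulting `input_ids` makes the difference easy to check:
```python
pair_encoding = tokenizer([[prompt, choice0], [prompt, choice1]], padding=True)
for ids in pair_encoding["input_ids"]:
    print(tokenizer.decode(ids))  # each row should read: [CLS] prompt [SEP] choice [SEP]
```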
So, is there anything wrong? | 04-22-2021 20:48:41 | 04-22-2021 20:48:41 | @Riroaki I agree with you! It should be:
```python
tokenizer([[prompt, choice0], [prompt, choice1]], return_tensors='pt', padding=True)
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@sgugger @SBrandeis
Please have a look at the example scripts.🙏<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I'm running into this issue as well.
I'm not super familiar working with multiple choice models, but I think that given #6074, and the [run_swag.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/multiple-choice/run_swag.py) example, these should be passed as two lists, instead of one.
In other words, in the example, instead of
```python
encoding = tokenizer([[prompt, prompt], [choice0, choice1]], return_tensors='pt', padding=True)
```
it should be
```python
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors='pt', padding=True)
```
It would be awesome to make this two-character-deletion change, as it just tripped me up when starting working on a multiple choice model!<|||||>This has been fixed, but you need to switch to the master documentation to see the change. |
transformers | 11,384 | closed | some issue in loading local txt file as Dataset for run_mlm.py | 
First of all, I tried to load 3 .txt files as a dataset (I am sure that the directory and permissions are OK), but I get the error below.
> FileNotFoundError: [Errno 2] No such file or directory: 'c'
By removing one of the training .txt files it's fixed, and it is also OK if I pass all the files as training files.


After this, my question is: how can I use this Dataset with run_mlm.py for from-scratch pretraining?
Using `--train_file path_to_train_file` I can only use one .txt, .csv, or .json file. I tried to set my defined Dataset as `--dataset_name` but the issue below occurs.
> Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 336, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py", line 291, in cached_path
use_auth_token=download_config.use_auth_token,
File "/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/dataset/dataset.py
> During handling of the above exception, another exception occurred:
> Traceback (most recent call last):
File "run_mlm.py", line 486, in <module>
main()
File "run_mlm.py", line 242, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir)
File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 719, in load_dataset
use_auth_token=use_auth_token,
File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 347, in prepare_module
combined_path, github_file_path
FileNotFoundError: Couldn't find file locally at dataset/dataset.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.6.0/datasets/dataset/dataset.py.
The file is also not present on the master branch on github.
| 04-22-2021 19:34:54 | 04-22-2021 19:34:54 | _I tried to load 3 .txt files as a dataset_ - bad idea, try to merge them in one, then let's see what you'll get (which error message or result).<|||||>Try this one:
```
from pathlib import Path
paths = [str(x) for x in Path(".").glob("**/*.txt")]
```
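and then, as a rough sketch (using the generic `text` loading script from `datasets`), build one dataset from all of them:
```python
from datasets import load_dataset

dataset = load_dataset("text", data_files={"train": paths})
```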
P.S. Next time just copy-paste code, don't screen it<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,383 | closed | Fixed trainer total_flos reloading in distributed mode | # What does this PR do?
There was a bug on the `total_flos` quantity when loading/reloading trainer states in distributed mode: when reloading a training, every process started from the total amount of floating-point operations. The next time they were aggregated, this caused the sum of the operations of all processes to be inflated. This PR fixes this behaviour by only storing a `current_flos` variable per process that comes back to zero every time it is logged, and keeping the total amount separate in the trainer state.
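Roughly, the per-process bookkeeping becomes (a paraphrased sketch, not the actual diff):
```python
import torch
import torch.distributed as dist

def fold_current_flos_into_state(state, current_flos: float) -> float:
    """Sum this process's FLOs across workers, add them to the trainer state, then reset."""
    flos = torch.tensor(float(current_flos))
    if dist.is_available() and dist.is_initialized():
        dist.all_reduce(flos)  # sum over all processes
    state.total_flos += flos.item()
    return 0.0  # new value for the per-process current_flos counter
```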
# Who to tag?
@sgugger | 04-22-2021 15:50:06 | 04-22-2021 15:50:06 | |
transformers | 11,382 | closed | Fix Trainer with remove_unused_columns=False | # What does this PR do?
A bug was introduced by mistake in #11343 when `remove_unused_columns=False`. This PR fixes that.
Fixes #11381 | 04-22-2021 14:43:08 | 04-22-2021 14:43:08 | |
transformers | 11,381 | closed | Trainer._remove_unused_columns() returns None | ## Environment info
- `transformers` version: 4.6.0.dev0
- Platform: Linux-4.15.0-134-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1 (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger, @LysandreJik
## Information
`Trainer._remove_unused_columns()` returns None in case `args.remove_unused_columns` is `False`, instead of returning the given dataset.
Related to #11343.
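For illustration, a paraphrased sketch of the expected early exit (not the actual Trainer source):
```python
def _remove_unused_columns(self, dataset, description=None):
    if not self.args.remove_unused_columns:
        return dataset  # currently this branch ends up returning None instead
    ...  # otherwise: drop the columns that the model's forward() does not accept
    return dataset
```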
Model I am using (Bert, XLNet ...): BERT
The problem arises when using:
* [x] the official example scripts: (give details below) run_mlm/glue/...
* [ ] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Set `TrainingArguments.remove_unused_columns=False`
2. Train/eval/test your model using `Trainer`
3. The dataset would be None, and so the following exception would be raised:
```
Traceback (most recent call last):
...
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/guyrosin/src/transformers/src/transformers/trainer.py", line 814, in num_examples
self.num_examples(train_dataloader) if train_dataset_is_sized else total_train_batch_size * args.max_steps
File "/home/guyrosin/src/transformers/src/transformers/trainer.py", line 814, in num_examples
self.num_examples(train_dataloader) if train_dataset_is_sized else total_train_batch_size * args.max_steps
File "/home/guyrosin/src/transformers/src/transformers/trainer.py", line 814, in num_examples
return len(dataloader.dataset)
TypeError: object of type 'NoneType' has no len()
```
## Expected behavior
`Trainer._remove_unused_columns()` should always return a dataset. | 04-22-2021 14:35:49 | 04-22-2021 14:35:49 | Thank you so much for flagging! This should be fixed by the PR above. |
transformers | 11,380 | closed | [Examples] Fixes inconsistency around eval vs val and predict vs test | # What does this PR do?
Fixes #10165
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stas00 @sgugger
| 04-22-2021 12:24:06 | 04-22-2021 12:24:06 | Hi @sgugger and @stas00,
I have made changes in the following way,
| Earlier | Now |
| ---- | ---- |
| test | predict |
| test_examples | predict_examples |
| test_dataset | predict_dataset |
| max_test_samples | max_predict_samples |
| val | eval |
| val_examples | eval_examples |
| val_dataset | eval_dataset |
| max_val_samples | max_eval_samples |
* I have also made changes in the trainer code for the above variables
* I have modified the template file accordingly
* `examples/pytorch/question-answering/run_qa_no_trainer.py` doesn't have complete code for the predict stage. Should we add it?
<|||||>Hi @sgugger,
I was trying to test [run_qa_no_trainer.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa_no_trainer.py) and I came to know that when I passed `--test_file` it was giving me an error since we don't have such [argument](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa_no_trainer.py#L87) in the file.
When i give `--do_predict` it gives an error at this [line](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa_no_trainer.py#L507) since I was using squad data and it doesn't have `test` set
But as you said it will work perfectly for the dataset with `testset`.<|||||>Ah! now I get it. We can add support for a `--test_file` in another PR, yes!<|||||>I am exploring other files, I will add more changes in docs and other files<|||||>You're doing a very painful task, @bhadreshpsavani, as @sgugger commented we unfortunately can't normalize these 2 names fully both because of the backcompat and also where in some cases something is called validation/test split :( Thank you for doing this important work and bearing with all these setbacks.<|||||>Hi @stas00,
It's totally fine.
I was expecting more suggestions because I did a lot of changes for all files in a single go. I am enjoying this coding work that's important for me!<|||||>Hi @stas00,
Can we use this command?
`git push --force-with-lease origin myfeature`
I read that this is safe than `--force` and [many organizations even using this.](https://stackoverflow.com/questions/41283955/github-keeps-saying-this-branch-is-x-commits-ahead-y-commits-behind)<|||||>In general all PRs are isolated until they are merged, so if you make a mistake on your own PR, in the worst case you will make a mess of your own changes, but it won't impact the master. So feel free to experiment.
**edit:** this is for sure for when you don't have write access to upstream master, or if you work in your own fork. I'm not sure if it's the same when one does have a write access and is working directly on the source. I find it much safer to always do all the work in my own fork and merge upstream via PRs.
I have never used this particular flag before, so if it's safer go for it.<|||||>`run_tests_torch` CI job has been flakey as of recent, You can always force a CI restart if you see one or more CI jobs are failing unrelated to your commit with an empty commit:
```
git commit --allow-empty -m "Trigger CI"
git push
```<|||||>This is a cool command. I will definitely need this. I will note this down!
Thanks<|||||>Hi @sgugger and @stas00 ,
I have reverted trainer changes and updated the example pytorch readme.
Please let me know if we need to make more changes <|||||>@sgugger, should these not be covered too?
```
$ grep -Ir max_val_samples
examples/tensorflow/text-classification/run_text_classification.py: max_val_samples: Optional[int] = field(
examples/tensorflow/text-classification/run_text_classification.py: if data_args.max_val_samples is not None:
examples/tensorflow/text-classification/run_text_classification.py: eval_dataset = eval_dataset.select(range(data_args.max_val_samples))
examples/research_projects/wav2vec2/run_common_voice.py: max_val_samples: Optional[int] = field(
examples/research_projects/wav2vec2/run_common_voice.py: if data_args.max_val_samples is not None:
examples/research_projects/wav2vec2/run_common_voice.py: eval_dataset = eval_dataset.select(range(data_args.max_val_samples))
examples/research_projects/wav2vec2/run_common_voice.py: max_val_samples = data_args.max_val_samples if data_args.max_val_samples is not None else len(eval_dataset)
examples/research_projects/wav2vec2/run_common_voice.py: metrics["eval_samples"] = min(max_val_samples, len(eval_dataset))
tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py: max_val_samples: Optional[int] = field(
tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py: if data_args.max_val_samples is not None:
tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py: eval_dataset = eval_dataset.select(range(data_args.max_val_samples))
tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py: max_val_samples = data_args.max_val_samples if data_args.max_val_samples is not None else len(eval_dataset)
tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py: metrics["eval_samples"] = min(max_val_samples, len(eval_dataset))
```
```
$ grep -Ir max_test_samples
examples/tensorflow/text-classification/run_text_classification.py: max_test_samples: Optional[int] = field(
examples/tensorflow/text-classification/run_text_classification.py: if data_args.max_test_samples is not None:
examples/tensorflow/text-classification/run_text_classification.py: test_dataset = test_dataset.select(range(data_args.max_test_samples))
tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py: max_test_samples: Optional[int] = field(
tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py: if data_args.max_test_samples is not None:
tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py: test_dataset = test_dataset.select(range(data_args.max_test_samples))
```<|||||>This PR should be rebased on master and deal with `examples/tensorflow/text-classification/run_text_classification.py` that was added recently yes.
`examples/research_projects/wav2vec2/run_common_voice.py` is a research-project so is not actively maintained and pinned to a version of transformers, I would leave it out of this PR.
For `tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py`, I wouldn't touch it since it's a test script (which will ultimately be replaced by a TF example) but I'll let @philschmid decide on this one.<|||||>> For `tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py`, I wouldn't touch it since it's a test script (which will ultimately be replaced by a TF example) but I'll let @philschmid decide on this one.
Is a custom version for `SageMakerTrainer`, this will be removed after the deprecation of `SageMakerTrainer`, there is no need for adjustments. <|||||>Thanks for adjusting! I think this is good to merge, @stas00 if you agree I'll let you click on the button :-) |
transformers | 11,379 | closed | Correctly cast num_train_epochs to int | The num_train_epochs arg in `training_args.py` is actually a float, so we cast it to int before it goes to Keras.
| 04-22-2021 12:21:56 | 04-22-2021 12:21:56 | |
transformers | 11,378 | closed | Remove max length beam scorer | # What does this PR do?
Fixes #11040
Modifies as per the comments from PR #11122
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patil-suraj ; @patrickvonplaten ; @Narsil
Note: I have modified code in `./tests/` where `max_length` was used with `beam_scorer`.
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-22-2021 11:51:54 | 04-22-2021 11:51:54 | Awesome job @GeetDsa - this looks good to me :-)
Also pinging @patil-suraj and @Narsil for review.<|||||>@Narsil - do you think we need a test here or should this be fine without?<|||||>Hi @patrickvonplaten , Most of the errors are caused because of "max_length" attribute used while creating `beam_scorer` object in the test cases. These errors are arising as the beam_scorer attribute is removed from the source code. Infact, I had modified the files under `tests/*` to take account for this.(This can be found in my commits. Sorry that I did not had different commits for the actual source code and the tests). So, may be can you review the files that I have modified under `tests/*` as well?<|||||>I think we shouldn´t modify the tests if possible (thats what make backward compatiblity enforced).
Instead, we should probably :
- add some warning for users that used the raw components/functions
- point the towards a better solution
- modify the tests that DO raise warnings to make sure we do raise them (and also swallow them in the logs) (they become backward compatibility tests their names should probably reflect that too) (We can remove some backward tests if they are very redundant btw)
- Have new tests that point show new API.
- Higher level tests (within models) should not be affected.
@GeetDsa I can do this for you if you want.<|||||>> I think we shouldn´t modify the tests if possible (thats what make backward compatiblity enforced).
>
> Instead, we should probably :
>
> * add some warning for users that used the raw components/functions
> * point the towards a better solution
> * modify the tests that DO raise warnings to make sure we do raise them (and also swallow them in the logs) (they become backward compatibility tests their names should probably reflect that too) (We can remove some backward tests if they are very redundant btw)
> * Have new tests that point show new API.
> * Higher level tests (within models) should not be affected.
> @GeetDsa I can do this for you if you want.
Hi @Narsil, I think it would be better if you can do it, as I don't really know how to deal with tests.<|||||>> > I think we shouldn´t modify the tests if possible (thats what make backward compatiblity enforced).
> > Instead, we should probably :
> >
> > * add some warning for users that used the raw components/functions
> > * point the towards a better solution
> > * modify the tests that DO raise warnings to make sure we do raise them (and also swallow them in the logs) (they become backward compatibility tests their names should probably reflect that too) (We can remove some backward tests if they are very redundant btw)
> > * Have new tests that point show new API.
> > * Higher level tests (within models) should not be affected.
> > @GeetDsa I can do this for you if you want.
Hi @Narsil, I think it would be better if you can do it, as I don't really know how to deal with tests.
<|||||>I'll take care of it<|||||>@Narsil - I think we don't need to add a test here. By design the PR prevents errors such as the one mentioned in the issue - It's not possible anymore to pass `max_length` to `BeamSearchScorer` and thus a test doesn't make much sense here IMO.<|||||>Will wait for https://github.com/huggingface/transformers/pull/11442 to be merged before rebasing this one to merge<|||||>@patrickvonplaten But if users were using the BeamScorer object, that's a breaking change, isn't it ?<|||||>> @patrickvonplaten But if users were using the BeamScorer object, that's a breaking change, isn't it ?
Yes true. IMO, no functionality is lost though because:
- Previously, if one had passed `max_length` to both `def beam_search(...)` and `BeamSearchScorer(...)`, then there would have been a bug (see issue). The correct way of fixing the bug (while still allowing `BeamSearchScorer(...)` to accept `max_length`) would have been to overwrite `BeamSearchScorer's` max_length with `beam_search(...)`'s max_length. On the other hand it's never possible to **not** pass `max_length` to `def beam_search(...)` => therefore I think either way the `max_length` arg to `BeamSearchScorer` is useless (the `max_length` value of `beam_search(...)` would have been preferred in any way.
=> However, we could/should probably add `**kwargs` to `BeamSearchScorer` that throws a warning if `max_length` is passed & says that's it's deprecated. This would be cleaner overall and have no breaking changes - what do you think @Narsil ?<|||||>Yep, that's what I had in mind, just accept it, raise a warning (and ignore it, exactly as it used to if I understand correctly) <|||||>> Yep, that's what I had in mind, just accept it, raise a warning (and ignore it, exactly as it used to if I understand correctly)
Done! Could you review one last time and merge if ok for you? @Narsil <|||||>@patrickvonplaten, thank you for taking care of it. I found a small issue in the warning message. The message provided in the latest commit is ""`max_length` should be passed directly to `beam_search(...)`, `beam_sample(...)`"", shouldn't you include "generate(..)" as well, as sometime, `generate(...)` inherently calls `beam_search(..,)` or `group_beam_search(..,)`, or other relevant functions.<|||||>> @patrickvonplaten, thank you for taking care of it. I found a small issue in the warning message. The message provided in the latest commit is ""`max_length` should be passed directly to `beam_search(...)`, `beam_sample(...)`"", shouldn't you include "generate(..)" as well, as sometime, `generate(...)` inherently calls `beam_search(..,)` or `group_beam_search(..,)`, or other relevant functions.
Hey @GeetDsa,
It should be fine, since the warning is impossible to be triggered by `generate(...)` since when one calls `generate(...)`, one cannot pass `max_length` to `BeamScorer` |
transformers | 11,377 | open | new call for model addition | # What does this PR do?
This PR add a "call for model addition" to add [GLM](https://github.com/THUDM/GLM) model into the lib. | 04-22-2021 11:44:54 | 04-22-2021 11:44:54 | Think we still need to fill out a couple of things before publishing it :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>unstale<|||||>@patil-suraj @patrickvonplaten
I have started implementing the GLM model in HF and need some input on the implementation:
1. The original GLM model is implemented using torch distributed (model) parallelism. In the HF implementation, are we going to keep the torch parallel distributed setup or convert it to a normal architecture (without parallelism)?
2. There are six versions of the model (GLM-base, GLM-large, etc.), across which two different tokenizations are used: WordPiece and BPE. For example, GLM-base and GLM-large use WordPiece, while 'GLE-Roberta' and 'GLE-large' use BPE.
|
transformers | 11,376 | closed | Wav2vec2: comparison to original implementation | # 🚀 Reproducibility challenge
During the fine-tuning week I realized some of the smaller details in the implementation are a bit different than the original fairseq implementation. @patrickvonplaten
Original Code: https://github.com/pytorch/fairseq/blob/master/fairseq/models/wav2vec/wav2vec2.py
🤗 Code: https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py
For example in `compute_mask_indices` function comparing scripts:
- There are naming differences, not that important as long as it is *documented* somewhere(?) for people transitioning to 🤗 & trying to replicate with the same hyper-parameters.
- `mask_time_prob` = `mask_prob` (in fairseq)
- `mask_time_length` = `mask_length` (in fairseq)
- `mask_feature_prob` = `mask_channel_prob` (in fairseq)
- `mask_feature_length` = `mask_channel_length` (in fairseq)
Also with the naming of different dropouts.
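(On the 🤗 side these end up as plain config fields. A quick sketch, with example values only:)
```python
from transformers import Wav2Vec2Config

config = Wav2Vec2Config(
    mask_time_prob=0.065,    # fairseq: mask_prob
    mask_time_length=10,     # fairseq: mask_length
    mask_feature_prob=0.0,   # fairseq: mask_channel_prob
    mask_feature_length=10,  # fairseq: mask_channel_length
)
```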
- But there also seem to be some unspecified parameters:
`no_overlap`,`min_space`.
https://github.com/pytorch/fairseq/blob/05b86005bcca0155319fa9b81abfd69f63c06906/fairseq/models/wav2vec/wav2vec2.py#L348
Not sure what effect these have; I didn't look deeply, but wanted to report it just in case.
## Motivation
I realized this while digging deeper because there were minor differences in results while fine-tuning (it might also be a fairseq problem: https://github.com/pytorch/fairseq/issues/1448).
## Your contribution
opening this issue 😛
| 04-22-2021 11:19:28 | 04-22-2021 11:19:28 | Hey @cceyda - thanks for the issue!
Yes, this was an intentional choice since those parameters seemed to be the same in all experiments after studying the paper. If people want to change those params, we could always add them in a later version.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,375 | closed | Output probability from `model.generate` for TF models | # 🚀 Feature request
Since PyTorch models already have the option to output probabilities when using `model.generate(...)`, I wanted to ask if there's any chance this will also be implemented for TensorFlow models?
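For context, this is roughly what the PyTorch side already offers and what the request asks to mirror in TF (a sketch; the exact shapes depend on the decoding strategy):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello, my dog", return_tensors="pt")
out = model.generate(**inputs, max_length=20, return_dict_in_generate=True, output_scores=True)

# out.scores holds one (batch, vocab_size) logits tensor per generated step;
# a softmax turns each of them into token probabilities.
probs = [torch.softmax(step_scores, dim=-1) for step_scores in out.scores]
print(len(probs), probs[0].shape)
```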
| 04-22-2021 10:58:55 | 04-22-2021 10:58:55 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,374 | closed | [Flax] Correct typo | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Wrong name for dropout was used.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-22-2021 10:38:36 | 04-22-2021 10:38:36 | |
transformers | 11,373 | closed | Add space | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes a typo
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-22-2021 09:08:29 | 04-22-2021 09:08:29 | |
transformers | 11,372 | closed | [run_translation.py] fix typo | line 380: `forced` is missing the letter "r": model.config.foced_bos_token_id = forced_bos_token_id --> model.config.forced_bos_token_id = forced_bos_token_id
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-22-2021 08:10:10 | 04-22-2021 08:10:10 | |
transformers | 11,371 | closed | [examples] UserWarning: `max_length` is deprecated | Not sure how many example scripts are affected by this:
```
src/transformers/generation_utils.py:963: UserWarning: `max_length` is deprecated in this function, use
`stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.
```
getting this with at least `examples/pytorch/translation/run_translation.py`
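For reference, the replacement the warning points to looks roughly like this when calling `beam_search(...)` or `beam_sample(...)` directly (a sketch only, not something these example scripts do themselves):

```python
from transformers import MaxLengthCriteria, StoppingCriteriaList

# instead of `model.beam_search(..., max_length=128)`, build the criterion explicitly
# and pass it as `stopping_criteria=...`; the rest of the beam-search setup is unchanged
stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=128)])
print(stopping_criteria)
```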
| 04-22-2021 04:28:45 | 04-22-2021 04:28:45 | This one also having same warning `examples/pytorch/summarization/run_summarization.py `<|||||>Would you like to tackle this one, @bhadreshpsavani, next? Absolutely no need to say yes ;)<|||||>Sure, I will take this issue,
I don't have any extra issue with me apart from this!<|||||>Hi @stas00,
I am not very sure if this is expected behavior or something that needs to be fixed.
It's coming from this code block:
https://github.com/huggingface/transformers/blob/4e7bf94e7280d2b725ac4644dbe9808560afa5d8/src/transformers/generation_utils.py#L962-L966
Generally, a warning is for a good purpose, right?
We are not passing any `max_length`, but this warning still appears, so I think I need to fix that part of the code somewhere<|||||>The warning is coming because of the default value below,
https://github.com/huggingface/transformers/blob/4e7bf94e7280d2b725ac4644dbe9808560afa5d8/examples/pytorch/translation/run_translation.py#L139-L144
https://github.com/huggingface/transformers/blob/4e7bf94e7280d2b725ac4644dbe9808560afa5d8/examples/pytorch/summarization/run_summarization.py#L150-L155
We are passing it to the trainer, and it uses the generation utils in the code below:
https://github.com/huggingface/transformers/blob/4e7bf94e7280d2b725ac4644dbe9808560afa5d8/examples/legacy/seq2seq/seq2seq_trainer.py#L220-L224<|||||>Thank you for this investigation, @bhadreshpsavani - that's very helpful. It looks like the change was introduced just a day before I filed this Issue.
@Narsil, could we please check with you on this deprecation you introduced in https://github.com/huggingface/transformers/commit/aad95c7cdebc24e780e5a5cf39d832c015e40075
1. Unless I'm missing something the deprecation doesn't seem to be complete since `max_length` is actively used in the same function:
https://github.com/huggingface/transformers/blob/4e7bf94e7280d2b725ac4644dbe9808560afa5d8/src/transformers/generation_utils.py#L919
I am not sure how a deprecated variable is still used normally in the logic...
Also it's not documented as deprecated:
```
max_length (:obj:`int`, `optional`, defaults to 20):
The maximum length of the sequence to be generated.
```
2. it now generates warnings in the example scripts like `run_translation.py` and `run_summarization.py` - so if there is a new way could we please atomically adjust all the places that are now impacted by this change? The examples ideally should be in sync with API changes, since their purpose is to correctly demonstrate how to use the library.
The main entry point leading to this warning in seq2seq examples is:
https://github.com/huggingface/transformers/blob/4e7bf94e7280d2b725ac4644dbe9808560afa5d8/src/transformers/trainer_seq2seq.py#L161-L171
Thank you!
<|||||>Hi @stas00 ,
Yes, it's my fault: one deprecation too many on the `generate` function!
Just submitted a new PR to remove this extra warning (which is incorrect).
`max_length` is used by `generate` but not by the subsequent functions.
|
transformers | 11,370 | closed | ERRORS: run_mlm_performer.py | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: tfs4.5.1
- Platform: ubuntu18.04
- Python version: 3.8
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?): 2.3.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
I tried to use run_mlm_performer.py:
`TOKENIZERS_PARALLELISM=false python run_mlm_performer.py --model_name_or_path ../cache_model/bert-base-chinese/ --tokenizer_name ../cache_model/bert-base-chinese/ --train_file ../data/wikicorpus_zh_one_article_per_line-jianti.txt --do_train --fp16 --output_dir ./test-mlm --max_seq_length 512 --per_device_train_batch_size 256 --reinitialize --overwrite_output_dir True --preprocessing_num_workers 8`
And
raise AttributeError(
jax._src.traceback_util.FilteredStackTrace: AttributeError: "FlaxBertSelfAttention" object has no attribute "dropout_rate"
You will find that FlaxBertSelfAttention does not define dropout_rate... | 04-22-2021 03:57:27 | 04-22-2021 03:57:27 | I tried to use self.config.attention_probs_dropout_prob instead of config.dropout_rate. It seems to work.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,369 | closed | Fix typo | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes typo in `/src/transformers/generation_utils.py`, change `defaults tp 1.0` to `defaults to 1.0`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-22-2021 02:44:50 | 04-22-2021 02:44:50 | |
transformers | 11,368 | open | Megatron fused CUDA kernels to improve Hugging Face model classes' scalability | # 🚀 Feature request
Support for custom fused CUDA kernels with HF model classes.
## Motivation
It appears that Hugging Face model classes do not scale very well as-is unlike Megatron-LM, even when the latter is configured with a degree of model-parallelization = 1 for a "fair" performance comparison.
One of the presumed reasons for this is that Megatron-LM leverages custom fused CUDA kernels written by NVIDIA, specifically [these](https://github.com/NVIDIA/Megatron-LM/blob/aed2f75e209e525c842aec7c044af7acae2a4614/megatron/model/transformer.py#L26L27).
Could we get variants of existing HF classes (perhaps for `GPT2Model`, `GPT2LMHeadModel`, etc.) such that the variants leverage some/all of these fused CUDA kernels? All this while still ensuring that one can load the original pre-trained weights into these variant classes.
Any guidance/low-level thoughts towards making this happen would also be greatly useful!
@thomwolf @patrickvonplaten @LysandreJik @stas00 | 04-22-2021 01:03:49 | 04-22-2021 01:03:49 | I think the biggest barrier to using custom CUDA kernel is that it'd require `transformers` to move from a python-only package, to a compilation-required type of package (even if JIT), which in my experience is the type of a package that is far from trivial to use and often raises a barrier to entry.
If I'm not mistaken some fused kernels have been pushed upstream into the pytorch-core, so if you know of any that we could receive precompiled via pytorch, then we can definitely use those.
And if they aren't and you have some resources to initiate the conversation, it'd definitely help to request that such kernels be added to pytorch-core. Definitely tag me if you do start such a thread in the PyTorch issues.
-----
I love your spirit of proposing various performance optimizations, @g-karthik and I'd love to work on all of those you have been proposing here and at Deepspeed issues, but so far I find no free resources to do so and all my time is spent on making things work. |
transformers | 11,367 | closed | Replace double occurrences as the last step | This PR fixes an issue with the SPM converters (ALBERT and XLNet) where it would replace some characters by whitespace - after removing double whitespace occurrences. This meant that if double whitespace were to appear thanks to this replacement, they would be kept until the end of encoding, leading to a mismatch between SentencePiece and `tokenizers`.
Fixes https://github.com/huggingface/transformers/issues/11358 | 04-21-2021 22:03:27 | 04-21-2021 22:03:27 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,366 | closed | RuntimeError: CUDA error: device-side assert triggered | ```
if torch.cuda.is_available():
dev = "cuda:0"
else:
dev = "cpu"
device = torch.device(dev)
bert = BertForSequenceClassification.from_pretrained(args.model_name_or_path)
bert = bert.to(device)
```
This raises RuntimeError: CUDA error: device-side assert triggered
nvidia-smi:

| 04-21-2021 20:29:20 | 04-21-2021 20:29:20 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,365 | closed | Index out of range in self with fine-tuned DPR Context Encoder | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.27
- Python version: 3.8.6
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: Have tried with and without
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
## Information
Model I am using: `DPR Question and Context Encoders`
Getting Index out of range in self error for embeddings when trying to apply a locally fine-tuned version for each of the DPR encoders. Previous issues point to a difference in `vocab lengths` or tokens but nothing has been changed there. The `model max length` also is consistent at 512.
Code:
```
from transformers import DPRContextEncoderTokenizerFast, DPRContextEncoder
ctx_tok = DPRContextEncoderTokenizerFast.from_pretrained('/data/riddler/checkpoints/adapted_dpr/ctx-encoder-2021-04-19-checkpoint-11000')
model = DPRContextEncoder.from_pretrained('/data/riddler/checkpoints/adapted_dpr/ctx-encoder-2021-04-19-checkpoint-11000')
input_ids = ctx_tok("Hello, is my dog cute ?", return_tensors='pt')["input_ids"]
embeddings = model(input_ids).pooler_output
```
Error:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-91-24fb6846809e> in <module>
----> 1 embeddings = model(input_ids['input_ids'])
/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/venv/lib/python3.8/site-packages/transformers/models/dpr/modeling_dpr.py in forward(self, input_ids, attention_mask, token_type_ids, inputs_embeds, output_attentions, output_hidden_states, return_dict)
573 return_dict=return_dict,
574 )
--> 575
576 if not return_dict:
577 return outputs[1:]
/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/venv/lib/python3.8/site-packages/transformers/models/dpr/modeling_dpr.py in forward(self, input_ids, attention_mask, token_type_ids, inputs_embeds, output_attentions, output_hidden_states, return_dict)
170 return_dict: bool = False,
171 ) -> Union[BaseModelOutputWithPooling, Tuple[Tensor, ...]]:
--> 172 outputs = self.bert_model(
173 input_ids=input_ids,
174 attention_mask=attention_mask,
/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
962 encoder_hidden_states=encoder_hidden_states,
963 encoder_attention_mask=encoder_extended_attention_mask,
--> 964 past_key_values=past_key_values,
965 use_cache=use_cache,
966 output_attentions=output_attentions,
/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds, past_key_values_length)
204 if self.position_embedding_type == "absolute":
205 position_embeddings = self.position_embeddings(position_ids)
--> 206 embeddings += position_embeddings
207 embeddings = self.LayerNorm(embeddings)
208 embeddings = self.dropout(embeddings)
/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/venv/lib/python3.8/site-packages/torch/nn/modules/sparse.py in forward(self, input)
122
123 def forward(self, input: Tensor) -> Tensor:
--> 124 return F.embedding(
125 input, self.weight, self.padding_idx, self.max_norm,
126 self.norm_type, self.scale_grad_by_freq, self.sparse)
/venv/lib/python3.8/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1812 # remove once script supports set_grad_enabled
1813 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1814 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1815
1816
IndexError: index out of range in self
```
| 04-21-2021 19:48:00 | 04-21-2021 19:48:00 | @LysandreJik do you have insight on what could be causing this error? Thanks!<|||||>Hi! I'm trying to reproduce but as I don't have your checkpoint this is proving complicated. Could you provide a reproducible example/colab so I can take a look?
Also, it seems you've shared some of the stack trace but not the entire stack trace. It would be helpful to see the full error to see where the issue originates from.<|||||>@LysandreJik thanks for your response! I just updated the error to include the entire stack trace. Hope that is helpful.
I'm not sure how to create a reproducible example since the error is based on the fine-tuned checkpoint. I can send what the config file looks like for that model if that is helpful<|||||>Here is my code for creating the BiEncoder model object from the separate context and question encoders in order to fine-tune DPR as well as the code for saving the checkpoints. Maybe that will help. `self.model` refers to the biencoder.
```
# DPR BiEncoder Model Class
class BiEncoder(torch.nn.Module):
def __init__(self, query_model, ctx_model):
super(BiEncoder, self).__init__()
self.query_model = query_model
self.ctx_model = ctx_model
def forward(self, query_ids, query_attn_mask, ctx_ids, ctx_attn_mask):
#query_embed = self.question_model(query_ids).pooler_output
query_embed = self.query_model(query_ids, attention_mask=query_attn_mask)[0]
#ctx_embed = self.ctx_model(ctx_ids).pooler_output
ctx_embed = self.ctx_model(ctx_ids, attention_mask=ctx_attn_mask)[0]
return query_embed, ctx_embed
# Load Model
def get_model(query_encoder_path, ctx_encoder_path):
# Question Encoder
query_encoder = DPRQuestionEncoder.from_pretrained(query_encoder_path)
# Context Encoder
ctx_encoder = DPRContextEncoder.from_pretrained(ctx_encoder_path)
# Initialize Dual Encoder
biencoder = BiEncoder(query_encoder, ctx_encoder)
return biencoder
# Get Optimizer
def get_optimizer(self):
optimizer_grouped_parameters = [{'params': [p for n,p in self.model.named_parameters()],
'params': [p for n,p in self.model.named_parameters()]}]
return AdamW(optimizer_grouped_parameters, lr=self.lr)
# Save Checkpoint
query_encoder = self.model.query_model
ctx_encoder = self.model.ctx_model
query_model_path = os.path.join(self.cp_subdir, f'query-encoder-{self.exp_date}-checkpoint-{self.global_step}')
ctx_model_path = os.path.join(self.cp_subdir, f'ctx-encoder-{self.exp_date}-checkpoint-{self.global_step}')
query_encoder.save_pretrained(query_model_path) #save encoders
ctx_encoder.save_pretrained(ctx_model_path)
```<|||||>Is there a way for you to upload your checkpoints on the hub so that I can take a look and try to reproduce locally? I'm curious to see if the configuration has a too small `max_position_embeddings` leading to the overflow.<|||||>The `max_position_embeddings` in the config file are 512, which matches the DPR default. The vocab size also matches the default.<|||||>I encountered the same issue in Bertweet:
https://colab.research.google.com/drive/1cEtC98hIfB-2I-Tcxsp_OEau4rNmjXJw?usp=sharing<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,364 | closed | [Flax] Big FlaxBert Refactor | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR does a major refactor of FlaxBert in Transformers, notably:
- Custom LayerNorm and Embedding layers are replaced by the official ones. This should significantly reduce maintenance cost at the expense of two more general lines in the conversion script.
- A couple of bugs are fixed, *e.g.*, BERT uses the *non-approximated* GELU and not the fast one -> this fixes some minor differences when comparing PyTorchBERT vs FlaxBERT
- Weight Tying is added, which should be done for, *e.g.* `FlaxBertForMaskedLM`
- Weights can now also be converted the other way around Flax => PyTorch
Sorry for putting quite a lot of things into one PR, but they are very much intertwined here.
Also, I will have to re-upload some flax weights so that they correspond to the new weight structure (see [here](https://github.com/huggingface/transformers/pull/10977))
@avital @marcvanzee, I had an issue when saving/loading flax weights for which I opened an issue [here](https://github.com/google/flax/issues/1261). At the moment, I solve it by manually transforming every numpy array into a jax DeviceArray, see [here](https://github.com/huggingface/transformers/pull/11364/files#r617853008) - not sure if there is a better solution.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-21-2021 17:43:09 | 04-21-2021 17:43:09 | |
transformers | 11,363 | closed | torch_xla/csrc/tensor_methods.cpp:880 : Check failed: xla::ShapeUtil::Compatible(shapes.back(), tensor_shape) | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: TPU
- Using distributed or parallel set-up in script?:
### Who can help
@patrickvonplaten
## Information
I am using BigBirdForSequenceClassification and BigBirdTokenizer for a simple text classification problem on Google Colab TPU:
The problem arises when using:
* [ ] my own modified scripts: (Script shared) If I use the BigBirdForSequenceClassification model, I start getting weird errors on TPU.
```
from pathlib import Path
def read_imdb_split(split_dir):
split_dir = Path(split_dir)
texts = []
labels = []
for label_dir in ["pos", "neg"]:
for text_file in (split_dir/label_dir).iterdir():
texts.append(text_file.read_text())
labels.append(0 if label_dir is "neg" else 1)
return texts, labels
train_texts, train_labels = read_imdb_split('aclImdb/train')
test_texts, test_labels = read_imdb_split('aclImdb/test')
train_texts, train_labels = read_imdb_split('aclImdb/train')
test_texts, test_labels = read_imdb_split('aclImdb/test')
from sklearn.model_selection import train_test_split
train_texts, val_texts, train_labels, val_labels = train_test_split(train_texts, train_labels, test_size=.2)
from transformers import BigBirdTokenizer
tokenizer = BigBirdTokenizer.from_pretrained('google/bigbird-roberta-base')
train_encodings = tokenizer(train_texts, truncation=True, padding=True)
val_encodings = tokenizer(val_texts, truncation=True, padding=True)
test_encodings = tokenizer(test_texts, truncation=True, padding=True)
import torch
class IMDbDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.tensor(self.labels[idx])
return item
def __len__(self):
return len(self.labels)
train_dataset = IMDbDataset(train_encodings, train_labels)
val_dataset = IMDbDataset(val_encodings, val_labels)
test_dataset = IMDbDataset(test_encodings, test_labels)
from transformers import BigBirdForSequenceClassification, Trainer, TrainingArguments
import torch_xla.distributed.xla_multiprocessing as xmp
import torch_xla.core.xla_model as xm
def main():
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1, # total number of training epochs
per_device_train_batch_size=1, # batch size per device during training
per_device_eval_batch_size=1, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=10,
)
model = BigBirdForSequenceClassification.from_pretrained('google/bigbird-roberta-base')
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=val_dataset # evaluation dataset
)
trainer.train()
def _mp_fn(index):
main()
xmp.spawn(_mp_fn, args=(), nprocs=1, start_method='fork')
```
The tasks I am working on is:
* [ ] my own task or dataset: Using the IMDB Dataset for Text Classification
## To reproduce
Steps to reproduce the behavior:
1. Setup TPU-client on google Colab: !pip install cloud-tpu-client https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8-cp37-cp37m-linux_x86_64.whl
2. Download the dataset:
a. !wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
b. !tar -xf aclImdb_v1.tar.gz
3. Execute the given script
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
RuntimeError Traceback (most recent call last)
<ipython-input-14-38fb8a22e1a3> in <module>()
----> 1 xmp.spawn(_mp_fn, args=(), nprocs=1, start_method='fork')
7 frames
/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py in spawn(fn, args, nprocs, join, daemon, start_method)
384 pf_cfg = _pre_fork_setup(nprocs)
385 if pf_cfg.num_devices == 1:
--> 386 _start_fn(0, pf_cfg, fn, args)
387 else:
388 return torch.multiprocessing.start_processes(
/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py in _start_fn(index, pf_cfg, fn, args)
321 # environment must be fully setup before doing so.
322 _setup_replication()
--> 323 fn(gindex, *args)
324
325
<ipython-input-12-0ed5b032dbf1> in _mp_fn(index)
32
33 def _mp_fn(index):
---> 34 main()
<ipython-input-12-0ed5b032dbf1> in main()
29 )
30
---> 31 trainer.train()
32
33 def _mp_fn(index):
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs)
1099 self.control = self.callback_handler.on_epoch_begin(self.args, self.state, self.control)
1100
-> 1101 for step, inputs in enumerate(epoch_iterator):
1102
1103 # Skip past any already trained steps if resuming training
/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/parallel_loader.py in __next__(self)
32
33 def __next__(self):
---> 34 return self.next()
35
36 def __len__(self):
/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/parallel_loader.py in next(self)
44 if self._mark_step_batch_count <= self._batches_yielded:
45 self._batches_yielded = 0
---> 46 xm.mark_step()
47 else:
48 self._batches_yielded += 1
/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py in mark_step()
716 torch_xla._XLAC._xla_step_marker(
717 torch_xla._XLAC._xla_get_default_device(), [],
--> 718 wait=xu.getenv_as('XLA_SYNC_WAIT', bool, False))
719 # Only emit metrics from the first local device index, to avoid emitting the
720 # same values from different threads.
RuntimeError: Error while lowering: s64[1,2368]{1,0} aten::copysign, pad=(0, 19, 0, 0), value=0
Error: /pytorch/xla/torch_xla/csrc/helpers.h:100 : Check failed: scalar_value.isIntegral()
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
torch_xla::XlaHelpers::ScalarValue(c10::Scalar, xla::PrimitiveType, xla::XlaBuilder*)
torch_xla::ir::ops::ConstantPadNd::Lower(torch_xla::ir::LoweringContext*) const
torch_xla::ir::LoweringContext::LowerNode(torch_xla::ir::Node const*)
torch_xla::ir::LoweringContext::LoweringContext(std::string const&, torch_xla::Device, absl::lts_2020_02_25::Span<torch_xla::ir::Node const* const>, std::unordered_map<torch_xla::ir::Node const*, torch_xla::ir::Util::EmitStatus, std::hash<torch_xla::ir::Node const*>, std::equal_to<torch_xla::ir::Node const*>, std::allocator<std::pair<torch_xla::ir::Node const* const, torch_xla::ir::Util::EmitStatus> > >)
torch_xla::XLATensor::Compile(std::vector<torch_xla::XLATensor, std::allocator<torch_xla::XLATensor> > const&, absl::lts_2020_02_25::Span<std::string const>, torch_xla::XLATensor::SyncTensorCollection const&, torch_xla::XLATensor::PostOrderData*)
torch_xla::XLATensor::SyncTensorsGraphInternal(std::vector<torch_xla::XLATensor, std::allocator<torch_xla::XLATensor> >*, absl::lts_2020_02_25::Span<std::string const>, torch_xla::XLATensor::SyncTensorsConfig const&)
torch_xla::XLATensor::SyncTensorsGraph(std::vector<torch_xla::XLATensor, std::allocator<torch_xla::XLATensor> >*, absl::lts_2020_02_25::Span<std::string const>, bool, bool)
torch_xla::XLATensor::SyncLiveTensorsGraph(torch_xla::Device const*, absl::lts_2020_02_25::Span<std::string const>, bool)
_PyMethodDef_RawFastCallKeywords
_PyCFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyObject_FastCall_Prepend
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
PyEval_EvalCode
_PyMethodDef_RawFastCallKeywords
_PyCFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyObject_Call_Prepend
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallDict
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallDict
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyObject_Call_Prepend
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyObject_Call_Prepend
_PyObject_FastCallKeywords
_PyMethodDef_RawFastCallDict
PyCFunction_Call
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
PyEval_EvalCode
_PyMethodDef_RawFastCallKeywords
_PyCFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalCodeWithName
_PyFunction_FastCallDict
_Py_UnixMain
__libc_start_main
_start
*** End stack trace ***
Scalar type not supported
Python Frames:
```
Similarly, I once got the following error:
```
RuntimeError: torch_xla/csrc/tensor_methods.cpp:880 : Check failed: xla::ShapeUtil::Compatible(shapes.back(), tensor_shape)
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Model training should have started but instead got the error.
| 04-21-2021 17:15:10 | 04-21-2021 17:15:10 | We didn't check yet whether BigBird works on TPU. We should put it on the roadmap (cc @vasudevgupta7) .<|||||>It should be interesting to check which operation (in bigbird) is causing problem on TPU :)<|||||>@vasudevgupta7 @patrickvonplaten Thanks. Please let us know if there was any update. Thanks!<|||||>Hi @mabdullah1994, sorry I missed your comment. I was checking bigbird on colab-tpu. I found that bigbird is working on TPU when we are not passing `attention_mask` (& only passing input_ids) into `model.forward()`. I will try to have a deeper look at it & will try to fix it some time soon.
Checkout this [notebook](https://colab.research.google.com/github/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_tpu.ipynb) with TPU runtime.<|||||>Hi @vasudevgupta7 . Thanks for the update. Please let us know when this is fixed. Need this kind of urgently. Thanks!<|||||>@patrickvonplaten @vasudevgupta7 Any expected time frame, where we might expect it to work with `trainer` on TPUs? I am having the exact same problem, reproduction on synthetic dataset here on [Colab](https://colab.research.google.com/drive/1I6DR07ppQBTYBLatvGjBj70xsgVCOd4z?usp=sharing).
Or can we only use your script for the time being? |
transformers | 11,362 | closed | Training a TimeSFormer for video classification | My input data are feature maps instead of raw images, and they have the form (4, 50, 1, 1, 256):
mini_batch=4 / frames=50 / channels=1 / H=1 / W= 256
The parameters of the TimeSformer are :
TimeSformer(
dim = 128,
image_size = 256,
patch_size = 16,
num_frames = 50,
num_classes = 2,
depth = 12,
heads = 8,
dim_head = 32,
attn_dropout = 0.,
ff_dropout = 0.
)
In order to check if my network is working, I have tried to make it overfit by using only 6 training data and 2 validation data of the same shape as before (4,50,1,1,256).
But the accuracy I'm getting oscillates and never reaches a value above 80%, and my training loss is not decreasing; it stays around 0.6900 - 0.6950.
My training function and parameters are:



I would appreciate any suggestion.
thank you
| 04-21-2021 16:29:25 | 04-21-2021 16:29:25 | Hello! Is this a `transformers` issue? Where does `transformers` come into play?<|||||>>
>
> Hello! Is this a `transformers` issue? Where does `transformers` come into play?
Hi !! Actually, here I am showing only the issues I'm getting when training the TimeSFormer model,
you can find more information about the model in this link: https://github.com/lucidrains/TimeSformer-pytorch <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,361 | closed | Move old TF text classification script to legacy | 04-21-2021 16:13:28 | 04-21-2021 16:13:28 | ||
transformers | 11,360 | closed | Merge new TF example script | New branch for merging the new example because I'm scared of rebasing after that big of a change! | 04-21-2021 15:43:09 | 04-21-2021 15:43:09 | |
transformers | 11,359 | closed | [testing doc] bring doc up to date | Following up to https://github.com/huggingface/transformers/pull/11350 edited `testing.rst` to update outdated information.
@sgugger | 04-21-2021 15:38:09 | 04-21-2021 15:38:09 | |
transformers | 11,358 | closed | Different results between `AlbertTokenizer` and `AlbertTokenizerFast` modules with a new `spiece.model` file | Hello!
I would like to ask your opinion about a tokenizer behavior. In a project, I have to train a new tokenizer to re-pretrain an Albert model. I don't know if I did something wrong (and if I did, I'd love to know!) but for the moment a text is not tokenized in the same way with `AlbertTokenizer` and `AlbertTokenizerFast`.
Thanks a lot for your time in advance :smile:
## To reproduce
Steps to reproduce the behavior:
1. Training a tokenizer with [sentencepiece library](https://github.com/google/sentencepiece). The resulting tokenizer is saved under the name `spiece.model`. I can share it if needed.
2. Assuming that only the `spiece.model` file is in the root, run the following blocks of code:
```python
import os
import sentencepiece as spm
from transformers import AlbertTokenizer, AlbertTokenizerFast

tokenizer_dir_path = "."
text = "a\n b"
```
Cell:
```python
albert_tokenizer = AlbertTokenizer.from_pretrained(tokenizer_dir_path)
print("ids", albert_tokenizer.encode(text))
print("ids -> ids_token",albert_tokenizer.convert_ids_to_tokens(albert_tokenizer.encode(text)))
```
Output:
```bash
ids [2, 1842, 5132, 3]
ids -> ids_token ['[CLS]', '▁a', '▁b', '[SEP]']
```
Cell:
```python
albert_tokenizer_fast = AlbertTokenizerFast.from_pretrained(tokenizer_dir_path)
print("ids", albert_tokenizer_fast.encode(text))
print("ids -> ids_token",albert_tokenizer_fast.convert_ids_to_tokens(albert_tokenizer_fast.encode(text)))
```
Output:
```bash
ids [2, 1127, 266, 3157, 3]
ids -> ids_token ['[CLS]', '▁a', '▁', '▁b', '[SEP]']
```
Cell:
```python
sp = spm.SentencePieceProcessor(model_file=os.path.join(tokenizer_dir_path, "spiece.model"))
print("ids", sp.encode(text))
print("ids -> ids_token", sp.id_to_piece(sp.encode(text)))
```
Output:
```bash
ids [1127, 3157]
ids -> ids_token ['▁a', '▁b']
```
Other variations:
I also tried to instantiate the tokenizer like this `AlbertTokenizerFast(vocab_file=os.path.join(tokenizer_dir_path, "spiece.model"))`.
## Expected behavior
I expected to have the same result with the modules: `AlbertTokenizer` and `AlbertTokenizerFast`. In particular, I did not expect "\n" to be tokenized by "_" in the case of `AlbertTokenizerFast`.
## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Albert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
| 04-21-2021 14:23:49 | 04-21-2021 14:23:49 | Hey @SaulLu, thanks a lot for the detailed issue. I managed to reproduce the issue by using another tokenizer on the hub, `codegram/calbert-tiny-uncased`, which has the same issue.
@n1t0 helped me identify the issue, and we have a fix in #11367. Would you mind trying it out and let me know if it fixes your issue? The correct behavior is that of the slow tokenizer, there's an excess space in the fast tokenizer encoding.
You can either checkout the branch and install from source - or you can install the following in your env:
```
pip install -U git+https://github.com/huggingface/transformers@fix-albert-converter
```<|||||>Hey @LysandreJik !
Thank you very much for your detailed answer! I tested the fix provided on #11367. The tokenization is now the same with the previous example `text="a\n b"` which became `['[CLS]', '▁a', '▁b', '[SEP]']` ! :+1:
Unfortunately, I have a similar inconsistency that was not resolved with the following example:
```
text="\n"
```
Cell:
```python
albert_tokenizer = AlbertTokenizer.from_pretrained(tokenizer_dir_path)
print("ids -> ids_token",albert_tokenizer.convert_ids_to_tokens(albert_tokenizer.encode(text)))
```
Output:
```bash
ids -> ids_token ['[CLS]', '[SEP]']
```
Cell:
```python
albert_tokenizer_fast = AlbertTokenizerFast.from_pretrained(tokenizer_dir_path)
print("ids -> ids_token",albert_tokenizer_fast.convert_ids_to_tokens(albert_tokenizer_fast.encode(text)))
```
Output:
```Bash
ids -> ids_token ['[CLS]', '▁', '[SEP]']
```
Cell:
```python
sp = spm.SentencePieceProcessor(model_file=os.path.join(tokenizer_dir_path, "spiece.model"))
print("ids -> ids_token", sp.id_to_piece(sp.encode(text)))
```
Output:
```bash
ids -> ids_token []
```<|||||>If it can help, I started to compare the tokenizer.json files between the one contained in `albert-base-v2` and the one obtained by running the command :
```
albert_tokenizer_fast = AlbertTokenizerFast.from_pretrained("SaulLu/albert-bn-dev")
albert_tokenizer_fast.save_pretrained("./albert-bn-dev-fast")
```
An extract of `albert-base-v2` `tokenizer.json`'s file is:
```
{
"version": "1.0",
"truncation": null,
"padding": null,
"added_tokens": [
{
"id": 0,
"special": true,
"content": "<pad>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false
},
{
"id": 1,
"special": true,
"content": "<unk>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false
},
{
"id": 2,
"special": true,
"content": "[CLS]",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false
},
{
"id": 3,
"special": true,
"content": "[SEP]",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false
},
{
"id": 4,
"special": true,
"content": "[MASK]",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false
}
],
"normalizer": {
"type": "Sequence",
"normalizers": [
{ "type": "Replace", "pattern": { "String": "``" }, "content": "\"" },
{ "type": "Replace", "pattern": { "String": "''" }, "content": "\"" },
{ "type": "NFKD" },
{ "type": "StripAccents" },
{ "type": "Lowercase" },
{
"type": "Precompiled",
"precompiled_charsmap": "..."
}
]
},
"pre_tokenizer": {
"type": "Sequence",
"pretokenizers": [
{ "type": "WhitespaceSplit" },
{ "type": "Metaspace", "replacement": "▁", "str_rep": "▁", "add_prefix_space": true }
]
},
"post_processor": {
"type": "TemplateProcessing",
"single": [
{ "SpecialToken": { "id": "[CLS]", "type_id": 0 } },
{ "Sequence": { "id": "A", "type_id": 0 } },
{ "SpecialToken": { "id": "[SEP]", "type_id": 0 } }
],
"pair": [
{ "SpecialToken": { "id": "[CLS]", "type_id": 0 } },
{ "Sequence": { "id": "A", "type_id": 0 } },
{ "SpecialToken": { "id": "[SEP]", "type_id": 0 } },
{ "Sequence": { "id": "B", "type_id": 1 } },
{ "SpecialToken": { "id": "[SEP]", "type_id": 1 } }
],
"special_tokens": {
"[SEP]": { "id": "[SEP]", "ids": [3], "tokens": ["[SEP]"] },
"[CLS]": { "id": "[CLS]", "ids": [2], "tokens": ["[CLS]"] }
}
},
"decoder": {
"type": "Metaspace",
"replacement": "▁",
"str_rep": "▁",
"add_prefix_space": true
},
"model": {
"unk_id": 1,
"vocab": [...]
}
}
```
An extract of `albert-bn-dev-fast` `tokenizer.json`'s file is:
```
{
"version": "1.0",
"truncation": null,
"padding": null,
"added_tokens": [
{
"id": 0,
"special": true,
"content": "<pad>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false
},
{
"id": 1,
"special": true,
"content": "<unk>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false
},
{
"id": 2,
"special": true,
"content": "[CLS]",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false
},
{
"id": 3,
"special": true,
"content": "[SEP]",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": false
},
{
"id": 4,
"special": true,
"content": "[MASK]",
"single_word": false,
"lstrip": true,
"rstrip": false,
"normalized": true
}
],
"normalizer": {
"type": "Sequence",
"normalizers": [
{ "type": "Replace", "pattern": { "String": "``" }, "content": "\"" },
{ "type": "Replace", "pattern": { "String": "''" }, "content": "\"" },
{ "type": "NFKD" },
{ "type": "StripAccents" },
{ "type": "Lowercase" },
{
"type": "Precompiled",
"precompiled_charsmap": "..."},
{ "type": "Replace", "pattern": { "Regex": " {2,}" }, "content": " " }
]
},
"pre_tokenizer":{"type":"Metaspace","replacement":"▁","add_prefix_space":true},
},
"post_processor": {
"type": "TemplateProcessing",
"single": [
{ "SpecialToken": { "id": "[CLS]", "type_id": 0 } },
{ "Sequence": { "id": "A", "type_id": 0 } },
{ "SpecialToken": { "id": "[SEP]", "type_id": 0 } }
],
"pair": [
{ "SpecialToken": { "id": "[CLS]", "type_id": 0 } },
{ "Sequence": { "id": "A", "type_id": 0 } },
{ "SpecialToken": { "id": "[SEP]", "type_id": 0 } },
{ "Sequence": { "id": "B", "type_id": 1 } },
{ "SpecialToken": { "id": "[SEP]", "type_id": 1 } }
],
"special_tokens": {
"[CLS]": { "id": "[CLS]", "ids": [2], "tokens": ["[CLS]"] },
"[SEP]": { "id": "[SEP]", "ids": [3], "tokens": ["[SEP]"] }
}
},
"decoder": { "type": "Metaspace", "replacement": "▁", "add_prefix_space": true },
"model": {
"type": "Unigram",
"unk_id": 1,
"vocab": [...]
}
}
```
By replacing the content of the `pre_tokenizer` key with `{
"type": "Sequence",
"pretokenizers": [
{ "type": "WhitespaceSplit" },
{ "type": "Metaspace", "replacement": "▁", "str_rep": "▁", "add_prefix_space": true }
]
}` in the `albert-bn-dev-fast/tokenizer.json` file, the fast tokenizer returns the same results as the slow one on the two examples discussed. :smiley:
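The same change can also be applied programmatically through the `tokenizers` API instead of editing `tokenizer.json` by hand. A minimal sketch, assuming the fast tokenizer loaded above:
```python
from tokenizers import pre_tokenizers
from transformers import AlbertTokenizerFast

tokenizer = AlbertTokenizerFast.from_pretrained("SaulLu/albert-bn-dev")

# Mirror the slow tokenizer: split on whitespace first, then apply Metaspace.
tokenizer.backend_tokenizer.pre_tokenizer = pre_tokenizers.Sequence(
    [
        pre_tokenizers.WhitespaceSplit(),
        pre_tokenizers.Metaspace(replacement="▁", add_prefix_space=True),
    ]
)

print(tokenizer.convert_ids_to_tokens(tokenizer.encode("\n")))  # expected: ['[CLS]', '[SEP]']
```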
<|||||>Hi @SaulLu, sorry for getting back so late on this.
I think you've stumbled upon another difference between the slow and fast tokenizers which would need to be patched within `tokenizers` directly.
Is it of great importance to your task? While it is an issue, I would argue it is quite low priority as such a use-case seems rare and the issue doesn't seem to have a huge impact. Please let me know if you think this should be bumped up. |
transformers | 11,357 | closed | possible mistake in documentation | Looking at the description of the "decoder_input_ids" parameter in the "forward" method of BartForConditionalGeneration/T5ForConditionalGeneration, I see the following:
BartForConditionalGeneration:
decoder_input_ids - ... For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the !!INPUT_IDS!! to the right for denoising pretraining following the paper.
T5ForConditionalGeneration:
decoder_input_ids - ... To know more on how to prepare decoder_input_ids for pretraining take a look at T5 Training. If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_input_ids takes the value of !!INPUT_IDS!!.
Looks like there should be LABELS instead of INPUT_IDS.
Thanks,
@patrickvonplaten, @patil-suraj
| 04-21-2021 13:20:53 | 04-21-2021 13:20:53 | Hey @shyrma,
you are right I think - do you mind opening a PR to fix it? :-) <|||||>I'm not sure how to make a fix in an appropriate way, since both classes (for example) BartModel and BartForConditionalGeneration
use the same doc string BART_INPUTS_DOCSTRING `@add_start_docstrings_to_model_forward(BART_INPUTS_DOCSTRING)`.
BART_INPUTS_DOCSTRING contains mistake in respect to BartForConditionalGeneration only.<|||||>Hi @shyrma
There were few other mistakes in the docs for almost all seq-2-seq models. I took care of it! Thanks a lot for pointing this out.<|||||>Hi guys
Mistake is still present in documentation (forward method):
https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration
https://huggingface.co/transformers/model_doc/t5.html#t5forconditionalgeneration
<|||||>HI @shyrma
for BART and mBART, this is actually correct. BART can be used for seq classification for these tasks it just uses the `input_ids` as `decoder_input_ids`. So the doc-string is right when it says `input_ids`.
Also, for T5 you are looking at the stable version doc, the changes are on master right now, and will be reflected in stable in next release. https://huggingface.co/transformers/master/model_doc/t5.html#t5forconditionalgeneration <|||||>I consider only BartForConditionalGeneration and T5ForConditionalGeneration.
> Also, for T5 you are looking at the stable version doc, the changes are on master right now, and will be reflected in stable in next release
Great! And what about BartForConditionalGeneration?<|||||>as I said above BART can use `input_ids` to create `decoder_input_ids` when `labels` are not present. So the docstring for BART is correct.<|||||>Currently one can find following explanation of the parameter "decoder_input_ids" of BartForConditionalGeneration forward method:
`decoder_input_ids - ... If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pretraining following the paper.`
Do I understand correctly that you argue this is correct explanation ?<|||||>Yes, that's the correct explanation and that's true for tasks like sequence classification and question answering as well, for these tasks BART uses the same `input_ids` as `decoder_input_ids`.<|||||>Hmm, I'm not sure. And corresponding code tells that it is not true:
```
if labels is not None:
if decoder_input_ids is None:
decoder_input_ids = shift_tokens_right(
labels, self.config.pad_token_id, self.config.decoder_start_token_id
)
```<|||||>That's correct, that code prepares `decoder_input_ids` from `lables`, when `labels` are not `None, but when they are `None`, `input_ids` are used.
https://github.com/huggingface/transformers/blob/8d43c71a1ca3ad322cc45008eb66a5611f1e017e/src/transformers/models/bart/modeling_bart.py#L1147-L1152<|||||>I'm sorry I meant this piece of code (dealing with BartForConditionalGeneration, not with BartModel)
https://github.com/huggingface/transformers/blob/8d43c71a1ca3ad322cc45008eb66a5611f1e017e/src/transformers/models/bart/modeling_bart.py#L1283-L1287
And it looks like the explanation in the docs should be the following:
`decoder_input_ids - ... If no decoder_input_ids is provided, the model will create this tensor by shifting the labels to the right for denoising pretraining following the paper.`
that is, replace "input_ids" with "labels"
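For context, here is a minimal sketch of how `labels` are turned into `decoder_input_ids` by that helper (the token ids below are made up for illustration):
```python
import torch
from transformers.models.bart.modeling_bart import shift_tokens_right

pad_token_id = 1
decoder_start_token_id = 2

labels = torch.tensor([[45, 73, 99, pad_token_id]])  # hypothetical target ids

decoder_input_ids = shift_tokens_right(labels, pad_token_id, decoder_start_token_id)
# decoder_input_ids begins with decoder_start_token_id, followed by the labels shifted right by one
print(decoder_input_ids)
```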
transformers | 11,356 | closed | Why https://github.com/huggingface/transformers/tree/master/examples/pplm | 
| 04-21-2021 11:09:14 | 04-21-2021 11:09:14 | The link doesn't work. I can't see "https://github.com/huggingface/transformers/tree/master/examples/pplm".<|||||>Did something go wrong? A lot of the scripts are gone: there used to be examples for clm and plm in transformers/examples/language-modeling, but now there's only run_mlm_noflax.py<|||||>I see, haha. Thank you!<|||||>Apparently they are moving stuff around. So what was originally `transformers/examples/language-modeling` has become `transformers/examples/pytorch/language-modeling` now. So maybe you can look around to find out if they've moved what you're looking for into some other folder.<|||||>I got it. Thank you very much!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,355 | closed | Fix token_type_ids error for big_bird model. | I ran the following code but got strange output: the `token_type_ids` is shorter than the `input_ids`:
```python
def demo():
model_name = "./resources/bigbird-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
text = 'With power of science ,'
max_length = 10
encoded_tokens = tokenizer.encode_plus(
text=text,
add_special_tokens=True,
max_length=max_length,
truncation=True if max_length is not None else False,
return_tensors=None,
return_offsets_mapping=tokenizer.is_fast,
return_attention_mask=False,
return_token_type_ids=True,
return_special_tokens_mask=True,
)
print(encoded_tokens)
```

It seems the tokenizer is missing the `create_token_type_ids_from_sequences` method, so it falls back to the default implementation [here](https://github.com/huggingface/transformers/blob/95dab34d5588fb155dfed8293ac2fbb1217a95a7/src/transformers/tokenization_utils_base.py#L2665). So I copied the method from BERT to fix it.
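For reference, the BERT version being copied over looks roughly like this (standard two-segment scheme, with a single sequence mapped entirely to 0):
```python
def create_token_type_ids_from_sequences(self, token_ids_0, token_ids_1=None):
    sep = [self.sep_token_id]
    cls = [self.cls_token_id]
    if token_ids_1 is None:
        return len(cls + token_ids_0 + sep) * [0]
    return len(cls + token_ids_0 + sep) * [0] + len(token_ids_1 + sep) * [1]
```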
I know maybe the big_bird model doesn't need `token_type_ids`, but we also have to make sure it return the right result if we set `return_token_type_ids = True`. | 04-21-2021 11:07:59 | 04-21-2021 11:07:59 | @sgugger <|||||>I think the BigBird QA model actually uses 16 token type ids (https://huggingface.co/google/bigbird-base-trivia-itc/blob/main/config.json), but this is an exception I think and the default case is 2 token type ids => so this looks good to me.
@vasudevgupta7 what do you think?<|||||>Yes, both base & large pre-trained checkpoints accept 2 token type ids while only trivia-qa checkpoint accepts 16 token type ids. So, this should be good. |
transformers | 11,354 | closed | Question-answering pipeline failing with Nonetype exception when selecting spans with tokens outside of the context | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: '4.6.0.dev0'
- Platform: Linux Mint 20
- Python version: 3.7.10
- PyTorch version (GPU?): GPU
- Tensorflow version (GPU?): NA
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): camembert (specifically [etalab-ia/camembert-base-squadFR-fquad-piaf](https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf))
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task:
* [x] my own task or dataset: Question Answering with own SQuAD-like dataset
## To reproduce
When using a `question-answering` pipeline, if the context is too small (or if the model can't find multiple candidates), the produced scores will be zero and thus when sorting and filtering for `topk > 1`, we may return random indices of zero score values which correspond to tokens that **are not** in the context, but in the question. This sorting and index returning happens [here](https://github.com/huggingface/transformers/blob/95dab34d5588fb155dfed8293ac2fbb1217a95a7/src/transformers/pipelines/question_answering.py#L406).
Asking for an index that does not exist in the context returns a `None` down the line (in function `enc.word_to_chars()` [here](https://github.com/huggingface/transformers/blob/95dab34d5588fb155dfed8293ac2fbb1217a95a7/src/transformers/pipelines/question_answering.py#L376)). This bug may be related to this issue https://github.com/huggingface/transformers/issues/9843.
This chain of events finally produces this exception:
```
Traceback (most recent call last):
File "/home/pavel/.config/JetBrains/PyCharmCE2021.1/scratches/bug_transf.py", line 25, in <module>
print(nlp({'question': questions[0], 'context': text}, topk=20, handle_impossible_answer=True, max_seq_len=256, doc_stride=128))
File "/home/pavel/miniconda3/envs/piaf-ml/lib/python3.7/site-packages/transformers/pipelines.py", line 1968, in __call__
for s, e, score in zip(starts, ends, scores)
File "/home/pavel/miniconda3/envs/piaf-ml/lib/python3.7/site-packages/transformers/pipelines.py", line 1968, in <listcomp>
for s, e, score in zip(starts, ends, scores)
TypeError: 'NoneType' object cannot be interpreted as an integer
```
## Full Context
We are building a Retriever (ES with bm25) + Reader (QA with the above mentioned model) search engine with the haystack library. In this setting, we test with different lengths for the contexts where the QA model will find the answer. We are also testing for different values of `topk`.
As an example, if I have a 1001-word context and I set the max length to 1000, I will split the document into two sub-documents, one with the first 1000 words and the other with the last word. Thus my second sub-document will be very small. These types of small documents will be passed to the transformers QA pipeline, which will usually generate the above exception when `topk` is greater than one.
Steps to reproduce the behavior:
```python
from transformers import pipeline
nlp = pipeline('question-answering', model='etalab-ia/camembert-base-squadFR-fquad-piaf', tokenizer='etalab-ia/camembert-base-squadFR-fquad-piaf')
question = "Comment bénéficier du billet de congé annuel de la SNCF à tarif réduit ?"
context = "perle"
result = nlp({'question': question, 'context': context}, topk=20, handle_impossible_answer=True, max_seq_len=256, doc_stride=128)
print(result)
```
## Proposed Solution
Given that in `self.decode` we return the indices of the context tokens to create the answers, we could re-filter them to make sure that we will use context-tokens indices to generate the spans later on. Just like this (replacing this [line](https://github.com/huggingface/transformers/blob/95dab34d5588fb155dfed8293ac2fbb1217a95a7/src/transformers/pipelines/question_answering.py#L344)):
```python
starts, ends, scores = self.decode(start_, end_, kwargs["topk"], kwargs["max_answer_len"])
desired_spans = np.in1d(starts, undesired_tokens.nonzero()) & np.in1d(ends, undesired_tokens.nonzero())
starts = starts[desired_spans]
ends = ends[desired_spans]
scores = scores[desired_spans]
```
I have a [branch](https://github.com/psorianom/transformers/blob/e96afad34bc872b4fc9318d45a551e0c33f3de8c/src/transformers/pipelines/question_answering.py#L346) here ready to be PRequested if you agree with this solution.
## Expected behavior
I would like to get an answer with valid spans, even if there are fewer of them than the requested `topk` parameter.
| 04-21-2021 09:42:19 | 04-21-2021 09:42:19 | Hi, thank you for opening such a detailed issue. I know @Narsil has some experience with the question answering pipeline and has probably been confronted with that issue in the past.
Nicolas, what do you think of the fix proposed above?
Either way we should work on a clearer error message.<|||||>The proposed fix seems reasonable.
A few things:
- `decode` is already supposed to do the proper filtering, so we should probably move the logic into that function and document the change in its docstring.
IMO it makes sense to return fewer than `topk` results if there are not enough options available within `context`.
- Also, we should probably add a test for this use case (the low-hanging fruit is adding the exact excerpt as a `slow` test).
- The `numpy` doc recommends using `isin` instead of `in1d` for new code: https://numpy.org/doc/stable/reference/generated/numpy.in1d.html<|||||>Hi @LysandreJik and @Narsil, thank you for your quick answers.
I will modify my solution as suggested. I will have to change my solution's logic because in `decode` we are not aware of the `undesired_tokens`. Maybe I could just add it as parameter or maybe some masking over the zero-valued scores would be preferable ?
I will also look into the tests.
<|||||>I think adding it as an argument is fine. Maybe let's make it optional to keep backward compatibility though, @LysandreJik ?<|||||>Sounds good! <|||||>Great! I am working on this. I will make a PR as soon as I can. |
transformers | 11,353 | closed | T5 Gradient Checkpointing | # What does this PR do?
Partially fixes #6564. This is inspired by @xFinal's workaround. However, instead of modifying PyTorch's implementation of gradient checkpointing, I only modify `T5Block` here and replace the None value with a dummy tensor that requires a gradient.
~Gradient checkpointing is only enabled for the encoder. Some of the outputs of the decoder don't require gradients and will cause problems. PyTorch 1.8.0 has fixed this (see note 1 below).~
Additional notes:
1. `require_grad = True` for the dummy tensor is no longer required since PyTorch 1.8.0 ([From this PR](https://github.com/pytorch/pytorch/pull/45934)).
2. None as a return value is allowed since [this PyTorch PR](https://github.com/pytorch/pytorch/pull/52422). It has not been released yet (the latest release is 1.8.1 at the time of writing). We won't even need the dummy tensor after that.
3. I tested this code locally with PyTorch 1.7.1.
4. I did not write any additional test because it seems that [test_training_gradient_checkpointing](https://github.com/huggingface/transformers/blob/81009b7a5c5cb183a9275c15bf347bdc988b02c4/tests/test_modeling_common.py#L242) in `ModelTesterMixin` already covers it.
~EDIT: I forgot this PR only covers the encoder, not the decoder. The description has been updated to reflect this fact.~
EDIT: This PR covers the decoder now.
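As a rough illustration of the workaround (not the exact diff in this PR), the checkpointed call ends up looking something like this, with a gradient-requiring dummy tensor standing in for `None` outputs so that `torch.utils.checkpoint` does not complain:
```python
import torch
from torch.utils.checkpoint import checkpoint

def run_block_with_checkpointing(block, hidden_states, attention_mask):
    dummy = torch.zeros(1, requires_grad=True)  # no longer required from PyTorch 1.8.0 on

    def custom_forward(hidden_states, attention_mask):
        outputs = block(hidden_states, attention_mask=attention_mask)
        # Replace None entries (e.g. absent present_key_value_states) with the dummy tensor.
        return tuple(o if o is not None else dummy for o in outputs)

    return checkpoint(custom_forward, hidden_states, attention_mask)
```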
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). (Updated docstring for T5Config.)
- [ ] Did you write any new necessary tests? (Already covered by existing tests.)
## Who can review?
@patrickvonplaten (T5)
@sgugger (Documentation)
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-21-2021 04:49:25 | 04-21-2021 04:49:25 | |
transformers | 11,352 | closed | [deepspeed] fix resume from checkpoint | This PR fixes a bug that most likely somehow got exposed (not caused) by https://github.com/huggingface/transformers/pull/11318 - surprisingly the same test worked just fine before that other PR.
@sgugger
| 04-21-2021 03:55:55 | 04-21-2021 03:55:55 | Indeed. I removed it during merge. Thank you for the reminder, @sgugger |
transformers | 11,351 | closed | fine tuning encoder decoder for custom language translation | Hello everyone,
I would like to know if you can train a BERT2GPT model (or other models) for translation of customized languages (from scratch). I need to translate gloss signals from ASL to English.
I have already looked for tutorials on the internet, but most of them are for the task of generating text; I cannot find tutorials for translating text.
I read about EncoderDecoder and I think it's possible; I just don't know how to put together a notebook to perform training from scratch using the Hugging Face models.
Could you help me? Has anyone done something like that? | 04-21-2021 01:14:45 | 04-21-2021 01:14:45 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
You might also find [this notebook](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb) showing how to train an encoder-decoder model interesting.
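For a rough idea of the moving parts, here is a minimal sketch of wiring two pretrained checkpoints into an `EncoderDecoderModel` (the checkpoint names are placeholders; a gloss-to-English setup would plug in its own tokenizer/checkpoints and a proper training loop or `Seq2SeqTrainer`):
```python
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")

model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer("GLOSS TOKENS GO HERE", return_tensors="pt")
labels = tokenizer("English translation goes here", return_tensors="pt").input_ids

# On older versions you may need to pass decoder_input_ids (shifted labels) explicitly.
outputs = model(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, labels=labels)
outputs.loss.backward()
```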
Thanks!<|||||>Hello
Thanks for the reply.
I opened a question in the forum; I am waiting for some help.
Thank you again!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,350 | closed | Examples reorg | # What does this PR do?
As discussed internally, this PR reorganizes the `examples` folder to make clean subfolders for PyTorch and TensorFlow. This way, each example can have its own requirements including the proper backend and there is no headache to determine who will be first in the README.
It also splits the seq2seq folder in two: translation and summarization.
Finally it moves the content of `examples/test_data` to `tests/fixtures/tests_samples` which is more adapted.
In passing, it updates references to the examples that were moved. | 04-21-2021 01:13:55 | 04-21-2021 01:13:55 | `tests/deepspeed` and `tests/extended` need to be updated too following the rename. Thank you.
```
pip install fairscale deepspeed
RUN_SLOW=1 pytest tests/deepspeed tests/extended
```<|||||>Which tests are failing for you in `tests/extended`? Everything is passing on my side. For deepspeed I fixed the last path I had forgotten to update but it's impossible for me to run those tests as they all error out since deepspeed is not able to build properly on my setup. Got:
```
!! WARNING !! !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Your compiler (c++) is not compatible with the compiler Pytorch was
built with for this platform, which is g++ on linux. Please
use g++ to to compile your extension. Alternatively, you may
compile PyTorch from source using c++, and then you can also use
c++ to compile your extension.
See https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md for help
with compiling PyTorch from source.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! WARNING !!
```
and pretty much every test is a failure.<|||||>All tests now pass (including `fairscale` and `apex`) - one deepspeed test fails but it's unrelated to this PR.
If you have trouble building deepspeed at run time, please consider pre-building it: https://huggingface.co/transformers/master/main_classes/trainer.html#installation and of course report an Issue to Deepspeed if you have a few minutes to do so.
Thank you for fixing this, @sgugger <|||||>Looks like we have some really old dead references too:
```
docs/source/testing.rst:* :prefix_link:`test_seq2seq_examples_multi_gpu.py <examples/seq2seq/test_seq2seq_examples_multi_gpu.py>` - a
docs/source/testing.rst:* :prefix_link:`test_finetune_trainer.py <examples/seq2seq/test_finetune_trainer.py>` - a normal (non-PL) test
docs/source/testing.rst: CUDA_VISIBLE_DEVICES="0,1" RUN_SLOW=1 pytest -sv examples/seq2seq/test_finetune_trainer.py \
docs/source/testing.rst: examples/seq2seq/test_seq2seq_examples_multi_gpu.py
docs/source/testing.rst: data_dir = self.examples_dir / "seq2seq/test_data/wmt_en_ro"
```<|||||>Yes indeed! Those are for the very old scripts (that's not the only place we have those). I wasn't sure how to replace those so if you could point me to the script you want to use instead, I can adapt. I think the whole paragraph may need a rewrite since it has been a long time.<|||||>> Yes indeed! Those are for the very old scripts (that's not the only place we have those). I wasn't sure how to replace those so if you could point me to the script you want to use instead, I can adapt. I think the whole paragraph may need a rewrite since it has been a long time.
Fixed here https://github.com/huggingface/transformers/pull/11359 |
transformers | 11,349 | closed | [Wav2Vec2] Fix special tokens for Wav2Vec2 tokenizer | # What does this PR do?
This PR fixes https://github.com/huggingface/transformers/issues/10942. Wav2Vec2's vocabulary can
consist of multi-character tokens which should then nevertheless be treated as single atomic tokens when encoding/decoding.
=> This PR ensures such behavior and fixes the issue attached to this PR.
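A toy illustration of the intended behavior, with a made-up vocabulary written to a temporary file (outputs marked as expected rather than guaranteed):
```python
import json
import tempfile

from transformers import Wav2Vec2CTCTokenizer

vocab = {"<pad>": 0, "<s>": 1, "</s>": 2, "<unk>": 3, "|": 4, "a": 5, "b": 6, "ab": 7}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(vocab, f)
    vocab_path = f.name

tokenizer = Wav2Vec2CTCTokenizer(vocab_path)
print(tokenizer.tokenize("ab"))  # expected with this fix: ['ab'] rather than ['a', 'b']
print(tokenizer.decode([7]))     # expected: 'ab', i.e. the multi-character token stays atomic
```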
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-20-2021 21:29:55 | 04-20-2021 21:29:55 | |
transformers | 11,348 | closed | 'Tensor' object has no attribute 'size' | I am trying to implement a transformer for language classification using TensorFlow. Below is the model code I used, but it throws an error saying the tensor object has no attribute `size`. Please help. For the notebook, you can visit the link below:
https://github.com/waqarkaleemkhan/Transformer_for_Language_classification/blob/master/Transformer_for_language_classification/model.ipynb
from transformers import AutoModel
bert = AutoModel.from_pretrained('bert-base-cased')
input_ids = tf.keras.layers.Input(shape=(SEQ_LEN,), name='input_ids', dtype='int32')
mask = tf.keras.layers.Input(shape=(SEQ_LEN,), name='attention_mask', dtype='int32')
embeddings = bert(input_ids, attention_mask=mask)
X = tf.keras.layers.LSTM(64)(embeddings)
X = tf.keras.layers.BatchNormalization()(X)
X = tf.keras.layers.Dense(64, activation='relu')(X)
X = tf.keras.layers.Dropout(0.1)(X)
y = tf.keras.layers.Dense(3, activation='softmax', name='outputs')(X)
| 04-20-2021 21:05:35 | 04-20-2021 21:05:35 | Hi! You're using `AutoModel`, which is a PyTorch model, with TensorFlow instructions. Change to `TFAutoModel`!<|||||>Hi @LysandreJik, thanks for your response. When I try to import TFAutoModel from TensorFlow, it gives an error that it cannot import name 'TFAutomodel' from 'transformers' (unknown location).
my environment
python =3.7
TensorFlow= 2.0
can you please guide me further what to do<|||||>What's your `transformers` version? Can you install a more recent version of `tensorflow` and see if it fixes your issue?<|||||>I have updated my TensorFlow and transformer but still I have the same issue <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
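For reference, a TensorFlow-only version of the model definition above would look roughly like this (a sketch; `SEQ_LEN` is an assumed value and the head mirrors the original post):
```python
import tensorflow as tf
from transformers import TFAutoModel

SEQ_LEN = 128  # assumption: use whatever sequence length the data was tokenized with

bert = TFAutoModel.from_pretrained("bert-base-cased")

input_ids = tf.keras.layers.Input(shape=(SEQ_LEN,), name="input_ids", dtype="int32")
mask = tf.keras.layers.Input(shape=(SEQ_LEN,), name="attention_mask", dtype="int32")

# index [0] is the last hidden state, shaped (batch, SEQ_LEN, hidden_size)
embeddings = bert(input_ids, attention_mask=mask)[0]

X = tf.keras.layers.LSTM(64)(embeddings)
X = tf.keras.layers.BatchNormalization()(X)
X = tf.keras.layers.Dense(64, activation="relu")(X)
X = tf.keras.layers.Dropout(0.1)(X)
y = tf.keras.layers.Dense(3, activation="softmax", name="outputs")(X)

model = tf.keras.Model(inputs=[input_ids, mask], outputs=y)
```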
transformers | 11,347 | closed | Extract metric_key_prefix during NotebookProgressCallback.on_evaluate | # What does this PR do?
This PR upgrades `NotebookProgressCallback` to detect `metric_key_prefix` when `on_evaluate` is called. Useful when users override `Trainer.evaluate` and pick a non-standard prefix for train / eval / test.
Forum link where this topic was discussed: https://discuss.huggingface.co/t/logging-training-accuracy-using-trainer-class/5524?u=lewtun
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 04-20-2021 20:34:08 | 04-20-2021 20:34:08 | Mmm, looks like there is a problem in the tests though.<|||||>Yes, part of the problem is coming from `CallbackHandler.on_evaluate` which does not have a `kwargs` in the signature: https://github.com/huggingface/transformers/blob/f1b938fda81d4b9e8ab435cb7f37f71c9b7cbb1e/src/transformers/trainer_callback.py#L361
Adding `kwargs` there seems to work, so my question is whether we should also add `kwargs` to the other class functions (e.g. `on_train_begin` etc)? Since `CallbackHandler` is a subclass of `TrainerCallback`, this would preserve the function signatures in the derived class<|||||>This would be a breaking change for all users that have implemented their custom `TrainerCallback`. Maybe we can just not try to pass the `eval_prefix` and just look for anything that is `xxx_loss` in the metrics dictionary, then `xxx` is the `eval_prefix`?<|||||>Ah good point. I've followed your suggestion instead 😃 <|||||>It seems the torch tests timed out (not sure how my changes could induce that). Would it be possible to rerun the CI? |
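A minimal sketch of the `xxx_loss`-based prefix detection described above (illustrative only; the merged implementation may differ):
```python
def extract_metric_key_prefix(metrics):
    # Any key of the form "<prefix>_loss" reveals the prefix, e.g. "eval_loss" -> "eval".
    for key in metrics:
        if key.endswith("_loss"):
            return key[: -len("_loss")]
    return "eval"  # fallback assumption when no *_loss key is present

prefix = extract_metric_key_prefix({"eval_loss": 0.3, "eval_accuracy": 0.9})  # -> "eval"
```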
transformers | 11,346 | closed | [contributing doc] explain/link to good first issue | This PR expands the contributing doc to help users find `Good First Issue`/`Good Second Issue` issues.
@LysandreJik, @sgugger | 04-20-2021 19:09:35 | 04-20-2021 19:09:35 | |
transformers | 11,345 | closed | absolute embeddings in Deberta | The paper says they add absolute position embeddings at the last layer; however, the model is still using the addition of position embeddings and word embeddings.
In configuration_deberta.py
------------
position_biased_input (:obj:`bool`, `optional`, defaults to :obj:`True`):
Whether add absolute position embedding to content embedding.
default setting: position_biased_input=True
| 04-20-2021 18:58:40 | 04-20-2021 18:58:40 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> The paper says they add absolute position embeddings at the last layer; however, the model is still using the addition of position embeddings and word embeddings.
>
> ## In configuration_deberta.py
> position_biased_input (:obj:`bool`, `optional`, defaults to :obj:`True`):
> Whether add absolute position embedding to content embedding.
>
> default setting: position_biased_input=True
It's disabled by the config.json that ships with the model repository.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>The implementation of Deberta is actually not the same as the paper; the absolute embeddings should be added at the last two layers.
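For anyone who wants to check what a given checkpoint actually uses, the flag can be read off the config (a sketch; the value depends on the checkpoint's config.json):
```python
from transformers import DebertaConfig

config = DebertaConfig.from_pretrained("microsoft/deberta-base")
# False means absolute position embeddings are not added to the content embeddings at the input
print(config.position_biased_input)
```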
transformers | 11,344 | closed | [run_summarization.py] wrong dataset leads to CUDA errors | Feeding `--dataset_name cnn_dailymail` to `--model_name_or_path google/pegasus-xsum` leads to lots of errors from pytorch - perhaps there is a way to detect that the dataset is inappropriate and give a nice relevant assert instead?
You'd think that `--dataset_name cnn_dailymail` and `--dataset_name xsum` should be interchangeable...
```
python examples/seq2seq/run_summarization.py --model_name_or_path google/pegasus-xsum --do_train \
--do_eval --dataset_name cnn_dailymail --dataset_config "3.0.0" \
--output_dir /tmp/tst-summarization --per_device_train_batch_size=1 --per_device_eval_batch_size=1 \
--overwrite_output_dir --predict_with_generate
[....]
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [290,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [290,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [290,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
(crashes w/o traceback here)
```
If I run it on one gpu I get:
```
[...]
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [138,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
return forward_call(*input, **kwargs)
File "/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/models/pegasus/modeling_pegasus.py", line 763, in forward
layer_outputs = encoder_layer(
File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward
hidden_states, attn_weights, _ = self.self_attn(
File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/models/pegasus/modeling_pegasus.py", line 190, in forward
query_states = self.q_proj(hidden_states) * self.scaling
File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl
return forward_call(*input, **kwargs)
File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 94, in forward
return F.linear(input, self.weight, self.bias)
File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/functional.py", line 1860, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
```
Thanks.
@sgugger, @patil-suraj
| 04-20-2021 16:53:34 | 04-20-2021 16:53:34 | This fails too:
```
CUDA_LAUNCH_BLOCKING=1 python examples/seq2seq/run_summarization.py \
--model_name_or_path google/pegasus-xsum --do_eval --dataset_name xsum --output_dir output_dir \
--per_device_eval_batch_size=16 --predict_with_generate --max_val_samples 20
```
```
***** Running Evaluation *****
Num examples = 20
Batch size = 16
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Traceback (most recent call last):
File "examples/seq2seq/run_summarization.py", line 591, in <module>
main()
File "examples/seq2seq/run_summarization.py", line 547, in main
metrics = trainer.evaluate(
File "/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/trainer_seq2seq.py", line 75, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/trainer.py", line 1853, in evaluate
output = eval_loop(
File "/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/trainer.py", line 2005, in evaluation_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/trainer_seq2seq.py", line 167, in prediction_step
generated_tokens = self.model.generate(
File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/generation_utils.py", line 931, in generate
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
File "/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/generation_utils.py", line 413, in _prepare_encoder_decoder_kwargs_for_generation
model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/models/pegasus/modeling_pegasus.py", line 721, in forward
embed_pos = self.embed_positions(input_shape)
File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl
return forward_call(*input, **kwargs)
File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/models/pegasus/modeling_pegasus.py", line 139, in forward
return super().forward(positions)
File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 156, in forward
return F.embedding(
File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/functional.py", line 2037, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: CUDA error: device-side assert triggered
```
```
Collecting environment information...
PyTorch version: 1.9.0a0+git548765d
Is debug build: False
CUDA used to build PyTorch: 11.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Python version: 3.8 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: 11.2.152
GPU models and configuration:
GPU 0: GeForce GTX 1070 Ti
GPU 1: GeForce RTX 3090
```
<|||||>I'm not sure it's a dataset thing. I think there is something wrong inside the Pegasus model, there have been multiple issues with it not working with Trainer.<|||||>Hmm, after updating `datasets` to the latest version the cmd line in OP started to work. But it crashes in the same way if I add `--max_train_samples 20 --max_val_samples 20`.
<|||||>Hi, do you know how to use GPU when running summarization.py? I have 2 GPUs on my computer, but it didn't use them... Thank you very much!<|||||>@liubest, please kindly use https://discuss.huggingface.co/ if you run into troubles after reading [README.md](https://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/examples/pytorch/summarization/README.md), which should cover most of the questions on this example usage.<|||||>> @liubest, please kindly use https://discuss.huggingface.co/ if you run into troubles after reading [README.md](https://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/examples/pytorch/summarization/README.md), which should cover most of the questions on this example usage.
Thank you for your reply. I have one more question and it is not found in the forum. When using run_summarization.py, how to run transformer models like t5-small, facebook/bart-large-cnn without loading pre-trained weights? I only want to train their original model architecture without pre-trained model. Thank you very much!<|||||>You will find probably dozens tutorials if you use google: Please try [huggingface train model from scratch](https://www.google.com/search?channel=fs&q=huggingface+train+model+from+scratch).
Please let's not derail this issue by asking unrelated questions. If you still have a problem please start a new Issue. Thank you!<|||||>I'm also interested in solving this problem. @stas00, let me know if I should look into it<|||||>Yes, please, @patrickvonplaten - thank you!<|||||>@stas00, I checked and the problem simply seems to be that `max_source_length` is too high. It's set to 1024 by default even though Pegasus can only handle `512`. So, the following command should just run fine:
```bash
python examples/pytorch/summarization/run_summarization.py --model_name_or_path google/pegasus-xsum --do_train \
--do_eval --dataset_name cnn_dailymail --dataset_config "3.0.0" \
--output_dir /tmp/tst-summarization --per_device_train_batch_size=1 --per_device_eval_batch_size=1 \
--overwrite_output_dir --predict_with_generate --max_source_length 512
```<|||||>By the way errors like those `/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed` are in my experience very often out of index errors and it helps to run the same code on CPU which then gives a better error message<|||||>> @stas00, I checked and the problem simply seems to be that `max_source_length` is too high. It's set to 1024 by default even though Pegasus can only handle `512`. So, the following command should just run fine:
>
> ```shell
> python examples/pytorch/summarization/run_summarization.py --model_name_or_path google/pegasus-xsum --do_train \
> --do_eval --dataset_name cnn_dailymail --dataset_config "3.0.0" \
> --output_dir /tmp/tst-summarization --per_device_train_batch_size=1 --per_device_eval_batch_size=1 \
> --overwrite_output_dir --predict_with_generate --max_source_length 512
> ```
Thank you for investigating this, @patrickvonplaten - could we programmatically defend against this mismatch?<|||||>> By the way errors like those `/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed` are in my experience very often out of index errors and it helps to run the same code on CPU which then gives a better error message
Yes! so with `CUDA_VISIBLE_DEVICES=""`
we should document this at https://huggingface.co/transformers/troubleshooting.html
Also `CUDA_LAUNCH_BLOCKING=1` is another important debug technique for gpu
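For example:
```bash
# Run on CPU for a readable stack trace, or make CUDA errors synchronous:
CUDA_VISIBLE_DEVICES="" python examples/pytorch/summarization/run_summarization.py ...
CUDA_LAUNCH_BLOCKING=1 python examples/pytorch/summarization/run_summarization.py ...
```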
<|||||>@stas00 , @patrickvonplaten , Pegasus actually uses SinusoidalPositionalEmbedding, so there is no seq length limit. We should resize the embedding if cur len is greater than the default len. That's what we do in FSMT and M2M100<|||||>On the other hand Pegasus has only been trained on a max length of 512, so I'm not sure whether it's a good idea to "silently" extend the input to a length of 1024 since the model will probably produce garbage, or do you guys have had different experiences @stas00 @patil-suraj ?
Think I'd prefer to reduce max length automatically to model.config.max_position_embeddings and throw a warning<|||||>That makes sense, but even though pegaus is pre-trained with 512, they use different `max_position_embeddings` when fine-tuning
for example, for the xsum model `max_position_embeddings` is 512 https://huggingface.co/google/pegasus-xsum/blob/main/config.json#L44
and for cnn_dm, pubmed it is 1024
https://huggingface.co/google/pegasus-pubmed/blob/main/config.json#L38
https://huggingface.co/google/pegasus-pubmed/blob/main/config.json#L38
<|||||>> Think I'd prefer to reduce max length automatically to model.config.max_position_embeddings and throw a warning
This is very likely to be unnoticed.
We misuse warnings too much, they are ok when you have 5 lines of output, when you have 100s of those chances that the user will see it is close to 0. Especially when things seem to work, albeit with setting changes behind the scenes.
I feel that @patil-suraj's suggestion of granting user's wish is a better one and if they get garbage then it's loud and clear that they did something wrong. Here, a warning of asking for a longer value than preset will work, as they are likely to search for the culprit.
And in situations where we know what the user is asking for is surely not going to work, we should assert.<|||||>Ok - good arguments! IMO we should only allow this resizing though for models that use Sinusoidal position embeddings a.k.a. position embeddings that have set `.grad` to False.
In terms of implementation, I'd suggest to add a general `resize_position_embeddings(self, max_posituon_embeddings)` to `PreTrainedModel` that throws a NotImplementedError and is then overwritten in Pegasus<|||||>We should also overwrite the `config.max_position_embeddings` when doing so<|||||>@patrickvonplaten, do you have some resources to come back so that we could complete this issue? It looks like it fell between the cracks. Thank you.<|||||>Ok so the plan is to:
1. Add a `resize_position_embeddings` to `PreTrainedModel` just like we are doing it for the word embeddings
2. `resize_position_embeddings` should probably log or warn depending on whether it's sinus position embeddings or learned ones
3. The function should overwrite `config.max_position_embeddings`
=> Happy to open a PR for this one, but would be great to first hear @LysandreJik and @sgugger's opinion on it as well<|||||>Works for me!<|||||>@sgugger ,can you share your working code?<|||||>No I meant the plan suggested by @patrickvonplaten in the above message works for me. |
transformers | 11,343 | closed | Update to use datasets remove_columns method | # What does this PR do?
This PR updates the command used in the `Trainer` to drop the columns not used by the model to take advantage of the latest (well, not so latest, since it landed in datasets 1.4.0) `remove_columns` method. This has the advantage of not modifying the dataset in place (as was done before), so the user does not get unexpected changes in their original datasets; see the small illustration below.
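A small illustration of the `datasets` behavior this relies on (the dataset picked here is arbitrary):
```python
from datasets import load_dataset

ds = load_dataset("glue", "mrpc", split="train[:10]")
clean = ds.remove_columns(["idx"])  # returns a new dataset, `ds` itself is untouched
print(ds.column_names)     # ['sentence1', 'sentence2', 'label', 'idx']
print(clean.column_names)  # ['sentence1', 'sentence2', 'label']
```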
In consequence, the little hack needed in the question answering examples is now unnecessary. | 04-20-2021 15:56:43 | 04-20-2021 15:56:43 | |
transformers | 11,342 | closed | mlflow parameter overflow when training a language adapter | When I train a German adapter, I run into a problem that has been discussed in several issues already, but without a clear solution.
Traceback (most recent call last):
File "/mnt/localdata/cao/run_de_modeling.py", line 512, in <module>
main()
File "/mnt/localdata/cao/run_de_modeling.py", line 476, in main
trainer.train(model_path=model_path)
File "/home/cao/miniconda3/lib/python3.8/site-packages/transformers/trainer.py", line 748, in train
self.control = self.callback_handler.on_train_begin(self.args, self.state, self.control)
File "/home/cao/miniconda3/lib/python3.8/site-packages/transformers/trainer_callback.py", line 335, in on_train_begin
return self.call_event("on_train_begin", args, state, control)
File "/home/cao/miniconda3/lib/python3.8/site-packages/transformers/trainer_callback.py", line 373, in call_event
result = getattr(callback, event)(
File "/home/cao/miniconda3/lib/python3.8/site-packages/transformers/integrations.py", line 502, in on_train_begin
self.setup(args, state, model)
File "/home/cao/miniconda3/lib/python3.8/site-packages/transformers/integrations.py", line 497, in setup
mlflow.log_params(dict(combined_dict_items[i : i + MLflowCallback.MAX_LOG_SIZE]))
File "/home/cao/miniconda3/lib/python3.8/site-packages/mlflow/tracking/fluent.py", line 475, in log_params
MlflowClient().log_batch(run_id=run_id, metrics=[], params=params_arr, tags=[])
File "/home/cao/miniconda3/lib/python3.8/site-packages/mlflow/tracking/client.py", line 838, in log_batch
self._tracking_client.log_batch(run_id, metrics, params, tags)
File "/home/cao/miniconda3/lib/python3.8/site-packages/mlflow/tracking/_tracking_service/client.py", line 245, in log_batch
self.store.log_batch(run_id=run_id, metrics=metrics, params=params, tags=tags)
File "/home/cao/miniconda3/lib/python3.8/site-packages/mlflow/store/tracking/file_store.py", line 852, in log_batch
_validate_batch_log_data(metrics, params, tags)
File "/home/cao/miniconda3/lib/python3.8/site-packages/mlflow/utils/validation.py", line 232, in _validate_batch_log_data
_validate_param(param.key, param.value)
File "/home/cao/miniconda3/lib/python3.8/site-packages/mlflow/utils/validation.py", line 112, in _validate_param
_validate_length_limit("Param value", MAX_PARAM_VAL_LENGTH, value)
File "/home/cao/miniconda3/lib/python3.8/site-packages/mlflow/utils/validation.py", line 180, in _validate_length_limit
raise MlflowException(
mlflow.exceptions.MlflowException: Param value '{'adapters': {'de': (text_lang, 'bb1c8efb82510bed')}, 'config_map': {text_lang: AdapterConfig(original_ln_before=True, original_ln_after=True, residual_before_ln=True, adapter_residual_before_ln=False, ln_before=False, ln_after=False, mh_adapter=Fals' had length 786, which exceeded length limit of 250
My command is as follows:
CUDA_VISIBLE_DEVICES="4" python3 /mnt/localdata/cao/run_de_modeling.py \
--output_dir=/mnt/localdata/cao/output_language_adapter_de/ \
--model_type=bert \
--model_name_or_path=bert-base-multilingual-cased \
--do_train \
--train_data_file=/mnt/localdata/cao/data_for_model/DE_train.txt \
--do_eval \
--eval_data_file=/mnt/localdata/cao/data_for_model/DE_valid.txt \
--mlm \
--language de \
--train_adapter \
--adapter_config pfeiffer \
--per_gpu_train_batch_size 3 \
--learning_rate 5e-5 \
--cache_dir /mnt/localdata/cao/de_cache_dir/ | 04-20-2021 15:29:22 | 04-20-2021 15:29:22 | |
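A possible workaround for the 250-character param value limit hit above, sketched under the assumption that the script uses the stock `Trainer` callbacks (`trainer` being the instance created in `run_de_modeling.py`): remove the MLflow callback so the long adapter config is never logged as an MLflow param.
```python
# Sketch only: drop the MLflow integration before training.
# Assumes `trainer` is the Trainer built in run_de_modeling.py.
from transformers.integrations import MLflowCallback

trainer.remove_callback(MLflowCallback)
trainer.train()
```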
transformers | 11,341 | closed | Wrapping model.generate before exporting into tensorflow savedmodel format | # 🚀 Feature request
I have been using the BART model, and with checkpoints I can call `model.generate`. However, `model.generate` is not exported when I export the model in the TensorFlow SavedModel ".pb" format. Is there a way to wrap `model.generate` when exporting a SavedModel, so that beam search can still be used to generate summary ids and produce summaries?
## Motivation
This will allow summary generation from the SavedModel format for BART-like models that use `model.generate`.
Related to https://github.com/huggingface/transformers/issues/5443 which has been marked closed.
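For illustration, this is the kind of wrapper I have in mind (a rough sketch with illustrative names; it currently fails precisely because `generate` cannot be traced by `tf.function`):
```python
import tensorflow as tf
from transformers import TFBartForConditionalGeneration

model = TFBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

class ExportableBart(tf.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    @tf.function(input_signature=[tf.TensorSpec([None, None], tf.int32, name="input_ids")])
    def serving(self, input_ids):
        # beam search inside the exported graph is exactly what is being requested
        summary_ids = self.model.generate(input_ids, max_length=142, num_beams=4)
        return {"summary_ids": summary_ids}

wrapper = ExportableBart(model)
tf.saved_model.save(wrapper, "bart_savedmodel", signatures={"serving_default": wrapper.serving})
```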
## Your contribution
| 04-20-2021 15:20:49 | 04-20-2021 15:20:49 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,340 | closed | Remove boilerplate code | # What does this PR do?
In Flax, the logic of every model has to be built purely on a tree of `flax.linen.Module` classes so that the whole model, *e.g.* `FlaxBertForMaskedLMModule`, can automatically be recast as an explicit function. This allows for easy jitting, data parallelism, and model parallelism.
However this also means that no weight parameters can be stored in `FlaxBertForMaskedLMModule`, which is a bit problematic since we need to store the weights from `load_from_pretrained`. This forces us to have two Flax classes:
- `FlaxBertForMaskedLMModule`,
- `FlaxBertForMaskedLM`,
whereas `FlaxBertForMaskedLM` takes care of loading/saving the weights and `FlaxBertForMaskedLMModule` defines the logic (function) of the model. This has led to a lot of boilerplate code since `FlaxBertModel`, `FlaxBertForMaskedLM`, `FlaxBertForPretraining`, ... are essentially all the same:
- they take the same input and pass it to the corresponding `flax.linen.Module` forward function
- they initialize a `FlaxPretrainedModel` with the correct module to inherit the loading/saving functionality
- they make use of the same `init_weights` function.
For BERT, the `__call__` functions take identical inputs across different classes. This assumes that this holds true for more or less all BERT-like models in Flax. However, there are some exceptions:
- A `BertForCausalLM` (needed when we add `FlaxEncoderDecoderModel`) would have to overwrite the `__call__` method as it also takes `encoder_hidden_states`, `encoder_attention_mask` as an input.
- This design makes less sense for *e.g.* T5 since `T5Encoder` takes very different inputs than `T5ForConditionalGeneration`. Here the `__call__` and `init_weights` methods would then not be placed in `T5PreTrainedModel`.
Overall, I think however that removing this much boilerplate is cleaner and will allow us to implement the important models faster. What do you think @sgugger @LysandreJik @avital ?
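Schematically (bodies elided, attribute names illustrative), the split looks like this, with the shared `__call__`/`init_weights` moving into the common pretrained-model class:
```python
import flax.linen as nn
from transformers.modeling_flax_utils import FlaxPreTrainedModel

class FlaxBertForMaskedLMModule(nn.Module):
    """Pure function of (params, inputs); stores no weights itself."""
    @nn.compact
    def __call__(self, input_ids, attention_mask=None):
        ...  # model logic only

class FlaxBertPreTrainedModel(FlaxPreTrainedModel):
    """Shared weight init, save/load and the user-facing __call__ that
    forwards identical inputs to self.module.apply(...)."""
    ...

class FlaxBertForMaskedLM(FlaxBertPreTrainedModel):
    module_class = FlaxBertForMaskedLMModule  # each head class shrinks to roughly this
```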
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-20-2021 15:17:04 | 04-20-2021 15:17:04 | |
transformers | 11,339 | closed | Perform max_input_tokens truncation with Summarization Pipeline | # 🚀 Feature request
I'd like to be able to set a `max_input_tokens` and configure a `truncation_strategy` in `SummarizationPipeline`.
Please let me know if I am missing something that already allows for this!
## Motivation
I initialize and call a summarization pipeline as follows
```
model = AutoModelForSeq2SeqLM.from_pretrained(pretrained_model_name_or_path=model_name_or_path, revision=model_version)
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer, device=use_gpu)
summaries = summarizer(
contexts,
min_length=self.min_length,
max_length=self.max_length,
return_text=True,
clean_up_tokenization_spaces=self.clean_up_tokenization_spaces,
)
```
Currently when I pass a text that is longer than `"google/pegasus-xsum"`'s 512 token limit, I get the following warning
```
Token indices sequence length is longer than the specified maximum sequence length for this model (768 > 512). Running this sequence through the model will result in indexing errors
```
and my program crashes. I'd like to just be able to set a maximum number of input tokens and a truncation strategy either when I init or call the `pipeline` (which in turn inits a `SummarizationPipeline`).
## Your contribution
If I am missing something or there already exists a way to get around this problem please let me know! If it wouldn't take too much effort to implement this, I might consider opening a PR. | 04-20-2021 14:04:14 | 04-20-2021 14:04:14 | I think the argument you're looking for is `truncation`! Could you try passing `truncation=True` or `truncation="longest_first"` to your pipeline call?<|||||>Hi @LysandreJik this doesn't seem to be working for me. When I call
```
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer, device=use_gpu, truncation=True)
```
I get
```
<ipython-input-8-315444041dc9> in <module>
5
6 #Summarize
----> 7 summarizer = TransformersSummarizer(model_name_or_path="google/pegasus-xsum")
8
9 p_summarizer = Pipeline()
~/Code/haystack/haystack/summarizer/transformers.py in __init__(self, model_name_or_path, model_version, tokenizer, max_length, min_length, use_gpu, clean_up_tokenization_spaces, separator_for_single_summary, generate_single_summary)
87 tokenizer = model_name_or_path
88 model = AutoModelForSeq2SeqLM.from_pretrained(pretrained_model_name_or_path=model_name_or_path, revision=model_version)
---> 89 self.summarizer = pipeline("summarization", model=model, tokenizer=tokenizer, device=use_gpu, truncation=True)
90 self.max_length = max_length
91 self.min_length = min_length
~/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/transformers/pipelines.py in pipeline(task, model, config, tokenizer, framework, revision, use_fast, **kwargs)
3307 break
3308
-> 3309 return task_class(model=model, tokenizer=tokenizer, modelcard=modelcard, framework=framework, task=task, **kwargs)
~/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/transformers/pipelines.py in __init__(self, *args, **kwargs)
2353 def __init__(self, *args, **kwargs):
2354 kwargs.update(task="summarization")
-> 2355 super().__init__(*args, **kwargs)
2356
2357 self.check_model_type(
TypeError: __init__() got an unexpected keyword argument 'truncation'
```
and when I put the `truncation` argument into the call
```
summaries = self.summarizer(
contexts,
min_length=self.min_length,
max_length=self.max_length,
return_text=True,
clean_up_tokenization_spaces=self.clean_up_tokenization_spaces,
truncation=True
)
```
I get
```
TypeError Traceback (most recent call last)
~/Code/haystack/haystack/pipeline.py in run(self, **kwargs)
121 logger.debug(f"Running node `{node_id}` with input `{node_input}`")
--> 122 node_output, stream_id = self.graph.nodes[node_id]["component"].run(**node_input)
123 except Exception as e:
~/Code/haystack/haystack/summarizer/base.py in run(self, documents, generate_single_summary, **kwargs)
36 if documents:
---> 37 results["documents"] = self.predict(documents=documents, generate_single_summary=generate_single_summary)
38
~/Code/haystack/haystack/summarizer/transformers.py in predict(self, documents, generate_single_summary)
131 clean_up_tokenization_spaces=self.clean_up_tokenization_spaces,
--> 132 truncation=True
133 )
~/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, return_tensors, return_text, clean_up_tokenization_spaces, *documents, **generate_kwargs)
2438 attention_mask=inputs["attention_mask"],
-> 2439 **generate_kwargs,
2440 )
~/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
14 with self:
---> 15 return func(*args, **kwargs)
16 return decorate_context
~/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, **model_kwargs)
502 # add encoder_outputs to model_kwargs
--> 503 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
504
~/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/transformers/generation_utils.py in _prepare_encoder_decoder_kwargs_for_generation(self, input_ids, model_kwargs)
85 }
---> 86 model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
87 return model_kwargs
~/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
TypeError: forward() got an unexpected keyword argument 'truncation'
During handling of the above exception, another exception occurred:
Exception Traceback (most recent call last)
<ipython-input-4-315444041dc9> in <module>
10 p_summarizer.add_node(component=es_retriever, name="Retriever", inputs=["Query"])
11 p_summarizer.add_node(component=summarizer, name="Summarizer", inputs=["Retriever"])
---> 12 res = p_summarizer.run(query="Who is the father of Arya Stark??", top_k_retriever=10)
13
14 pprint(res)
~/Code/haystack/haystack/pipeline.py in run(self, **kwargs)
123 except Exception as e:
124 tb = traceback.format_exc()
--> 125 raise Exception(f"Exception while running node `{node_id}` with input `{node_input}`: {e}, full stack trace: {tb}")
126 queue.pop(node_id)
127 next_nodes = self.get_next_nodes(node_id, stream_id)
Exception: Exception while running node `Summarizer` with input `{'query': 'Who is the father of Arya Stark??', 'documents': [{'text': "\n===In the Riverlands===\nThe Stark army reaches the Twins, a bridge stronghold controlled by Walder Frey, who agrees to allow the army to cross the river and to commit his troops in return for Robb and Arya Stark marrying two of his children.\nTyrion Lannister suspects his father Tywin, who decides Tyrion and his barbarians will fight in the vanguard, wants him killed. As Tyrion, Bronn, and the prostitute Shae swap stories, Tyrion reveals he was married to a woman his father revealed was a prostitute, and made Tyrion watch as his guardsmen raped her.\nAs a Stark force approaches, Tyrion is trampled in the rush and regains consciousness to find the battle over. Tywin discovers the Stark host was only 2,000 men, not the 20,000 he was led to expect.\nRobb, having divided his forces, defeats Jaime Lannister's army with his remaining 18,000 men and captures Jaime.", 'id': '824a2362-004b-4234-a2fe-d0b82fdca52f', 'score': 11.656843, 'probability': 0.8110895501528345, 'question': None, 'meta': {'name': '450_Baelor.txt'}, 'embedding': None}, {'text': '\n===On the Kingsroad===\nCity Watchmen search the caravan for Gendry but are turned away by Yoren. Gendry tells Arya Stark that he knows she is a girl, and she reveals she is actually Arya Stark after learning that her father met Gendry before he was executed.', 'id': 'a04e3059-a941-4aa1-96e4-da0429c1a617', 'score': 11.3836775, 'probability': 0.8058019827683869, 'question': None, 'meta': {'name': '224_The_Night_Lands.txt'}, 'embedding': None}, {'text': '\n===\'\'A Game of Thrones\'\'===\nSansa Stark begins the novel by being betrothed to Crown Prince Joffrey Baratheon, believing Joffrey to be a gallant prince. While Joffrey and Sansa are walking through the woods, Joffrey notices Arya sparring with the butcher\'s boy, Mycah. A fight breaks out and Joffrey is attacked by Nymeria (Arya\'s direwolf) after Joffrey threatens to hurt Arya. Sansa lies to King Robert about the circumstances of the fight in order to protect both Joffrey and her sister Arya. Since Arya ran off with her wolf to save it, Sansa\'s wolf is killed instead, estranging the Stark daughters.\nDuring the Tourney of the Hand to honour her father Lord Eddard Stark, Sansa Stark is enchanted by the knights performing in the event. At the request of his mother, Queen Cersei Lannister, Joffrey spends a portion of the tourney with Sansa, but near the end he commands his guard Sandor Clegane, better known as The Hound, to take her back to her quarters. Sandor explains how his older brother, Gregor, aka "Mountain that Rides" pushed his face into a brazier of hot coals, for playing with one of his wooden toys.\nAfter Eddard discovers the truth of Joffrey\'s paternity, he tells Sansa that they will be heading back to Winterfell. Sansa is devastated and wishes to stay in King\'s Landing, so she runs off to inform Queen Cersei of her father\'s plans, unwittingly providing Cersei with the information needed to arrest her father. After Robert dies, Sansa begs Joffrey to show mercy on her father and he agrees, if Ned will swear an oath of loyalty, but executes him anyway, in front of Sansa. 
Sansa is now effectively a hostage in King\'s Landing and finally sees Joffrey\'s true nature, after he forces her to look at the tarred head of her now-deceased father.', 'id': '1d2bb694-88fb-44eb-972d-6a72dd0009a1', 'score': 11.194147, 'probability': 0.8020677650524326, 'question': None, 'meta': {'name': '332_Sansa_Stark.txt'}, 'embedding': None}, {'text': "\n===Season 2===\nGendry travels North with Yoren and other Night's Watch recruits, including Arya Stark (disguised as an orphan boy named 'Arry), Lommy Greenhands, Hot Pie and Jaqen H'ghar. During their journey, they are stopped by the Goldcloaks of the City Watch, who demand that Yoren hand Gendry over to them - King Joffrey has ordered that all of his father Robert's bastards be killed, but Yoren turns the Goldcloaks away. Later, Gendry forces Arya to reveal her true identity, and is surprised to learn she is in fact Ned Stark's daughter. After the Goldcloaks get help from Ser Amory Lorch and his men, they ambush the travelling party. In the chaos, Yoren is killed. Gendry's life is then saved by Arya, who convinces the Goldcloaks that Lommy, who was killed during the attack, was in fact Gendry. Gendry and the rest of the recruits are then escorted to Harrenhal, the ruined castle-turned-prison. Ser Gregor Clegane oversees order here, and arbitrarily has many of the prisoners tortured and killed. Gendry is nearly tortured and killed but is saved by the arrival of Lord Tywin Lannister, who chides Clegane's men for their reckless treatment of the prisoners. Thanks to Jaqen H'ghars help, Arya, Gendry and Hot Pie are able to escape Harrenhal.", 'id': '689dac66-1347-43ea-8456-ab728566f9aa', 'score': 11.098732, 'probability': 0.8001674895900547, 'question': None, 'meta': {'name': '191_Gendry.txt'}, 'embedding': None}, {'text': '\n====Season 1====\nArya accompanies her father Ned and her sister Sansa to King\'s Landing. Before their departure, Arya\'s half-brother Jon Snow gifts Arya a sword which she dubs "Needle". On the Kingsroad, Arya is sparring with a butcher\'s boy, Mycah, when Sansa\'s betrothed Prince Joffrey Baratheon attacks Mycah, prompting Arya\'s direwolf Nymeria to bite Joffrey. Arya shoos Nymeria away so she is not killed, but is furious when Sansa later refuses to support her version of events. Mycah is later killed by Joffrey\'s bodyguard Sandor "The Hound" Clegane, earning him Arya\'s hatred. Ned arranges for Arya to have sword lessons with the Braavosi Syrio Forel, who later defends her from Ser Meryn Trant after Joffrey ascends to the throne and kills the Stark household. Arya flees the Red Keep, accidentally killing a stable boy in her escape, hiding out as a beggar in the streets of King\'s Landing. Ned is eventually taken to the Great Sept of Baelor to face judgment; he spots Arya in the crowd, and alerts the Night\'s Watch recruiter Yoren to her presence. Yoren prevents Arya from witnessing Ned\'s execution and has her pose as a boy, "Arry", to avoid detection as she joins Yoren\'s recruits traveling north to Castle Black.', 'id': '6947d45a-f420-4608-b396-774972193849', 'score': 10.634479, 'probability': 0.7907264571073125, 'question': None, 'meta': {'name': '43_Arya_Stark.txt'}, 'embedding': None}, {'text': '\n===In King\'s Landing===\nAfter Varys tells him that Sansa Stark\'s life is also at stake, Eddard "Ned" Stark agrees to make a false confession and swear loyalty to King Joffrey Baratheon.\nArya Stark finds a crowd gathering to watch her father be judged, and climbs onto the statue of Baelor the Blessed. 
Ned notices Arya and alerts Night\'s Watch recruiter Yoren. Before Sansa, Cersei Lannister, Joffrey and the Small Council, Ned confesses to treason and swears fealty to Joffrey. Instead of sparing Ned as promised, Joffrey orders him to be executed. Seeing that Arya has been rescued by Yoren, Ned accepts his fate and is beheaded.', 'id': 'eb0b5450-a583-428b-b412-266a70c48e30', 'score': 10.627409, 'probability': 0.7905801782386174, 'question': None, 'meta': {'name': '450_Baelor.txt'}, 'embedding': None}, {'text': "\n==== ''A Storm of Swords'' and ''A Feast for Crows'' ====\nPrior to the Red Wedding, Roose Bolton presents Robb Stark with a piece of Theon's skin, revealing that Ramsay has been flaying him; though disgusted, Robb acquiesces to Theon's further captivity, as Theon's father Balon has recently died and Theon's absence presents a succession crisis for the Ironborn. Following Robb Stark's death, King Tommen Baratheon legitimizes Ramsay as a Bolton. The Lannisters pass off Jeyne Poole as Arya Stark and send her north to be betrothed to Ramsay, with only the Lannisters and Boltons aware she is not the real Arya Stark.", 'id': '2e1f4f84-036c-4a2e-956b-090b20e32b25', 'score': 10.533444, 'probability': 0.788628898071797, 'question': None, 'meta': {'name': '487_Ramsay_Bolton.txt'}, 'embedding': None}, {'text': '\n===House Frey===\n* \'\'\'Walder Frey\'\'\' (seasons 1, 3, 6–7) portrayed by David Bradley. David Bradley Lord Walder Frey, nicknamed the "Late Lord Frey", is the head of House Frey, Lord of the Crossing and bannerman to House Tully. He is known for outliving his many wives (now on his 8th) and siring over 100 children (both bastard and trueborn). Because the use of the Twins became a strategic necessity for Robb\'s host, Walder was able to negotiate marriage contracts for his children to Robb and Arya Stark. But during Season 2 Robb broke his word and married Lady Talisa. For this slight, and willing to take advantage of the war\'s changing fortunes, he conspires with Tywin Lannister and Roose Bolton to betray Robb Stark at the wedding of his liege Edmure Tully, which he insists in return for support of his men. Frey hosts the infamous "Red Wedding" at which Robb Stark, his wife and mother are all murdered, refusing to spare Robb even as Catelyn holds Lady Frey hostage and threatens to slit her throat, which she does. He is subsequently granted Riverrun and its lands (though the title Lord Paramount of the Riverlands passes to Harrenhal and House Baelish) and expresses delight to take another young wife, but his house is irredeemably tarnished by the betrayal and House Tully\'s vassals refuse to submit to his rule. In Season 6, he is outraged when he hears of the Blackfish recapture\' of Riverrun and blames his sons Lothar and Black Walder for allowing him to escape. He then orders them to retake the castle using Edmure Tully as a hostage. Though they successfully retake Riverrun with the help of a Lannister host led by Jaime Lannister, Walder is ambushed shortly afterwards by Arya Stark, who slits his throat in revenge for the Red Wedding. In Season 7, Arya uses Walder\'s face to deceive and poison the rest of his family.\n* \'\'\'Lothar Frey\'\'\' (seasons 3, 6) portrayed by Tom Brooke in season 3, and by Daniel Tuite in season 6. One of Lord Walder Frey\'s many sons, nicknamed “Lame Lothar” because of his twisted leg. 
He and his half-brother Black Walder are sent by their father to Riverrun to propose a marriage between Lord Edmure Tully and Roslin Frey as terms for House Frey rejoining Robb Stark\'s campaign against the Lannisters. He is one of the first to commence the "Red Wedding", stabbing Talisa Stark in the womb several times and killing her and her unborn child. In the sixth season, he is ordered by Walder to retake Riverrun from Brynden Tully. Though they succeed with Lannister help, he is killed by Arya Stark, who subsequently bakes him into a pie.\n* \'\'\'Black Walder Rivers\'\'\' (seasons 3, 6) portrayed by Tim Plester. One of Lord Walder Frey\'s many bastard sons, nicknamed “Black Walder” for his dark demeanor. He and his half-brother Lame Lothar are sent by their father to Riverrun to propose a marriage between Lord Edmure Tully and Roslin Frey as terms for House Frey rejoining Robb Stark\'s campaign against the Lannister. He kills Catelyn Stark at the Red Wedding, after she slits Lady Frey\'s throat in retaliation for her son\'s death. In the sixth season, he takes part in the siege of Riverrun. Though the Freys reclaim the castle with the help of a Lannister host, Black Walder is killed shortly afterwards along with Lothar by Arya Stark, who bakes them both into a pie.', 'id': '6a8e1b47-e9ca-441c-b8ca-a4d259b58ece', 'score': 10.513748, 'probability': 0.7882182073903166, 'question': None, 'meta': {'name': '349_List_of_Game_of_Thrones_characters.txt'}, 'embedding': None}, {'text': '\n==== \'\'A Game of Thrones\'\' ====\nArya adopts a direwolf cub, which she names Nymeria after a legendary warrior queen. She travels with her father, Eddard, to King\'s Landing when he is made Hand of the King. Before she leaves, her half-brother Jon Snow has a smallsword made for her as a parting gift, which she names "Needle" after her least favorite ladylike activity.\nWhile taking a walk together, Prince Joffrey and her sister Sansa happen upon Arya and her friend, the low-born butcher apprentice Mycah, sparring in the woods with broomsticks. Arya defends Mycah from Joffrey\'s torments and her direwolf Nymeria helps Arya fight off Joffrey, wounding his arm in the process. Knowing that Nymeria will likely be killed in retribution, Arya chases her wolf away; but Sansa\'s direwolf Lady is killed in Nymeria\'s stead and Mycah is hunted down and killed by Sandor Clegane, Joffrey\'s bodyguard.\nIn King\'s Landing, her father discovers Arya\'s possession of Needle, but instead of confiscating it he arranges for fencing lessons under the Braavosi swordmaster Syrio Forel, who teaches her the style of fighting known as "water dancing". After her father\'s arrest, Syrio is killed protecting her and Arya narrowly escapes capture. She later witnesses the public execution of her father before falling under the protection of the Night\'s Watch recruiter Yoren.', 'id': '1642d35f-2d57-4c42-988f-a2b7afc3ac6c', 'score': 10.344947, 'probability': 0.7846745387990397, 'question': None, 'meta': {'name': '43_Arya_Stark.txt'}, 'embedding': None}, {'text': '\n== Character description ==\nGendry was conceived and born in King\'s Landing after Robert\'s Rebellion ended and is one of sixteen (twenty in the television series) bastard children of King Robert Baratheon,. He is portrayed as tall and very muscled, having blue eyes and thick black hair, very similar to his biological father Robert and uncle Renly in their youth (Brienne of Tarth once almost mistook him for Renly for a moment). 
He is stubborn and easily confused.\nDespite being one of the only four surviving biological children of King Robert (along with Mya Stone, Edric Storm and Bella Rivers), Gendry never knew who his father was. His mother was reported to have been a worker at an alehouse who died when Gendry was still a young boy, and all he remembers of her was that she had blond hair. Later on, Tobho Mott, a master armourer from Qohor, was offered double the customary fee by a "lord" with concealed identity to take Gendry in as a smith apprentice, but accepted him for free after being impressed by the boy\'s physique. Gendry turns out to be a talented apprentice, and likes to spend time polishing a bull head helmet that he proudly made for himself, which earned him the nickname "Bull" by Arya Stark.', 'id': 'bb5166b7-2e87-4095-ae0d-5fa5097f197e', 'score': 9.938516, 'probability': 0.7759666295293488, 'question': None, 'meta': {'name': '191_Gendry.txt'}, 'embedding': None}]}`: forward() got an unexpected keyword argument 'truncation', full stack trace: Traceback (most recent call last):
File "/home/branden/Code/haystack/haystack/pipeline.py", line 122, in run
node_output, stream_id = self.graph.nodes[node_id]["component"].run(**node_input)
File "/home/branden/Code/haystack/haystack/summarizer/base.py", line 37, in run
results["documents"] = self.predict(documents=documents, generate_single_summary=generate_single_summary)
File "/home/branden/Code/haystack/haystack/summarizer/transformers.py", line 132, in predict
truncation=True
File "/home/branden/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/transformers/pipelines.py", line 2439, in __call__
**generate_kwargs,
File "/home/branden/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/home/branden/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/transformers/generation_utils.py", line 503, in generate
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
File "/home/branden/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/transformers/generation_utils.py", line 86, in _prepare_encoder_decoder_kwargs_for_generation
model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
File "/home/branden/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'truncation'
```
<|||||>Is there a way you could share your environment/reproducible code example so that I can take a look? On a recent version, running this fails:
```py
from transformers import pipeline
sum = pipeline("summarization")
sum("hey" * 10000)
```
with the following error:
```
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self
```
so I do indeed get an out of range error. However, adding the `truncation` flag:
```py
sum("hey" * 10000, truncation=True)
```
works!<|||||>Hmmm ok I need to look more closely at my code first. Closing for now.<|||||>I think the key here was to put the truncation key in the summarization call instead of the pipeline call -
```
sum = pipeline("summarization")
sum("hey" * 10000, truncation=True)
```
instead of
```
sum = pipeline("summarization", truncation=True)
sum("hey" * 10000)
``` |
transformers | 11,338 | closed | tf generate compatible with tf.function | Because the TF `generate` function is not compatible with `tf.function`, I cannot use it with TensorFlow Serving.
| 04-20-2021 13:52:18 | 04-20-2021 13:52:18 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,337 | closed | Adding `AutomaticSpeechRecognitionPipeline`. | # What does this PR do?
- Because we added everything to enable this pipeline, we probably
should add it to `transformers`.
- This PR tries to limit the scope and focuses only on the pipeline part
(what should go in, and out).
- The tests are very specific for S2T and Wav2vec2 to make sure both
architectures are supported by the pipeline. We don't use the mixin for
tests right now, because that requires more work in the `pipeline`
function (will be done in a follow up PR).
- Unsure about the "helper" function `ffmpeg_read`. It makes a lot of
sense from a user perspective, it does not add any additional
dependencies (as in hard dependency, because users can always use their
own load mechanism). Meanwhile, it feels slightly clunky to have so much
optional preprocessing.
- The pipeline is not done to support streaming audio right now.
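A rough usage sketch of what this enables (identifiers are illustrative, and the exact constructor arguments/import path may differ slightly from the final code):
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from transformers.pipelines import AutomaticSpeechRecognitionPipeline

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

asr = AutomaticSpeechRecognitionPipeline(
    model=model,
    feature_extractor=processor.feature_extractor,
    tokenizer=processor.tokenizer,
)
print(asr("sample.flac"))  # decoding a file path relies on ffmpeg being installed
```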
# Future work:
- Add `automatic-speech-recognition` as a `task`. And add the
FeatureExtractor.from_pretrained within `pipeline` function.
- Add small models within tests
- Add the Mixin to tests.
- Make the logic between ForCTC vs ForConditionalGeneration better.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 04-20-2021 13:04:24 | 04-20-2021 13:04:24 | > Thanks for working on it!
>
> It is very specific to S2T and Wav2Vec2 but I don't think that's too much of an issue, we can adapt later.
>
> Could you add this pipeline to:
>
> * the documentation
Yes !
>
> * the main init
Yes
>
> * the `pipeline` factory method
I wanted to defer this to a follow-up PR, because of the AutoModel quirkiness and to avoid making super long PRs.
If you feel we can't have it as a separate PR, I'll start including the work directly here.
>
>
> We will also probably need a new auto model.
|
transformers | 11,336 | closed | M2M-100 SentencePiece model produces tokens that are missing on the fixed dictionary | ## 🐛 Bug
The SentencePiece model for [M2M-100](https://huggingface.co/transformers/model_doc/m2m_100.html) (https://dl.fbaipublicfiles.com/m2m_100/spm.128k.model) generates several tokens that are missing on the fixed dictionary (https://dl.fbaipublicfiles.com/m2m_100/data_dict.128k.txt)
### To Reproduce
Steps to reproduce the behavior:
1. Tokenize the following sentence with the SentencePiece model for M2M-100:
```
import os  # model_path below is the local directory containing the downloaded spm.128k.model
import sentencepiece as spm
sentence = "My dog perhaps enjoyed music."
tokenizer = spm.SentencePieceProcessor(model_file=os.path.join(model_path, 'spm.128k.model'))
tokenizer.EncodeAsPieces(sentence)
```
2. See the tokens generated: ['▁My', '▁dog', '▁perhaps', '▁enjoyed', '▁music', '.']
3. If you check the fixed dictionary (data_dict.128k.txt), you will notice that '▁perhaps' and '▁enjoyed' are missing, and during the encoding process these tokens will be set to **3**, which corresponds to the "unknown" token.
4. The translations are inaccurate for such cases: "My dog perhaps enjoyed noises." --> (fr) "Mon chien a appris les bruits." (with num_beams = 1)
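A quick way to check which pieces map to the unknown id when going through the `transformers` M2M-100 tokenizer (sketch):
```python
from transformers import M2M100Tokenizer

tok = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en")
tokens = tok.tokenize("My dog perhaps enjoyed music.")
for t, i in zip(tokens, tok.convert_tokens_to_ids(tokens)):
    print(t, i, "<- unk" if i == tok.unk_token_id else "")
```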
#### Other tokens that will be set to **3** ("unknown" token) after the encoding:
{"
", "̈", "ঞ", "ઞ", "ଙ", "ଞ", "ඈ", "ၡ", "ầ", "ậ", "ẵ", "↳", "啡", \
"圳", "圾", "垃", "础", "礎", "萄", "雰", "됩", "밝", "얻", "\|01f4fa", "\
\|01f924", "୍ଚ", "୍ଷ", "ຜນ", "င့", "ည့", "ቃሴ", "ይማ", "ដើ", "ឌ្", \
"ほと", "やは", "ろん", "イベ", "ッフ", "パソ", "来越", "特朗", "西班", "бәп", "лөш", \
"үек", "խմբ", "سرے", "یین", "इते", "मीण", "িৎস", "ିରଫ", "සාහ", "คโน", \
"จจุ", "ถาน", "ษัท", "ียญ", "เสร", "ຂວງ", "ງິນ", "ຖິງ", "ລ້ວ", "ວາມ", \
"ຫ່ງ", "ຶ້ນ", "່ວມ", "ໍລິ", "အတြ", "គរប", "ភិវ", "ាណិ", "ូមិ", "េតុ", \
"ំនង", "្ងៃ", "システ", " иҫә", " луѓ", " мот", " հաղ", " ճան", " تجه", \
" هیو", " ټکن", " ڊيس", " તરી", " ରିପ", " മേഖ", "зеге", "шкил", \
"шөөр", "ідэн", "әүге", "әүеш", "میشہ", "ंसिर", "म्मू", "समें", \
"ক্টো", "ামলা", "েস্ক", "ਜਵਾਨ", "ਤੂਬਰ", "ਮੇਟੀ", "ਿਆਰਥ", "ંટણી", \
"துகா", "ಪರ್ಕ", "ಬೈಲ್", "ಾಜಿಕ", "මෙයි", "ญี่ป", "ดาห์", "รกิจ", \
"ริ่ม", "ัพท์", "าศาส", "าะห์", "ูนย์", "ຈົ້າ", "ດນາມ", "ມືອງ", \
"ສບຸກ", "ັກໂນ", "ໍາເນ", "က်နှ", "იტომ", "ំព័រ", "្ញុំ", "្មែរ", \
"្លួន", "្លេស", "かもしれ", " күрһ", " эшмә", " مقای", " उन्ह", " कोशि", \
" नोटि", " मोबा", " নিরা", " દિલ્", " માહિ", " ଓଡ଼ି", " ପଟ୍ଟ", " \
ಅಭ್ಯ", " ಕ್ಷೇ", " ಪೊಲೀ", " ವಾಣಿ", " කිහි", " පැමි", " ტერი", "версі", \
"клопе", "сьәлә", "һынса", "աքանչ", "րաժար", "ונטאג", "ترنتی", \
"ورسٹی", "پیوتر", "یبانی", "ंत्री", "क्राउ", "म्मीद", "তিবার", \
"বাদিক", "ুধবার", "ਹਾਨੂੰ", "ଭିନ୍ନ", "ബരിമല", "ගමැති", "ุงเทพ", \
"้อมูล", "ທະວີຕ", "ໍາລັບ", "თიერთ", "უხედა", "ძლიათ", "ხედრო", \
"លរដ្ឋ", "ីដេអូ", "្បាប់", " հանդի", " אוטוב", " דאנאר", " کارشن", " \
इस्ते", " उत्पा", " प्राथ", " ગુજરા", " അദ്ദേ", " ຂ່າວວ", "न्त्री", \
"सन्धान", "্যান্য", "வடிக்க", "ಮಾರ್ಟ್", "วเตอร์", "ังหวัด", "ວຽດນາມ", \
"აშორის", "ាមេរិក", "័ត៌មាន", "្នំពេញ", " тарафы", " төхөөр", " \
Հայաստ", " الفلسط", " ٹیکنال", " განმავ", "тегория", "улланыу", \
"פטעמבער", "বিদ্যাল", "র্জাতিক", "വനന്തപു", "ເຂົ້າຫາ", " қамтама", " \
ສົ່ງໃຫ້", " ສໍາຫລັບ", " სხვადას", "স্পতিবার", "ີໂອເອລາວ", " \
વ્યાખ્યાઓ", "abaihan", "abogon", " achieve", "ahabog", "ahabogang", \
"ahlekel", " akawnt", "akuada", "alakahle", "almudug", "altachd", " \
amih", "aminosang", " anvä", "aphuno", "arangang", "aroaupenc", " \
artíc", "ashayeli", " Azərbay", "ịbụ", " beispi", " benfas", " \
benveng", " bharra", "bingkil", "ịbụl", "BND", " Bucure", " \
businesses", "cabka", " certainly", " Chatro", " citt", "èhófà", \
"eklase", "emmuz", " enjoyed", "erantany", "erzlech", "eshimi", \
"esterd", "esye", " ettev", "ewé", " eyisi", "faktirè", "fthiwe", " \
giin", " Goom", "haichean", "haps", "hathast", " hemib", \
"heqululini", "holoni", " htt", "ibeat", "ibuli", "iddene", \
"idmatan", "igawas", "igbahin", "Igual", "íklad", "ilangkan", \
"imutangan", "isemane", "iyembre", " iyisig", " Izray", " kabungtor", \
" KAHAPON", "ketho", " kinaug", " któr", " lớ", "laseklase", \
"latego", "Lietuv", " lling", "ləq", " mainta", " mmad", " mopak", " \
mümk", "naqi", " nearly", " nëm", "ởng", " nghiệ", "oblèm", "ófà", " \
okuday", " øn", "ópez", " owesifazana", "owever", " paggam", "Pagh", \
"Paghimo", "panid", " particularly", " perhaps", " Phetol", " \
przecie", " qualc", "qubom", "ərçiv", " reported", " rəhb", "ríguez", \
"ərrü", " sagols", " sebaga", "Sekelo", "selves", " Sga", "sgol", " \
społ", " Srednj", "Sulod", "tatge", "though", "tirè", "tụrụ", \
"ughout", "ugnawan", "ujourd", "ulagway", "upenc", "uregwu", "utube", \
"utubong", "uwega", " Uyas", " véh", " vreemdel", "vrier", "winan", " \
wła", " wouldn", "XÍA", " xüs", "yembre", "ynəl", "ynnag", "yoné", " \
Zagre", "zində", "zköz", "zonia", \
"\[Alpha]\[Rho]ί\[Omicron]\[Upsilon]", " \[CapitalDelta]\
\[CurlyEpsilon]\[Kappa]\[CurlyEpsilon]", " \[CurlyEpsilon]\[Pi]\
\[Alpha]\[Gamma]\[Gamma]\[CurlyEpsilon]\[Lambda]", " \[CapitalIota]\
\[Omicron]\[Upsilon]\[Nu]", \
"\[Mu]\[Beta]\[Rho]ί\[Omicron]\[Upsilon]", " \[CapitalNu]\[Omicron]\
\[CurlyEpsilon]", " \[CapitalOmicron]\[Kappa]\[Tau]\[Omega]\[Beta]", \
" \[CapitalSigma]\[CurlyEpsilon]\[Pi]\[Tau]\[CurlyEpsilon]", " \
\[CapitalSigma]\[CapitalUpsilon]\[CapitalRho]\[CapitalIota]\
\[CapitalZeta]", "\[Tau]\[Omega]\[Beta]"
### Expected behavior
I guess that one would expect the SentencePiece model to produce mainly tokens corresponding to the ones in the fixed dictionary (https://dl.fbaipublicfiles.com/m2m_100/data_dict.128k.txt)
PS: I initially reported this bug to FAIR team, but I haven't received an answer yet. (https://github.com/pytorch/fairseq/issues/3463) | 04-20-2021 12:05:15 | 04-20-2021 12:05:15 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,335 | closed | [GPTNeo] create local attention mask ones | # What does this PR do?
This PR refactors GPT Neo such that the causal attention mask for the local attention layer is only computed once per batch in the `GPTNeoModel` class and then shared between the layers instead of re-computing it in each layer.
I've verified that all slow tests are passing.
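The idea in toy form (a simplified sketch, not the exact implementation): build the banded causal mask once and broadcast it to every local-attention layer.
```python
import torch

def local_causal_mask(seq_len: int, window_size: int, device=None) -> torch.Tensor:
    i = torch.arange(seq_len, device=device)[:, None]  # query positions
    j = torch.arange(seq_len, device=device)[None, :]  # key positions
    # causal (j <= i) and within the local window (i - j < window_size)
    return ((j <= i) & (i - j < window_size))[None, None]  # [1, 1, seq, seq], broadcast over batch/heads

mask = local_causal_mask(seq_len=8, window_size=4)
print(mask.int()[0, 0])
```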
#### Benchmarks
This PR does not change memory/speed for the forward pass
On master
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
/home/suraj/projects/gpt-neo-c 2 32 0.021
/home/suraj/projects/gpt-neo-c 2 128 0.059
/home/suraj/projects/gpt-neo-c 2 512 0.227
/home/suraj/projects/gpt-neo-c 2 1024 0.464
/home/suraj/projects/gpt-neo-c 4 32 0.033
/home/suraj/projects/gpt-neo-c 4 128 0.113
/home/suraj/projects/gpt-neo-c 4 512 0.449
/home/suraj/projects/gpt-neo-c 2 1024 N/A
--------------------------------------------------------------------------------
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
/home/suraj/projects/gpt-neo-c 2 32 6136
/home/suraj/projects/gpt-neo-c 2 128 6268
/home/suraj/projects/gpt-neo-c 2 512 6790
/home/suraj/projects/gpt-neo-c 2 1024 7472
/home/suraj/projects/gpt-neo-c 4 32 6204
/home/suraj/projects/gpt-neo-c 4 128 6428
/home/suraj/projects/gpt-neo-c 4 512 7456
/home/suraj/projects/gpt-neo-c 4 1024 N/A
--------------------------------------------------------------------------------
```
On this PR
```
=================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
/home/suraj/projects/gpt-neo-c 2 32 0.021
/home/suraj/projects/gpt-neo-c 2 128 0.058
/home/suraj/projects/gpt-neo-c 2 512 0.222
/home/suraj/projects/gpt-neo-c 2 1024 0.453
/home/suraj/projects/gpt-neo-c 4 32 0.032
/home/suraj/projects/gpt-neo-c 4 128 0.11
/home/suraj/projects/gpt-neo-c 4 512 0.439
/home/suraj/projects/gpt-neo-c 4 1024 N/A
--------------------------------------------------------------------------------
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
/home/suraj/projects/gpt-neo-c 2 32 6136
/home/suraj/projects/gpt-neo-c 2 128 6268
/home/suraj/projects/gpt-neo-c 2 512 6790
/home/suraj/projects/gpt-neo-c 2 1024 7476
/home/suraj/projects/gpt-neo-c 4 32 6204
/home/suraj/projects/gpt-neo-c 4 128 6428
/home/suraj/projects/gpt-neo-c 4 512 7460
/home/suraj/projects/gpt-neo-c 4 1024 N/A
--------------------------------------------------------------------------------
```
I did a micro-benchmark using the 125M model for generation and this PR does give a small speed-up when generating longer sequences.
On master
```
%timeit model.generate(**enc, do_sample=False, max_length=512, min_length=512)
4.63 s ± 25.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit -n 10 model.generate(**enc, do_sample=False, max_length=1024, min_length=1024)
9.63 s ± 549 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
On this PR
```
%timeit model.generate(**enc, do_sample=False, max_length=512, min_length=512)
4.25 s ± 189 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit -n 10 model.generate(**enc, do_sample=False, max_length=1024, min_length=1024)
9 s ± 437 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
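For anyone wanting to repeat the timing outside IPython, here is a minimal sketch of an equivalent micro-benchmark (the `EleutherAI/gpt-neo-125M` checkpoint name and the prompt are assumptions; the numbers above were measured against a local GPT-Neo checkpoint):
```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")
model.eval()

enc = tokenizer("Hello, my dog is cute", return_tensors="pt")

with torch.no_grad():
    for max_len in (512, 1024):
        start = time.perf_counter()
        # greedy decoding, forced to the target length, matching the %timeit calls above
        model.generate(**enc, do_sample=False, max_length=max_len, min_length=max_len)
        print(f"max_length={max_len}: {time.perf_counter() - start:.2f}s")
```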
| 04-20-2021 11:49:16 | 04-20-2021 11:49:16 | |
transformers | 11,334 | closed | Bug in GPT2ForSequenceClassification | ## Environment info
Google Colab
## Problem
When I run the example code provided on the Hugging Face site for GPT2ForSequenceClassification, an error is raised saying that 'microsoft/dialogrpt' is not a checkpoint. See the bug below:

Then, I replaced 'microsoft/dialogrpt' with 'gpt2', but when I run the code twice, the logits have different values at each run. After I dug deeper, I saw that the problem occurs when the top linear layer is being built. Its weights seem to be set randomly, so they differ from one run to another. Do you have any way to get rid of this problem? Thanks!
| 04-20-2021 10:11:33 | 04-20-2021 10:11:33 | Ah, indeed, I think the identifier is incorrect. It needs to be updated to `microsoft/DialogRPT-updown`. Do you want to open a PR to fix this?
The issue you mention regarding the random weights is because the `gpt2` checkpoint doesn't have a sequence classification head. This is not the case for the checkpoint mentioned above (`microsoft/DialogRPT-updown`) which does have a sequence classification head, so it will not be reinitialized every different run.<|||||>@LysandreJik Thanks a lot! Indeed, the identifier `microsoft/DialogRPT-updown` works! But, can I change the code example on hugging face site through a pull request? Since I won't really change some source code, is it a change that must be made in the Transformers repo? The (wrong) code example is as following

<|||||>Yes, the example is created here: https://github.com/huggingface/transformers/blob/5e04d7086803ae4a3892f4082f2835a756592c2c/src/transformers/models/gpt2/modeling_gpt2.py#L1239
If you can update this, then it will update it in the docs as soon as we merge it.
All the docs visible on the website are in the repository :)<|||||>Ok, thanks! I'll deal with it then :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
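For anyone hitting the same error, the fix discussed above in a minimal form: `microsoft/DialogRPT-updown` ships a trained classification head, so the logits stay identical across runs, unlike the bare `gpt2` checkpoint whose head is newly (randomly) initialized each time. The example input is an assumption:
```python
import torch
from transformers import GPT2ForSequenceClassification, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("microsoft/DialogRPT-updown")
model = GPT2ForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # stable across runs, since no head weights are re-initialized
```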
transformers | 11,333 | closed | Potential bug: Tokens with punctuation are re-tokenized although I've set `is_split_into_words=True` | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
tokenizers: Hi @LysandreJik!
## Information
I am working on a token classification task where my input is in the following format:
```
texts = [['Foo', 'bar', '.'], ['Hello', 'world', '.']]
tags = [['B-ENT', 'I-ENT', 'O'], ['O', 'O', 'O']]
```
- **Model:** `bert-large-cased`
- **Tokenizer:** `BertTokenizerFast`. I align tokens with their tags as in this [tutorial](https://huggingface.co/transformers/custom_datasets.html#token-classification-with-w-nut-emerging-entities).
## Problem
Although I've set `is_split_into_words=True` in the tokenizer, tokens containing punctuation are tokenized.
## To reproduce
I reproduced the issue in this [Google Colab notebook](https://colab.research.google.com/drive/1mNJ-T6kOaC5_a8C3T_qCXBNFP5WYC5oi?usp=sharing).
## Expected behavior
Since I've set `is_split_into_words=True`, I would expect the tokenizer to keep the tokens as they are and split them into subwords with `##`. For example, if a token is `'foo(bar'`, I would expect it to stay that way, instead of being split into `['foo', '(', 'bar']`.
Thanks a lot for reading the issue! | 04-20-2021 09:52:54 | 04-20-2021 09:52:54 | The `is_split_into_words` should be set to skip pre-tokenization (splitting on whitespace), not tokenization. This flag should be set to `True` if you have split your text into individual words, and you're now looking to have each word split into tokens and converted to IDs.
This seems to be unclear from the documentation, we'll work on improving this.<|||||>Makes sense, thank you for the clarification! I'd be happy to work on this if needed.<|||||>Yes, we would welcome such a contribution! I guess we would need to find all occurrences of that `is_split_into_words` parameter and clarify that pre-tokenization isn't tokenization as one could expect.
We would gladly welcome a PR!<|||||>Great, I will work on this next week! Thanks again for the help! |
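For anyone landing here from search, a minimal sketch of the distinction explained above (using `bert-base-cased` as an assumed stand-in for the original `bert-large-cased` setup):
```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")

# Pre-tokenized input: is_split_into_words=True only skips the whitespace
# splitting step; every word still goes through full tokenization, so a word
# like "foo(bar" is split on punctuation / into subwords by the model's vocab.
words = ["Hello", "foo(bar", "."]
encoding = tokenizer(words, is_split_into_words=True)
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))

# word_ids() maps every produced token back to its original word index,
# which is the hook for realigning token-classification tags.
print(encoding.word_ids())
```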
transformers | 11,332 | closed | batch_encode_plus set a sort parameter | # 🚀 Feature request
When using `tokenizer.batch_encode_plus`, I hope to get the results back sorted (e.g. by length), so I hope it can have a sort parameter.
Such as: `tokenizer.batch_encode_plus(..., sorted=True)`
## Motivation
## Your contribution
| 04-20-2021 08:23:07 | 04-20-2021 08:23:07 | Sorted by length?
How does this help you?<|||||>> Sorted by length?
> How does this help you?
For the next LSTM layer.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
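For anyone who needs this behaviour today, a hedged workaround sketch: there is no `sorted=True` argument, so the sorting is done outside the tokenizer (the model name is an assumption):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

sentences = [
    "a short one",
    "this sentence is quite a bit longer than the other ones",
    "a mid length sentence here",
]
encodings = [tokenizer(s) for s in sentences]

# Sort by token count (descending), the order pack_padded_sequence expects
# when feeding an LSTM; keep the permutation so outputs can be unsorted later.
order = sorted(range(len(sentences)), key=lambda i: len(encodings[i]["input_ids"]), reverse=True)
sorted_input_ids = [encodings[i]["input_ids"] for i in order]
print(order, [len(ids) for ids in sorted_input_ids])
```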
transformers | 11,331 | closed | [Generate] Remove outdated code | # What does this PR do?
This PR refactors `greedy_search()` and `sample()` by removing old code and improving some comments.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-20-2021 08:12:28 | 04-20-2021 08:12:28 | Merging (cc @LysandreJik, @sgugger) |
transformers | 11,330 | closed | Correcting comments in T5Stack to reflect correct tuple order | # What does this PR do?
In order to match the actual order (line 513 and 516, and as accessed in 968), I've changed the order mentioned in comments L962 and L966-967.
## Before submitting
- [ x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. (Seems like @patrickvonplaten, @patil-suraj are suggested for T5)
| 04-20-2021 06:49:08 | 04-20-2021 06:49:08 | Hi, @talkhaldi thanks for correcting the other comment!
There is one code quality check failing, could run `make style && make quality` and push the code again?<|||||>Hi!
Thanks for your comment! The command you mentioned was requiring a lot of dependencies, some of which conflicted with the venv I was using, so instead of creating a new venv to run it, I removed the extra trailing space which technically was the only extra thing, and hoped it would work, but seems not ^^'
I created a new venv and running the command now...<|||||>Hi @patil-suraj,
So it seems make style works well, but I get this error with make quality:
```
> make quality
black --check examples tests src utils
839 files would be left unchanged.
isort --check-only examples tests src utils
python utils/custom_init_isort.py --check_only
flake8 examples tests src utils
make extra_quality_checks
make[1]: Entering directory '/mnt/berry/home/alkhaldi/contribcode/transformers'
python utils/check_copies.py
python utils/check_table.py
python utils/check_dummies.py
python utils/check_repo.py
Traceback (most recent call last):
File "utils/check_repo.py", line 22, in <module>
from transformers.models.auto import get_values
ImportError: cannot import name 'get_values' from 'transformers.models.auto' (unknown location)
make[1]: *** [Makefile:33: extra_quality_checks] Error 1
make[1]: Leaving directory '/mnt/berry/home/alkhaldi/contribcode/transformers'
make: *** [Makefile:42: quality] Error 2
```
Do you have an idea of what's wrong?<|||||>hi @talkhaldi
please make sure you have all the dev deps before running make
to do that you could run `pip install -e ".[dev]"` from the root of the repo.<|||||>Hi @patil-suraj,
I have applied that command successfully, but it seems the tests are still failing :o What could be the problem?
Note: That command changed many files, but I only added modeling_t5.py to the commit. Should I include the changes of all files even though I haven't changed them manually?<|||||>Hi @talkhaldi
I think this is because we upgraded the version of `black`. Could you rebase your branch with master and then push again ?<|||||>Hi @patil-suraj,
The check_code_quality test passes now, but another test fails. Can you please tell me what's wrong? |
transformers | 11,329 | closed | Honor contributors to models | # What does this PR do?
This PR mentions by HF username the person who added a given model in each doc file, and updates the template so this keeps being consistently done.
A few persons are missing, tagging them below. If you are one of the persons tagged, it would be great if you could create a HF account and report your HF username here so we can properly attribute the model addition to you :-)
- Bert japanese: @singletongue
- CamemBert: @louismartin
- DeBerta (v1 and v2): @BigBird01
- ProphetNet and XLM-ProphetNet: @qiweizhen
| 04-20-2021 02:53:38 | 04-20-2021 02:53:38 | Hi, thanks for this update!
I guess my huggingface username is [`camembert`](https://huggingface.co/camembert) ...?<|||||>> # What does this PR do?
> This PR mentions by HF username the person who added a given model in each doc file, and updates the template so this keeps being consistently done.
>
> A few persons are missing, tagging there below. If you are one of the persons tagged, it would be great if you could create a HF account and report your HF username here so we can properly attribute the model addition to you :-)
>
> * Bert japanese: @singletongue
> * CamemBert: @louismartin
> * DeBerta (v1 and v2): @BigBird01
=> Pengcheng He: @BigBird01
> * ProphetNet and XLM-ProphetNet: @qiweizhen
Thanks I just replied my name and account inline.<|||||>Hi @BigBird01, there is no huggingface account with the user name BigBird01 (I'm not talking about the GitHub account but the account on [huggingface.co](https://huggingface.co/)).<|||||>Thanks! It’s DeBERTa😊, DeBERTa (Pengcheng He) (huggingface.co)<https://huggingface.co/DeBERTa>
Thanks!
Pengcheng
<|||||>Thank you for the mentioning!
The HF account for `bert-japanese` models is [`cl-tohoku`](https://huggingface.co/cl-tohoku).<|||||>Failure is spurious, so merging. @qiweizhen if you create (or already have) a HF account, I can add you in a followup PR. |
transformers | 11,328 | closed | Trainer push to hub | # What does this PR do?
This PR begins the work to completely integrate the `Trainer` API with the [model hub](https://huggingface.co/models). It introduces a new mixin `PushToHubMixin` that implements the `push_to_hub` method. That mixin is then subclassed in all objects have a `save_pretrained` method: config, tokenizers, models.
This enables the current API to create a new repo and push the model to it:
```
# Will create and push to https://huggingface.co/usernam/model_id/
model.save_pretrained(model_id, push_to_hub=True)
```
This requires the user to have a valid token, for instance generated by `transformers-cli login` (a useful error message is displayed if that is not the case).
If the repo already exists and the user just wants to update the weights, they can do:
```
model.save_pretrained(model_id, push_to_hub=True, repo_url=my_repo_url)
```
This will work as long as the git credentials of the user are stored locally. If not, a token may need to be passed either with `use_auth_token=True` or `use_auth_token=str_token`.
This also works to update the config or the tokenizer if there is a fix needed:
```
config = AutoConfig.from_pretrained(repo_id)
config.that_arg = fix
config.save_pretrained(local_folder, push_to_hub=True, repo_url=my_repo_url)
```
This PR also adds `Trainer.push_model_to_hub` that can be called after training to push the underlying model to the hub. This is controlled by a new `--push_to_hub` training argument, this last method is called in every example script using the Trainer, so people can start interacting with it.
Follow-up PRs scheduled are:
- have the Trainer automatically generate a sensible model card that can be pushed with the rest
- add the option to upload not only the final model to the hub but all checkpoints, using the versioning system to make it easy to navigate.
In terms of tests, this PR also adds a new environment variable watched by the Transformers library to decide which base url use in all things `from_pretrained`, which allows us to check the things we push to the staging env are actually working with the `from_pretrained` methods. The `tests_git_lfs` job in circle CI is renamed `tests_hub` and activates that env variable then runs all the tests marked with `is_staging_test` (which basically push things to the hub and check they can be used with `from_pretrained`). | 04-20-2021 01:16:48 | 04-20-2021 01:16:48 | Side note, the torchhub test uses the master branch to check which dependency to install (see [here](https://github.com/huggingface/transformers/blob/c0328a6c263494fff527fac7288faa627e3267e0/.github/workflows/github-torch-hub.yml#L36)), which means it can't pass until this PR is merged. Unless I missed something!<|||||>This is awesome - very much looking forward to have this feature!
One thing I'd love to discuss are that they are 4 input arguments to to the `push_to_hub(...)` method - do we really need all of those?
1.) I actually wouldn't put `save_directory` as an argument to `push_to_hub` because I think only the object on which `push_to_hub(...)` is called should be uploaded and not files that are unrelated to the object on which `push_to_hub()` is called.
*E.g.*: If I call `model.save_pretrained("name/of/dir")` the model and config is saved in this dir => I would therefore expect `model.push_to_hub("name_of_repo")` to upload only model to the hub. IMO, this is more intuitive and more concise with how we design `save_pretrained(...)`. I don't really understand why a model object should be responsible to upload files to the hub that are not related to the model itself. Similar to how we call `model.save_pretrained(...)` and `tokenizer.save_pretrained(...)` to save everything we need, we should have to call `model.push_to_hub(...)` and `tokenizer.push_to_hub(...)` IMO.
2.) Also, I think it would be a bit nicer if we could reduce the args: `repo_name`, `repo_url`, `organization` to just `repo_name` and `organization`. I would make `repo_name` mandatory (by default it will be pushed under one's namespace) and `organization` optionally. Do we really need three args to define the repo URL? What do you think?
I feel quite strongly about 1.) to have consistency in the library, less strongly about 2.) |
transformers | 11,327 | closed | run_ner.py example MobileBERT FP16 returns nan loss | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-5.8.0-44-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes (RTX 2080 Ti)
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger @stas00 @patil-suraj
## Information
Model I am using: MobileBERT
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name) conll2003
* [ ] my own task or dataset: (give details below)
## To reproduce
Using the example: https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py
Steps to reproduce the behavior:
1. Add training_args.fp16 = True to main() after initializing training_args
2. parameters to run_ner:
```
--model_name_or_path
google/mobilebert-uncased
--dataset_name
conll2003
--output_dir
/path/to/output
--do_eval
--do_train
--do_predict
```
3. Loss will return NaN
First observed nans popping up from the encoder within the forward call in the MobileBertModel class:
https://huggingface.co/transformers/_modules/transformers/modeling_mobilebert.html
## Expected behavior
When running without FP16, the model trains as expected. Other models that I have tested did not have this issue and converge well with fp16 enabled: RoBERTa, BERT, and DistilBERT. | 04-19-2021 22:01:17 | 04-19-2021 22:01:17 | It looks like mobileBERT was pretrained on TPUs using bfloat16, which then often result in NaNs when using FP16 for further fine-tuning (see #11076 or #10956). You'll be best off training in FP32 or use another model compatible with FP16.<|||||>Makes sense! That's interesting that affects the training on GPUs! I will pass this info on to my colleague who deals with reproducibility! And for now I shall stick with FP32 when fine-tuning the MobileBERT model!
Many thanks for the reply!<|||||>> You'll be best off training in FP32 or use another model compatible with FP16.
And at some point we should also add `--bf16` mode to Trainer, for those who want to do finetuning and inference on hardware that supports it . e.g. high-end Ampere RTX-3090 and A100 should already support it, and of course TPU v2+.
Does it make sense?
FYI, `bf16` AMP is being discussed here: https://github.com/pytorch/pytorch/issues/55374 |
transformers | 11,326 | closed | Parameter missing from state_dict of optimizer when loading from checkpoint | ## Environment info
- `transformers` version: `'4.2.0dev0'`
- Platform: Debian
- Python version: `Python 3.6.10 |Anaconda, Inc.| (default, May 8 2020, 02:54:21)`
- PyTorch version (GPU?): `torch-xla-1.6`
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes
- Using TPUs
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ X] the official example scripts: (give details below)
* [ X] my own modified scripts: (give details below)
The tasks I am working on is:
* MLM
## To reproduce
You need to load a model from a checkpoint saved on the TPU.
Steps to reproduce the behavior:
1. Run `run_mlm.py` on any dataset and store a checkpoint. Then load from that checkpoint using the following command.
2. `python transformers/examples/language-modeling/run_mlm.py --warmup_steps 10000 --learning_rate 1e-4 --save_steps 100000 --max_seq_length 512 --logging_steps 50 --overwrite_output_dir --model_name_or_path ../../bucket/model_outputs/en/inverted_order_500K/mlm/checkpoint-10000 --do_train --do_eval --max_steps 500000 --per_device_train_batch_size 16 --per_device_eval_batch_size 16 --train_file ../../bucket/pretrain_data/en/valid.txt --validation_file ../../bucket/pretrain_data/en/valid.txt --output_dir ../../bucket/model_outputs/en/inverted_order_500K/mlm`
3. OR, use this `nohup python transformers/examples/xla_spawn.py --num_cores 8 transformers/examples/language-modeling/run_mlm.py --warmup_steps 10000 --learning_rate 1e-4 --save_steps 100000 --max_seq_length 512 --logging_steps 50 --overwrite_output_dir --model_name_or_path ../../bucket/model_outputs/en/inverted_order_500K/mlm/checkpoint-10000 --do_train --do_eval --max_steps 500000 --per_device_train_batch_size 16 --per_device_eval_batch_size 16 --train_file ../../bucket/pretrain_data/en/valid.txt --validation_file ../../bucket/pretrain_data/en/valid.txt --output_dir ../../bucket/model_outputs/en/inverted_order_500K/mlm`
## Error trace
This error trace uses a modified `Trainer`, but the issue occurs with the original `Trainer` as well.
> Traceback (most recent call last):
> File "transformers/examples/xla_spawn.py", line 85, in <module>
> main()
> File "transformers/examples/xla_spawn.py", line 81, in main
> xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
> File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 292, in spawn
> _start_fn(0, pf_cfg, fn, args)
> File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 229, in _start_fn
> fn(gindex, *args)
> File "/home/asd/source_code/Multilingual/transformers/examples/language-modeling/run_mlm_synthetic.py", line 486, in _mp_fn
> main()
> File "/home/asd/source_code/Multilingual/transformers/examples/language-modeling/run_mlm_synthetic.py", line 460, in main
> trainer.train(model_path=model_path)
> File "/home/asd/source_code/Multilingual/transformers/src/transformers/trainer_word_modifications.py", line 666, in train
> self._load_optimizer_and_scheduler(model_path)
> File "/home/asd/source_code/Multilingual/transformers/src/transformers/trainer_word_modifications.py", line 1003, in _load_optimizer_and_scheduler
> self.optimizer.load_state_dict(optimizer_state)
> File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch/optim/optimizer.py", line 123, in load_state_dict
> raise ValueError("loaded state dict contains a parameter group "
> ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group
## Where is the issue?
I've isolated the issue to a missing parameter in `optimizer_state['state']`. For some reason, index `136` is missing from `optimizer_state['state'].keys()`.
The following is the debugger output in function `_load_optimizer_and_scheduler` and just before line `self.optimizer.load_state_dict(optimizer_state)` in block `if is_torch_tpu_available()`.
``` python
>>> optimizer_state['param_groups']
[{'weight_decay': 0.0, 'lr': 0.0001, 'betas': [0.9, 0.999], 'eps': 1e-08, 'correct_bias': True, 'initial_lr': 0.0001, 'params': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53]}, {'weight_decay': 0.0, 'lr': 0.0001, 'betas': [0.9, 0.999], 'eps': 1e-08, 'correct_bias': True, 'initial_lr': 0.0001, 'params': [54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139]}]
>>> optimizer_state['state'].keys()
dict_keys([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 137, 138, 139])
```
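As a side note, the gap above can be spotted programmatically before calling `load_state_dict`; a small sketch in plain PyTorch terms, with the checkpoint path taken from the reproduction steps as an illustrative assumption:
```python
import torch

optimizer_state = torch.load("checkpoint-10000/optimizer.pt", map_location="cpu")

# Collect every parameter index announced in param_groups and compare it with
# the indices that actually carry state; the difference (index 136 here) is
# exactly the gap described above.
expected = {p for group in optimizer_state["param_groups"] for p in group["params"]}
present = set(optimizer_state["state"].keys())
print("indices without saved state:", sorted(expected - present))
```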
## Expected behavior
Load the checkpoint correctly.
| 04-19-2021 21:36:35 | 04-19-2021 21:36:35 | Could you upgrade to the latest version of Transformers and see if the problem persists? I have tried to reproduce but it all works fine on my side.<|||||>I have the same issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I'm facing the same problem with `4.10.0.dev0`. @ameet-1997 could you find a solution for this?<|||||>Looks like you already found the solution, thanks for that!
I wasn't able to fix it earlier. |
transformers | 11,325 | closed | Enable added tokens | Currently, the only way to manage adding `AddedToken`s to a tokenizer is via the `tokenizer.add_special_tokens` or `tokenizer.add_tokens` methods; it should also be enabled from the initialization.
Previously this was impossible:
```py
from transformers import AddedToken, GPT2Tokenizer

special_tokens = [AddedToken('<special>')]
GPT2Tokenizer.from_pretrained('gpt2', additional_special_tokens=special_tokens)
```
This enables this functionality and adds a test. This also fixes all tokenizers that were ill-configured for that purpose.. | 04-19-2021 20:59:19 | 04-19-2021 20:59:19 | @sgugger, please merge if you're happy with the updated PR :) |
transformers | 11,324 | closed | [Trainer] Add a progress bar for batches skipped | # What does this PR do?
As suggested in #11284, this PR adds a progress bar for the batches skipped when resuming training from a checkpoint as well as a comment telling the user how to deactivate that behavior if they find it too long. | 04-19-2021 20:38:03 | 04-19-2021 20:38:03 | |
transformers | 11,323 | closed | Bug in trainer: substantially different results from restarting from a checkpoint and without | ## Environment info
- `transformers` version: 4.5.5
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): 1.8
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
@sgugger @patrickvonplaten, @patil-suraj
## Information
- I am training T5 model and I am resuming the training from a checkpoint
- I have fixed the issue here https://github.com/huggingface/transformers/issues/11294 by freezing the parameters back right after this loads the model from the checkpoint
- I am using "evaluation_strategy": "steps" and I evaluate the model every 10 steps with "save_total_limit": 1
- I modified the `_save_checkpoint` method as below to "save the last copy of the model in output_dir", as one needs to load a checkpoint from the point where the model stopped training, and not from the checkpoint with the best evaluation:
```
def _save_checkpoint(self, model, trial, metrics=None):
super()._save_checkpoint(model, trial, metrics)
# Saves the models checkpoints in the main folder.
if self.is_world_process_zero():
# remove the older global_steps.
global_steps = [str(x) for x in Path(self.args.output_dir).glob("global_step*")]
for global_step in global_steps:
shutil.rmtree(global_step)
self.save_model(self.args.output_dir)
if self.deepspeed:
self.deepspeed.save_checkpoint(self.args.output_dir)
else:
# deepspeed.save_checkpoint above saves model/optim/sched
torch.save(self.optimizer.state_dict(), os.path.join(self.args.output_dir, "optimizer.pt"))
with warnings.catch_warnings(record=True) as caught_warnings:
torch.save(self.lr_scheduler.state_dict(), os.path.join(self.args.output_dir, "scheduler.pt"))
reissue_pt_warnings(caught_warnings)
self.state.save_to_json(os.path.join(self.args.output_dir, "trainer_state.json"))
```
Then I find the last checkpoint to resume from, using the copy saved in the output directory, as below:
```
def get_last_checkpoint(output_dir):
if os.path.exists(os.path.join(output_dir, 'pytorch_model.bin')):
return output_dir
return None
```
Here are the results without resuming, for 10 evaluations:
```
{'loss': 5.0483, 'learning_rate': 6e-07, 'epoch': 0.02}
0%| | 10/60000 [00:07<11:11:04, 1.49it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.54it/s]
{'mrpc_en_eval_loss': 5.382528305053711, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0, 'mrpc_en_eval_runtime': 1.8421, 'mrpc_en_eval_samples_per_second': 110.741, 'epoch': 0.22}
{'mrpc_en_eval_loss': 5.382528305053711, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0, 'mrpc_en_eval_runtime': 1.8421, 'mrpc_en_eval_samples_per_second': 110.741, 'epoch': 0.22, 'eval_average_metrics': 0.0}
0%| | 20/60000 [00:20<11:57:29, 1.39it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.56it/s]
{'mrpc_en_eval_loss': 5.180729389190674, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0, 'mrpc_en_eval_runtime': 1.8179, 'mrpc_en_eval_samples_per_second': 112.218, 'epoch': 0.43}
{'mrpc_en_eval_loss': 5.180729389190674, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0, 'mrpc_en_eval_runtime': 1.8179, 'mrpc_en_eval_samples_per_second': 112.218, 'epoch': 0.43, 'eval_average_metrics': 0.0}
0%| | 30/60000 [00:33<12:01:13, 1.39it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.52it/s]
{'mrpc_en_eval_loss': 4.810805320739746, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0, 'mrpc_en_eval_runtime': 1.8421, 'mrpc_en_eval_samples_per_second': 110.743, 'epoch': 0.65}
{'mrpc_en_eval_loss': 4.810805320739746, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0, 'mrpc_en_eval_runtime': 1.8421, 'mrpc_en_eval_samples_per_second': 110.743, 'epoch': 0.65, 'eval_average_metrics': 0.0}
0%| | 40/60000 [00:45<11:17:50, 1.47it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.54it/s]
{'mrpc_en_eval_loss': 4.203256607055664, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0098039215686274, 'mrpc_en_eval_runtime': 2.031, 'mrpc_en_eval_samples_per_second': 100.441, 'epoch': 0.87}
{'mrpc_en_eval_loss': 4.203256607055664, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0098039215686274, 'mrpc_en_eval_runtime': 2.031, 'mrpc_en_eval_samples_per_second': 100.441, 'epoch': 0.87, 'eval_average_metrics': 0.0}
0%| | 50/60000 [00:58<11:42:57, 1.42it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.39it/s]
{'mrpc_en_eval_loss': 3.262455463409424, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0098039215686274, 'mrpc_en_eval_runtime': 2.1069, 'mrpc_en_eval_samples_per_second': 96.825, 'epoch': 1.09}
{'mrpc_en_eval_loss': 3.262455463409424, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0098039215686274, 'mrpc_en_eval_runtime': 2.1069, 'mrpc_en_eval_samples_per_second': 96.825, 'epoch': 1.09, 'eval_average_metrics': 0.0}
0%|▏ | 60/60000 [01:13<11:57:15, 1.39it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 1.78it/s]
{'mrpc_en_eval_loss': 1.9655567407608032, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.49019607843137253, 'mrpc_en_eval_gen_len': 3.053921568627451, 'mrpc_en_eval_runtime': 2.8657, 'mrpc_en_eval_samples_per_second': 71.186, 'epoch': 1.3}
{'mrpc_en_eval_loss': 1.9655567407608032, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.49019607843137253, 'mrpc_en_eval_gen_len': 3.053921568627451, 'mrpc_en_eval_runtime': 2.8657, 'mrpc_en_eval_samples_per_second': 71.186, 'epoch': 1.3, 'eval_average_metrics': 0.24509803921568626}
0%|▏ | 70/60000 [01:27<12:14:11, 1.36it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.08it/s]
{'mrpc_en_eval_loss': 0.7519775032997131, 'mrpc_en_eval_f1': 18.404907975460123, 'mrpc_en_eval_accuracy': 34.80392156862745, 'mrpc_en_eval_gen_len': 2.9411764705882355, 'mrpc_en_eval_runtime': 2.6193, 'mrpc_en_eval_samples_per_second': 77.884, 'epoch': 1.52}
{'mrpc_en_eval_loss': 0.7519775032997131, 'mrpc_en_eval_f1': 18.404907975460123, 'mrpc_en_eval_accuracy': 34.80392156862745, 'mrpc_en_eval_gen_len': 2.9411764705882355, 'mrpc_en_eval_runtime': 2.6193, 'mrpc_en_eval_samples_per_second': 77.884, 'epoch': 1.52, 'eval_average_metrics': 26.60441477204379}
0%|▏ | 80/60000 [01:41<12:02:22, 1.38it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.60it/s]
{'mrpc_en_eval_loss': 0.4142318665981293, 'mrpc_en_eval_f1': 75.62500000000001, 'mrpc_en_eval_accuracy': 61.76470588235294, 'mrpc_en_eval_gen_len': 2.1176470588235294, 'mrpc_en_eval_runtime': 1.7878, 'mrpc_en_eval_samples_per_second': 114.109, 'epoch': 1.74}
{'mrpc_en_eval_loss': 0.4142318665981293, 'mrpc_en_eval_f1': 75.62500000000001, 'mrpc_en_eval_accuracy': 61.76470588235294, 'mrpc_en_eval_gen_len': 2.1176470588235294, 'mrpc_en_eval_runtime': 1.7878, 'mrpc_en_eval_samples_per_second': 114.109, 'epoch': 1.74, 'eval_average_metrics': 68.69485294117648}
0%|▏ | 90/60000 [01:54<11:41:23, 1.42it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.54it/s]
{'mrpc_en_eval_loss': 0.3786551058292389, 'mrpc_en_eval_f1': 51.18483412322274, 'mrpc_en_eval_accuracy': 49.50980392156863, 'mrpc_en_eval_gen_len': 2.6519607843137254, 'mrpc_en_eval_runtime': 1.8265, 'mrpc_en_eval_samples_per_second': 111.69, 'epoch': 1.96}
{'mrpc_en_eval_loss': 0.3786551058292389, 'mrpc_en_eval_f1': 51.18483412322274, 'mrpc_en_eval_accuracy': 49.50980392156863, 'mrpc_en_eval_gen_len': 2.6519607843137254, 'mrpc_en_eval_runtime': 1.8265, 'mrpc_en_eval_samples_per_second': 111.69, 'epoch': 1.96, 'eval_average_metrics': 50.34731902239569}
0%|▏ | 100/60000 [02:07<12:01:27, 1.38it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.58it/s]
{'mrpc_en_eval_loss': 0.29472649097442627, 'mrpc_en_eval_f1': 71.01449275362319, 'mrpc_en_eval_accuracy': 60.78431372549019, 'mrpc_en_eval_gen_len': 2.3333333333333335, 'mrpc_en_eval_runtime': 1.812, 'mrpc_en_eval_samples_per_second': 112.581, 'epoch': 2.17}
{'mrpc_en_eval_loss': 0.29472649097442627, 'mrpc_en_eval_f1': 71.01449275362319, 'mrpc_en_eval_accuracy': 60.78431372549019, 'mrpc_en_eval_gen_len': 2.3333333333333335, 'mrpc_en_eval_runtime': 1.812, 'mrpc_en_eval_samples_per_second': 112.581, 'epoch': 2.17, 'eval_average_metrics': 65.89940323955669}
```
Now let's resume from step = 40; while the first 40 steps would give the same results, after resuming the results differ a lot:
```
0%| | 40/60000 [00:07<9:49:41, 1.69it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.62it/s]
{'mrpc_en_eval_loss': 4.203643321990967, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0098039215686274, 'mrpc_en_eval_runtime': 2.0033, 'mrpc_en_eval_samples_per_second': 101.834, 'epoch': 0.87}
{'mrpc_en_eval_loss': 4.203643321990967, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0098039215686274, 'mrpc_en_eval_runtime': 2.0033, 'mrpc_en_eval_samples_per_second': 101.834, 'epoch': 0.87, 'eval_average_metrics': 0.0}
0%| | 50/60000 [00:21<12:09:50, 1.37it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.30it/s]
{'mrpc_en_eval_loss': 3.2706634998321533, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0098039215686274, 'mrpc_en_eval_runtime': 2.2048, 'mrpc_en_eval_samples_per_second': 92.524, 'epoch': 1.09}
{'mrpc_en_eval_loss': 3.2706634998321533, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0098039215686274, 'mrpc_en_eval_runtime': 2.2048, 'mrpc_en_eval_samples_per_second': 92.524, 'epoch': 1.09, 'eval_average_metrics': 0.0}
0%|▏ | 60/60000 [00:35<12:27:28, 1.34it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.54it/s]
{'mrpc_en_eval_loss': 1.9863247871398926, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.49019607843137253, 'mrpc_en_eval_gen_len': 3.019607843137255, 'mrpc_en_eval_runtime': 2.4126, 'mrpc_en_eval_samples_per_second': 84.557, 'epoch': 1.3}
{'mrpc_en_eval_loss': 1.9863247871398926, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.49019607843137253, 'mrpc_en_eval_gen_len': 3.019607843137255, 'mrpc_en_eval_runtime': 2.4126, 'mrpc_en_eval_samples_per_second': 84.557, 'epoch': 1.3, 'eval_average_metrics': 0.24509803921568626}
0%|▏ | 70/60000 [00:49<12:02:36, 1.38it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.07it/s]
{'mrpc_en_eval_loss': 0.7721647620201111, 'mrpc_en_eval_f1': 18.404907975460123, 'mrpc_en_eval_accuracy': 34.80392156862745, 'mrpc_en_eval_gen_len': 2.946078431372549, 'mrpc_en_eval_runtime': 2.5655, 'mrpc_en_eval_samples_per_second': 79.518, 'epoch': 1.52}
{'mrpc_en_eval_loss': 0.7721647620201111, 'mrpc_en_eval_f1': 18.404907975460123, 'mrpc_en_eval_accuracy': 34.80392156862745, 'mrpc_en_eval_gen_len': 2.946078431372549, 'mrpc_en_eval_runtime': 2.5655, 'mrpc_en_eval_samples_per_second': 79.518, 'epoch': 1.52, 'eval_average_metrics': 26.60441477204379}
0%|▏ | 80/60000 [01:02<12:08:06, 1.37it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.55it/s]
{'mrpc_en_eval_loss': 0.42692506313323975, 'mrpc_en_eval_f1': 74.28571428571428, 'mrpc_en_eval_accuracy': 60.29411764705882, 'mrpc_en_eval_gen_len': 2.142156862745098, 'mrpc_en_eval_runtime': 1.8243, 'mrpc_en_eval_samples_per_second': 111.824, 'epoch': 1.74}
{'mrpc_en_eval_loss': 0.42692506313323975, 'mrpc_en_eval_f1': 74.28571428571428, 'mrpc_en_eval_accuracy': 60.29411764705882, 'mrpc_en_eval_gen_len': 2.142156862745098, 'mrpc_en_eval_runtime': 1.8243, 'mrpc_en_eval_samples_per_second': 111.824, 'epoch': 1.74, 'eval_average_metrics': 67.28991596638654}
0%|▏ | 90/60000 [01:16<12:00:53, 1.39it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.50it/s]
{'mrpc_en_eval_loss': 0.39015302062034607, 'mrpc_en_eval_f1': 45.685279187817265, 'mrpc_en_eval_accuracy': 47.549019607843135, 'mrpc_en_eval_gen_len': 2.7205882352941178, 'mrpc_en_eval_runtime': 1.856, 'mrpc_en_eval_samples_per_second': 109.915, 'epoch': 1.96}
{'mrpc_en_eval_loss': 0.39015302062034607, 'mrpc_en_eval_f1': 45.685279187817265, 'mrpc_en_eval_accuracy': 47.549019607843135, 'mrpc_en_eval_gen_len': 2.7205882352941178, 'mrpc_en_eval_runtime': 1.856, 'mrpc_en_eval_samples_per_second': 109.915, 'epoch': 1.96, 'eval_average_metrics': 46.617149397830204}
0%|▏ | 100/60000 [01:31<12:02:17, 1.38it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.55it/s]
{'mrpc_en_eval_loss': 0.30966323614120483, 'mrpc_en_eval_f1': 68.48249027237354, 'mrpc_en_eval_accuracy': 60.29411764705882, 'mrpc_en_eval_gen_len': 2.426470588235294, 'mrpc_en_eval_runtime': 1.8275, 'mrpc_en_eval_samples_per_second': 111.625, 'epoch': 2.17}
{'mrpc_en_eval_loss': 0.30966323614120483, 'mrpc_en_eval_f1': 68.48249027237354, 'mrpc_en_eval_accuracy': 60.29411764705882, 'mrpc_en_eval_gen_len': 2.426470588235294, 'mrpc_en_eval_runtime': 1.8275, 'mrpc_en_eval_samples_per_second': 111.625, 'epoch': 2.17, 'eval_average_metrics': 64.38830395971618}
```
## Expected behavior
Resuming from a checkpoint needs to get the same results as without
Thank you for your help @sgugger | 04-19-2021 19:18:04 | 04-19-2021 19:18:04 | You will only have perfectly reproducible results using checkpointing if the only randomness comes from the shuffling in your data (this is enforced by the CI). The way this is programmed inside the Trainer is to go through each epoch before the current one (which triggers the random shuffling) and then each batch (which puts you in the same position as before the checkpoint).
Since your results differ slightly, it looks like there are other random calls in your training code, which you did not share. There is no way to have the exact same results while resuming from a checkpoint if this is the case.<|||||>Hi @sgugger thanks for the reply, I do not have any other randomness in my codes, and I am using run_seq2seq.py codes to train t5 models on mrpc dataset, without modifications, I really appreciate your help on this issue as this is really crucial for me to have this working thanks a lot
I initialize only the weights randomly, but I assume Hugging Face is taking care of setting seeds, and there is really no other randomness.<|||||>@sgugger I confirm the same issue also exists when training the vanilla T5:
Here is the run for t5-base for 100 steps:
```
{'loss': 6.1045, 'learning_rate': 6e-07, 'epoch': 0.02}
0%| | 10/60000 [00:06<10:25:12, 1.60it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.44it/s]
{'mrpc_en_eval_loss': 6.924696445465088, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.3137254901960786, 'mrpc_en_eval_runtime': 1.9287, 'mrpc_en_eval_samples_per_second': 105.771, 'epoch': 0.22}
{'mrpc_en_eval_loss': 6.924696445465088, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.3137254901960786, 'mrpc_en_eval_runtime': 1.9287, 'mrpc_en_eval_samples_per_second': 105.771, 'epoch': 0.22, 'eval_average_metrics': 0.0}
0%| | 20/60000 [00:27<13:37:00, 1.22it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.49it/s]
{'mrpc_en_eval_loss': 5.22016716003418, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.764705882352941, 'mrpc_en_eval_runtime': 1.8761, 'mrpc_en_eval_samples_per_second': 108.737, 'epoch': 0.43}
{'mrpc_en_eval_loss': 5.22016716003418, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.764705882352941, 'mrpc_en_eval_runtime': 1.8761, 'mrpc_en_eval_samples_per_second': 108.737, 'epoch': 0.43, 'eval_average_metrics': 0.0}
0%| | 30/60000 [00:47<12:58:53, 1.28it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.37it/s]
{'mrpc_en_eval_loss': 1.3517154455184937, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 18.137254901960784, 'mrpc_en_eval_gen_len': 3.2205882352941178, 'mrpc_en_eval_runtime': 1.9678, 'mrpc_en_eval_samples_per_second': 103.67, 'epoch': 0.65}
{'mrpc_en_eval_loss': 1.3517154455184937, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 18.137254901960784, 'mrpc_en_eval_gen_len': 3.2205882352941178, 'mrpc_en_eval_runtime': 1.9678, 'mrpc_en_eval_samples_per_second': 103.67, 'epoch': 0.65, 'eval_average_metrics': 9.068627450980392}
0%| | 40/60000 [01:08<13:00:06, 1.28it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 4.62it/s]
{'mrpc_en_eval_loss': 0.4487058222293854, 'mrpc_en_eval_f1': 81.3953488372093, 'mrpc_en_eval_accuracy': 68.62745098039215, 'mrpc_en_eval_gen_len': 2.0, 'mrpc_en_eval_runtime': 1.0261, 'mrpc_en_eval_samples_per_second': 198.811, 'epoch': 0.87}
{'mrpc_en_eval_loss': 0.4487058222293854, 'mrpc_en_eval_f1': 81.3953488372093, 'mrpc_en_eval_accuracy': 68.62745098039215, 'mrpc_en_eval_gen_len': 2.0, 'mrpc_en_eval_runtime': 1.0261, 'mrpc_en_eval_samples_per_second': 198.811, 'epoch': 0.87, 'eval_average_metrics': 75.01139990880073}
0%| | 50/60000 [01:27<12:31:06, 1.33it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.72it/s]
{'mrpc_en_eval_loss': 0.25695744156837463, 'mrpc_en_eval_f1': 83.79204892966361, 'mrpc_en_eval_accuracy': 74.01960784313727, 'mrpc_en_eval_gen_len': 2.0833333333333335, 'mrpc_en_eval_runtime': 1.2653, 'mrpc_en_eval_samples_per_second': 161.228, 'epoch': 1.09}
{'mrpc_en_eval_loss': 0.25695744156837463, 'mrpc_en_eval_f1': 83.79204892966361, 'mrpc_en_eval_accuracy': 74.01960784313727, 'mrpc_en_eval_gen_len': 2.0833333333333335, 'mrpc_en_eval_runtime': 1.2653, 'mrpc_en_eval_samples_per_second': 161.228, 'epoch': 1.09, 'eval_average_metrics': 78.90582838640043}
0%|▏ | 60/60000 [01:47<12:36:18, 1.32it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 4.29it/s]
{'mrpc_en_eval_loss': 0.27573078870773315, 'mrpc_en_eval_f1': 82.11143695014663, 'mrpc_en_eval_accuracy': 70.09803921568627, 'mrpc_en_eval_gen_len': 2.014705882352941, 'mrpc_en_eval_runtime': 1.1521, 'mrpc_en_eval_samples_per_second': 177.063, 'epoch': 1.3}
{'mrpc_en_eval_loss': 0.27573078870773315, 'mrpc_en_eval_f1': 82.11143695014663, 'mrpc_en_eval_accuracy': 70.09803921568627, 'mrpc_en_eval_gen_len': 2.014705882352941, 'mrpc_en_eval_runtime': 1.1521, 'mrpc_en_eval_samples_per_second': 177.063, 'epoch': 1.3, 'eval_average_metrics': 76.10473808291644}
0%|▏ | 70/60000 [02:09<13:15:00, 1.26it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.75it/s]
{'mrpc_en_eval_loss': 0.16758881509304047, 'mrpc_en_eval_f1': 87.04318936877075, 'mrpc_en_eval_accuracy': 80.88235294117648, 'mrpc_en_eval_gen_len': 2.2107843137254903, 'mrpc_en_eval_runtime': 1.2665, 'mrpc_en_eval_samples_per_second': 161.075, 'epoch': 1.52}
{'mrpc_en_eval_loss': 0.16758881509304047, 'mrpc_en_eval_f1': 87.04318936877075, 'mrpc_en_eval_accuracy': 80.88235294117648, 'mrpc_en_eval_gen_len': 2.2107843137254903, 'mrpc_en_eval_runtime': 1.2665, 'mrpc_en_eval_samples_per_second': 161.075, 'epoch': 1.52, 'eval_average_metrics': 83.96277115497361}
0%|▏ | 80/60000 [02:30<13:18:49, 1.25it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.64it/s]
{'mrpc_en_eval_loss': 0.1627584546804428, 'mrpc_en_eval_f1': 89.86486486486486, 'mrpc_en_eval_accuracy': 85.29411764705883, 'mrpc_en_eval_gen_len': 2.235294117647059, 'mrpc_en_eval_runtime': 1.2734, 'mrpc_en_eval_samples_per_second': 160.198, 'epoch': 1.74}
{'mrpc_en_eval_loss': 0.1627584546804428, 'mrpc_en_eval_f1': 89.86486486486486, 'mrpc_en_eval_accuracy': 85.29411764705883, 'mrpc_en_eval_gen_len': 2.235294117647059, 'mrpc_en_eval_runtime': 1.2734, 'mrpc_en_eval_samples_per_second': 160.198, 'epoch': 1.74, 'eval_average_metrics': 87.57949125596184}
0%|▏ | 90/60000 [02:50<12:35:38, 1.32it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.71it/s]
{'mrpc_en_eval_loss': 0.178583025932312, 'mrpc_en_eval_f1': 90.78014184397163, 'mrpc_en_eval_accuracy': 87.25490196078431, 'mrpc_en_eval_gen_len': 2.303921568627451, 'mrpc_en_eval_runtime': 1.2507, 'mrpc_en_eval_samples_per_second': 163.108, 'epoch': 1.96}
{'mrpc_en_eval_loss': 0.178583025932312, 'mrpc_en_eval_f1': 90.78014184397163, 'mrpc_en_eval_accuracy': 87.25490196078431, 'mrpc_en_eval_gen_len': 2.303921568627451, 'mrpc_en_eval_runtime': 1.2507, 'mrpc_en_eval_samples_per_second': 163.108, 'epoch': 1.96, 'eval_average_metrics': 89.01752190237798}
0%|▏ | 100/60000 [03:09<12:29:36, 1.33it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.70it/s]
{'mrpc_en_eval_loss': 0.18296584486961365, 'mrpc_en_eval_f1': 88.72727272727272, 'mrpc_en_eval_accuracy': 84.80392156862744, 'mrpc_en_eval_gen_len': 2.338235294117647, 'mrpc_en_eval_runtime': 1.2762, 'mrpc_en_eval_samples_per_second': 159.845, 'epoch': 2.17}
{'mrpc_en_eval_loss': 0.18296584486961365, 'mrpc_en_eval_f1': 88.72727272727272, 'mrpc_en_eval_accuracy': 84.80392156862744, 'mrpc_en_eval_gen_len': 2.338235294117647, 'mrpc_en_eval_runtime': 1.2762, 'mrpc_en_eval_samples_per_second': 159.845, 'epoch': 2.17, 'eval_average_metrics': 86.76559714795007}
```
Now let's see the results of t5-base after resuming from step = 60:
```
0%|▏ | 60/60000 [00:06<9:21:55, 1.78it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 4.00it/s]
{'mrpc_en_eval_loss': 0.2794328033924103, 'mrpc_en_eval_f1': 82.11143695014663, 'mrpc_en_eval_accuracy': 70.09803921568627, 'mrpc_en_eval_gen_len': 2.014705882352941, 'mrpc_en_eval_runtime': 1.2224, 'mrpc_en_eval_samples_per_second': 166.887, 'epoch': 1.3}
{'mrpc_en_eval_loss': 0.2794328033924103, 'mrpc_en_eval_f1': 82.11143695014663, 'mrpc_en_eval_accuracy': 70.09803921568627, 'mrpc_en_eval_gen_len': 2.014705882352941, 'mrpc_en_eval_runtime': 1.2224, 'mrpc_en_eval_samples_per_second': 166.887, 'epoch': 1.3, 'eval_average_metrics': 76.10473808291644}
0%|▏ | 70/60000 [00:28<13:22:56, 1.24it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.59it/s]
{'mrpc_en_eval_loss': 0.16057834029197693, 'mrpc_en_eval_f1': 88.43537414965986, 'mrpc_en_eval_accuracy': 83.33333333333334, 'mrpc_en_eval_gen_len': 2.2450980392156863, 'mrpc_en_eval_runtime': 1.3058, 'mrpc_en_eval_samples_per_second': 156.222, 'epoch': 1.52}
{'mrpc_en_eval_loss': 0.16057834029197693, 'mrpc_en_eval_f1': 88.43537414965986, 'mrpc_en_eval_accuracy': 83.33333333333334, 'mrpc_en_eval_gen_len': 2.2450980392156863, 'mrpc_en_eval_runtime': 1.3058, 'mrpc_en_eval_samples_per_second': 156.222, 'epoch': 1.52, 'eval_average_metrics': 85.8843537414966}
0%|▏ | 80/60000 [00:48<12:55:04, 1.29it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.69it/s]
{'mrpc_en_eval_loss': 0.15957750380039215, 'mrpc_en_eval_f1': 88.81118881118881, 'mrpc_en_eval_accuracy': 84.31372549019608, 'mrpc_en_eval_gen_len': 2.284313725490196, 'mrpc_en_eval_runtime': 1.291, 'mrpc_en_eval_samples_per_second': 158.021, 'epoch': 1.74}
{'mrpc_en_eval_loss': 0.15957750380039215, 'mrpc_en_eval_f1': 88.81118881118881, 'mrpc_en_eval_accuracy': 84.31372549019608, 'mrpc_en_eval_gen_len': 2.284313725490196, 'mrpc_en_eval_runtime': 1.291, 'mrpc_en_eval_samples_per_second': 158.021, 'epoch': 1.74, 'eval_average_metrics': 86.56245715069244}
0%|▏ | 90/60000 [01:11<13:47:58, 1.21it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.67it/s]
{'mrpc_en_eval_loss': 0.19618992507457733, 'mrpc_en_eval_f1': 87.17948717948718, 'mrpc_en_eval_accuracy': 82.84313725490196, 'mrpc_en_eval_gen_len': 2.3480392156862746, 'mrpc_en_eval_runtime': 1.2811, 'mrpc_en_eval_samples_per_second': 159.235, 'epoch': 1.96}
{'mrpc_en_eval_loss': 0.19618992507457733, 'mrpc_en_eval_f1': 87.17948717948718, 'mrpc_en_eval_accuracy': 82.84313725490196, 'mrpc_en_eval_gen_len': 2.3480392156862746, 'mrpc_en_eval_runtime': 1.2811, 'mrpc_en_eval_samples_per_second': 159.235, 'epoch': 1.96, 'eval_average_metrics': 85.01131221719457}
0%|▏ | 100/60000 [01:33<12:55:11, 1.29it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.75it/s]
{'mrpc_en_eval_loss': 0.21464459598064423, 'mrpc_en_eval_f1': 87.96992481203009, 'mrpc_en_eval_accuracy': 84.31372549019608, 'mrpc_en_eval_gen_len': 2.3823529411764706, 'mrpc_en_eval_runtime': 1.2654, 'mrpc_en_eval_samples_per_second': 161.214, 'epoch': 2.17}
{'mrpc_en_eval_loss': 0.21464459598064423, 'mrpc_en_eval_f1': 87.96992481203009, 'mrpc_en_eval_accuracy': 84.31372549019608, 'mrpc_en_eval_gen_len': 2.3823529411764706, 'mrpc_en_eval_runtime': 1.2654, 'mrpc_en_eval_samples_per_second': 161.214, 'epoch': 2.17, 'eval_average_metrics': 86.14182515111308}
0%|▏
```
<|||||>Dear @sgugger @patrickvonplaten @patil-suraj
Could you kindly have a look into this issue? It is really important to have checkpointing working correctly, as in many cases one cannot train the models for long periods without interruption. Thanks. <|||||>Following up on @sgugger's suggestion, if I understand the methodology correctly it doesn't quite apply to the generic checkpointing method, but one could subclass the Trainer to save the RNG state at the moment of saving the checkpoint, and then restore the same RNG state on resume. You'd probably need to do that for at least Python and PyTorch (and numpy and other libraries if you use those).
@dorooddorood606, look into:
```
import random
import numpy
import torch

# before saving
py_rng_state = random.getstate()
pt_rng_state = torch.get_rng_state()
np_rng_state = numpy.random.get_state()
# post resume
random.setstate(py_rng_state)
torch.set_rng_state(pt_rng_state)
numpy.random.set_state(np_rng_state)
```<|||||>Dear @stas00
Thank you very much for following up on this. I implemented this suggestion, and I still see the discrepancies after resuming from checkpoints. I emphasize that I tried with "vanilla t5-base", so there are no changes to the Hugging Face code. In my own code, I have some initialization which is the only part with randomness; I would be grateful if you could tell me if there might be an issue with these lines:
```
nn.init.normal_(linear_layer.weight, std=std)
nn.init.zeros_(linear_layer.bias)
```
but still, since vanilla t5-base also has this issue, I was wondering whether you think this might be a general issue in the trainer code? I would greatly appreciate it if you could kindly consider this issue.
Thanks a lot in advance for the great work you do and your hard efforts.
<|||||>> Thank you very much for following up on this, I implemented this suggestion,
Could we first validate that this was done correctly?
To test, you can debug-print some random number generated **immediately after saving the checkpoint** and RNG state, and do the same right **after the checkpoint and RNG states were restored** when you run the program a 2nd time with resume. If you get the same numbers, then we know you restored the RNG state. You probably want to check one for torch and one for python.
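Something as simple as this will do (a throwaway debug snippet, not something to keep):
```python
import random
import torch

# run 1: put this right after the checkpoint + RNG states are saved
# run 2: put this right after the RNG states are restored on resume
print("python rng:", random.random())
print("torch rng :", torch.rand(1).item())
# if both runs print the same numbers, the RNG state round-tripped correctly
```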
> I have some initialization which is the only part with randomness, I would be grateful if you could tell me if there might be an issue with these lines:
>
> ```
> nn.init.normal_(linear_layer.weight, std=std)
This line would definitely impact the RNG state. If you're uncertain you can always debug and generate a random number with that line of code and w/o it and see if it's the same.
So for example one workaround you could do is to restore the RNG state after your custom code above.
Or better, don't re-run this line, but save the outcome with the checkpoint and then restore it on subsequent runs, rather than needing to fiddle with RNG states.
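E.g. something like this (a rough sketch - `linear_layer` and the checkpoint path are placeholders from your setup):
```python
import os
import torch

ckpt_dir = "output_dir/checkpoint-50"  # placeholder path

# first run: do the nn.init.* calls once, then persist the result with the checkpoint
torch.save(linear_layer.state_dict(), os.path.join(ckpt_dir, "custom_linear.bin"))

# resumed runs: skip the nn.init.* calls and load the saved weights instead
linear_layer.load_state_dict(torch.load(os.path.join(ckpt_dir, "custom_linear.bin")))
```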
<|||||>Dear @stas00
First, I would like to thank you very much for taking your precious time to answer my question.
I observe that my code generates different results between runs. I was assuming that, since HuggingFace's run_glue.py sets the seeds initially, randomness was taken care of. All my code adds is some initialization, like what I sent, all coming after the `set_seed()` call. Considering only one run, putting checkpointing aside, could you kindly tell me whether one needs to set seeds before each initialization? Should I bring them all into the `init_weights` function of BERT? I appreciate your response a lot.
Thank you. <|||||>First a few requests, @dorooddorood606
- please don't re-post the same question on Issues and forums, once is plenty - honestly I'm lost at what we are trying to solve here.
- we all appreciate your appreciations, you're clearly a very nice person, but it becomes overbearing when we get copious amounts of it in every post.
- let's focus on the problem only so that the signal-to-noise ratio is manageable.
Thank you!
------------
Now, let's try to summarize what doesn't work.
1. From what I understand you extended the library with your own modifications. And now you're experiencing inconsistent randomness issues when you resume the model, correct?
Does the library produce the expected results if you remove your modifications?
2. Is there an easy way to provide a reproducible example that shows how the main library works correctly and then how it breaks with your modification? Perhaps a simple Google Colab notebook? If you do that please make sure that it's very easy to quickly see what the problem is and where it comes from. So no production-level hundreds of lines of code, but toy examples if possible.
<|||||>Dear @stas00
Thank you for the reminder, I will follow the points you mentioned. I was thinking there is also a bug in the trainer, as I was also observing it with the unchanged Bert-base model, but the randomness issue was resolved by upgrading to version 4.6.0 of transformers.
<|||||>Dear @stas00
I really appreciate your input on the issue of reproducibility when resuming from checkpoints. I tried to follow your points and state it in a clearer way.
### Problem statement
If a user trains a model for some steps and then reloads the model from a checkpoint, the results differ from training the model without breaks.
### How to reproduce the issue
Transformers version: I am using the 4.6.0.dev version of transformers, at this commit:
```
https://github.com/huggingface/transformers/commit/04ab2ca639ee6fd1002ce0a498d892245f0c9093
```
Please kindly clone this repository, which contains a minimal example:
```
git clone [email protected]:dorooddorood606/reproducibility.git
```
To run the code, please kindly run this command. During the run, kill the process 2-3 times, each time right after a checkpoint is saved (every 50 steps), and then resume. Please then compare the final results of the interrupted and resumed run with `training without any breaks`. The results will differ.
```
TASK_NAME=mrpc
python run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 2 --output_dir /temp/$TASK_NAME/ --eval_steps 50 --evaluation_strategy steps --load_best_model_at_end --fp16 --do_predict
```
Please let me know if you need any further information on this.
### Which modifications were done to the Trainer class to make it reproducible:
I applied the following modifications to the trainer class:
1) Following your suggestions, I save the random states and reload them before reloading the checkpoint in the trainer class (a simplified sketch is shown after this list). Please see https://github.com/dorooddorood606/reproducibility/blob/f5902af4669bba8aaee326efdb0cd459e25be675/trainer.py#L126
and https://github.com/dorooddorood606/reproducibility/blob/f5902af4669bba8aaee326efdb0cd459e25be675/trainer.py#L200
2) At each checkpoint save, I also save a copy of the checkpoint in the output_dir. This is because I personally believe we need to keep the last checkpoint to resume from, in addition to keeping only the checkpoint of the best model so far, in order to be able to continue training from the last state. Please see https://github.com/dorooddorood606/reproducibility/blob/f5902af4669bba8aaee326efdb0cd459e25be675/trainer.py#L87
3) I get the last checkpoint in run_glue.py based on the checkpoint saved in the main output_dir, please see https://github.com/dorooddorood606/reproducibility/blob/f5902af4669bba8aaee326efdb0cd459e25be675/run_glue.py#L46
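In short, the saving side of 1) amounts to something like this (a simplified generic sketch, not the exact code in the linked repo):
```python
import os
import pickle
import random

import numpy as np
import torch
from transformers import TrainerCallback


class RngStateCallback(TrainerCallback):
    # hypothetical helper: persist the RNG states next to each saved checkpoint
    def on_save(self, args, state, control, **kwargs):
        ckpt_dir = os.path.join(args.output_dir, f"checkpoint-{state.global_step}")
        with open(os.path.join(ckpt_dir, "rng_state.pkl"), "wb") as f:
            pickle.dump(
                {
                    "python": random.getstate(),
                    "numpy": np.random.get_state(),
                    "torch": torch.get_rng_state(),
                },
                f,
            )
```
The resume side does the mirror image: load this file and call the corresponding `setstate`/`set_state`/`set_rng_state` functions before training continues.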
### Larger impact of this issue
To me, fixing this issue with resuming from checkpoints would be beneficial to all users who need to use this option. I would appreciate it a lot if you could spare some of your precious time and help with this issue.
<|||||>Thank you for your detailed followup, @dorooddorood606. And sharing what experiments you have tried.
I agree that it'd be awesome to be able to resume as if there was no stopping.
Please give us some time, we are going to discuss whether it is feasible to make it happen, as there are so many moving parts to consider, and if so we will build this from the ground up.
We will keep you posted.<|||||>Dear @stas00
Thank you. Sure, meanwhile if you have any ideas and suggestions for me to try, I would greatly appreciate your help. I searched for this issue a lot, and apart from the things the HuggingFace repo has already implemented I could not find more tricks to solve the issue.
Thanks a lot in advance for your time and assistance. <|||||>@sgugger is working on it in https://github.com/huggingface/transformers/pull/11582<|||||>Hi
I cannot really express how much I appreciate this. Thank you both very much for working on this. It would be wonderful to have resuming fixed in the Trainer. Thanks for your efforts. <|||||>I totally agree!
All kudos go to @sgugger , who has a much better understanding of the nooks and crannies of the HF Trainer.
<|||||>Dear @sgugger
Thanks for the hard work. I tested it, but the issue is not resolved; especially for small datasets it can make large changes in the final results. I would appreciate it if you could share some suggestions on how to resolve the issue:
The original one:
```
checkpoint: 200
{'eval_loss': 0.44332757592201233, 'eval_accuracy': 0.7941176470588235, 'eval_f1': 0.8521126760563381, 'eval_combined_score': 0.8231151615575808, 'eval_runtime': 1.5259, 'eval_samples_per_second': 133.692, 'eval_average_metrics': 0.8231151615575808, 'epoch': 1.74}
```
The resumed one:
```
checkpoint: 200
{'eval_loss': 0.4352119266986847, 'eval_accuracy': 0.7941176470588235, 'eval_f1': 0.85, 'eval_combined_score': 0.8220588235294117, 'eval_runtime': 1.4451, 'eval_samples_per_second': 141.165, 'eval_average_metrics': 0.8220588235294117, 'epoch': 1.74}
```
The differences accumulate a lot over time
# To reproduce please run:
```
TASK_NAME=mrpc
python run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir /temp/results --eval_steps 50 --evaluation_strategy steps --load_best_model_at_end --fp16 --do_test --save_total_limit 1
```
Here are the final results without interruption:
```
[INFO|trainer_pt_utils.py:907] 2021-05-09 17:35:14,973 >> ***** eval metrics *****
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,973 >> epoch = 3.0
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,973 >> eval_accuracy = 0.701
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_average_metrics = 0.7605196946035051
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_combined_score = 0.7605
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_f1 = 0.8201
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_loss = 0.604
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_mem_cpu_alloc_delta = 2MB
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_mem_cpu_peaked_delta = 2MB
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_mem_gpu_alloc_delta = 0MB
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_mem_gpu_peaked_delta = 33MB
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_runtime = 0:00:01.95
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_samples = 204
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_samples_per_second = 104.502
05/09/2021 17:35:14 - INFO - __main__ - *** Test ***
[INFO|trainer.py:515] 2021-05-09 17:35:15,036 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence1, sentence2.
[INFO|trainer.py:2089] 2021-05-09 17:35:15,040 >> ***** Running Evaluation *****
[INFO|trainer.py:2091] 2021-05-09 17:35:15,041 >> Num examples = 204
[INFO|trainer.py:2094] 2021-05-09 17:35:15,041 >> Batch size = 8
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 26/26 [00:01<00:00, 13.77it/s]
[INFO|trainer_pt_utils.py:907] 2021-05-09 17:35:17,070 >> ***** test metrics *****
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> epoch = 3.0
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_accuracy = 0.6863
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_average_metrics = 0.7490196078431373
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_combined_score = 0.749
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_f1 = 0.8118
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_loss = 0.6198
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_mem_cpu_alloc_delta = 0MB
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_mem_cpu_peaked_delta = 2MB
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_mem_gpu_alloc_delta = 0MB
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_mem_gpu_peaked_delta = 33MB
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_runtime = 0:00:01.95
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_samples_per_second = 104.281
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> test_samples = 204
```
And here are the final results with interruptions in between:
```
[INFO|trainer_pt_utils.py:907] 2021-05-09 17:41:22,953 >> ***** eval metrics *****
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> epoch = 3.0
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_accuracy = 0.6863
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_average_metrics = 0.7467517127332861
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_combined_score = 0.7468
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_f1 = 0.8072
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_loss = 0.6106
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_mem_cpu_alloc_delta = 2MB
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_mem_cpu_peaked_delta = 1MB
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_mem_gpu_alloc_delta = 0MB
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_mem_gpu_peaked_delta = 33MB
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_runtime = 0:00:01.82
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_samples = 204
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,954 >> eval_samples_per_second = 111.603
05/09/2021 17:41:22 - INFO - __main__ - *** Test ***
[INFO|trainer.py:515] 2021-05-09 17:41:23,014 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence2, sentence1.
[INFO|trainer.py:2089] 2021-05-09 17:41:23,018 >> ***** Running Evaluation *****
[INFO|trainer.py:2091] 2021-05-09 17:41:23,019 >> Num examples = 204
[INFO|trainer.py:2094] 2021-05-09 17:41:23,019 >> Batch size = 8
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 26/26 [00:01<00:00, 14.71it/s]
[INFO|trainer_pt_utils.py:907] 2021-05-09 17:41:24,916 >> ***** test metrics *****
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,916 >> epoch = 3.0
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,916 >> eval_accuracy = 0.701
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,916 >> eval_average_metrics = 0.7572180248246088
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,916 >> eval_combined_score = 0.7572
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,916 >> eval_f1 = 0.8135
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,916 >> eval_loss = 0.6068
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,917 >> eval_mem_cpu_alloc_delta = 0MB
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,917 >> eval_mem_cpu_peaked_delta = 1MB
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,917 >> eval_mem_gpu_alloc_delta = 0MB
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,917 >> eval_mem_gpu_peaked_delta = 33MB
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,917 >> eval_runtime = 0:00:01.83
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,917 >> eval_samples_per_second = 111.455
[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,917 >> test_samples = 204
```
The results are different enough that checkpointing is still not usable. I only have access to GPUs which are interruptible, so I would really appreciate your help.
I have also added `CUBLAS_WORKSPACE_CONFIG=:16:8` as described in `https://discuss.pytorch.org/t/random-seed-with-external-gpu/102260/3` to make torch deterministic, but it still does not work. <|||||>Are you sure you are running on a source install of Transformers? The command produces the exact same results on my end.<|||||>Dear Sylvain,
Thanks for the response. Yes, I install transformers with `pip install git+https://github.com/huggingface/transformers.git`,
but the results still differ a lot. Please kindly run this command and break it after the first checkpoint (iteration = 50):
```
TASK_NAME=mrpc
python run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir /tmp/ --eval_steps 50 --evaluation_strategy steps --load_best_model_at_end --fp16 --do_test
```
<|||||>This might be due to the FP16 parameter. Could you check if you get the same result without FP16?
The reason is that we don't save the state of the gradient scaler in mixed-precision training, which is another thing that needs to be restored. I can make a PR to fix that tomorrow.
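For reference, the fix will be roughly along these lines (just a sketch with an illustrative path, not the exact Trainer code):
```python
import torch

scaler = torch.cuda.amp.GradScaler()

# when saving a checkpoint, persist the scaler state alongside it
torch.save(scaler.state_dict(), "checkpoint-50/scaler.pt")  # path is illustrative

# when resuming from that checkpoint, restore the scaler before continuing training
scaler.load_state_dict(torch.load("checkpoint-50/scaler.pt"))
```
<|||||>Dear Sylvain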
Thank you for taking your precious time to answer this issue. You are absolutely right. I checked it without fp16 and I confirm it works fine without fp16; it would be wonderful to have the fp16 mode also working when you have time.
Thank you for your hard work and great job you do :) <|||||>Problem was fixed on my side with the PR above. Let me know if this is not the case for you.<|||||>Dear @sgugger
Thank you for the PR. I checked it with the latest version of transformers, and the issue still exists. Please kindly run this command and break it after the first 50 steps:
```
TASK_NAME=mrpc
python run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --eval_steps 50 --evaluation_strategy steps --load_best_model_at_end --fp16 --do_test
```
Here are the results:
If you do not break:
```
After 50 steps:
{'eval_loss': 0.6383711695671082, 'eval_accuracy': 0.6764705882352942, 'eval_f1': 0.8070175438596491, 'eval_combined_score': 0.7417440660474717, 'eval_runtime': 2.1914, 'eval_samples_per_second': 93.091, 'eval_average_metrics': 0.7417440660474717, 'epoch': 0.43}
After 100 steps:
{'eval_loss': 0.6184656023979187, 'eval_accuracy': 0.6862745098039216, 'eval_f1': 0.813953488372093, 'eval_combined_score': 0.7501139990880072, 'eval_runtime': 2.1089, 'eval_samples_per_second': 96.731, 'eval_average_metrics': 0.7501139990880072, 'epoch': 0.87}
```
If you break after 50 steps:
```
After 100 steps
{'eval_loss': 0.6308265328407288, 'eval_accuracy': 0.6862745098039216, 'eval_f1': 0.813953488372093, 'eval_combined_score': 0.7501139990880072, 'eval_runtime': 2.1549, 'eval_samples_per_second': 94.668, 'eval_average_metrics': 0.7501139990880072, 'epoch': 0.87}
```
The differences accumulate and the results at the end vary so much that the resumed results are not usable.
I would really appreciate it if you could kindly have another look. Could you reopen this issue as well?
Thanks.
<|||||>I sadly cannot reproduce (get the exact same results with the command you indicated using a source install on current master) so this comes from something in your particular setup at this stage. |
transformers | 11,322 | closed | [Trainer] fix the placement on device with fp16_full_eval | * `do_train` isn't a reliable arg - it is not required to run `train()`, so for now add a workaround for the `fp16_full_eval` case to place the model on device if `train()` was called w/o `do_train=True` being passed to the Trainer args (see the sketch after this list).
* while at it fix the `deepspeed` case, now that it's used in eval too, it should never be put on device
* Also `args = self.args` to make the code easier to read/shorter - there is a lot of it in `train`.
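In rough terms the workaround boils down to something like this near the top of `train()` (a sketch of the intent, not the actual diff):
```python
# inside Trainer.train(), schematically:
args = self.args  # shorthand used throughout train()

if args.fp16_full_eval and not args.do_train and not args.deepspeed:
    # fp16_full_eval defers device placement at init time, so move the model here;
    # DeepSpeed manages device placement itself and is left alone
    self.model = self.model.to(args.device)
```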
Fixes: https://github.com/huggingface/transformers/issues/11200#issuecomment-822631511
@sgugger | 04-19-2021 17:45:30 | 04-19-2021 17:45:30 | We probably need to re-think the "placement on device" logic. And to do it explicitly in each stage and in `__init__` only in those special cases where it's absolutely required. |
transformers | 11,321 | closed | EncoderDecoderModel's decoder gets unexpected use_cache argument | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @patil-suraj I think you might have an idea on what went wrong.
## Information
EncoderDecoderModel internally calls the decoder with a `use_cache` parameter, which does not seem to be defined for this Bert decoder.
## To reproduce
I have bigger configs, but the bug is still present with minimalistic configs:
```python
import torch
from transformers import BertConfig, BertModel, BertForMaskedLM, EncoderDecoderModel

encoder_config = BertConfig()
encoder = BertModel(config=encoder_config)
decoder_config = BertConfig(is_decoder=True)
decoder = BertForMaskedLM(config=decoder_config)
model = EncoderDecoderModel(encoder=encoder, decoder=decoder)
input_ids = torch.ones(5, dtype=torch.long).unsqueeze(0)
model(input_ids=input_ids, decoder_input_ids=input_ids, labels=input_ids)
```
Outputs error from `/opt/conda/lib/python3.8/site-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py` at line 416 :
```
forward() got an unexpected keyword argument 'use_cache'
```
## Expected behavior
- Default Bert encoder and decoder can be stacked in an EncoderDecoderModel
| 04-19-2021 17:01:03 | 04-19-2021 17:01:03 | Hi @biggoron
`BertForMaskedLM` can't be used as a decoder, it's intended for masked LM. The `BertLMHeadModel` model should be used if you want to use BERT as a decoder. Also, the right way to initialize BERT as a decoder is as follows:
```python
from transformers import BertConfig, BertLMHeadModel

dec_config = BertConfig.from_pretrained("bert-base-uncased")
dec_config.add_cross_attention = True  # add cross attention if you want to use it in EncoderDecoderModel
dec_config.is_decoder = True
dec = BertLMHeadModel.from_pretrained("bert-base-uncased", config=dec_config)
```
or simply use the `from_encoder_decoder_pretrained` method which takes care of this.<|||||>Thanks a lot for your help, it is much clearer! |