repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 9,911 | closed | [seq2seq] fix logger format for non-main process | Currently, in `finetune_trainer.py` non-main process doesn't have any formatting at all, so we end up with:
```
[WARNING|modeling_t5.py:1645] 2021-01-30 20:01:37,246 >> [p0] got MPU
[WARNING|modeling_t5.py:1646] 2021-01-30 20:01:37,246 >> [p0] DP group [0]
[p1] got MPU
[p1] DP group [1]
```
As you can see, the 2nd process in DDP is missing the logger formatting.
This PR fixes it.
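For reference, the formatting the main process gets comes from the script's `logging.basicConfig` call; the fix applies the same pattern to the other ranks. A rough sketch of that pattern (not the exact diff, and the script's actual format string may differ):
```python
import logging
import sys

# give every process the same handler/format instead of leaving non-main ranks bare
logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
    datefmt="%m/%d/%Y %H:%M:%S",
    handlers=[logging.StreamHandler(sys.stdout)],
)
```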
I looked at the take-over version `run_seq2seq.py` to see if it needed the same fix; it doesn't have these function calls at all, and I'm not sure why. They appear to be needed, unless they get called elsewhere.
@sgugger, @patil-suraj
| 01-31-2021 04:07:41 | 01-31-2021 04:07:41 | @LysandreJik knows better for the centralized logging system so I'll defer to him. |
transformers | 9,910 | closed | Doc title in the template | # What does this PR do?
After reviewing a few PRs post-template, I'm noticing the doc pages are always misnamed -> they should use the cased name of the model, not the uppercase version. | 01-30-2021 23:00:15 | 01-30-2021 23:00:15 | |
transformers | 9,909 | closed | run_seq2seq.py : Why do we pad labels with -100? | As mentioned in [this line](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py#L437), why do we add -100? Can't we just keep `pad_token_id`? | 01-30-2021 22:29:29 | 01-30-2021 22:29:29 | Please use the [forums](https://discuss.huggingface.co/) for questions like this. We keep issues for bugs or feature requests only. |
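For context on the `-100` question above: `-100` is the default `ignore_index` of PyTorch's cross-entropy loss, so label positions set to `-100` are simply excluded from the loss, whereas a real `pad_token_id` would be trained on. A minimal illustration (shapes and values are arbitrary):
```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 4, 10)                # (batch, seq_len, vocab_size)
labels = torch.tensor([[5, 2, -100, -100]])   # padded label positions masked with -100
loss = F.cross_entropy(logits.view(-1, 10), labels.view(-1))  # ignore_index defaults to -100
```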
transformers | 9,908 | open | [seq2seq] some logging for all processes in distributed mode | In 2D Parallelism, e.g. Pipeline + DeepSpeed I need to log unique device maps per process for the user to see, but currently `logger.info()` is only activated for the main process via `if is_main_process`. Currently only in `examples/seq2seq/run_seq2seq.py`, `examples/seq2seq/finetune_trainer.py`, but it'll be needed for other scripts as well down the road.
Any idea how I could accomplish that while keeping things as they are? I guess I could use `logger.warn` as a workaround, since it's not disabled for other processes. But it's not a good approach, since it's a WARNING after all. And I don't quite want to use `print()` as it might not be what the user wants if they want things quiet.
Perhaps you have some other ideas on how I could go about doing that.
I think perhaps we could add another logger that's INFO-activated for all distributed processes, and is used only occasionally when the normal logger won't do.
I think as we are getting more and more into distributed training we will need to be able to log specific things for specific processes.
Thank you.
@LysandreJik, @patrickvonplaten, @sgugger | 01-30-2021 21:31:11 | 01-30-2021 21:31:11 | I'm definitely not an expert on logging so I'll leave it to @LysandreJik and @sgugger here. The idea of adding new `multi-process` logging functionality sounds very reasonable to me though!<|||||>I would advocate for using the `logger.warning` in cases where you want to display something from all processes. While in the library we should be strict about what we want to display as info/warn/error, I think we can be a bit more flexible and use the logger verbosity differently, as it's actively defined in the scripts. Of course, this is only if you're defining the logs in the script and not in the library, otherwise we'll need to reconsider.
If we don't want to go down this road, we could also have an approach defined on the log levels. We could potentially define additional log levels, between INFO and WARN that could get the job done.
<|||||>Oh, I like an intermediate level between INFO and WARN, like `IMPORTANT_INFO`? (`ALL_PROCESS_INFO` wouldn't make sense as it would not technically control the number of processes).<|||||>I'm talking about the library here, Trainer that is. But, of course, the same should apply to custom scripts.
I think a custom logger is a better idea. In particular since it needs to log the rank of the process - currently I have to add it manually.
I don't think we should mess with levels. These should be sacrosanct. This is because it could make things very confusing for the user.
But being able to retrieve a 2nd logger object that logs for all processes with the format that includes a process rank and using the same log level would be useful.
Though need to decide whether:
a. Such logger would be not logging anything unless invoked in a distributed environment.
b. Or perhaps it's actually better for it to be identical to the normal logger under non-distributed env, so logs aren't missed - it's just it'll not include process rank.<|||||>> I'm talking about the library here, Trainer that is.
The problem is that only Trainer knows when it's executed in a distributed training but logs are in all parts of the library. Though maybe this new logger will only be used inside the `Trainer`?
(Sorry my 3yo made a wrong click.) <|||||>> The problem is that only Trainer knows when it's executed in a distributed training but logs are in all parts of the library.
Not really. We have `torch.distributed.get_rank()` for most things to know whether we are under distributed, though the logger shouldn't be initialized until first use since the dist env comes a bit later in the game. Down the road if we have other methods that define multiproc we will just try those too or provide one for a user to run if it's non-standard and it'll set a multi-proc flag in the logging library.
It probably could/should copy the same format from normal logger, but embed process rank into it.
i.e. perhaps it can be created on the fly and require no special handling on the user side (or trainer side).
> Though maybe this new logger will only be used inside the Trainer?
No, user scripts will need it too. It's not trainer-specific. Think `model.parallelize()` `model.enable_pipeline` (new)
And just to clarify - this is for logging inside the model's code - perhaps I will find a way to abstract it out, but it still would be outside of Trainer and in the core library.
An example of this is building a custom device_map for PP or MP specific to the process and logging that. This would be the same whether it was called from Trainer or user's code.
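For concreteness, a rough sketch of how such a logger could detect the rank on the fly (assuming the standard launcher env vars; the device-map line is hypothetical):
```python
import os
import torch.distributed as dist

def infer_rank():
    # prefer the initialized process group when there is one
    if dist.is_available() and dist.is_initialized():
        return dist.get_rank()
    # otherwise fall back to the env var set by torch.distributed.launch
    return int(os.environ.get("LOCAL_RANK", -1))

# each process could then log its own (hypothetical) device_map:
# logger_all_ranks.info(f"[p{infer_rank()}] device map: {device_map}")
```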
> (Sorry my 3yo made a wrong click.)
You have competition growing ;)<|||||>I have no issues with having a second logger used in multi-process environments. It would be nice if it could be handled by the `logging` class of `transformers`, so as to have a single front-end for the logging, otherwise we'll end up confusing the users as much as if we add intermediate logging levels.
Do we actually need a second logger though, wouldn't it be simpler to adapt the formatter for those particular logs?<|||||>> I have no issues with having a second logger used in multi-process environments. It would be nice if it could be handled by the `logging` class of `transformers`, so as to have a single front-end for the logging, [...]
Yes, that is what I had in mind.
> Do we actually need a second logger though, wouldn't it be simpler to adapt the formatter for those particular logs?
Could you please give a example of what you have in mind?
I have no attachment whatsoever to how this is done. So if you already have an idea on how to make this work I'm all ears.
Thank you.<|||||>Ok I gave it more thought and my proposal of using formatters was a mistake, it won't be possible this way. I looked for a solution using filters, but, alas, the logs are already filtered by the levels before they're handled by the filters.
Thinking about it further, I'm not 100% sure how I can see several loggers here as we already have one logger per module. If you know how to do it cleanly, by all means, please do!
---
As an aside, I don't think having intermediate levels is a bad thing. The `logging` utility has an `addLevelName` method, and this specific use-case seems perfect. I understand why adding many different level names will get harder to understand, but this is the addition of one level for a situation that would benefit from it.
It would only require adding a level, which has a name. Here's how it could look:
```py
import logging
import sys
# Add level > 30
logging.addLevelName(35, "MODEL_PARALLEL_INFO")
# Setup logging like we do in our scripts
logger = logging.getLogger(__name__)
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[logging.StreamHandler(sys.stdout)],
)
for i in range(3):
# Simulate our scripts' logging
logger.setLevel(logging.INFO if i == 0 else logging.WARN)
logger.warning(f"Initiating {i}")
logger.info("Random information")
logger.log(logging.getLevelName("MODEL_PARALLEL_INFO"), f"Device Map of {i}: []")
```
This logs the following:
```
02/03/2021 15:40:28 - WARNING - __main__ - Initiating 0
02/03/2021 15:40:28 - INFO - __main__ - Random information
02/03/2021 15:40:28 - MODEL_PARALLEL_INFO - __main__ - Device Map of 0: []
02/03/2021 15:40:28 - WARNING - __main__ - Initiating 1
02/03/2021 15:40:28 - MODEL_PARALLEL_INFO - __main__ - Device Map of 1: []
02/03/2021 15:40:28 - WARNING - __main__ - Initiating 2
02/03/2021 15:40:28 - MODEL_PARALLEL_INFO - __main__ - Device Map of 2: []
```
Happy to drop the idea if you still think this would make things confusing for the user. I do agree that it could make things confusing for users not making use of the scripts and wondering why an INFO statement slipped when they set their verbosity to `warn`.<|||||>I think we can have the intermediate level be intermediate -> so not shown when the verbosity is set to `warn`. I expect the scripts to switch the statement from
```
logger.setLevel(logging.INFO if i == 0 else logging.WARN)
logger.parallel_info(f"Initiating {i}")
```
to
```
logger.setLevel(logging.INFO if i == 0 else logging.MODEL_PARALLEL_INFO)
logger.parallel_info(f"Initiating {i}")
```
I disagree with the name though, as this is a bit too specific ;-) PARALLEL_INFO is enough IMO
<|||||>To the naming:
Well, PP is too specific - not very generic either.
What we have here is 2 specific events, which may happen under any distributed training. So the common is that it's distributed, the separate is:
1. log only once for multiple processes (avoid duplicated logging)
2. log for every process (only for unique per-process logging)
and single process training is a special case of distributed with n_procs=1, so for a single proc both should be logged.
So I think this is what the name should reflect and not the specific circumstance it's used in.
------------------
To the implementation, thank you for your specific code suggestions @LysandreJik and @sgugger - please let me experiment with your proposals and try other things out and I will come back to you.
<|||||>Apologies for the delay, here is how I see a simple solution that doesn't break any conventions.
We create a second logger. Just need to think how to make it appear if the user didn't explicitly configure one and make it globally available from other modules.
Here is a possible implementation:
```
# logger.py
import logging
import sys
import os
local_rank = int(os.environ.get("LOCAL_RANK", -1))
# normal logger
logger = logging.getLogger(__name__)
handler_shared = logging.StreamHandler(sys.stdout)
formatter_shared = logging.Formatter('%(asctime)s - %(levelname)s - %(name)s - %(message)s')
handler_shared.setFormatter(formatter_shared)
logger.addHandler(handler_shared)
# rank-specific logger
if local_rank != -1:
logger_rank_specific = logging.getLogger(__name__ + "rank_specific")
handler_rank_specific = logging.StreamHandler(sys.stdout)
formatter_rank_specific = logging.Formatter(f'%(asctime)s - %(levelname)s - p{local_rank} - %(name)s - %(message)s')
handler_rank_specific.setFormatter(formatter_rank_specific)
logger_rank_specific.addHandler(handler_rank_specific)
else:
logger_rank_specific = logger
# the 2nd logger is just for special info that each process should print
logger_rank_specific.setLevel(logging.INFO)
# set normal logger to just the main process INFO
logger.setLevel(logging.INFO if local_rank < 1 else logging.WARN)
# test
logger.warning(f"Initiating")
logger.info("Random information")
logger_rank_specific.info(f"Device Map: {[1] * local_rank}")
```
Dist test:
```
$ python -m torch.distributed.launch --nproc_per_node 4 ./logger.py
2021-02-11 19:57:45,889 - WARNING - __main__ - Initiating
2021-02-11 19:57:45,889 - INFO - __main__ - Random information
2021-02-11 19:57:45,889 - INFO - p0 - __main__rank_specific - Device Map: []
2021-02-11 19:57:45,897 - WARNING - __main__ - Initiating
2021-02-11 19:57:45,898 - INFO - p1 - __main__rank_specific - Device Map: [1]
2021-02-11 19:57:45,905 - WARNING - __main__ - Initiating
2021-02-11 19:57:45,905 - INFO - p2 - __main__rank_specific - Device Map: [1, 1]
2021-02-11 19:57:45,914 - WARNING - __main__ - Initiating
2021-02-11 19:57:45,914 - INFO - p3 - __main__rank_specific - Device Map: [1, 1, 1]
```
Non-dist test:
```
$ python ./logger.py
2021-02-11 20:21:16,716 - WARNING - __main__ - Initiating
2021-02-11 20:21:16,717 - INFO - __main__ - Random information
2021-02-11 20:21:16,717 - INFO - __main__ - Device Map: []
```
All works.
Not sure what to call the second logger, open to suggestions.
What do you think?
Thank you.<|||||>I am fine with using a second logger like this, I guess it could be called `multiprocess_logger` and that its name could be `+ "rank_specific"` like you said. Should we add a method `get_multiprocess_logger` in the logging module so that people don't have to remember the `__name__ + "rank_specific"` part? This would give an API like:
```
from .util import logging
logger = logging.get_logger(__name__)
multiprocess_logger = logging.get_multiprocess_logger(__name__)
```
in the modules where we need the `multiprocess_logger`. And the `set_verbosity_xxx` methods would affect both transformers loggers.
For the scripts we would still need to do it manually though.<|||||>Yes, of course, we will have it all nicely wrapped up. If @LysandreJik is in agreement, I will work on a PR.
It'd be nice to have a somewhat shorter name for `multiprocess_logger`, but the one you proposed works too. Perhaps reversed, to aid completion? `logger_multiproc` or `logger_multiprocess` or `logger_mp`?
Also, I'm not sure about `__name__ + "rank_specific"` - should it be in sync with the variable name, whichever we choose?
Hmm, what if instead of changing the format for that logger to be
```
logging.Formatter(f'%(asctime)s - %(levelname)s - p{local_rank} - %(name)s - %(message)s')
```
We keep the exact same format, but we simply append the actual rank to the name?
```
logger_rank_specific = logging.getLogger(__name__ + f"local_rank_{local_rank}")
logging.Formatter(f'%(asctime)s - %(levelname)s - %(name)s - %(message)s')
```
But again, either way works. Just one less thing to modify in this case.
<|||||>Thanks for writing everything out, I'm ok with your proposal! |
transformers | 9,907 | closed | Remove subclass for sortish sampler | # What does this PR do?
When putting the sortish sampler in the main `Trainer`, I forgot to remove the override in `Seq2SeqTrainer`, which led to an issue (see #9900). This in turn makes the old `finetune_trainer` script fail because its datasets don't have the right entries (the texts are processed during the data collation), so it requires reverting the changes in that script to go back to using the old `Seq2SeqTrainer` (which is fine since that script will soon move to legacy).
Fixes #9900 | 01-30-2021 19:51:37 | 01-30-2021 19:51:37 | |
transformers | 9,906 | closed | Error "Expected input batch_size (16) to match target batch_size (1440)" in the WNUT NER example | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@joeddav
@sgugger
## Information
Model I am using: DistilBert
The problem arises when using:
* [ x] the official example script
Steps to reproduce the behavior:
reproducing the NER example from
https://huggingface.co/transformers/master/custom_datasets.html
verbatim (in colab) I get
```
"Expected input batch_size (16) to match target batch_size (1440)."
```
when running trainer.train
Stack trace:
```
ValueError Traceback (most recent call last)
<ipython-input-12-aa1378d94d0f> in <module>()
21 )
22
---> 23 trainer.train()
8 frames
/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in train(self, model_path, trial)
886 tr_loss += self.training_step(model, inputs)
887 else:
--> 888 tr_loss += self.training_step(model, inputs)
889 self._total_flos += self.floating_point_ops(inputs)
890
/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in training_step(self, model, inputs)
1248 loss = self.compute_loss(model, inputs)
1249 else:
-> 1250 loss = self.compute_loss(model, inputs)
1251
1252 if self.args.n_gpu > 1:
/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in compute_loss(self, model, inputs)
1275 Subclass and override for custom behavior.
1276 """
-> 1277 outputs = model(**inputs)
1278 # Save past state if it exists
1279 # TODO: this needs to be fixed and made cleaner later.
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/transformers/models/distilbert/modeling_distilbert.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict)
637 else:
638 loss_fct = nn.CrossEntropyLoss()
--> 639 loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
640
641 if not return_dict:
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
960 def forward(self, input: Tensor, target: Tensor) -> Tensor:
961 return F.cross_entropy(input, target, weight=self.weight,
--> 962 ignore_index=self.ignore_index, reduction=self.reduction)
963
964
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
2466 if size_average is not None or reduce is not None:
2467 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2468 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
2469
2470
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
2260 if input.size(0) != target.size(0):
2261 raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
-> 2262 .format(input.size(0), target.size(0)))
2263 if dim == 2:
2264 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
ValueError: Expected input batch_size (16) to match target batch_size (1440).
``` | 01-30-2021 15:16:27 | 01-30-2021 15:16:27 | I'm guessing you're running the trainer code block from the sequence classification example verbatim. You want `DistilBertForTokenClassification` not `DistilBertForSequenceClassification`, so comment out:
> model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased") <|||||>> I'm guessing you're running the trainer code block from the sequence classification example verbatim. You want `DistilBertForTokenClassification` not `DistilBertForSequenceClassification`, so comment out:
>
> > model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
Doh. Indeed, thanks for the catch.<|||||>Had to use DistilBertForTokenClassification to reproduce the example. |
transformers | 9,905 | closed | exe executable file | I want to use PyInstaller to convert my test.py into an .exe executable, but unfortunately I failed. Looking into the reason, it may be that the transformers library was not bundled successfully. Could it be that my method is wrong? The command I used is `pyinstaller -D test.py`; however, there is no transformers package in the generated bundle.
pyinstaller -D test.py | 01-30-2021 14:29:10 | 01-30-2021 14:29:10 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,904 | closed | Tokenizer return offsets | # 🚀 Feature request
Request for the feature raised in [Issue #1263](https://github.com/huggingface/transformers/issues/1263).
Previous PRs have attempted to address this but none of them were merged - https://github.com/huggingface/transformers/pull/1274 and https://github.com/huggingface/transformers/pull/2178.
## Motivation
Refer [Issue #1263](https://github.com/huggingface/transformers/issues/1263)
## Your contribution
Please guide me on how to submit a PR. | 01-30-2021 12:36:42 | 01-30-2021 12:36:42 | All fast tokenizers have this feature, just pass along `return_offsets_mapping=True` in your call to the tokenizer. Also note that fast tokenizers are used by default when `AutoTokenizer` is called.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
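To illustrate the answer above, a minimal sketch (the model name is arbitrary; fast tokenizers are the default with `AutoTokenizer`):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")  # fast tokenizer by default
enc = tok("Hello world", return_offsets_mapping=True)
print(enc["offset_mapping"])  # per-token character spans; special tokens get (0, 0)
```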
transformers | 9,903 | closed | Clarify definition of seed argument in TrainingArguments | # What does this PR do?
Clarifies the definition of the `seed` argument in `TrainingArguments` to:
* Explain what "initialisation" refers to
* How to ensure reproducibility across runs
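To make the distinction concrete, a minimal sketch (illustration only, not part of the doc change itself):
```python
from transformers import TrainingArguments, set_seed

set_seed(42)  # seeds the python, numpy and torch RNGs before the model is instantiated
args = TrainingArguments(output_dir="out", seed=42)  # the seed the Trainer uses internally
```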
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
Link to discussion on the HF forum: https://discuss.huggingface.co/t/fixing-the-random-seed-in-the-trainer-does-not-produce-the-same-results-across-runs/3442?u=lewtun
## Who can review?
@sgugger
> Thanks for flagging that the doc was incorrect! Your changes are not entirely correct either so I made some suggestions.
Thanks for fixing my tweaks - I like the changes so committed them ๐ <|||||>Sorry for missing that last suggestion of yours - should be ready to go now!<|||||>Yes, thanks a lot! |
transformers | 9,902 | closed | PPLM example - AttributeError issue | Hi all,
Thank you for the great library.
I am now trying to understand the PPLM model (see https://eng.uber.com/pplm/), and when I try to run the example from the HuggingFace repository https://github.com/huggingface/transformers/tree/master/examples/research_projects/pplm (run_pplm.py), I face the following issue:
```
Traceback (most recent call last):
File "run_pplm.py", line 820, in <module>
run_pplm_example(**vars(args))
File "run_pplm.py", line 678, in run_pplm_example
repetition_penalty=repetition_penalty,
File "run_pplm.py", line 405, in full_text_generation
repetition_penalty=repetition_penalty,
File "run_pplm.py", line 511, in generate_text_pplm
device=device,
File "run_pplm.py", line 115, in perturb_past
grad_accumulator = [(np.zeros(p.shape).astype("float32")) for p in past]
File "run_pplm.py", line 115, in <listcomp>
grad_accumulator = [(np.zeros(p.shape).astype("float32")) for p in past]
AttributeError: 'tuple' object has no attribute 'shape'
```
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.0.dev0
- Python version: 3.8
- PyTorch version (GPU?): 1.7.1
I started this example on my own laptop and in Google Colab environment
## To reproduce
Steps to reproduce the behavior:
1. Follow to https://github.com/huggingface/transformers/tree/master/examples/research_projects/pplm and do Setup steps
2. Do command: `python run_pplm.py -B military --cond_text "The potato" --length 50 --gamma 1.5 --num_iterations 3 --num_samples 10 --stepsize 0.03 --window_length 5 --kl_scale 0.01 --gm_scale 0.99 --colorama --sample
`
## Expected behavior
This script should work without an error :)
| 01-30-2021 10:27:59 | 01-30-2021 10:27:59 | Hi! Could you install an earlier version of `transformers` to see if it works? I believe it was tested with `transformers==3.0.1`<|||||>It works fine up to and including version v4.2.2 but is broken in versions above that<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
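For readers hitting this today: the failure comes from a change in the GPT-2 cache format, where newer releases return `past_key_values` as per-layer `(key, value)` tuples rather than single stacked tensors, so `p.shape` no longer exists. An untested sketch of how the failing line could be adapted (the rest of the script manipulates `past` the same way and would need matching changes; pinning `transformers<=4.2.2` as noted above is the simpler route):
```python
import numpy as np

# assumption: in newer releases each element of `past` (the cache inside perturb_past)
# is a (key, value) tuple of tensors rather than one stacked tensor
grad_accumulator = [
    tuple(np.zeros(t.shape).astype("float32") for t in layer_past) for layer_past in past
]
```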
transformers | 9,901 | closed | Missing model license information | Hello,
A significant number of models uploaded to the model hub do not contain any license information. I wanted to check if your conditions set a default license under which models are uploaded when not specified?
In order to have a missing license added when it is missing, could you please advise on the standard way to proceed?
- Should an issue be created and tagging the author of the model asking for the license?
- Should I contact the author directly without raising an issue?
The community now has contributed a large number of very useful models, but more transparency regarding licensing (or default license) would be great.
Below are a few models I would be very interested in getting license information for, but a more general approach would be very beneficial:
- https://huggingface.co/valhalla/longformer-base-4096-finetuned-squadv1 (@patil-suraj )
- https://huggingface.co/mrm8488/longformer-base-4096-finetuned-squadv2 (@mrm8488 )
- https://huggingface.co/mrm8488/mobilebert-uncased-finetuned-squadv2 (@mrm8488 )
- https://huggingface.co/mrm8488/mobilebert-finetuned-ner (@mrm8488 )
- https://huggingface.co/mrm8488/mobilebert-finetuned-pos (@mrm8488 )
Thank you,
| 01-30-2021 08:55:47 | 01-30-2021 08:55:47 | Pinging @julien-c <|||||>> I wanted to check if your conditions set a default license under which models are uploaded when not specified?
No, that's really the model author's call. But we will try to make it easier/more straightforward for a user to pick one in the future.
> In order to have a missing license added when it is missing, could you please advise on the standard way to proceed?
A GH issue is fine I think, otherwise a thread on [discuss.huggingface.co](https://discuss.huggingface.co) would work well too.<|||||>Thank you for the clarification! |
transformers | 9,900 | closed | run_seq2seq.py doesn't work after enabling sortish sampler | It gives an error saying **AttributeError: 'Dataset' object has no attribute 'make_sortish_sampler'**. Seems like something is wrong with the pipeline or versions. I installed both transformers and datasets from source.
[Exact line](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_seq2seq.py#L46)
```
self.train_dataset.make_sortish_sampler(
AttributeError: 'Dataset' object has no attribute 'make_sortish_sampler'
```
## Environment info
- `transformers` version: 4.3.0.dev0
- Platform: Linux-5.8.18-050818-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@LysandreJik @patrickvonplaten
| 01-30-2021 06:41:52 | 01-30-2021 06:41:52 | |
transformers | 9,899 | closed | Does Sortish Sampler work with multiple GPUs in seq2seq? | I am referring to the training script of run_seq2seq.py. I am exactly referring to [this line in seq2seq_trainer.py](https://github.com/huggingface/transformers/blob/1420b5ff675ccdc3296c6776b339a08a22d2e941/src/transformers/trainer_seq2seq.py#L48). So when should I enable **distributed** parameters and how should I do it? | 01-30-2021 04:15:24 | 01-30-2021 04:15:24 | Currently `SortishSampler` will only work with the `finetune_trainer.py` scripts. It will be supported in `run_seq2seq.py` soon. And to answer your question, yes it works with multiple GPUs, and you won't need to enable the distributed parameter if the training is launched on multiple GPUs using `torch.distributed.launch`; it'll be enabled automatically. <|||||>thanks a lot. |
transformers | 9,898 | closed | [doc] nested markup is invalid in rst | Apparently nested markup in RST is invalid: https://docutils.sourceforge.io/FAQ.html#is-nested-inline-markup-possible
So currently this line doesn't get rendered properly, leaving inner markdown unrendered, resulting in:
```
You can create a model repo directly from `the /new page on the website <https://huggingface.co/new>`__.
```
This PR removes the bold markdown which fixes the link.
@sgugger | 01-30-2021 03:50:04 | 01-30-2021 03:50:04 | |
transformers | 9,897 | closed | [t5 tokenizer] add info logs | This PR (was modified from the original):
- adds info logs that correlated to saved tokenizer files on `tokenizer.save_pretrained()`
--------------------------
original PR note
This PR
- adds code to save t5 fast tokenizer `tokenizer.json` file on `tokenizer.save_pretrained()`
- adds info logs that correlated to saved tokenizer files on `tokenizer.save_pretrained()`
Context:
- I needed to create a new t5 smallish model and the created model won't work w/o `tokenizer.json`.
- Also, as I was debugging why I was missing that file, I enabled logging and saw that we were getting logs for every saved file except the tokenizer files, so this PR fixes that; it's now consistent and helps one see if something is missing.
Here is an example:
```
TRANSFORMERS_VERBOSITY=info PYTHONPATH=/hf/transformers-master/src python t5-make-very-small-model.py
[....]
Configuration saved in t5-very-small-random/config.json
Model weights saved in t5-very-small-random/pytorch_model.bin
Configuration saved in t5-very-small-random/config.json
tokenizer config file saved in t5-very-small-random/tokenizer_config.json
Special tokens file saved in t5-very-small-random/special_tokens_map.json
Copy vocab file to t5-very-small-random/spiece.model
tokenizer config file saved in t5-very-small-random/tokenizer_config.json
Special tokens file saved in t5-very-small-random/special_tokens_map.json
Copy vocab file to t5-very-small-random/spiece.model
Copy tokenizer file to t5-very-small-random/tokenizer.json
```
I'm not sure why I needed to save both:
```
tokenizer.save_pretrained(mname_very_small)
tokenizer_fast.save_pretrained(mname_very_small)
```
Note that `tokenization_t5.py` doesn't have it! Here are both T5 tokenizers' vocab file definitions:
```
VOCAB_FILES_NAMES = {"vocab_file": "spiece.model"}
VOCAB_FILES_NAMES = {"vocab_file": "spiece.model", "tokenizer_file": "tokenizer.json"}
```
As I flagged on slack `https://huggingface.co/sshleifer/t5-tinier-random` fails to be used since it's missing this fast `tokenizer.json` file from the s3 set of files,
```
Traceback (most recent call last):
File "./finetune_trainer.py", line 373, in <module>
main()
File "./finetune_trainer.py", line 205, in main
tokenizer = AutoTokenizer.from_pretrained(
File "/home/stas/hf/transformers/src/transformers/models/auto/tokenization_auto.py", line 385, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/stas/hf/transformers/src/transformers/tokenization_utils_base.py", line 1768, in from_pretrained
return cls._from_pretrained(
File "/home/stas/hf/transformers/src/transformers/tokenization_utils_base.py", line 1841, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/stas/hf/transformers/src/transformers/models/t5/tokenization_t5_fast.py", line 139, in __init__
super().__init__(
File "/home/stas/hf/transformers/src/transformers/tokenization_utils_fast.py", line 86, in __init__
fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
```
it could be a symptom of another problem in our code.
@LysandreJik, @sgugger | 01-30-2021 02:22:53 | 01-30-2021 02:22:53 | I don't think this is something that should be done in `save_vocabulary`. You have the option in `save_pretrained` to set `legacy_format` to `False` to generate that `tokenizer.json` file. I'm not an expert in the tokenization side with all the stuff that was added for backward compatibility so I don't know if there is a better option.
I wasn't aware having this file was mandatory for some models to use the fast tokenizer. Are you sure you have sentencepiece installed? It might be due to this that the slow-to-fast conversion does not work automatically
Anyhow, once we have found the right way to generate that `tokenizer.json` file, it should be added on the model sharing doc page, next to the section on how to generate TF/PyTorch checkpoints, so that people know what to do to have the most complete model on the hub.<|||||>I don't have a problem to add it anywhere else, who do we tag on this?
1. Let the code speak for itself:
```
python -c "from transformers import T5Tokenizer, T5TokenizerFast; mname_from='sshleifer/t5-tinier-random'; tokenizer = T5Tokenizer.from_pretrained(mname_from); tokenizer_fast = T5TokenizerFast.from_pretrained(mname_from)"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/mnt/disc1/data/trash/src/transformers/src/transformers/tokenization_utils_base.py", line 1762, in from_pretrained
return cls._from_pretrained(
File "/mnt/disc1/data/trash/src/transformers/src/transformers/tokenization_utils_base.py", line 1835, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/mnt/disc1/data/trash/src/transformers/src/transformers/models/t5/tokenization_t5_fast.py", line 139, in __init__
super().__init__(
File "/mnt/disc1/data/trash/src/transformers/src/transformers/tokenization_utils_fast.py", line 86, in __init__
fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
Exception: No such file or directory (os error 2)
```
2. If `footokenizer.from_pretrained()` fetches `tokenizer.json` then `footokenizer.save_pretrained()` must save it too.
> I wasn't aware having this file was mandatory for some models to use the fast tokenizer. Are you sure you have sentencepiece installed? It might be due to this that the slow-to-fast conversion does not work automatically
```
pip install sentencepiece
Requirement already satisfied: sentencepiece in /mnt/nvme1/anaconda3/envs/main-38/lib/python3.8/site-packages (0.1.91)
```
If you look at the trace it is hunting for that file and can't find it.
> Anyhow, once we have found the right way to generate that tokenizer.json file, it should be added on the model sharing doc page, next to the section on how to generate TF/PyTorch checkpoints, so that people know what to do to have the most complete model on the hub.
Agreed!
@LysandreJik, @n1t0 <|||||>ok, so as @sgugger suggested on slack, the fast tokenizer saving will be handled on the core-level some time in the future, so I removed that part from this PR, leaving just the logger part. |
transformers | 9,896 | closed | [wandb] restore WANDB_DISABLED=true to disable wandb | This PR
* extends `ENV_VARS_TRUE_VALUES` with "true"
* restores `WANDB_DISABLED=true` to disable wandb
* documents this exact setting
* syncs trainer_tf with the same solution.
Context: we are still dealing with https://github.com/huggingface/transformers/issues/9623 where wandb fails no matter if you have it installed or not.
It looks like due to https://github.com/huggingface/transformers/issues/9699 a few days ago this behavior was changed to be one of `ENV_VARS_TRUE_VALUES = {"1", "ON", "YES"}`. And it's not documented anywhere.
This PR tries to restore the original behavior where any value of `WANDB_DISABLED` should disable wandb.
And wandb integration is broken, that's why we need a way disable it - it's so annoying when trying to develop and wandb keeps on breaking things whether it's installed or not. See: https://github.com/huggingface/transformers/issues/9623
Alternatively, instead of the proposed change in this PR, let's document this API that it has to be on of `{"1", "ON", "YES"}``, so that it doesn't change from day to day.
@sgugger
| 01-30-2021 02:10:06 | 01-30-2021 02:10:06 | If I understood correctly the user's issue, the problem was that any value was accepted. We can add "True" in `ENV_VAR_TRUE_VALUES` which seems to be missing, but if I set `WAND_DISABLED=False` for instance, I would expect wandb to not be disabled.
In any case those env variables are now deprecated (have to make a PR to issue a proper warning) since we have the `report_to` training argument that allows the user to set the reporting platform they want to use.<|||||>> If I understood correctly the user's issue, the problem was that any value was accepted. We can add "True" in `ENV_VAR_TRUE_VALUES` which seems to be missing, but if I set `WAND_DISABLED=False` for instance, I would expect wandb to not be disabled.
That is how I implemented it originally for this PR, but then read user's issue that triggered the PR that broke the original setting, and the issue writer requested a plain - any `WANDB_DISABLED` value. I'm fine with either. Do you want me to recode it to add `True`?
Also it needs to be documented, so that this disabling is solid and doesn't get changed again and again. If it's documented with just `Yes` that is already supported that is good enough for me.
> In any case those env variables are now deprecated (have to make a PR to issue a proper warning) since we have the `report_to` training argument that allows the user to set the reporting platform they want to use.
Well, except this new feature doesn't help in this particular case. As you can see from https://github.com/huggingface/transformers/issues/9623 and problems as recent as yesterday, wandb is still a problem, even if you don't purposefully activate it or even have it installed. I wouldn't be trying to fix this if it worked.
Perhaps the default `report_to` should be `None` and have an option for `All` to ease up for those who want them all?
Whatever the outcome, please let's fix so that if one doesn't have wandb installed it shouldn't break things.
Thank you.
<|||||>> Do you want me to recode it to add True?
Yes, just as I said, adding `True` to the `ENV_VAR_TRUE_VALUES` should be enough to have this work (it's an oversight that `True` is not in that constant).
> Also it needs to be documented, so that this disabling is solid and doesn't get changed again and again. If it's documented with just Yes that is already supported that is good enough for me.
By all means, please add documentation in this PR. For now it's documented with the [callback](https://huggingface.co/transformers/main_classes/callback.html#transformers.integrations.WandbCallback) but I'm open to any suggestion to make this better.
> Whatever the outcome, please let's fix so that if one doesn't have wandb installed it shouldn't break things
The bug in #9623 with wandb not installed is linked to something weird in your env as I haven't been able to reproduce it by following your steps. I can add stronger checks that wandb is a proper module by checking its version/authors (like is done for [datasets](https://github.com/huggingface/transformers/blob/22121e813e2d043feb4484865ab5871870cb9dc3/src/transformers/file_utils.py#L130) but I have no idea if it will solve your bug or not (since I have no reproducer on my side).
If wandb is installed and you pass along `--report_to []`, you should not see either
```
wandb.errors.error.Error: You must call wandb.init() before wandb.log()
```
nor
```
AttributeError: module 'wandb' has no attribute 'ensure_configured'
```
as the callback is not passed to the Trainer.
> Perhaps the default report_to should be None and have an option for All to ease up for those who want them all?
As I explained before, that switch will be done in v5, as it is a breaking change.<|||||>Thank you for the feedback, @sgugger - PR updated as requested, plus synced trainer_tf with the same solution. |
transformers | 9,895 | closed | TFGPT2LMHeadModel unknown location | I have been playing around with tensorflow (CPU) and some language modeling, and it has been a blast so far - everything working great.
But after watching my old CPU slowly getting killed by all the model training, I decided it was time to finally get some use out of my RTX 2080. I have been following the guide from [Washington University](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/install/tensorflow-install-jul-2020.ipynb):
But when I got to running the GPT-2 language model, I ran into some minor problems. I start by tokenizing the data:
from tokenizers.models import BPE
from tokenizers import Tokenizer
from tokenizers.decoders import ByteLevel as ByteLevelDecoder
from tokenizers.normalizers import NFKC, Sequence
from tokenizers.pre_tokenizers import ByteLevel
from tokenizers.trainers import BpeTrainer
class BPE_token(object):
def __init__(self):
self.tokenizer = Tokenizer(BPE())
self.tokenizer.normalizer = Sequence([
NFKC()
])
self.tokenizer.pre_tokenizer = ByteLevel()
self.tokenizer.decoder = ByteLevelDecoder()
def bpe_train(self, paths):
trainer = BpeTrainer(vocab_size=50000, show_progress=True, inital_alphabet=ByteLevel.alphabet(), special_tokens=[
"<s>",
"<pad>",
"</s>",
"<unk>",
"<mask>"
])
self.tokenizer.train(trainer, paths)
def save_tokenizer(self, location, prefix=None):
if not os.path.exists(location):
os.makedirs(location)
self.tokenizer.model.save(location, prefix)
# ////////// TOKENIZE DATA ////////////
from pathlib import Path
import os  # the folder 'text' contains all the files
paths = [str(x) for x in Path("./da_corpus/").glob("**/*.txt")]
# train the tokenizer model
tokenizer = BPE_token()
tokenizer.bpe_train(paths)
# saving the tokenized data in our specified folder
save_path = 'tokenized_data'
tokenizer.save_tokenizer(save_path)
The code above works perfectly and tokenizes the data - just like with tensorflow (CPU). After having my data tokenized I start to train my model - but before it even gets started, I get the following ImportError:
from transformers import GPT2Config, TFGPT2LMHeadModel, GPT2Tokenizer # loading tokenizer from the saved model path
ImportError: cannot import name 'TFGPT2LMHeadModel' from 'transformers' (unknown location)
The transformers package seems to be installed correctly in site-packages, and I seem to be able to use the other transformers classes - but not **TFGPT2LMHeadModel**.
I have read everything on Google and [huggingface.co](https://huggingface.co/transformers/) - tried different versions of tensorflow-gpu, transformers, tokenizers and a lot of other packages - sadly nothing helps.
**Packages:**
- Python, 3.7.1
- Tensorflow 2.1.0
- Tensorflow-gpu 2.1.0
- Tensorflow-base 2.1.0
- Tensorflow-estimator 2.1.0
- Transformers 4.2.2
- Tokenizers 0.9.4
- cudnn 7.6.5
- cudatoolkit 10.1.243
@LysandreJik | 01-30-2021 00:17:23 | 01-30-2021 00:17:23 | Solved it by installing tensorflow-gpu=2.3.0 & cuda 10.1
Following this guide:
https://medium.com/analytics-vidhya/tensorflow-2-3-0-with-gpu-support-on-windows-10-f975a552ea7c
Use this command to install gpu2.3.0
python -m pip install https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-2.3.0-cp37-cp37m-win_amd64.whl |
transformers | 9,894 | closed | ImportError: cannot import name 'PreTrainedEncoderDecoder' from 'transformers' (unknown location) | Hi,
I am using the library to pretrain my model of choice. I am now interested in setting up an encoder-decoder architecture with my pretrained models, and the "combiners" seems quite a straightforward way to do that.
Unfortunately, I am getting an import error on both "PreTrainedEncoderDecoder" and "Model2Model"
What am I missing ?
Thanks
Gianfilippo
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.0.dev0
- Platform: Linux-3.10.0-1062.33.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: NA
- Using distributed or parallel set-up in script?: NA
## To reproduce
Steps to reproduce the behavior:
1.python -c "from transformers import PreTrainedEncoderDecoder"
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
ImportError: cannot import name 'PreTrainedEncoderDecoder' from 'transformers' (unknown location)
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
no error
| 01-29-2021 23:44:29 | 01-29-2021 23:44:29 | I'm not sure where you have seen those objects: they are nowhere in the transformers library. The library provides `EncoderDecoderModel`, see the [encoder/decoder doc page](https://huggingface.co/transformers/model_doc/encoderdecoder.html).<|||||>Hi, I was reading this (https://medium.com/huggingface/encoder-decoders-in-transformers-a-hybrid-pre-trained-architecture-for-seq2seq-af4d7bf14bb8). I also found someone reporting some issue while using the same object here
https://github.com/huggingface/transformers/issues/2206.
Perhaps I am looking at some older version ?
<|||||>This is indeed from an older version (I guess 2 something or even 1 something). <|||||>Thanks. I will look at the EncoderDecoderModel |
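For anyone landing here, a minimal sketch of the `EncoderDecoderModel` replacement mentioned above (the checkpoints are arbitrary):
```python
from transformers import EncoderDecoderModel

# tie two pretrained checkpoints into a seq2seq model
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)
```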
transformers | 9,893 | open | rfc: new benchmark tool | This issue is to collect notes and ideas on creating a new benchmarking tool.
This is not about the other speed/memory regression project we have been discussing elsewhere.
This is about integration and various comparisons that we need to run in order to give users the best advice on how to deploy transformers in the most efficient way.
Please share the comments ideas/suggestions/concerns/needs, and I will compile them here.
- important: not part of examples - the goal is performance and integration tooling and not user-facing - totally different needs and priorities
- the cmd line has to continue working the same months later - so that old benchmarks could be re-run - ok to change interface with back-compat option so that the old benchmarks can be still re-validated and compared to
- ideally work with any transformers model - a single tool to rule them all
- minimal amount of arguments - just the important ones
- ability to generate markdown table entries directly and json files that contain not just the outcome but also the key variables that are being tested -
- the report to include critical hardware/software params as well in a compact form and allow these to be merged from multiple recordings - i.e. if the hw/sw are the same - they can be merged into a single report. will need to figure out how to record hardware nuances
* e.g. the same DDP test with 2 gpus connected w/ NVLink gives dramatically different results than the same 2 gpus w/o NVLink.
* not sure how to record CPU-capacity/ free RAM, etc., since all these impact the outcome
- crucial to be able to truncate the dataset | 01-29-2021 21:00:39 | 01-29-2021 21:00:39 | I was thinking about one feature, if possible:
How about, when we run an example script, a benchmarking script is automatically run and stores the results in one file if the user passes an optional argument?
When the user uploads the model to the model hub we can directly sort the models based on the benchmarking results file.<|||||>All data files on the model hub for the same model arch will give the same speed performance results, since they are just data points.
Therefore it's the model code that needs to be benchmarked (and the trainer if there is more than one).
And given that currently we have only one model implementation of each there is nothing to compare it to.
The main idea of this issue is to do regression testing, to ensure that we don't accidentally make models slower while changing the code. For an example of this happening, please see: https://github.com/huggingface/transformers/pull/11218 |
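As a rough illustration of the kind of per-run record the wish list in this issue is describing (all field names are invented for the sketch):
```python
# one benchmark data point: the outcome plus the variables that produced it
record = {
    "cmd": "finetune_trainer.py --fp16 ...",
    "software": {"transformers": "4.3.0.dev0", "torch": "1.7.1", "cuda": "11.0"},
    "hardware": {"gpus": 2, "gpu_model": "RTX 3090", "nvlink": True},
    "metrics": {"train_samples_per_second": 123.4, "max_gpu_mem_mb": 15000},
}
```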
transformers | 9,892 | closed | Seeking clarification on T5 prefix for summarization | In the paper, I see the the prefix for summarization is "TL;DR:" . If I look into the model [config.json](https://huggingface.co/t5-base/blob/main/config.json) of T5-Base, I see it is "summarization:".
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
.....
If I want to finetune Huggingface T5 for summarization, which prefix should I use?
Thank you
| 01-29-2021 20:54:36 | 01-29-2021 20:54:36 | Hi @ari9dam
Please use the [forum](https://discuss.huggingface.co/) for such questions, and there's a discussion about this in the post
https://discuss.huggingface.co/t/t5-finetuning-tips/684 |
transformers | 9,891 | closed | Remove Token from Vocab? | Is there a way I can remove a token from vocab.json? | 01-29-2021 19:08:43 | 01-29-2021 19:08:43 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,890 | closed | Restore TF embeddings and attention layers to their previous version | # What does this PR do?
This PR restores the attention layers and the embeddings as they were in v4.1, even though the embeddings got a few improvements compared to their original version to keep XLA compliance. The reason is that we realized some of the operators used were not compatible with some NN SDKs, such as the ones from Qualcomm or ONNX.
| 01-29-2021 16:35:50 | 01-29-2021 16:35:50 | Pinging @mfuntowicz <|||||>> Morgan was mentioning that the transpose_for_score method was called right after the Q/K/V projection, but that there was no need to split this dimension if we're not doing head masking.
What do you think? Maybe that's some work for another PR, though.
I think it seems doable but not sure, I prefer to keep things like this to be sure we revert properly as it was before and we get at least a proper version, and we can take care of this change in another PR.<|||||>Sounds good to me!<|||||>LGTM! @patrickvonplaten feel free to merge if you approve the changes ^^<|||||>I won't have time to do a proper review today (can do it tomorrow), but feel free to merge without me if @LysandreJik and @sgugger are ok with it<|||||>@patrickvonplaten if you can take a look at it today and merge it if it's fine with you, that would be great |
transformers | 9,889 | closed | m2m_100 | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 01-29-2021 14:25:28 | 01-29-2021 14:25:28 | |
transformers | 9,888 | closed | [Quick poll] Give your opinion on the future of ๐ค transformers: 40k edition! | Thanks to all of you, Transformers just passed 40k :star2: this week!
Our libraries have always been about the community and we need your input to define the direction of the next 40k stars.
If you have a couple of minutes and want to participate in shaping the future of the library, please share your thoughts: https://forms.gle/FackvXzWJBWQz2WY8
(please reply in the above feedback form rather than to this thread)
Thank you all on behalf of the HuggingFace team! 🤗 | 01-29-2021 12:30:15 | 01-29-2021 12:30:15 | Just did, thanks a lot @LysandreJik, the form is super quick to fill in and interesting!
Everyone, we're waiting for you<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,887 | closed | Fit chinese wwm to new datasets | Sorry for my late update.
I made my code (especially the Chinese mlm_wwm part) fit the newest code.
Here are the changes:
1. add a `chinese_ref` key to avoid missing ref info.
2. fix the type bug in `data_collator.py`
3. re-add `run_chinese_ref.py`, since it runs with the newest version of the code (4.2.2).
4. update readme | 01-29-2021 10:54:46 | 01-29-2021 10:54:46 | @sgugger @LysandreJik
Could you help me review these code ?<|||||>> Hi there! Thanks for updating your example. We have now created a `research_projects` project for the examples not directly maintained by the core team, and I think the `run_mlm_wwm` script and the chine_ref file could all go there in a new folder. Would you mind adjusting your PR in that direction?
Sure. Maybe moving `run_chinese_ref.py` to the `research_projects` folder and leaving `run_mlm_wwm.py` where it was would be better? And I don't know which folder is best.
The two files are independent, we could move it to anywhere.<|||||>The `run_mlm_wwm` file is not maintained by us directly and it only works for BERT-models, compared to the other examples, so I think it can all go together there. You can create a new folder named `mlm_wwm` (since it's not just Chinese) for instance and have the specific requirements in the `requirements.txt` file there?<|||||>> The `run_mlm_wwm` file is not maintained by us directly and it only works for BERT-models, compared to the other examples, so I think it can all go together there. You can create a new folder named `mlm_wwm` (since it's not just Chinese) for instance and have the specific requirements in the `requirements.txt` file there?
done!<|||||>Last thing is to run `make style` to make sure the files are properly formatted, let me know if you have any issue doing this!<|||||>> Last thing is to run `make style` to make sure the files are properly formatted, let me know if you have any issue doing this!
Yeah, it seems my previous PR also failed the format check :(
I got the following error:
```
#!/bin/bash -eo pipefail
black --check examples tests src utils
would reformat /home/circleci/transformers/examples/research_projects/mlm_wwm/run_chinese_ref.py
would reformat /home/circleci/transformers/src/transformers/trainer.py
Oh no! 💥 💔 💥
2 files would be reformatted, 706 files would be left unchanged.
Exited with code exit status 1
```
But I did format my code.

Maybe you could help me do this part?<|||||>@sgugger My pleasure. Maybe you could help me fix the format error :(
My Python version is `3.9.1` and black is `20.8b1`; why do I get a different result in CI? |
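For anyone hitting the same formatting mismatch, the usual way to reproduce the CI's style check locally (assuming a git checkout of `transformers`; this is the generic workflow, not this PR's exact steps) is roughly:
```bash
# run from the root of the transformers checkout so the project's pyproject.toml is picked up
pip install -e ".[quality]"   # installs the pinned formatting tools (black, isort, flake8)
make style                    # reformats the files in place
make quality                  # runs the same checks as the CI job above
```
If the results still differ, double-check that the local black version matches the one pinned in the repository.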
transformers | 9,886 | closed | Conversion of BPE tokenizer for Marian models | Hello,
I was searching for the pt-en model of Marian and noticed that it has not been converted for the huggingface library apparently because it uses a BPE tokenizer. Is it possible to convert BPE-based models to be used in huggingface somehow?
Thank you | 01-29-2021 10:42:38 | 01-29-2021 10:42:38 | cc'ing @n1t0 on this in case he didn't see it<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,885 | closed | Finetune_Trainer Question | Hi, I'm new to HuggingFace. I want to fine-tune a BARTForConditionalGeneration model with finetune_trainer.py for the translation task on Google Colab, but I couldn't figure out how to use the script to fine-tune the model. Could anyone show me a quick example? Thanks | 01-29-2021 09:14:03 | 01-29-2021 09:14:03 | Hello, thanks for opening an issue! We try to keep the GitHub issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead? You'll get more help there.
The docs regarding the maintained trainer are available [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#new-script) and may be useful to you.
Thanks!<|||||>Oh sorry about that, Thank you very much |
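For readers landing here looking for a concrete starting point, a sketch of a `finetune_trainer.py` call for translation fine-tuning is shown below. The model name, data directory and hyperparameter values are illustrative (they mirror flags used elsewhere in these threads), and the `wmt_en_ro` folder is assumed to be prepared as described in the seq2seq README linked above:
```bash
cd examples/seq2seq
python finetune_trainer.py \
  --model_name_or_path facebook/bart-base \
  --data_dir wmt_en_ro \
  --task translation \
  --do_train --do_eval \
  --learning_rate 3e-5 \
  --per_device_train_batch_size 4 \
  --max_source_length 128 --max_target_length 128 \
  --predict_with_generate \
  --output_dir bart_wmt_en_ro \
  --overwrite_output_dir
```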
transformers | 9,884 | closed | Exporting model to onnx increases the model size | Hi, I'm trying to convert some models as mentioned below to onnx as follows:
ktrapeznikov/albert-xlarge-v2-squad-v2
albert-xlarge-v1
albert-xlarge-v2
The common issue with exporting all these models is that I get an exception that the protobuf size increases to more than 2 GB, while all these models are less than 800 MB. When I use the use_external_data_format=True flag, the exported model files (the network layers, as I found in other issues) sum up to several GBs in size. For example, the model ktrapeznikov/albert-xlarge-v2-squad-v2 is 210 MB, but when I convert it to ONNX using the use_external_data_format flag, the exported model sums up to 4 GB.
## Code example
```
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch
model_name = "ktrapeznikov/albert-xlarge-v2-squad-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
model.eval()
question = "what is google specialization"
text = "Google LLC is an American multinational technology company that specializes in Internet-related services and products, which include online advertising technologies, a search engine, cloud computing, software, and hardware."
encoding = tokenizer.encode_plus(question, text)
input_ids, attention_mask, token_type_ids = encoding["input_ids"],encoding["attention_mask"], encoding["token_type_ids"]
input_ids = torch.tensor([input_ids])
attention_mask = torch.tensor([attention_mask])
token_type_ids = torch.tensor([token_type_ids])
torch.onnx.export(
model,
(input_ids,attention_mask, token_type_ids),
f"{model_name}.onnx",
input_names = ['input_ids','attention_mask', 'token_type_ids'],
output_names = ['qa_outputs'],
opset_version=12, ##opset has to be set to 12
do_constant_folding=True,
use_external_data_format=True,
dynamic_axes = {
'input_ids' : {0: 'batch', 1: 'sequence'},
'attention_mask' : {0: 'batch', 1: 'sequence'},
'token_type_ids' : {0: 'batch', 1: 'sequence'},
'qa_outputs': {0: 'batch'}
}
)
```


## System Info
PyTorch version: 1.7.0+cu101
CUDA used to build PyTorch: 10.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.12.0
Python version: 3.6 (64-bit runtime)
Is CUDA available: False
CUDA runtime version: 10.1.243
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] torch==1.7.0+cu101
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.3.1
[pip3] torchvision==0.8.1+cu101
Any help would be much appreciated.
Thanks in advance! | 01-29-2021 08:26:16 | 01-29-2021 08:26:16 | I wouldn't be surprised that ONNX serializes each of the layers as independent layers when they're all repeated. I don't know enough about the ONNX export to know if that's the issue or what to do to fix it though.
Do you get similar increases in size with other models? With BERT for example?<|||||>Hi @LysandreJik , I faced the issue only on albert models so far, exporting to onnx for other BERT models worked fine and use them for prediction as well.<|||||> I solved the issue using [this code](https://github.com/thehetpandya/onnx-shared-weights-remove/blob/main/onnx_remove_shared_weights.ipynb) that removes shared weights from the ONNX model. |
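For context on what such a cleanup does: ALBERT reuses one set of layer weights across all of its layers, but the export can materialize a separate copy per layer, which is where the size blow-up comes from. Below is a minimal sketch of deduplicating identical initializers with the `onnx` package; the file names are placeholders and this is not the linked notebook's exact code, just the general idea under the assumption that the duplicated tensors are byte-for-byte identical:
```python
import onnx
from onnx import numpy_helper

model = onnx.load("albert.onnx")  # placeholder path

# map duplicated initializers (identical content) onto one canonical name
canonical, rename = {}, {}
for init in model.graph.initializer:
    key = (init.data_type, tuple(init.dims), numpy_helper.to_array(init).tobytes())
    if key in canonical:
        rename[init.name] = canonical[key]
    else:
        canonical[key] = init.name

# point every node input at the canonical copy
for node in model.graph.node:
    for i, name in enumerate(node.input):
        if name in rename:
            node.input[i] = rename[name]

# drop the now-unreferenced duplicates
kept = [init for init in model.graph.initializer if init.name not in rename]
del model.graph.initializer[:]
model.graph.initializer.extend(kept)

onnx.save(model, "albert_dedup.onnx")
```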
transformers | 9,883 | closed | examples/seq2seq , where can I find the definition for the sortish_sampler argument? | 01-29-2021 06:18:06 | 01-29-2021 06:18:06 | I am trying to understand the use of sortish sampler. Right now it is not used in the run_seq2seq.py script. <|||||>I found it. It is in the t**rainer_seq2seq.p**y script. |
|
transformers | 9,882 | closed | Some weights of {} were not initialized from the model checkpoint | I keep failing to load model checkpoint.
I built a model inheriting PreTrainedModel and have roberta inside initialization.
Training this model with trainer works fine, but when I try to load the checkpoint using ```from_pretrained```, it keeps failing to load the checkpoint. Can someone help me out? Thanks
Structure of my model
```
class MaskClassifier(PreTrainedModel):
def __init__(self, config, path):
super().__init__(config=config)
self.roberta = RobertaModel.from_pretrained(path)
self.max_mask = 10
self.hidden_size = RobertaConfig().hidden_size
self.linear1 = torch.nn.Linear(2 * self.hidden_size, self.hidden_size)
self.linear2 = torch.nn.Linear(self.hidden_size, self.max_mask + 1)
self.softmax = torch.nn.Softmax(dim=1)
def forward(self, input_ids, attention_mask, token_type_ids, labels=None):
...
# Feed input to RoBERTa
```
Initialize before training
```
config = RobertaConfig()
config.max_position_embeddings = 514
config.type_vocab_size = 1
config.vocab_size = 50265
model = MaskClassifier(config=config, path='roberta-base')
```
Saving after training
```trainer.save_model('./slogan_pretrained')```
Loading the checkpoint
```
config = RobertaConfig()
config.max_position_embeddings = 514
config.type_vocab_size = 1
config.vocab_size = 50265
model = MaskClassifier.from_pretrained(path, config=config, path='roberta-base')
```
I found a similar issue (https://github.com/huggingface/transformers/issues/2886), but I don't know exactly how I should override the function ```from_pretrained```, and even when I tried overriding this function, it still can't load the checkpoint.
Error Message
> Some weights of MaskClassifier were not initialized from the model checkpoint at /home/yeoun/slogans/slogan_pretrained and are newly initialized: ['.roberta.embeddings.position_ids', '.roberta.embeddings.word_embeddings.weight', '.roberta.embeddings.position_embeddings.weight', '.roberta.embeddings.token_type_embeddings.weight', '.roberta.embeddings.LayerNorm.weight', '.roberta.embeddings.LayerNorm.bias', '.roberta.encoder.layer.0.attention.self.query.weight', '.roberta.encoder.layer.0.attention.self.query.bias', '.roberta.encoder.layer.0.attention.self.key.weight', '.roberta.encoder.layer.0.attention.self.key.bias', '.roberta.encoder.layer.0.attention.self.value.weight', ...
| 01-29-2021 03:09:07 | 01-29-2021 03:09:07 | Hi! Thanks for opening an issue. I see two issues with your setup here:
- Why are you using `from_pretrained` to load the `RobertaModel` inside your pre-trained model? You should just initialize a `RobertaModel` from the configuration imo.
- Instead of `PreTrainedModel`, I would instead use `RobertaPreTrainedModel`.
See the below script for an example of what I would recommend. I'm saving & reloading the model to make sure that all the weights get saved/loaded:
```py
from transformers import RobertaModel, RobertaConfig, logging
from transformers.models.roberta.modeling_roberta import RobertaPreTrainedModel
import torch
logging.set_verbosity_info()
class MaskClassifier(RobertaPreTrainedModel):
def __init__(self, config):
super().__init__(config=config)
self.roberta = RobertaModel(config)
self.max_mask = 10
self.hidden_size = config.hidden_size
self.linear1 = torch.nn.Linear(2 * self.hidden_size, self.hidden_size)
self.linear2 = torch.nn.Linear(self.hidden_size, self.max_mask + 1)
self.softmax = torch.nn.Softmax(dim=1)
self.init_weights()
model = MaskClassifier.from_pretrained("roberta-base")
```
Let's see the logs now, for the first load using the `roberta-base` checkpoint:
```
Some weights of the model checkpoint at roberta-base were not used when initializing MaskClassifier: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight']
- This IS expected if you are initializing MaskClassifier from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing MaskClassifier from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of MaskClassifier were not initialized from the model checkpoint at roberta-base and are newly initialized: ['roberta.embeddings.position_ids', 'linear1.weight', 'linear1.bias', 'linear2.weight', 'linear2.bias']
```
The warning tells you: you're not using the `lm_head` weights, and the following layers are initialized: `linear1` and `linear2`.
Since you're not using the LM head, and the two layers are the ones you just added, then there's nothing to worry about.
Let's try saving the model and reloading it again:
```py
model.save_pretrained("here")
MaskClassifier.from_pretrained("here")
```
The logs show:
```
All model checkpoint weights were used when initializing MaskClassifier.
All the weights of MaskClassifier were initialized from the model checkpoint at here.
```
Success :tada: <|||||>Thanks a lot!!! It works <|||||>@LysandreJik
I really appreciate your help! You saved me from nightmares...
Actually I have one more custom model, and I tried the same structure you showed me, but it fails to load the weights. The only difference is that I'm using RobertaForMaskedLM, not RobertaModel here.
Model Structure
```
class MaskedLM(RobertaPreTrainedModel):
def __init__(self, config):
super().__init__(config=config)
self.roberta = RobertaForMaskedLM(config)
# self.tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
self.refinement_num = 3
# self.mask_id = self.tokenizer.convert_tokens_to_ids([tokenizer.mask_token])[0] # 50264
self.init_weights()
def forward( ... )
```
Initialize Model
```
model = MaskedLM.from_pretrained('roberta-base')
```
Error Message
```
Some weights of the model checkpoint at roberta-base were not used when initializing MaskedLM: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'roberta.embeddings.word_embeddings.weight', 'roberta.embeddings.position_embeddings.weight', 'roberta.embeddings.token_type_embeddings.weight', 'roberta.embeddings.LayerNorm.weight', 'roberta.embeddings.LayerNorm.bias', 'roberta.encoder.layer.0.attention.self.query.weight', ... ]
- This IS expected if you are initializing MaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing MaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of MaskedLM were not initialized from the model checkpoint at roberta-base and are newly initialized: ['roberta.roberta.embeddings.position_ids', 'roberta.roberta.embeddings.word_embeddings.weight', 'roberta.roberta.embeddings.position_embeddings.weight', 'roberta.roberta.embeddings.token_type_embeddings.weight', 'roberta.roberta.embeddings.LayerNorm.weight', 'roberta.roberta.embeddings.LayerNorm.bias', 'roberta.roberta.encoder.layer.0.attention.self.query.weight', ... ]
```
I don't know why this model has '**roberta.roberta**.embeddings.position_ids', not '**roberta**.embeddings.position_ids'<|||||>Hmmm, the issue here is that there is a difference between `RobertaModel`, which has the following weights:
```
embeddings.position_ids
embeddings.xxx
[...]
```
and `RobertaForMaskedLM`, which contains `RobertaModel` under the `roberta` prefix:
```
roberta.embeddings.position_ids
roberta.embeddings.xxx
[...]
lm_head.dense
lm_head.bias
[...]
```
I'm not entirely sure of what you're trying to achieve as I don't see your forward function, but I think you could prevent a lot of pain by redefining your model somewhat like `RobertaForMaskedLM` is setup:
```py
# Import the RobertaLMHead
from transformers.models.roberta.modeling_roberta import RobertaPreTrainedModel, RobertaLMHead
class MaskedLM(RobertaPreTrainedModel):
def __init__(self, config):
super().__init__(config=config)
# Create the RoBERTa model and its head like in the MaskedLM layer
self.roberta = RobertaModel(config)
self.lm_head = RobertaLMHead(config)
self.refinement_num = 3
self.init_weights()
def forward( ... )
outputs = self.roberta(xxx)
sequence_output = outputs[0]
prediction_scores = self.lm_head(sequence_output)
# Do your stuff!
```
This way you can load the checkpoint seamlessly in your model, as the naming with the prefixes will be correct.<|||||>Thanks!! I tried to build a MaskedLM with some refinements. After predicting multiple <mask> tokens, mask two random predicted tokens and predict them again. Anyway thanks a lot 🤗 🤗 🤗 <|||||>Glad I could help!<|||||>> (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Hi~
I just have a question about why "(initializing a BertForSequenceClassification model from a BertForSequenceClassification model)" IS NOT expected? The knowledge in the pretrained model is the same as the task you are going to fine-tune on, isn't it?
Thanks~
Best,
Pengbo |
transformers | 9,881 | closed | DeBERTa pretraining using MLM: model gradients become NAN | ## Environment info
- `transformers` version: 4.3.0.dev0
- Platform: Ubuntu
- Python version: 3.6.12
- PyTorch version : 1.7.1
- Using GPU in script?: Y
- Using distributed or parallel set-up in script?: Y, using 8 GPU machine.
### Who can help
@BigBird01 @NielsRogge
Models:
DeBERTa Base
## Information
I am using the DeBERTa base model and training it on a Masked Language Modeling task using a single file from the Wikipedia text dataset. For the first step the loss is around 11, and after the backward pass the gradients become NaN and the gradient norm goes to infinity.
I reduced the learning rate from 1e-4 to 5e-10, but the issue persists. The batch size per GPU is 32, and with 8 GPUs the total batch size becomes 256. The hyperparameters configured according to the paper are listed below.
* Number of Layers: 12
* Hidden size: 768
* FNN inner hidden size: 3072
* Attention Heads: 12
* Attention Head size: 64
* Dropout: 0.1
* Warmup Steps: 10k
* Learning Rates: 1e-4
* Batch Size: 256
* Weight Decay: 0.01
* Max Steps: 1M
* Learning Rate Decay: Linear
* Adam ε: 1e-6
* Adam β1: 0.9
* Adam β2: 0.999
* Gradient Clipping: 1.0
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import (
DebertaConfig,
DebertaTokenizer,
DebertaForMaskedLM,
LineByLineTextDataset,
DataCollatorForLanguageModeling,
Trainer,
TrainingArguments
)
tokenizer = DebertaTokenizer.from_pretrained('microsoft/deberta-base')
train_dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path="/data/wikidemo/wiki_01",
block_size=128,
)
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
config = DebertaConfig()
model = DebertaForMaskedLM(config=config)
training_args = TrainingArguments(
output_dir="./deberta",
overwrite_output_dir=True,
num_train_epochs=1000,
per_gpu_train_batch_size=2,
learning_rate=5e-10,
weight_decay=0.01,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e06,
max_grad_norm=1.0,
save_steps=10_000,
save_total_limit=2,
logging_first_step=False,
logging_steps=1,
max_steps=10000,
gradient_accumulation_steps=10,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
)
print("Starting training")
trainer.train()
``` | 01-29-2021 02:30:39 | 01-29-2021 02:30:39 | hi @mansimane
In your code, in `TrainingArguments`, `adam_epsilon` is set to `1e06`, which is quite a large value; I believe it's a typo and it should be 1e-6, as mentioned in the comment. This could be the reason for the `nan` gradients. <|||||>Thanks @patil-suraj for the catch. I fixed the Adam epsilon, but some gradients still become infinity and NaN after the first backward pass. The following is the config I tried
```python
training_args = TrainingArguments(
output_dir="./deberta",
overwrite_output_dir=True,
num_train_epochs=1000,
per_gpu_train_batch_size=32,
learning_rate=1e-10,
warmup_steps=10000,
weight_decay=0.01,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-6,
max_grad_norm=1.0,
save_steps=10_000,
save_total_limit=2,
logging_first_step=False,
logging_steps=1,
max_steps=10000,
gradient_accumulation_steps=1,
)
```<|||||>Hi,
sorry for the late reply. I tested MLM with `DeBertaForMaskedLM` using the `run_mlm.py` script, and everything seems to be working fine. So it seems like a hyperparameter issue (I would suggest using the same hyperparameter values as this script). Your learning rate for example seems way too low.
My Google colab to reproduce: https://colab.research.google.com/drive/1Rk5JoBTzK0I8J3FjG2R4J9HCeOrUpRTt?usp=sharing<|||||>I am having the same issue but with MobileBert after loading a pre-trained model. I trained from scratch a LM 23000 steps. Now loading the model mobilebert.from_pretrained() to reload the model and keep training. Now when I try to keep training the loss i NaN. I have removed all related to learning rate in the training args and the nans keep appearing.
```
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir="./mobile_linear_att_4Heads_8L_128_512_03layerdrop_shared_all_dataset_1",
overwrite_output_dir=True,
num_train_epochs=1,
per_device_train_batch_size=95,
save_steps=50,
save_total_limit=2,
logging_first_step=True,
logging_steps=50,
gradient_accumulation_steps=8,
fp16=True,
dataloader_num_workers=19,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=big_dataset,
tokenizer=tokenizer)
trainer.train()
```
EDIT: After some debugging I looked into the "trainer_state.json" and saw that I had already gotten NaNs into the model before the last training run finished, so it is not related to the learning rate or anything like that at this point.
```
{
"cuda max_memory_reserved": 23460839424,
"cuda memory cached": 23460839424,
"cuda memory consumption": 111139328,
"epoch": 0.99,
"learning_rate": 0.0004937288135593219,
"loss": 4.5816,
"num_parameters": 5920442,
"step": 22900
},
{
"cuda max_memory_reserved": 23460839424,
"cuda memory cached": 23460839424,
"cuda memory consumption": 111139328,
"epoch": 0.99,
"learning_rate": 0.0004934745762711864,
"loss": NaN,
"num_parameters": 5920442,
"step": 22950
},
```
EDIT2: I think that my issue is related to the learning rate scheduler. I am trying to train in batches of 20% of the dataset, and I think the scheduler calculates the learning rate based on the epoch and not on the current step, so I hardcoded it in:
```
self.lr_scheduler = get_scheduler(
self.args.lr_scheduler_type,
self.optimizer,
num_warmup_steps=self.args.warmup_steps,
num_training_steps=num_training_steps, # <- here I hardcoded the calculated final (20%+20%+20%...) training steps
)
```
So when I was approximating the final of the training in the first 20% it got something weird.<|||||>it's a pain to train on shards of (bookcorpus + wikipedia + openwebtext) I am processing the 20% of each one because I dont have more than 1 TB of disk. But I am figthing with the learning rate scheduler, because I have to do engineering to train on all the dataset. <|||||>Thank you @NielsRogge . I was able to train DeBERTa with run_mlm.py script. Not sure what was the issue in my code, it gave nan after trying learning rate that you used as well. <|||||>@mansimane are you using fp16 or fp32 ? |
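For reference, the `run_mlm.py` route mentioned above is driven from the command line roughly as follows. This is only a sketch: the dataset, sequence length and batch size are illustrative, and whether you pretrain from scratch (`--model_type`) or start from `--model_name_or_path microsoft/deberta-base` depends on your goal:
```bash
python examples/language-modeling/run_mlm.py \
  --model_type deberta \
  --tokenizer_name microsoft/deberta-base \
  --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
  --do_train --do_eval \
  --per_device_train_batch_size 32 \
  --learning_rate 1e-4 --warmup_steps 10000 --weight_decay 0.01 \
  --adam_epsilon 1e-6 \
  --max_seq_length 128 \
  --output_dir ./deberta-mlm
```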
transformers | 9,880 | closed | [trainer] [deepspeed] refactor deepspeed setup devices | Following the discussion at https://github.com/huggingface/transformers/pull/9798#pullrequestreview-578822445 as we now have multiple integrations with complex unique setups, @sgugger and I agreed that it's better to have a small duplication of a few lines of code but to make it much easier to understand what goes on for a specific integration, so rather than further refactoring the recently added sage branch, this PR creates a dedicated branch for DeepSpeed and thus simplifies the general case when straight DDP is used.
There is no functionality change - just a small code reshuffle.
@sgugger
| 01-29-2021 00:46:51 | 01-29-2021 00:46:51 | |
transformers | 9,879 | closed | [seq2seq] correctly handle mt5 | This PR fixes `seq2seq/utils.py` to handle `mt5` like it does `t5`.
Ideally there should be a test, which would require creating a tiny model for mt5, but I'm being told this code is going away anyway, so there is no point investing energy into it.
Fixes: https://github.com/huggingface/transformers/issues/9865
@patil-suraj, @sgugger | 01-29-2021 00:05:29 | 01-29-2021 00:05:29 | > when you port this to the new run_seq2seq, it would be great to try to find a way to make this not use any special code for a given model
I'm working on it in #9844, it's not finished though. We might need to add `get_input_embeddings` and `get_pos_embeddings` methods to every s2s model, to avoid special cases.<|||||>If we need to add some methods to deal with the special cases, I would prefer it (otherwise the script might fail with new seq2seq models). |
transformers | 9,878 | closed | [DOCS] curl links go to 404 not found in NER tutorial | Hey, in [NER tutorial](https://huggingface.co/transformers/v2.2.0/examples.html#named-entity-recognition) the curl commands seem outdated
```
curl -L 'https://sites.google.com/site/germeval2014ner/data/NER-de-train.tsv?attredirects=0&d=1' \
| grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > train.txt.tmp
curl -L 'https://sites.google.com/site/germeval2014ner/data/NER-de-dev.tsv?attredirects=0&d=1' \
| grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > dev.txt.tmp
curl -L 'https://sites.google.com/site/germeval2014ner/data/NER-de-test.tsv?attredirects=0&d=1' \
| grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > test.txt.tmp
```
Can you please share new curl requests? | 01-28-2021 22:03:31 | 01-28-2021 22:03:31 | The doc link you shared is for v2.2.0; we have updated all the examples in a recent version. You can find the new NER examples here: https://github.com/huggingface/transformers/tree/master/examples/token-classification<|||||>Thank you @patil-suraj |
transformers | 9,877 | closed | Fix head masking for TFT5 models | * This PR fixes head masking in TFT5 models (#9859)
* This PR further fixes the name of an error message variable from `__HEAD_MASK_WARNING_MSG` to `_HEAD_MASK_WARNING_MSG`, as the former one was not working properly and raised an error (the double underscore caused trouble)
<hr>
Fixes: #9859
Reviewers: @jplu | 01-28-2021 21:47:02 | 01-28-2021 21:47:02 | @stancld can you please rebase on master in order to solve the conflicts?<|||||>Thanks! The PR should be merged once @LysandreJik and @patrickvonplaten have reviewed it.<|||||>Thanks for fixing it! |
transformers | 9,876 | closed | When on sagemaker use their env variables for saves | # What does this PR do?
When on SageMaker, the content of the env variable "SM_OUTPUT_DATA_DIR" should be used to save training artifacts (such as our checkpoint) so make it overwrite the `output_dir` (and make that argument optional so it doesn't need to be passed for sagemaker training).
Then the final model will be easy to deploy if it's also saved to the path in the env variable "SM_MODEL_DIR", so adding that as well.
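A minimal sketch of the idea (the environment variable names come from SageMaker; the fallback paths are just placeholders):
```python
import os

# SageMaker exposes these paths to the training container
output_data_dir = os.environ.get("SM_OUTPUT_DATA_DIR", "./output")  # training artifacts / checkpoints
model_dir = os.environ.get("SM_MODEL_DIR", "./model")               # final model, picked up for deployment
```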
| 01-28-2021 21:11:19 | 01-28-2021 21:11:19 | |
transformers | 9,875 | closed | Clarify use of unk_token in slow tokenizers' docstrings | # What does this PR do?
Currently, the docstrings for slow tokenizers' `tokenize()` method claim that unknown tokens will be left in place, in contrast to the fast tokenizers' behavior. In reality, both convert unknown tokens to `unk_token`.
Fixes #9714
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik | 01-28-2021 20:23:16 | 01-28-2021 20:23:16 | |
transformers | 9,874 | closed | pin_memory -> dataloader_pin_memory | Ref: https://github.com/huggingface/transformers/pull/9857#issuecomment-769256215
This PR adds a new argument `dataloader_pin_memory` to `TrainingArguments`. You can use this to pin memory in `DataLoader`. | 01-28-2021 18:30:25 | 01-28-2021 18:30:25 | Updated with review comments. Please let me know if/when it's okay to merge :) <|||||>Good for me, thanks a lot!<|||||>this is much better, thank you for the adjustment, @abhishekkrthakur |
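Usage is a one-liner; a small sketch (the output directory is a placeholder):
```python
from transformers import TrainingArguments

# pinning is enabled by default; turn it off e.g. for CPU-only training
args = TrainingArguments(output_dir="out", dataloader_pin_memory=False)
```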
transformers | 9,873 | closed | Strange hyperparameter warning | ## Environment info
- `transformers` version: Master Branch
- Platform: Ubuntu 18.04
- Python version: 3.7
- PyTorch version (GPU?): 1.7 (YES)
- Tensorflow version (GPU?):
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help
@amogkam
## Information
Model I am using (Bert, XLNet ...):
BART
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Go to https://docs.ray.io/en/master/tune/examples/pbt_transformers.html and copy that code.
2. Execute that code.
3. Wait and when some iterations have been made, you'll see that a constant warning persists:
```{python}
2021-01-28 17:33:56,863 WARNING trial_runner.py:420 -- Trial Runner checkpointing failed: Checkpoint must not be in-memory.
```
Although it seems like a Ray-related problem, from reading https://github.com/huggingface/transformers/pull/6747 I have arrived at the conclusion that, since the checkpointing integration has been removed from Transformers, PBT may no longer be working.
When I look into the logs, I see that effectively no perturbation is being made, and it should be, because perturbation_interval is set to 1.
## Expected behavior
It's expected that if I set perturbation_interval to 1, perturbations are made every 1 training iteration, but PBT is not doing any perturbation at all and I think it's because of some problem in the integration for checkpointing between Transformers and Ray Tune. | 01-28-2021 17:04:02 | 01-28-2021 17:04:02 | Hey @alexvaca0 thanks for bringing this up. Indeed our example is outdated. I updated it here https://github.com/amogkam/ray/blob/hf-pbt/python/ray/tune/examples/pbt_transformers/pbt_transformers.py and when running on the latest Ray wheels and the latest transformer release (4.2.2), I am seeing perturbations happening as expected

I'll also try this out on transformers master branch and see if it's working on that as well, and will update here again. Thanks!<|||||>I just tried on transformers master and am seeing it work as well. Please let me know if this updated example works for you @alexvaca0! <|||||>Thank you so much for your quick and very helpful answer @amogkam I'm using also the latest ray wheels and master version of transformers, therefore that example should work for me too! I'm going to try it as soon as I can so that I can tell you if it works for me or not! Thank you :) <|||||>I just checked your code and I don't find any change with respect to the official example except for the evaluation strategy, which was steps and now it's epochs, but maybe I'm missing something. I'll try first that example and then I'll try to apply it to my dataset. I have one more question regarding PBT with transformers: I'm observing that from time to time the models "re-start" from the beggining (that is, a model that had trained for 1.53 epochs suddenly returns to step 0 and starts from there). In my configuration I set number of epochs to 10, expecting that each of the 4 models in the population trains for 10 epochs, but mutating their configurations in the process. However, they'd never reach that number of epochs if they continue restarting from the beginning... Is there something I'm missing here? Do you think this issue will be also solved with the new training script? @amogkam Thank you !! :) <|||||>@amogkam
2021-01-29 15:01:01,078 WARNING trial_runner.py:370 -- Trial Runner checkpointing failed: Checkpoint must not be in-memory.
It still throws this error, and I've checked that another anomaly still persists: models not always re-start training from the point where they left, but start from the beginning again... Any clues why this may be happening? It's strange that this doesn't occur always, as sometimes models do re-start training from the point where they left...<|||||>Hey @alexvaca0, could you share what your stdout looks like please? Also is this with any modifications to the example, and can you share the full code that you are using? Thanks!<|||||>Could you give me your email so that I can share it with you that way? :) @amogkam <|||||>Hey yes you can send it to [email protected]<|||||>Great! Already sent :) @amogkam <|||||>I am having the same problem ```Trial Runner checkpointing failed: Checkpoint must not be in-memory.```, but it does some times manage to create checkpoints, as I have ```PopulationBasedTraining: 4 checkpoints, 2 perturbs```
Models seem to start training from the beginning again most of the time. I'm guessing it happens when the checkpointing fails.
I also notice that I am getting some errors and warnings:
>WARNING function_runner.py:541 -- Function checkpointing is disabled. This may result in unexpected behavior when using checkpointing features or certain schedulers. To enable, set the train function arguments to be `func(config, checkpoint_dir=None)
>ERROR syncer.py:72 -- Log sync requires rsync to be installed.
Did you find a solution?
Also, I don't really understand how the ```perturbation_interval``` and ```time_attr``` arguments work together. It seems to consider the models for perturbations at the number of ```logging_steps``` I set in ```TrainingArguments```, but as I understand it, it is supposed to do so after every training step (so after every minibatch?) when ```time_attr=training_iteration``` and ```perturbation_interval=1.``` Since that's how it seems to work, I set ```checkpoint_freq``` to the same value as ```logging_steps``` in ```TrainingArguments```
Here's what I think is the relevant part of my code:
``` python
class UCCTrainer(Trainer):
def compute_loss(self, model, inputs, return_outputs=False):
outputs = model(
input_ids=inputs['input_ids'],
attention_mask=inputs['attention_mask'],
token_type_ids=inputs['token_type_ids']
)
loss = th.nn.BCEWithLogitsLoss()(outputs['logits'], inputs['labels'])
return (loss, outputs) if return_outputs else loss
def model_init():
return BertForSequenceClassification.from_pretrained(
config.MODEL_NAME, return_dict=True
)
def objective(metrics):
try:
return metrics[config.COMPUTE_OBJECTIVE]
except KeyError:
return metrics[f'eval_{config.COMPUTE_OBJECTIVE}']
def hp_space(trial):
return {
'learning_rate': tune.uniform(1e-5, 5e-5),
'num_train_epochs': tune.choice([2, 3, 4, 5]),
'seed': tune.choice(range(1, 50)),
'weight_decay': tune.uniform(0.0, 0.3),
'per_device_train_batch_size': tune.choice([10, 15, 20])
}
def compute_metrics(eval_pred: EvalPrediction):
scores = eval_pred.predictions # np.array 4427x2
labels = eval_pred.label_ids # np.array 4427x2
pred = np.argmax(scores, axis=1)
labels_flat = np.argmax(labels, axis=1)
return get_binary_metrics(pred, labels_flat)
if __name__ == '__main__':
os.environ['WANDB_WATCH'] = 'all'
tokenizer = BertTokenizer.from_pretrained(
config.MODEL_NAME,
do_lower_case=config.DO_LOWER_CASE
)
train_df = dataframe_from_json('data/train_balanced.json')
train_binary = make_binary_df(train_df)
train_data = UCCDataset(train_binary, tokenizer, config.MAX_LEN)
total_steps = len(train_data)/config.TRAIN_BATCH_SIZE
warmup_steps = round(0.1*total_steps)
training_args = TrainingArguments(
output_dir=config.OUTPUT_DIR,
do_train=True,
do_eval=True,
evaluation_strategy='steps',
learning_rate=config.LEARNING_RATE,
weight_decay=0.1,
logging_steps=config.LOG_INTERVAL,
seed=1,
disable_tqdm=True,
report_to=['wandb'],
run_name=config.RUN_NAME,
load_best_model_at_end=config.LOAD_BEST_LAST,
metric_for_best_model=config.COMPUTE_OBJECTIVE,
logging_first_step=True,
lr_scheduler_type='linear',
warmup_steps=warmup_steps
)
val_df = get_clean_df(pd.read_csv('data/val.csv'))
val_binary = make_binary_df(val_df)
val_data = UCCDataset(val_df, tokenizer, config.MAX_LEN)
model_config = BertConfig(
vocab_size=tokenizer.vocab_size,
pretrained_model_name_or_path=config.MODEL_NAME,
num_labels=config.N_LABELS,
return_dict=True
)
trainer = UCCTrainer(
args=training_args,
train_dataset=train_data,
eval_dataset=val_data,
tokenizer=tokenizer,
model_init=model_init,
compute_metrics=compute_metrics
)
ray_scheduler = PopulationBasedTraining(
time_attr='training_iteration',
metric=f'eval_{config.COMPUTE_OBJECTIVE}',
mode='max',
perturbation_interval=1,
hyperparam_mutations={
'learning_rate': tune.uniform(1e-5, 5e-5),
'num_train_epochs': tune.choice([2, 3, 4, 5]),
'seed': tune.choice(range(1, 50)),
'weight_decay': tune.uniform(0.0, 0.3),
'per_device_train_batch_size': tune.choice([10, 15, 20])
}
)
best_model = trainer.hyperparameter_search(
hp_space=hp_space,
compute_objective=objective,
n_trials=3,
direction='maximize',
backend='ray',
# the following arguments are kwargs for tune.run
scheduler=ray_scheduler,
name='testmars5',
resources_per_trial={'cpu': 1, 'gpu': 1},
keep_checkpoints_num=3,
checkpoint_score_attr="training_iteration",
checkpoint_freq=config.LOG_INTERVAL
)
```<|||||>Hey folks, is this still an issue?
cc @jwa018 @khrystynaFaryna |
transformers | 9,872 | closed | on_log event should occur *after* the current log is written | 01-28-2021 16:36:01 | 01-28-2021 16:36:01 | ||
transformers | 9,871 | closed | Exception: You're trying to run a `Unigram` model but you're file was trained with a different algorithm | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
- Platform: Linux-3.10.107-1-tlinux2_kvm_guest-0049-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [1 ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. open https://github.com/agemagician/ProtTrans/blob/master/Embedding/PyTorch/Basic/ProtAlbert.ipynb
2. when running the code 'tokenizer = AutoTokenizer.from_pretrained("Rostlab/prot_albert", do_lower_case=False )'
3. it reports the following errors:
Downloading: 100%|██████████| 505/505 [00:00<00:00, 516kB/s]
Downloading: 100%|██████████| 238k/238k [00:03<00:00, 77.0kB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/anaconda3/envs/prottrans/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 385, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/anaconda3/envs/prottrans/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1768, in from_pretrained
return cls._from_pretrained(
File "/home/anaconda3/envs/prottrans/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1841, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/anaconda3/envs/prottrans/lib/python3.8/site-packages/transformers/models/albert/tokenization_albert_fast.py", line 136, in __init__
super().__init__(
File "/home/anaconda3/envs/prottrans/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 89, in __init__
fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
File "/home/anaconda3/envs/prottrans/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py", line 659, in convert_slow_tokenizer
return converter_class(transformer_tokenizer).converted()
File "/home/anaconda3/envs/prottrans/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py", line 349, in converted
tokenizer = self.tokenizer(self.proto)
File "/home/anaconda3/envs/prottrans/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py", line 335, in tokenizer
raise Exception(
Exception: You're trying to run a `Unigram` model but you're file was trained with a different algorithm
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 01-28-2021 13:27:18 | 01-28-2021 13:27:18 | Use "AlbertTokenizer" rather than "AutoTokenizer", this should solve your issue.
Please, check the updated notebook version.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Prot_albert tokenizer is returning none type, what changed? |
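For completeness, the workaround suggested above amounts to loading the slow (SentencePiece) tokenizer class directly instead of letting `AutoTokenizer` convert it to a fast `Unigram` tokenizer. A minimal sketch:
```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("Rostlab/prot_albert", do_lower_case=False)
# alternatively, keep AutoTokenizer but skip the fast conversion:
# tokenizer = AutoTokenizer.from_pretrained("Rostlab/prot_albert", do_lower_case=False, use_fast=False)
```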
transformers | 9,870 | closed | IndexError when finetuning barthez on summarization | ## Environment info
- `transformers` version: 4.3.0.dev0
- Platform: Linux-4.4.0-197-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): BARThez
## To reproduce
Steps to reproduce the behavior:
```
python finetune_trainer.py --learning_rate 3e-5 --fp16 --evaluation_strategy steps --predict_with_generate --model_name_or_path moussaKam/barthez --data_dir xsum --do_train --do_eval --output_dir welcome_back --per_device_train_batch_size 4 --task summarization --max_target_length 50 --overwrite_output_dir --eval_steps 50 --n_val 20
```
```
Traceback (most recent call last):
File "finetune_trainer.py", line 373, in <module>
main()
File "finetune_trainer.py", line 303, in main
train_result = trainer.train(
File "/datadisks/datadisk1/transformers/src/transformers/trainer.py", line 942, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/datadisks/datadisk1/transformers/src/transformers/trainer.py", line 1017, in _maybe_log_save_evaluate
metrics = self.evaluate()
File "/datadisks/datadisk1/transformers/src/transformers/trainer_seq2seq.py", line 96, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/datadisks/datadisk1/transformers/src/transformers/trainer.py", line 1458, in evaluate
output = self.prediction_loop(
File "/datadisks/datadisk1/transformers/src/transformers/trainer.py", line 1617, in prediction_loop
metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))
File "/datadisks/datadisk1/transformers/examples/seq2seq/utils.py", line 92, in summarization_metrics
pred_str, label_str = decode_pred(pred)
File "/datadisks/datadisk1/transformers/examples/seq2seq/utils.py", line 86, in decode_pred
label_str = tokenizer.batch_decode(pred.label_ids, skip_special_tokens=True)
File "/datadisks/datadisk1/transformers/src/transformers/tokenization_utils_base.py", line 3070, in batch_decode
return [
File "/datadisks/datadisk1/transformers/src/transformers/tokenization_utils_base.py", line 3071, in <listcomp>
self.decode(
File "/datadisks/datadisk1/transformers/src/transformers/tokenization_utils_base.py", line 3109, in decode
return self._decode(
File "/datadisks/datadisk1/transformers/src/transformers/tokenization_utils.py", line 711, in _decode
filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)
File "/datadisks/datadisk1/transformers/src/transformers/tokenization_utils.py", line 695, in convert_ids_to_tokens
tokens.append(self._convert_id_to_token(index))
File "/datadisks/datadisk1/transformers/src/transformers/models/barthez/tokenization_barthez.py", line 237, in _convert_id_to_token
return self.sp_model.IdToPiece(index)
File "/home/dascim/anaconda3/envs/transformers/lib/python3.8/site-packages/sentencepiece/__init__.py", line 501, in _batched_func
return _func(self, arg)
File "/home/dascim/anaconda3/envs/transformers/lib/python3.8/site-packages/sentencepiece/__init__.py", line 494, in _func
raise IndexError('piece id is out of range.')
IndexError: piece id is out of range.
```
## Expected behavior
For some reason the tokenizer is trying to decode some -100 id.
| 01-28-2021 13:14:24 | 01-28-2021 13:14:24 | Hmmmm the -100 id should be linked to the ignored values. It shouldn't try to decode this. Pinging @sgugger <|||||>Not sure why there would be `-100` in the labels with the old script. Note that we are not maintaining that one anymore and will replace it with `run_seq2seq` which is almost ready for use (misses a few features you don't seem to be using in your command anyway).
If you really need the old one, you should add the line
```
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
```
in the `compute_metric` function to replace the -100s by the pad token id.<|||||>We also fixed this very recently (yesterday) in the model, see: https://github.com/huggingface/transformers/commit/74f16b82765a05eccee45e80d79370202a958873 => so you should also be able to run:
```
python finetune_trainer.py --learning_rate 3e-5 --fp16 --evaluation_strategy steps --predict_with_generate --model_name_or_path moussaKam/barthez --data_dir xsum --do_train --do_eval --output_dir welcome_back --per_device_train_batch_size 4 --task summarization --max_target_length 50 --overwrite_output_dir --eval_steps 50 --n_val 20
```
on master now.
However, as @sgugger points out, we strongly recommend using the `run_seq2seq.py` script from now on as we won't continue maintaining `finetune_trainer.py` anymore. |
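Spelled out, the suggested one-line fix lands in the metric helper roughly like this (a sketch under the assumption that `pred` is an `EvalPrediction` and the tokenizer is passed in, not the script's exact code):
```python
import numpy as np

def decode_pred(pred, tokenizer):
    # replace the ignored -100 label positions with the pad token before decoding
    label_ids = np.where(pred.label_ids != -100, pred.label_ids, tokenizer.pad_token_id)
    pred_str = tokenizer.batch_decode(pred.predictions, skip_special_tokens=True)
    label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)
    return pred_str, label_str
```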
transformers | 9,869 | closed | Added do_lower_case parameters for tokenizer in mlm training. | Added do_lower_case while training mlm. Useful for training cased BER, for instance. | 01-28-2021 11:04:42 | 01-28-2021 11:04:42 | Hi there, thanks for your PR!
The examples scripts are kept simple and without too much functionality so users can easily understand and tweak them for their needs (they are just examples, they do not mean to cover **everything**). As you saw, it's super easy to add things like this option, if needed. The PR will stay this to demonstrate how, but I don't think we will merge it.<|||||>As far as I know this is already done in the Tokenizer logic. You can define this lower casing option in the `tokenizer_config.json` - and this is done for quite a lot models. I dont't see any reason to have this as an extra cli option ๐ค <|||||>I just tried to be useful :)<|||||>No problem at all. If you want to make sure something you are working on will be accepted, don't hesitate to open an issue about it first, that way we can tell you if it's desirable or not :-)
Be sure to check the [good first issues](https://github.com/huggingface/transformers/issues?q=is%3Aissue+is%3Aopen+label%3A%22Good+First+Issue%22) if you want to try something else! |
transformers | 9,868 | closed | Remove submodule | Removes the `datasets` submodule that was introduced in https://github.com/huggingface/transformers/pull/9825. | 01-28-2021 09:01:16 | 01-28-2021 09:01:16 | |
transformers | 9,867 | closed | where is position_embedding_type used | When I was using pytorch Electra Model, I read its source code but I didn't find where position_embedding_type is used.
So did I miss something? | 01-28-2021 08:29:08 | 01-28-2021 08:29:08 | It is used quite a lot! Here for example:
https://github.com/huggingface/transformers/blob/4c3ae89ad3215c3252ebf8ce964795ba8813d810/src/transformers/models/electra/modeling_electra.py#L194-L196
Actually just Ctrl+F "position_embedding_type" in this file and you should be able to find out where it's used :) (11 occurrences)<|||||>thanks |
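To make the knob concrete, here is a small sketch of where the setting enters the model; values other than the default switch Electra's self-attention to relative position scoring:
```python
from transformers import ElectraConfig, ElectraModel

# position_embedding_type is read from the config by the self-attention layers
config = ElectraConfig(position_embedding_type="relative_key")  # "absolute" (default), "relative_key", "relative_key_query"
model = ElectraModel(config)
```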
transformers | 9,866 | closed | Whole word mask in run_mlm_wwm.py | I find that `run_mlm_wwm.py` uses the whole word mask class `DataCollatorForWholeWordMask`.
But in this class `_whole_word_mask` function, we recognize if a token is the beginning of a word by this way:
```
cand_indexes = []
for (i, token) in enumerate(input_tokens):
if token == "[CLS]" or token == "[SEP]":
continue
if len(cand_indexes) >= 1 and token.startswith("##"):
cand_indexes[-1].append(i)
else:
cand_indexes.append([i])
```
I also notice that `run_mlm_wwm.py` is also used for RoBERTa pre-training in the examples. However, the RoBERTa tokenizer doesn't contain tokens like `[CLS]` and `[SEP]`, and its subwords do not start with `##`.
How can this code handle language models that use a RoBERTa-like tokenizer?
Thanks!
| 01-28-2021 07:51:11 | 01-28-2021 07:51:11 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I noticed this myself, `DataCollatorForWholeWordMask` seems to be very specific to the BERT tokenizer. It seems that it should be using the special tokens mask, and the word_ids() from the tokenizer rather than rely on [CLS],[SEP] tokens and subwords starting with ## (so it fails with metaspace for example).
Edit: It also calls `self._tensorize_batch`, which as far as I can see isn't implemented, so I assume this class isn't maintained?<|||||>It's a bit old, but I wanted to share my quick fix for RoBERTa-like tokenizers (I think this can be made more general purpose, but I just needed it for the Herbert tokenizer):
```
def _whole_word_mask(self, input_tokens: List[str], max_predictions=512):
"""
Get 0/1 labels for masked tokens with whole word mask proxy
"""
if not isinstance(self.tokenizer, (BertTokenizer, BertTokenizerFast,
RobertaTokenizer, RobertaTokenizerFast,
XLMRobertaTokenizer, XLMRobertaTokenizerFast,
HerbertTokenizer, HerbertTokenizerFast,
XLMTokenizer)):
warnings.warn(
"DataCollatorForWholeWordMask is only suitable for BertTokenizer or RobertaTokenizer-like tokenizers. "
"Please refer to the documentation for more information."
)
cand_indexes = []
special_tokens = [val for key, val in self.tokenizer.special_tokens_map.items()
if key not in ['unk_token', 'mask_token']]
is_bert_tokenizer = isinstance(self.tokenizer, (BertTokenizer, BertTokenizerFast))
for (i, token) in enumerate(input_tokens):
if token in special_tokens:
continue
if is_bert_tokenizer:
if len(cand_indexes) >= 1 and token.startswith("##"):
cand_indexes[-1].append(i)
else:
cand_indexes.append([i])
else: # Roberta-like tokenizers have </w> token at the end to indicate end of word
# edge case for chinese (##) are added in DataCollatorForWholeWordMask
if token.startswith("##"):
token = token[2:]
if token.endswith("</w>"):
token = token[:-4]
if len(cand_indexes) == 0:
cand_indexes.append([i])
else:
cand_indexes[-1].append(i)
if token.endswith("</w>"):
cand_indexes.append([])
if len(cand_indexes[-1]) == 0:
cand_indexes = cand_indexes[:-1]
random.shuffle(cand_indexes)
num_to_predict = min(max_predictions, max(1, int(round(len(input_tokens) * self.mlm_probability))))
masked_lms = []
covered_indexes = set()
for index_set in cand_indexes:
if len(masked_lms) >= num_to_predict:
break
# If adding a whole-word mask would exceed the maximum number of
# predictions, then just skip this candidate.
if len(masked_lms) + len(index_set) > num_to_predict:
continue
is_any_index_covered = False
for index in index_set:
if index in covered_indexes:
is_any_index_covered = True
break
if is_any_index_covered:
continue
for index in index_set:
covered_indexes.add(index)
masked_lms.append(index)
if len(covered_indexes) != len(masked_lms):
raise ValueError("Length of covered_indexes is not equal to length of masked_lms.")
mask_labels = [1 if i in covered_indexes else 0 for i in range(len(input_tokens))]
return mask_labels
```<|||||>If anybody else has this issue. I fixed it for RoBERTa by adding a few lines that deal with the way RoBERTa tokenizes. Note that it's not a general purpose solution for other LMs. The previous comment did not work for me. See here:
https://github.com/RikVN/transformers/blob/main/src/transformers/data/data_collator.py#L948 |
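Tying the thread together, the tokenizer-agnostic direction mentioned above (group sub-tokens by `word_ids()` instead of looking for `##`, `[CLS]` or `[SEP]`) can be sketched like this for any fast tokenizer; this is an illustration, not the collator's actual code:
```python
def whole_word_candidates(tokenizer, text):
    """Group token positions by the word they came from, using a fast tokenizer."""
    encoding = tokenizer(text)
    cand_indexes, prev_word_id = [], None
    for i, word_id in enumerate(encoding.word_ids()):
        if word_id is None:          # special tokens ([CLS]/[SEP]/<s>/</s>, ...)
            prev_word_id = None
            continue
        if word_id != prev_word_id:  # first sub-token of a new word
            cand_indexes.append([i])
        else:                        # continuation sub-token
            cand_indexes[-1].append(i)
        prev_word_id = word_id
    return cand_indexes
```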
transformers | 9,865 | closed | [trainer] seq2seq doesn't handle mt5 correctly | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
- Platform: Linux-5.4.0-58-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <yes>
- Using distributed or parallel set-up in script?: <yes>
### Who can help
@stas00, @patrickvonplaten, @patil-suraj
## Information
Model I am using (MT5-xl, MT5-large):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (official example scripts task)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. The script I used is `examples/seq2seq/finetune_trainer.py`, which was originally used to reproduce the training of T5-3b on a single 3090. All steps are the same as in [#8771](https://github.com/huggingface/transformers/issues/8771#issuecomment-759176685), and it can reproduce the training of T5-3b (whether on a single card or on 2/4 cards).
2. Here is the problem: when I try to train MT5-xl, `--freeze_embeds` seems to trigger a bug. I used 4*3090. My script is:
```
export BS=1; PYTHONPATH=../../src; USE_TF=0;
/usr/bin/time -v deepspeed --num_gpus=4 ./finetune_trainer.py --model_name_or_path /<my_model_dir>/models/mt5/xl/v0 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 1 --per_device_train_batch_size 1 --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 60 --n_val 10 --n_test 10 --deepspeed ds_config.json --fp16
```
Here is my report:
```
[2021-01-27 14:59:52,982] [WARNING] [runner.py:117:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2021-01-27 14:59:57,024] [INFO] [runner.py:358:main] cmd = /<my_dir>/miniconda3/envs/nlp/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMSwgMiwgM119 --master_addr=127.0.0.1 --master_port=29500 ./finetune_trainer.py --model_name_or_path /<my_model_dir>/models/mt5/xl/v0 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 1 --per_device_train_batch_size 1 --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 60 --n_val 10 --n_test 10 --deepspeed ds_config.json --fp16
[2021-01-27 14:59:57,793] [INFO] [launch.py:78:main] WORLD INFO DICT: {'localhost': [0, 1, 2, 3]}
[2021-01-27 14:59:57,793] [INFO] [launch.py:87:main] nnodes=1, num_local_procs=4, node_rank=0
[2021-01-27 14:59:57,793] [INFO] [launch.py:99:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1, 2, 3]})
[2021-01-27 14:59:57,793] [INFO] [launch.py:100:main] dist_world_size=4
[2021-01-27 14:59:57,793] [INFO] [launch.py:103:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3
[2021-01-27 15:00:01,106] [INFO] [distributed.py:40:init_distributed] Initializing torch distributed with backend: nccl
[2021-01-27 15:00:01,340] [INFO] [distributed.py:40:init_distributed] Initializing torch distributed with backend: nccl
[2021-01-27 15:00:01,672] [INFO] [distributed.py:40:init_distributed] Initializing torch distributed with backend: nccl
[2021-01-27 15:00:01,870] [INFO] [distributed.py:40:init_distributed] Initializing torch distributed with backend: nccl
01/27/2021 15:00:05 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bits training: True
01/27/2021 15:00:05 - WARNING - __main__ - Process rank: 2, device: cuda:2, n_gpu: 1, distributed training: True, 16-bits training: True
01/27/2021 15:00:05 - WARNING - __main__ - Process rank: 1, device: cuda:1, n_gpu: 1, distributed training: True, 16-bits training: True
01/27/2021 15:00:05 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(output_dir='output_dir', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=1, per_device_eval_batch_size=1, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=3e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-06, max_grad_norm=1.0, num_train_epochs=1.0, max_steps=-1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, warmup_steps=5, logging_dir='runs/Jan27_15-00-01_user-SYS-4029GP-TRT', logging_first_step=True, logging_steps=1000, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=True, fp16_opt_level='O1', fp16_backend='auto', local_rank=0, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=25000, dataloader_num_workers=0, past_index=-1, run_name='output_dir', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=False, deepspeed='ds_config.json', label_smoothing_factor=0.1, adafactor=False, sortish_sampler=True, predict_with_generate=True)
01/27/2021 15:00:05 - WARNING - __main__ - Process rank: 3, device: cuda:3, n_gpu: 1, distributed training: True, 16-bits training: True
[INFO|configuration_utils.py:443] 2021-01-27 15:00:05,352 >> loading configuration file /<my_model_dir>/models/mt5/xl/v0/config.json
[INFO|configuration_utils.py:481] 2021-01-27 15:00:05,353 >> Model config MT5Config {
"_name_or_path": "/home/patrick/t5/mt5-xl",
"architectures": [
"T5ForConditionalGeneration"
],
"d_ff": 5120,
"d_kv": 64,
"d_model": 2048,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"eos_token_id": 1,
"feed_forward_proj": "gated-gelu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"model_type": "mt5",
"num_decoder_layers": 24,
"num_heads": 32,
"num_layers": 24,
"output_past": true,
"pad_token_id": 0,
"relative_attention_num_buckets": 32,
"tie_word_embeddings": false,
"tokenizer_class": "T5Tokenizer",
"transformers_version": "4.2.1",
"use_cache": true,
"vocab_size": 250112
}
[INFO|configuration_utils.py:443] 2021-01-27 15:00:05,353 >> loading configuration file /<my_model_dir>/models/mt5/xl/v0/config.json
[INFO|configuration_utils.py:481] 2021-01-27 15:00:05,354 >> Model config MT5Config {
"_name_or_path": "/home/patrick/t5/mt5-xl",
"architectures": [
"T5ForConditionalGeneration"
],
"d_ff": 5120,
"d_kv": 64,
"d_model": 2048,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"eos_token_id": 1,
"feed_forward_proj": "gated-gelu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"model_type": "mt5",
"num_decoder_layers": 24,
"num_heads": 32,
"num_layers": 24,
"output_past": true,
"pad_token_id": 0,
"relative_attention_num_buckets": 32,
"tie_word_embeddings": false,
"tokenizer_class": "T5Tokenizer",
"transformers_version": "4.2.1",
"use_cache": true,
"vocab_size": 250112
}
[INFO|tokenization_utils_base.py:1685] 2021-01-27 15:00:05,354 >> Model name '/<my_model_dir>/models/mt5/xl/v0' not found in model shortcut name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). Assuming '/<my_model_dir>/models/mt5/xl/v0' is a path, a model identifier, or url to a directory containing tokenizer files.
[INFO|tokenization_utils_base.py:1718] 2021-01-27 15:00:05,354 >> Didn't find file /<my_model_dir>/models/mt5/xl/v0/tokenizer.json. We won't load it.
[INFO|tokenization_utils_base.py:1718] 2021-01-27 15:00:05,355 >> Didn't find file /<my_model_dir>/models/mt5/xl/v0/added_tokens.json. We won't load it.
[INFO|tokenization_utils_base.py:1718] 2021-01-27 15:00:05,355 >> Didn't find file /<my_model_dir>/models/mt5/xl/v0/special_tokens_map.json. We won't load it.
[INFO|tokenization_utils_base.py:1718] 2021-01-27 15:00:05,355 >> Didn't find file /<my_model_dir>/models/mt5/xl/v0/tokenizer_config.json. We won't load it.
[INFO|tokenization_utils_base.py:1764] 2021-01-27 15:00:05,355 >> loading file /<my_model_dir>/models/mt5/xl/v0/spiece.model
[INFO|tokenization_utils_base.py:1764] 2021-01-27 15:00:05,355 >> loading file None
[INFO|tokenization_utils_base.py:1764] 2021-01-27 15:00:05,355 >> loading file None
[INFO|tokenization_utils_base.py:1764] 2021-01-27 15:00:05,355 >> loading file None
[INFO|tokenization_utils_base.py:1764] 2021-01-27 15:00:05,355 >> loading file None
[INFO|modeling_utils.py:1025] 2021-01-27 15:00:06,472 >> loading weights file /<my_model_dir>/models/mt5/xl/v0/pytorch_model.bin
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 230, in main
freeze_embeds(model)
File "/<my_dir>/transformers/examples/seq2seq/utils.py", line 567, in freeze_embeds
[INFO|modeling_utils.py:1143] 2021-01-27 15:05:03,683 >> All model checkpoint weights were used when initializing MT5ForConditionalGeneration.
[INFO|modeling_utils.py:1152] 2021-01-27 15:05:03,683 >> All the weights of MT5ForConditionalGeneration were initialized from the model checkpoint at /<my_model_dir>/models/mt5/xl/v0.
If your task is similar to the task the model of the checkpoint was trained on, you can already use MT5ForConditionalGeneration for predictions without further training.
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 230, in main
freeze_embeds(model)
File "/<my_dir>/transformers/examples/seq2seq/utils.py", line 567, in freeze_embeds
freeze_params(model.model.shared)
File "/<my_dir>/miniconda3/envs/nlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 779, in __getattr__
freeze_params(model.model.shared)
File "/<my_dir>/miniconda3/envs/nlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 779, in __getattr__
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'MT5ForConditionalGeneration' object has no attribute 'model'
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'MT5ForConditionalGeneration' object has no attribute 'model'
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 230, in main
freeze_embeds(model)
File "/<my_dir>/transformers/examples/seq2seq/utils.py", line 567, in freeze_embeds
freeze_params(model.model.shared)
File "/<my_dir>/miniconda3/envs/nlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 779, in __getattr__
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'MT5ForConditionalGeneration' object has no attribute 'model'
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 230, in main
freeze_embeds(model)
File "/<my_dir>/transformers/examples/seq2seq/utils.py", line 567, in freeze_embeds
freeze_params(model.model.shared)
File "/<my_dir>/miniconda3/envs/nlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 779, in __getattr__
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'MT5ForConditionalGeneration' object has no attribute 'model'
Command being timed: "deepspeed --num_gpus=4 ./finetune_trainer.py --model_name_or_path /<my_model_dir>/models/mt5/xl/v0 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 1 --per_device_train_batch_size 1 --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 60 --n_val 10 --n_test 10 --deepspeed ds_config.json --fp16"
User time (seconds): 348.34
System time (seconds): 177.55
Percent of CPU this job got: 166%
Elapsed (wall clock) time (h:mm:ss or m:ss): 5:15.88
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 33558800
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 1
Minor (reclaiming a frame) page faults: 67111048
Voluntary context switches: 132337
Involuntary context switches: 6635761
Swaps: 0
File system inputs: 29248712
File system outputs: 32
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
```
3. So I removed `--freeze_embeds` and tried to train MT5-xl again, but I got CUDA out of memory. My device is 4*24G 3090, with BS=1, ZeRO stage=2, and CPU_offload=true. I assume that T5-3b and MT5-xl should be in the same order of magnitude, and since I can do it with t5-3b, I think this should not happen.
4. I also tried training MT5-large: I just replaced mt5-xl with mt5-large under the same conditions as in 3. And I got the overflow problem. This does not surprise me because MT5-large does not seem to work with FP16 yet. In short, I want to know whether there is a problem with my setup or whether this is expected. If it is because MT5-large has not been fixed yet, does Hugging Face have any plans to fix it?
## Expected behavior
1. Why can't mt5-xl be trained on 4*3090? Or what should I do?
2. Can mt5-large be used with FP16 (mainly with DeepSpeed)? If not, is there any plan to fix it?
| 01-28-2021 07:26:55 | 01-28-2021 07:26:55 | OK, I can reproduce the problem with just google/mt5-small and 2 gpus:
```
export BS=1; PYTHONPATH=../../src USE_TF=0 deepspeed --num_gpus=2 ./finetune_trainer.py --model_name_or_path google/mt5-small --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 1 --per_device_train_batch_size 1 --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 60 --n_val 10 --n_test 10 --deepspeed ds_config.json --fp16
```
We will get it sorted out today.<|||||>ok, the problem had nothing to do with DeepSpeed, it's just an oversight in the seq2seq example code.
The fix is:
```
diff --git a/examples/seq2seq/utils.py b/examples/seq2seq/utils.py
index 8b24bfda..303b89f7 100644
--- a/examples/seq2seq/utils.py
+++ b/examples/seq2seq/utils.py
@@ -563,7 +563,7 @@ def freeze_embeds(model):
"""Freeze token embeddings and positional embeddings for bart, just token embeddings for t5."""
model_type = model.config.model_type
- if model_type == "t5":
+ if model_type in ["t5", "mt5"]:
freeze_params(model.shared)
for d in [model.encoder, model.decoder]:
freeze_params(d.embed_tokens)
```
Please let me know if you can manage to apply this fix. I will make a proper PR later, but it'll take some work, since I need to make a tiny mt5 model and add a test.
You can just edit the file if you don't know how to apply a patch. <|||||>The fix should be merged shortly https://github.com/huggingface/transformers/pull/9879
<|||||>I can solve the `--freeze_embeds` bug now, thanks for your help! @stas00
As for questions 3 and 4, I noticed that the title of the issue has been edited. I don't know if these questions are caused by the model or the seq2seq trainer. Maybe I should raise them in a new issue?<|||||>Oh, you wrote those items as steps to reproduce the problem, so I didn't know that those were issues that needed to/could be fixed.
Once I discovered that the issue you posted was unrelated to DeepSpeed I took the liberty to adjust the subject.
In general, yes, let's try to keep each issue separate, so that it makes it much easier to track things and not let things fall between the cracks.
Back to your follow up question:
Looking just at the params:
- t5-3b ~10GB
- mt5-xl ~15GB
So the 2nd model is substantially larger, and if t5-3b fit tightly onto a 24GB card it's not surprising that the larger model didn't.
and in addition to model params you also need to allocate memory for:
- inputs
- gradients
- optimizer states
I tried mt5-xl on 4x 40gb gpu setup and it worked, but took ~29GB on each GPU, so there is the problem - you're 5GB short.
The command I run was:
```
export BS=1; PYTHONPATH=../../src USE_TF=0 /usr/bin/time -v deepspeed --num_gpus=4 ./finetune_trainer.py --model_name_or_path google/mt5-xl --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 1 --per_device_train_batch_size 1 --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 60 --n_val 10 --n_test 10 --deepspeed ds_config.json --fp16
```
You may try to tweak the buffer sizes in `ds_config.json` but I think the gap is too big.
I'm working on a 2D Parallelism solution that will combine pipe|model-parallelism w/ ZeRO-DP (DeepSpeed), which should enable such feats with huge models, but it might take some time. The docs aren't quite there so it takes a lot of trial and error to move forward. You may want to track this PR https://github.com/huggingface/transformers/pull/9765 for updates.
Alternatively when fairscale or DeepSpeed releases ZeRO phase 3, you shouldn't have a problem loading this model onto 4x 24GB gpus. Currently the problem is that the model params are too big w/o phase 3. In phase 3 params are partitioned too - problem solved.
<|||||>> I tried mt5-xl on 4x 40gb gpu setup and it worked, but took ~29GB on each GPU, so there is the problem - you're 5GB short.
That's help a lot! Thank you!
I am also looking forward to ZeRO stage 3 and your pipe|model-parallelism. Hope one day we can working on it. Thank you again!<|||||>> And I got the overflow problem. This is not surprising me because MT5-large seems not fixed FP16 yet.
Did you get `nan` loss or gradient overflow warning ? And yes, fp16 is still not working for mT5-large
> I assume that T5-3b and MT5-xl should be in the same order of magnitude
mT5-xl is actually quite bigger than T5-3b for two reasons
1. It's vocab_size is huge (250112), which results in bigger token_embedding layer and final linear layer.
2. It's based on t51.1 which uses `gated-gelu` activation instead of `relu`. `gated-gelu` adds one extra linear layer in every feed-forward layer.<|||||>@patil-suraj That's very helpful! Thank you a lot!
Now I understand that there are many differences between mT5-xl and T5-3b, and I will set up separate experiments for them in the future. By the way, do you have any plans to repair the FP16 in mt5-large/xl ?<|||||>Dear @patil-suraj, here you have mentioned for mt5-small you have made it work with fp16? since you did not mention this model, do you mind telling me how you made it work? I am having a hard time with mt5-small with fp16 thanks a lot for your advice <|||||>I have a similar error here
```python
from transformers import T5TokenizerFast, MT5ForConditionalGeneration
tokenizer = T5TokenizerFast.from_pretrained('google/mt5-base') # "google/mt5-base" "google/mt5-large" "google/mt5-xl"
model = MT5ForConditionalGeneration.from_pretrained('google/mt5-base', return_dict=True)
condition = "translate English to German: "
input = "My name is Azeem and I live in India"
# You can also use "translate English to French" and "translate English to Romanian"
input_ids = tokenizer(condition+input, return_tensors="pt").input_ids # Batch size 1
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded)
```
Stacktrace:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-8-f9822d331a70>](https://localhost:8080/#) in <module>()
3 tokenizer = T5TokenizerFast.from_pretrained('google/mt5-base') # "google/mt5-base" "google/mt5-large" "google/mt5-xl"
4
----> 5 model = AutoModelForSeq2SeqLM.from_pretrained('google/mt5-base', return_dict=True)
6
7 condition = "translate English to German: "
8 frames
[/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py](https://localhost:8080/#) in __getattribute__(self, key)
250 if key != "attribute_map" and key in super().__getattribute__("attribute_map"):
251 key = super().__getattribute__("attribute_map")[key]
--> 252 return super().__getattribute__(key)
253
254 def __init__(self, **kwargs):
AttributeError: 'MT5Config' object has no attribute 'relative_attention_max_distance'
```
@stas00 any idea? I'm using HF master:
```
!pip install git+https://github.com/huggingface/transformers.git
```<|||||>@loretoparisi
This is because T5Config now has `relative_attention_max_distance` attribute introduced in the #16155 which was missing from `MT5Config`. Fix is here #16170
|
transformers | 9,864 | closed | Longformer: raise TypeError("pred must not be a Python bool", pred) | ## Environment info
- `transformers` version: 4.2.2
- Platform: Ubuntu 18.04
- Python version: 3.7.6
- PyTorch version (GPU?): None
- Tensorflow version (GPU?): 2.3.1, 2.3.2 (with or without GPU)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrick-s-h-lewis
Models:
- longformer @patrickvonplaten
## Information
Errors occur when I use the TFLongformerMainLayer as a layer of my model. I will give a simple example below to reproduce this bug.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
The error occurs in 'transformers/models/longformer/modeling_tf_longformer.py:1799 _pad_to_window_size *
inputs_embeds = tf.cond(padding_len > 0, pad_embeddings, lambda: inputs_embeds)'
It looks like `padding_len > 0` is a Python bool, which caused this error.
According to the [official guide of tf.cond](https://www.tensorflow.org/api_docs/python/tf/cond) example: `result = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y))`, I think this is because both 'padding_len' and '0' are not tensors, so `padding_len >0` just returns a python bool.
```
TypeError: in user code:
/home/xingya/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:806 train_function *
return step_function(self, iterator)
test.py:44 call *
x = self.longformer(input_ids=None, inputs_embeds=inputs)[0]
/home/xingya/.conda/envs/tf2/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1680 call *
(
/home/xingya/.conda/envs/tf2/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1799 _pad_to_window_size *
inputs_embeds = tf.cond(padding_len > 0, pad_embeddings, lambda: inputs_embeds)
/home/xingya/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper **
return target(*args, **kwargs)
/home/xingya/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow/python/ops/control_flow_ops.py:1396 cond_for_tf_v2
return cond(pred, true_fn=true_fn, false_fn=false_fn, strict=True, name=name)
/home/xingya/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/home/xingya/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py:507 new_func
return func(*args, **kwargs)
/home/xingya/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow/python/ops/control_flow_ops.py:1180 cond
return cond_v2.cond_v2(pred, true_fn, false_fn, name)
/home/xingya/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow/python/ops/cond_v2.py:62 cond_v2
raise TypeError("pred must not be a Python bool", pred)
TypeError: ('pred must not be a Python bool', True)
```
Here is a snippet to reproduce this bug:
```
from transformers.models.longformer.modeling_tf_longformer import TFLongformerMainLayer
from tensorflow.keras.layers import Input, Embedding, Dense
from tensorflow.keras.models import Model
from transformers import LongformerConfig
import tensorflow as tf
import numpy as np
tf.random.set_seed(200)
class LongFormerMain(tf.keras.layers.Layer):
    def __init__(self, name='longformer', **kwargs):
        super(LongFormerMain, self).__init__(name=name, **kwargs)
        config = LongformerConfig(attention_window=4, num_hidden_layers=1, vocab_size=10)
        self.longformer = TFLongformerMainLayer(config)

    def call(self, inputs):
        x = self.longformer(input_ids=None, inputs_embeds=inputs)[0]
        return x
inputs = Input(shape=(None,), dtype='int32')
output = Embedding(100, 768)(inputs)
longformer = LongFormerMain()
output = longformer(output)
output = Dense(9, activation='softmax')(output)
model = Model(inputs, output)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
x = np.array([[5, 2, 3] * 3] * 100)
y = np.array([[1, 2, 3] * 3] * 100)
model.fit(x=x, y=y, epochs=10, batch_size=4, validation_split=0.1)
print(model.predict([[5, 2, 3] * 3]))
```
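For what it's worth, one way to avoid handing `tf.cond` a Python bool (just a sketch of the idea, not a tested patch for the library) is to build the predicate with a TensorFlow op, since `tf.greater` converts its arguments to tensors even when `padding_len` is a plain Python int:
```python
import tensorflow as tf

padding_len = 3                        # stand-in for the computed padding length (a Python int)
inputs_embeds = tf.zeros((1, 5, 8))    # stand-in for the real embeddings

padded = tf.cond(
    tf.greater(padding_len, 0),        # tensor predicate instead of the Python bool `padding_len > 0`
    lambda: tf.pad(inputs_embeds, [[0, 0], [0, padding_len], [0, 0]]),
    lambda: inputs_embeds,
)
print(padded.shape)  # (1, 8, 8)
```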
| 01-28-2021 04:16:11 | 01-28-2021 04:16:11 | cc @jplu do you maybe have a good idea here?<|||||>Hey @xuxingya !! Thanks a lot for reporting the issue! Indeed Longformer has a bug in the `_pad_to_window_size` method. We will work on fixing this ASAP.
Even though there is indeed a bug, your piece of code is wrong and should be:
```python
from transformers.models.longformer.modeling_tf_longformer import TFLongformerMainLayer
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from transformers import LongformerConfig
import tensorflow as tf
import numpy as np
tf.random.set_seed(200)
class CustomLongFormer(tf.keras.layers.Layer):
    def __init__(self, name='longformer', **kwargs):
        super().__init__(name=name, **kwargs)
        config = LongformerConfig(attention_window=4, num_hidden_layers=1, vocab_size=10)
        self.longformer = TFLongformerMainLayer(config)

    def call(self, inputs):
        x = self.longformer(input_ids=None, inputs_embeds=inputs)[0]
        return x
longformer = CustomLongFormer()
inputs = Input(shape=(None, None), dtype='float32', name="inputs_embeds")
output = longformer(inputs)
output = Dense(9, activation='softmax')(output)
model = Model(inputs, output)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
x = np.array([np.random.uniform(0,1, (3, 768))] * 100)
y = np.array([[1]*3] * 100)
model.fit(x=x, y=y, epochs=10, batch_size=4, validation_split=0.1)
```<|||||>Fixed in #9942 |
transformers | 9,863 | closed | Add support for tf2 encoder_decoder | # ๐ New model addition
I would like to add `TensorFlow-2` support for `encoder_decoder` model. I will soon create a PR, if this is approved. | 01-27-2021 23:23:52 | 01-27-2021 23:23:52 | Shall I start working on it if no one else is doing it?<|||||>Feel free to give it a try and tag me if you encounter any issues along the way! Just to set expectations, such a PR will be a longer project (~1 month) and is a relatively low priority for the library at the moment, so I might not be able to reply daily.
But nevertheless, I'm more than happy to guide you through a PR :-) <|||||>Thanks! I will start the PR soon :) |
transformers | 9,862 | closed | AttributeError with T5Tokenizer | I am trying to use **T5Tokenizer** and **t5-base** model to fine-tune on **SQuAD** dataset. But each time, when I run the tokenizer code I get errors (e.g, `'NoneType' object has no attribute 'encode'/'batch_encode_plus'/'encode_plus'`).
Example code
```
tokenizer = T5Tokenizer.from_pretrained('t5-base')
ids_neg = tokenizer.encode('negative </s>')
ids_pos = tokenizer.encode('positive </s>')
```
I get the following error:
> AttributeError Traceback (most recent call last)
> <ipython-input-19-f34cd55ac673> in <module>()
> ----> 1 ids_neg = tokenizer.encode('negative </s>')
> 2 ids_pos = tokenizer.encode('positive </s>')
> 3 len(ids_neg), len(ids_pos)
>
> AttributeError: 'NoneType' object has no attribute 'encode' | 01-27-2021 22:14:32 | 01-27-2021 22:14:32 | I think the errors could be more explicit here, here I think it comes from the fact that you don't have SentencePiece installed. Can you try to install it and let me know if it fixes your issue?<|||||>Hi @LysandreJik. I had the same issue, with sentencepiece installed. I also notice that my previous notebooks with T5Tokenizer and t5-base don't also work as well.
Here's my error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-42-63aa129d4b0c> in <module>()
----> 1 dataset = ParaphraseDataset(tokenizer, 'data', 'dev', 256)
2 print("Val dataset: ",len(dataset))
3
4 data = dataset[61]
5 print(tokenizer.decode(data['source_ids']))
1 frames
<ipython-input-39-2fa4af2cad5a> in __init__(self, tokenizer, data_dir, type_path, max_len)
12 self.targets = []
13
---> 14 self._build()
15
16 def __len__(self):
<ipython-input-39-2fa4af2cad5a> in _build(self)
34
35 # tokenize inputs
---> 36 tokenized_inputs = self.tokenizer.batch_encode_plus(
37 [input_], max_length=self.max_len, pad_to_max_length=True, return_tensors="pt", truncation='longest_first'
38 )
AttributeError: 'NoneType' object has no attribute 'batch_encode_plus'
```
It seems to me that T5Tokenizer isn't loading the T5-base tokenizer properly<|||||>Hmmm I'm pretty sure this only happens when you don't have sentencepiece installed. Do you mind pasting your environment info as well as `pip list`? If you're running on colab, can you make sure you restart the runtime before trying again? Thank you for your patience.<|||||>Here's mine (sorry for the long scroll!) (PS: I just found that downgrading the version of the transformers package solves the issue):
```
Package Version
----------------------------- ---------------
absl-py 0.10.0
aiohttp 3.7.3
alabaster 0.7.12
albumentations 0.1.12
altair 4.1.0
appdirs 1.4.4
argon2-cffi 20.1.0
asgiref 3.3.1
astor 0.8.1
astropy 4.1
astunparse 1.6.3
async-generator 1.10
async-timeout 3.0.1
atari-py 0.2.6
atomicwrites 1.4.0
attrs 20.3.0
audioread 2.1.9
autograd 1.3
Babel 2.9.0
backcall 0.2.0
beautifulsoup4 4.6.3
bleach 3.2.2
blis 0.4.1
bokeh 2.1.1
Bottleneck 1.3.2
branca 0.4.2
bs4 0.0.1
CacheControl 0.12.6
cachetools 4.2.1
catalogue 1.0.0
certifi 2020.12.5
cffi 1.14.4
chainer 7.4.0
chardet 3.0.4
click 7.1.2
cloudpickle 1.3.0
cmake 3.12.0
cmdstanpy 0.9.5
colorlover 0.3.0
community 1.0.0b1
contextlib2 0.5.5
convertdate 2.2.0
coverage 3.7.1
coveralls 0.5
crcmod 1.7
cufflinks 0.17.3
cupy-cuda101 7.4.0
cvxopt 1.2.5
cvxpy 1.0.31
cycler 0.10.0
cymem 2.0.5
Cython 0.29.21
daft 0.0.4
dask 2.12.0
dataclasses 0.8
datascience 0.10.6
debugpy 1.0.0
decorator 4.4.2
defusedxml 0.6.0
descartes 1.1.0
dill 0.3.3
distributed 1.25.3
Django 3.1.5
dlib 19.18.0
dm-tree 0.1.5
docopt 0.6.2
docutils 0.16
dopamine-rl 1.0.5
earthengine-api 0.1.238
easydict 1.9
ecos 2.0.7.post1
editdistance 0.5.3
en-core-web-sm 2.2.5
entrypoints 0.3
ephem 3.7.7.1
et-xmlfile 1.0.1
fa2 0.3.5
fancyimpute 0.4.3
fastai 1.0.61
fastdtw 0.3.4
fastprogress 1.0.0
fastrlock 0.5
fbprophet 0.7.1
feather-format 0.4.1
filelock 3.0.12
firebase-admin 4.4.0
fix-yahoo-finance 0.0.22
Flask 1.1.2
flatbuffers 1.12
folium 0.8.3
fsspec 0.8.5
future 0.18.2
gast 0.3.3
GDAL 2.2.2
gdown 3.6.4
gensim 3.6.0
geographiclib 1.50
geopy 1.17.0
gin-config 0.4.0
glob2 0.7
google 2.0.3
google-api-core 1.16.0
google-api-python-client 1.7.12
google-auth 1.17.2
google-auth-httplib2 0.0.4
google-auth-oauthlib 0.4.2
google-cloud-bigquery 1.21.0
google-cloud-bigquery-storage 1.1.0
google-cloud-core 1.0.3
google-cloud-datastore 1.8.0
google-cloud-firestore 1.7.0
google-cloud-language 1.2.0
google-cloud-storage 1.18.1
google-cloud-translate 1.5.0
google-colab 1.0.0
google-pasta 0.2.0
google-resumable-media 0.4.1
googleapis-common-protos 1.52.0
googledrivedownloader 0.4
graphviz 0.10.1
grpcio 1.32.0
gspread 3.0.1
gspread-dataframe 3.0.8
gym 0.17.3
h5py 2.10.0
HeapDict 1.0.1
holidays 0.10.4
holoviews 1.13.5
html5lib 1.0.1
httpimport 0.5.18
httplib2 0.17.4
httplib2shim 0.0.3
humanize 0.5.1
hyperopt 0.1.2
ideep4py 2.0.0.post3
idna 2.10
idna-ssl 1.1.0
image 1.5.33
imageio 2.4.1
imagesize 1.2.0
imbalanced-learn 0.4.3
imblearn 0.0
imgaug 0.2.9
importlib-metadata 3.4.0
importlib-resources 5.1.0
imutils 0.5.4
inflect 2.1.0
iniconfig 1.1.1
intel-openmp 2021.1.2
intervaltree 2.1.0
ipykernel 4.10.1
ipython 5.5.0
ipython-genutils 0.2.0
ipython-sql 0.3.9
ipywidgets 7.6.3
itsdangerous 1.1.0
jax 0.2.7
jaxlib 0.1.57+cuda101
jdcal 1.4.1
jedi 0.18.0
jieba 0.42.1
Jinja2 2.11.2
joblib 1.0.0
jpeg4py 0.1.4
jsonschema 2.6.0
jupyter 1.0.0
jupyter-client 5.3.5
jupyter-console 5.2.0
jupyter-core 4.7.0
jupyterlab-pygments 0.1.2
jupyterlab-widgets 1.0.0
kaggle 1.5.10
kapre 0.1.3.1
Keras 2.4.3
Keras-Preprocessing 1.1.2
keras-vis 0.4.1
kiwisolver 1.3.1
knnimpute 0.1.0
korean-lunar-calendar 0.2.1
librosa 0.8.0
lightgbm 2.2.3
llvmlite 0.34.0
lmdb 0.99
lucid 0.3.8
LunarCalendar 0.0.9
lxml 4.2.6
Markdown 3.3.3
MarkupSafe 1.1.1
matplotlib 3.2.2
matplotlib-venn 0.11.6
missingno 0.4.2
mistune 0.8.4
mizani 0.6.0
mkl 2019.0
mlxtend 0.14.0
more-itertools 8.6.0
moviepy 0.2.3.5
mpmath 1.1.0
msgpack 1.0.2
multidict 5.1.0
multiprocess 0.70.11.1
multitasking 0.0.9
murmurhash 1.0.5
music21 5.5.0
natsort 5.5.0
nbclient 0.5.1
nbconvert 5.6.1
nbformat 5.1.2
nest-asyncio 1.4.3
networkx 2.5
nibabel 3.0.2
nltk 3.2.5
notebook 5.3.1
np-utils 0.5.12.1
numba 0.51.2
numexpr 2.7.2
numpy 1.19.5
nvidia-ml-py3 7.352.0
oauth2client 4.1.3
oauthlib 3.1.0
okgrade 0.4.3
opencv-contrib-python 4.1.2.30
opencv-python 4.1.2.30
openpyxl 2.5.9
opt-einsum 3.3.0
osqp 0.6.2.post0
packaging 20.8
palettable 3.3.0
pandas 1.1.5
pandas-datareader 0.9.0
pandas-gbq 0.13.3
pandas-profiling 1.4.1
pandocfilters 1.4.3
panel 0.9.7
param 1.10.1
parso 0.8.1
pathlib 1.0.1
patsy 0.5.1
pexpect 4.8.0
pickleshare 0.7.5
Pillow 7.0.0
pip 19.3.1
pip-tools 4.5.1
plac 1.1.3
plotly 4.4.1
plotnine 0.6.0
pluggy 0.7.1
pooch 1.3.0
portpicker 1.3.1
prefetch-generator 1.0.1
preshed 3.0.5
prettytable 2.0.0
progressbar2 3.38.0
prometheus-client 0.9.0
promise 2.3
prompt-toolkit 1.0.18
protobuf 3.12.4
psutil 5.4.8
psycopg2 2.7.6.1
ptyprocess 0.7.0
py 1.10.0
pyarrow 0.14.1
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycocotools 2.0.2
pycparser 2.20
pyct 0.4.8
pydata-google-auth 1.1.0
pydot 1.3.0
pydot-ng 2.0.0
pydotplus 2.0.2
PyDrive 1.3.1
pyemd 0.5.1
pyglet 1.5.0
Pygments 2.6.1
pygobject 3.26.1
pymc3 3.7
PyMeeus 0.3.7
pymongo 3.11.2
pymystem3 0.2.0
pynndescent 0.5.1
PyOpenGL 3.1.5
pyparsing 2.4.7
pyrsistent 0.17.3
pysndfile 1.3.8
PySocks 1.7.1
pystan 2.19.1.1
pytest 3.6.4
python-apt 1.6.5+ubuntu0.5
python-chess 0.23.11
python-dateutil 2.8.1
python-louvain 0.15
python-slugify 4.0.1
python-utils 2.5.3
pytorch-lightning 1.1.6
pytz 2018.9
pyviz-comms 2.0.1
PyWavelets 1.1.1
PyYAML 5.3.1
pyzmq 21.0.1
qdldl 0.1.5.post0
qtconsole 5.0.2
QtPy 1.9.0
regex 2019.12.20
requests 2.23.0
requests-oauthlib 1.3.0
resampy 0.2.2
retrying 1.3.3
rpy2 3.2.7
rsa 4.7
sacremoses 0.0.43
scikit-image 0.16.2
scikit-learn 0.22.2.post1
scipy 1.4.1
screen-resolution-extra 0.0.0
scs 2.1.2
seaborn 0.11.1
Send2Trash 1.5.0
sentencepiece 0.1.95
setuptools 51.3.3
setuptools-git 1.2
Shapely 1.7.1
simplegeneric 0.8.1
six 1.15.0
sklearn 0.0
sklearn-pandas 1.8.0
smart-open 4.1.2
snowballstemmer 2.1.0
sortedcontainers 2.3.0
SoundFile 0.10.3.post1
spacy 2.2.4
Sphinx 1.8.5
sphinxcontrib-serializinghtml 1.1.4
sphinxcontrib-websupport 1.2.4
SQLAlchemy 1.3.22
sqlparse 0.4.1
srsly 1.0.5
statsmodels 0.10.2
sympy 1.1.1
tables 3.4.4
tabulate 0.8.7
tblib 1.7.0
tensorboard 2.4.1
tensorboard-plugin-wit 1.8.0
tensorboardcolab 0.0.22
tensorflow 2.4.1
tensorflow-addons 0.8.3
tensorflow-datasets 4.0.1
tensorflow-estimator 2.4.0
tensorflow-gcs-config 2.4.0
tensorflow-hub 0.11.0
tensorflow-metadata 0.27.0
tensorflow-privacy 0.2.2
tensorflow-probability 0.12.1
termcolor 1.1.0
terminado 0.9.2
testpath 0.4.4
text-unidecode 1.3
textblob 0.15.3
textgenrnn 1.4.1
Theano 1.0.5
thinc 7.4.0
tifffile 2020.9.3
tokenizers 0.8.1rc2
toml 0.10.2
toolz 0.11.1
torch 1.7.0+cu101
torchsummary 1.5.1
torchtext 0.3.1
torchvision 0.8.1+cu101
tornado 5.1.1
tqdm 4.41.1
traitlets 4.3.3
transformers 3.3.0
tweepy 3.6.0
typeguard 2.7.1
typing-extensions 3.7.4.3
tzlocal 1.5.1
umap-learn 0.5.0
uritemplate 3.0.1
urllib3 1.24.3
vega-datasets 0.9.0
wasabi 0.8.1
wcwidth 0.2.5
webencodings 0.5.1
Werkzeug 1.0.1
wheel 0.36.2
widgetsnbextension 3.5.1
wordcloud 1.5.0
wrapt 1.12.1
xarray 0.15.1
xgboost 0.90
xkit 0.0.0
xlrd 1.1.0
xlwt 1.3.0
yarl 1.6.3
yellowbrick 0.9.1
zict 2.0.0
zipp 3.4.0
```<|||||>Thank you for sharing! In previous packages, we needed `sentencepiece` so it was installed automatically. It's not anymore, which I think was the issue here. Will look into it further.
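A quick way to check whether a missing `sentencepiece` is the culprit on your side (just a sketch of a diagnostic — the exact failure mode depends on the installed versions):
```python
import importlib.util

# The slow T5 tokenizer needs the sentencepiece backend.
print("sentencepiece available:", importlib.util.find_spec("sentencepiece") is not None)

from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
assert tokenizer is not None, "tokenizer did not load - install sentencepiece and restart the runtime"
```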
Yes, I have already installed **SentencePiece**.<|||||>It eventually worked for me with the following re-installation:
```
!pip install transformers==2.9.0
!pip install pytorch_lightning==0.7.5
```
Maybe the error was due to the specific version.<|||||>You need sentencepiece: **_!pip install sentencepiece_**
However, if you are using a Colab notebook, you have to **_restart the runtime for it to work_** after installing sentencepiece.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,861 | closed | Rag modification | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-27-2021 21:29:24 | 01-27-2021 21:29:24 | Erm.... misclicked |
transformers | 9,860 | closed | Padding tokens affect MobileBert output | ## Environment info
- `transformers` version: 4.2.2
- Platform: Windows-10-10.0.17134-SP0
- Python version: 3.6.8
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
## Information
Model I am using (Bert, XLNet ...): MobileBert
Adding padding tokens to the end of a sequence affects MobileBert output even when masked. I've tried this on a few other models (`bert-base-uncased`, `roberta-base`, `xlm-roberta-base`) and was only able to replicate this with `google/mobilebert-uncased`.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_string = 'google/mobilebert-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_string)
model = AutoModelForSequenceClassification.from_pretrained(model_string)
example_text = 'Hello, world!'
input_with_pad = tokenizer.encode_plus(
    example_text,
    padding='max_length',
    max_length=32,
    return_tensors='pt'
)
print(input_with_pad)
# {'input_ids': tensor([[ 101, 7592, 1010, 2088, 999, 102, 0, 0, 0, 0, 0, 0,
# 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
# 0, 0, 0, 0, 0, 0, 0, 0]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
# 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
# 0, 0, 0, 0, 0, 0, 0, 0]])}
input_without_pad = tokenizer.encode_plus(
    example_text,
    padding='longest',
    max_length=32,
    return_tensors='pt'
)
print(input_without_pad)
# {'input_ids': tensor([[ 101, 7592, 1010, 2088, 999, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1]])}
with torch.no_grad():
    model.eval()

    out_with_pad = model(**input_with_pad)
    print(out_with_pad.logits)
    # tensor([[12693366., -5310913.]])

    out_without_pad = model(**input_without_pad)
    print(out_without_pad.logits)
    # tensor([[12741167., -5327575.]])
```
## Expected behavior
Padding tokens should not affect the output of the model as long as they are masked. As far as I can tell, this only occurs with mobilebert. | 01-27-2021 21:26:55 | 01-27-2021 21:26:55 | Following up on this, is there anyone in particular that I should tag to take a look at this issue?<|||||>Hi @johnmccain, I'll take a look in the coming days.<|||||>Hi! I ran your example and added an additional relative difference computation:
```py
r_tol = torch.max(torch.abs(out_with_pad - out_without_pad) / torch.abs(out_without_pad))
```
What I gather from this is that the difference between the two outputs varies from 0.3% to 0.4%.
Some things to note:
- The attention mask is useful to hide tokens, but isn't perfect: the attention mask essentially adds a very large negative value to the attentions of the tokens we don't want to attend to (-10000), but that is not (-inf) either, so it doesn't erase them from existence. Even if padding is correctly done with an attention mask, some differences of ~1e-4 or ~1e-5 can still happen.
- Unfortunately, there's not much we can do, given that this is the way the original model was trained, using an adder (-10000). We have to keep as close as possible to the original implementation.
- While keeping in mind that these differences are usually very, very small and shouldn't have an impact on your model, the way to get closer to the expected behavior is to have as few padding tokens as possible.
Now, MobileBERT is peculiar in that it has extremely high outputs compared to other models, but from what I'm seeing it's still within the 0.3%-0.4%. It is slightly higher than for other models, such as BERT which are in the ~0.0001% range. I didn't dive in enough to see exactly why this is so, but my guess is that all the tweaks to make it smaller (bottlenecks) might be responsible, as well as the very high outputs.
If you randomly initialize a MobileBERT and run through the same tests:
```py
config = AutoConfig.from_pretrained(model_string)
model = AutoModelForSequenceClassification.from_config(config)
```
You'll get results that are comparable to BERT:
```py
print(out_with_pad_logits, out_without_pad_logits)
print(out_without_pad_logits - out_with_pad_logits)
print(r_tol)
```
yields
```
tensor([[ 0.0255, -0.0048]]) tensor([[ 0.0255, -0.0048]])
tensor([[-2.9769e-05, 8.6003e-06]])
tensor(0.0018)
```
on my side (random each run as randomly initialized weights)
Investigated with @jplu <|||||>Thanks for looking into this!
That makes sense that this phenomenon would only be immediately visible with MobileBert with its extremely large logits. I will say that this can affect downstream tasks in my experience--for a MobileBert model finetuned on a binary classification task, switching from `padding='max_length'` to `padding='longest'` changed a handful of logits on my test set enough to affect the predicted class. (~1 in 500-1000 examples were altered enough to flip from 0 to 1 or vice versa). I haven't experienced that same sort of impact when using other Huggingface models like RoBERTa or Bert.
I wonder if the effect of padding tokens is diminished when using an activation in the classifier head to avoid the extremely large logits as suggested in #8938. I will comment back with what I find.<|||||>Hey guys, I opened a similar issue a while ago https://github.com/huggingface/transformers/issues/7070. It was automatically closed due to inactivity, but we are still struggling with the issue every day and don't use batching when predicting.
When computing the relative difference, for the example input shown in the issue mentioned above (I compared `emb2` and `emb4` from my issue), I got a quite disturbing result:
- mean error: 4%
- median error: 0.9%
- max error: 200% (5033.1235 VS 1674.1803)
I might have done some mistake, but I just used the formula written above:
> ```python
> r_tol = torch.max(torch.abs(out_with_pad - out_without_pad) / torch.abs(out_without_pad))
> ```<|||||>Hi @swecooo! Thanks for letting us know. There might be a deeper issue than what I've seen then, I'll take a deeper look as soon as I have time.
Could you specify how you computed these, for example with a code snippet so that I can investigate? Thanks!<|||||>Hey @LysandreJik, thank you very much for looking into this issue of ours. :slightly_smiling_face: This is the snippet that I used for computing the mean, median and max errors. I believe it should be identical to your formula mentioned above.
```python
import torch
from transformers import AutoModel, AutoTokenizer
model = 'google/mobilebert-uncased'
tokenizer = AutoTokenizer.from_pretrained(model)
model = AutoModel.from_pretrained(model)
text = 'Hey, how are you?'
i1 = tokenizer.batch_encode_plus([text], padding=True, return_tensors='pt')
emb1 = model(**i1).pooler_output[0] # Only one in batch (not padded): [-2.5088e+07, 7.3279e+04, 1.7544e+05, ...]
i2 = tokenizer.batch_encode_plus([text, text + ' hey'], padding=True, return_tensors='pt')
emb2 = model(**i2).pooler_output[0] # Not longest (padded): [-2.4871e+07, 8.1873e+04, 1.6693e+05, ...]
diff = torch.abs(emb2 - emb1) / torch.abs(emb1)
print("Mean", torch.mean(diff)) # 0.0432
print("Median", torch.median(diff)) # 0.0090
print("Max", torch.max(diff)) # 2.0063
top_10 = torch.argsort(diff, descending=True)[:10]
print(diff[top_10], emb1[top_10], emb2[top_10], sep="\n") # [2.0063, 1.5129, 1.1135, 0.6702, ...]
# [1674.1803, 7940.5342, 2012.5089, -13467.0508, ...]
# [5033.1235, 19954.0098, 4253.4575, -4441.5933 ...]
```
From the outputs, it seems that, naturally, smaller output values have a larger error. Please do let me know if you can (or cannot) reproduce the issue in the same magnitude as it happens for me, or if I can provide any more details.
I used `torch==1.7.1` and `transformers==4.3.2` on Python 3.7.<|||||>Cool, thanks for providing this snippet! I'll need to take a few hours to deep dive into it and see what's happening, so you expect an answer by the end of the next week if that's alright.<|||||>Will also look into your previous issue https://github.com/huggingface/transformers/issues/7070 (Sorry that it felt through the cracks!)<|||||>> you expect an answer by the end of the next week if that's alright
Sure, thanks a lot for looking into this. About #7070, I believe it's basically the same issue as here. :slightly_smiling_face:<|||||>Hello! I've taken a look, and you are both right: padding tokens affect MobileBERT's output values. One thing that MobileBERT does differently to other models, is that it uses an embedding size of `128` which is different to the `hidden_size`.
Before adding the word embeddings to the position embeddings and token type embeddings, these word embeddings are first passed through a 1D convolution with kernel size 3, effectively casting a tensor of size `(batch_size, sequence_length, 128)` to a tensor of size `(batch_size, sequence_length, 384)`.
This happens here: https://github.com/huggingface/transformers/blob/a85eb616f73c3e7eedb22146972ea41921164671/src/transformers/models/mobilebert/modeling_mobilebert.py#L199-L214
Then, this value is passed through a linear layer of output size `512`, resulting in a final value of size `(batch_size, sequence_length, 512)`.
This happens here: https://github.com/huggingface/transformers/blob/a85eb616f73c3e7eedb22146972ea41921164671/src/transformers/models/mobilebert/modeling_mobilebert.py#L215-L216
Due to these two transformations, if we have a single padding token, it now has an impact on the token that is right before it. One can easily test is with the following code:
```py
from transformers import MobileBertModel, MobileBertTokenizer
import torch
# Instantiate model and tokenizer
model = MobileBertModel.from_pretrained("google/mobilebert-uncased")
tokenizer = MobileBertTokenizer.from_pretrained("google/mobilebert-uncased")
# Create an array of just "1"
input_embeds = torch.ones([1, 10, 128])
# Fill the last token's embeddings with a very high value
input_embeds[:, -1, :] = 100000
resulting_embeddings = model.embeddings(inputs_embeds=input_embeds)
# Resulting embeddings of shape [1, 10, 512]
maximum_values_per_token_embedding = resulting_embeddings.squeeze().max(dim=1).values.round().tolist()
# [16.0, 17.0, 17.0, 17.0, 17.0, 17.0, 17.0, 17.0, 210926.0, 1566259.0]
```
As we can see, the last two tokens are affected by the very high value of the last token. This is due to the 1D convolution. Unfortunately, the attention mask can't really do anything about that now, as it's only aware of the last value, and only ignoring that one.
---
Steps from here: I'm contacting the author to see if we have an error in our implementation w.r.t padding tokens. In the meantime I'll think about how we can handle it from there.
Thank you for opening this issue, this is quite an error in the expected behavior vs actual behavior!<|||||>Hi all! There seems to have been an error with the weights conversion, as this issue stems from the padding token (0) embeddings seem to have values, where they should not.
Could you please confirm that adding the following line right after model instantiation solves your issues:
If the model is a `MobileBertModel` (for example with `AutoModel`)
```py
model.embeddings.word_embeddings.weight[0, :] = 0
```
@swecooo after adding the line mentioned above, running your code results in:
```out
Mean tensor(8.5514e-06, grad_fn=<MeanBackward0>)
Median tensor(6.2345e-07, grad_fn=<MedianBackward0>)
Max tensor(0.0003, grad_fn=<MaxBackward1>)
```
If the model is a mobilebert with a head, for example sequence classification:
```py
model.mobilebert.embeddings.word_embeddings.weight[0, :] = 0
```
@johnmccain after adding the line mentioned above, running your code results in:
```out
tensor([[-2765138.0000, 1868917.2500]])
tensor([[-2765139.7500, 1868916.5000]])
```
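One note for anyone applying this: depending on the PyTorch version, assigning into a `Parameter` in-place like this can trip autograd's in-place check on leaf variables, so it may be safer to wrap it in `torch.no_grad()` — a sketch for the `AutoModel` case:
```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("google/mobilebert-uncased")

# Zero the padding-token (id 0) embedding row without tracking gradients.
with torch.no_grad():
    model.embeddings.word_embeddings.weight[0, :] = 0
```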
If you confirm this solves your issues, I will update the checkpoints on the hub.<|||||>Hey @LysandreJik, I can confirm that setting the pad token embeddings to zero solves the issue with my code.
I went ahead and trained up the model on a classification task to check the real-world impact of zeroing the pad token embeddings, and I am no longer seeing discrepancies in classification output when using max_length vs longest padding ๐
Thank you!<|||||>This is great news!<|||||>Hi @LysandreJik, I can also confirm that zeroing the embedding solves the issue for me. Thanks a lot!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @LysandreJik, I just want to ping about the model checkpoint update, because it seems that the issue is still present in the model. I use the workaround for now, but if you found some time, it would be great to close this! :)<|||||>Thanks for the ping @sewco, I have just updated the weights. This can be closed now, feel free to reopen if you still feel something is missing. |
transformers | 9,859 | closed | Head masking and test_head_masking not working properly for TFT5 models. | When removing `test_head_masking` flags during #9858, I found out `test_headmasking` was actually never run for `TFT5Model` and it seems there must be a bug, please see below:
```
_______________________________________________________________________________________________________ TFT5ModelTest.test_headmasking _______________________________________________________________________________________________________
self = <tests.test_modeling_tf_t5.TFT5ModelTest testMethod=test_headmasking>
def test_headmasking(self):
if not self.test_head_masking:
return
random.Random().seed(42)
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
random.Random().seed()
inputs_dict["output_attentions"] = True
config.output_hidden_states = True
configs_no_init = _config_zero_init(config) # To be sure we have no Nan
for model_class in self.all_model_classes:
model = model_class(config=configs_no_init)
# Prepare head_mask
def prepare_layer_head_mask(i, attention_heads, num_hidden_layers):
if i == 0:
return tf.concat(
(tf.zeros(1, dtype=tf.float32), tf.ones(attention_heads - 1, dtype=tf.float32)), 0
)
elif i == num_hidden_layers - 1:
return tf.concat(
(tf.zeros(attention_heads - 1, dtype=tf.float32), tf.ones(1, dtype=tf.float32)), 0
)
else:
return tf.ones(attention_heads, dtype=tf.float32)
head_mask = tf.stack(
[
prepare_layer_head_mask(i, config.num_attention_heads, config.num_hidden_layers)
for i in range(config.num_hidden_layers)
],
0,
)
inputs = self._prepare_for_class(inputs_dict, model_class).copy()
inputs["head_mask"] = head_mask
if model.config.is_encoder_decoder:
signature = inspect.signature(model.call)
arg_names = [*signature.parameters.keys()]
if "decoder_head_mask" in arg_names: # necessary diferentiation because of T5 model
inputs["decoder_head_mask"] = head_mask
> outputs = model(**inputs, return_dict=True)
test_modeling_tf_common.py:686:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../../../../miniconda3/envs/bart/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:1012: in __call__
outputs = call_fn(inputs, *args, **kwargs)
../src/transformers/models/t5/modeling_tf_t5.py:1160: in call
inputs["encoder_outputs"] = self.encoder(
../../../../../../miniconda3/envs/bart/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:1012: in __call__
outputs = call_fn(inputs, *args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <transformers.models.t5.modeling_tf_t5.TFT5MainLayer object at 0x7f8c38206a30>
input_ids = <tf.Tensor: shape=(13, 7), dtype=int32, numpy=
array([[63, 79, 60, 1, 57, 50, 42],
[27, 6, 27, 88, 79, 14, 3... [95, 95, 79, 95, 63, 32, 24],
[ 8, 9, 14, 46, 91, 75, 56],
[26, 78, 52, 95, 45, 33, 78]], dtype=int32)>
attention_mask = None, encoder_hidden_states = None, encoder_attention_mask = None, inputs_embeds = None
head_mask = <tf.Tensor: shape=(5, 4), dtype=float32, numpy=
array([[0., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[0., 0., 0., 1.]], dtype=float32)>, encoder_head_mask = None
past_key_values = None, use_cache = False, output_attentions = True, output_hidden_states = True, return_dict = True, training = False, kwargs = {}
inputs = {'attention_mask': <tf.Tensor: shape=(13, 7), dtype=float32, numpy=
array([[1., 1., 1., 1., 1., 1., 1.],
[1., 1..., 1.]], dtype=float32)>, 'encoder_attention_mask': None, 'encoder_head_mask': None, 'encoder_hidden_states': None, ...}
input_shape = [13, 7], batch_size = 13, seq_length = 7, mask_seq_length = 7
def call(
self,
input_ids=None,
attention_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
inputs_embeds=None,
head_mask=None,
encoder_head_mask=None,
past_key_values=None,
use_cache=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
training=False,
**kwargs,
) -> Tuple:
inputs = input_processing(
func=self.call,
config=self.config,
input_ids=input_ids,
attention_mask=attention_mask,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
inputs_embeds=inputs_embeds,
head_mask=head_mask,
encoder_head_mask=encoder_head_mask,
past_key_values=past_key_values,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
training=training,
kwargs_call=kwargs,
)
if inputs["input_ids"] is not None and inputs["inputs_embeds"] is not None:
err_msg_prefix = "decoder_" if self.is_decoder else ""
raise ValueError(
f"You cannot specify both {err_msg_prefix}inputs and {err_msg_prefix}inputs_embeds at the same time"
)
elif inputs["input_ids"] is not None:
input_shape = shape_list(inputs["input_ids"])
inputs["input_ids"] = tf.reshape(inputs["input_ids"], (-1, input_shape[-1]))
elif inputs["inputs_embeds"] is not None:
input_shape = shape_list(inputs["inputs_embeds"])[:-1]
else:
err_msg_prefix = "decoder_" if self.is_decoder else ""
raise ValueError(f"You have to specify either {err_msg_prefix}inputs or {err_msg_prefix}inputs_embeds")
if inputs["inputs_embeds"] is None:
assert self.embed_tokens is not None, "You have to intialize the model with valid token embeddings"
inputs["inputs_embeds"] = self.embed_tokens(inputs["input_ids"])
batch_size, seq_length = input_shape
# required mask seq length can be calculated via length of past
mask_seq_length = (
shape_list(inputs["past_key_values"][0][0])[2] + seq_length
if inputs["past_key_values"] is not None
else seq_length
)
if inputs["attention_mask"] is None:
inputs["attention_mask"] = tf.fill((batch_size, mask_seq_length), 1)
if (
self.is_decoder
and inputs["encoder_attention_mask"] is None
and inputs["encoder_hidden_states"] is not None
):
encoder_seq_length = shape_list(inputs["encoder_hidden_states"])[1]
inputs["encoder_attention_mask"] = tf.fill((batch_size, encoder_seq_length), 1)
# initialize past_key_values with `None` if past does not exist
if inputs["past_key_values"] is None:
inputs["past_key_values"] = [None] * len(self.block)
# We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
# ourselves in which case we just need to make it broadcastable to all heads.
inputs["attention_mask"] = tf.cast(inputs["attention_mask"], dtype=tf.float32)
num_dims_attention_mask = len(shape_list(inputs["attention_mask"]))
if num_dims_attention_mask == 3:
extended_attention_mask = inputs["attention_mask"][:, None, :, :]
elif num_dims_attention_mask == 2:
# Provided a padding mask of dimensions [batch_size, mask_seq_length]
# - if the model is a decoder, apply a causal mask in addition to the padding mask
# - if the model is an encoder, make the mask broadcastable to [batch_size, num_heads, mask_seq_length, mask_seq_length]
if self.is_decoder:
seq_ids = tf.range(mask_seq_length)
causal_mask = tf.less_equal(
tf.tile(seq_ids[None, None, :], (batch_size, mask_seq_length, 1)),
seq_ids[None, :, None],
)
causal_mask = tf.cast(causal_mask, dtype=tf.float32)
extended_attention_mask = causal_mask[:, None, :, :] * inputs["attention_mask"][:, None, None, :]
if inputs["past_key_values"][0] is not None:
extended_attention_mask = extended_attention_mask[:, :, -seq_length:, :]
else:
extended_attention_mask = inputs["attention_mask"][:, None, None, :]
# Since attention_mask is 1.0 for positions we want to attend and 0.0 for
# masked positions, this operation will create a tensor which is 0.0 for
# positions we want to attend and -1e9 for masked positions.
# Since we are adding it to the raw scores before the softmax, this is
# effectively the same as removing these entirely.
# T5 has a mask that can compare sequence ids, we can simulate this here with this transposition
# Cf. https://github.com/tensorflow/mesh/blob/8d2465e9bc93129b913b5ccc6a59aa97abd96ec6/mesh_tensorflow/transformer/transformer_layers.py#L270
# extended_attention_mask = tf.math.equal(extended_attention_mask,
# tf.transpose(extended_attention_mask, perm=(-1, -2)))
extended_attention_mask = (1.0 - extended_attention_mask) * -1e9
if self.is_decoder and inputs["encoder_attention_mask"] is not None:
# If a 2D ou 3D attention mask is provided for the cross-attention
# we need to make broadcastable to [batch_size, num_heads, mask_seq_length, mask_seq_length]
# we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
inputs["encoder_attention_mask"] = tf.cast(inputs["encoder_attention_mask"], dtype=tf.float32)
num_dims_encoder_attention_mask = len(shape_list(inputs["encoder_attention_mask"]))
if num_dims_encoder_attention_mask == 3:
encoder_extended_attention_mask = inputs["encoder_attention_mask"][:, None, :, :]
if num_dims_encoder_attention_mask == 2:
encoder_extended_attention_mask = inputs["encoder_attention_mask"][:, None, None, :]
# T5 has a mask that can compare sequence ids, we can simulate this here with this transposition
# Cf. https://github.com/tensorflow/mesh/blob/8d2465e9bc93129b913b5ccc6a59aa97abd96ec6/mesh_tensorflow/transformer/transformer_layers.py#L270
# encoder_extended_attention_mask = tf.math.equal(encoder_extended_attention_mask,
# tf.transpose(encoder_extended_attention_mask, perm=(-1, -2)))
encoder_extended_attention_mask = (1.0 - encoder_extended_attention_mask) * -1e9
else:
encoder_extended_attention_mask = None
> assert inputs["head_mask"] is None, "Head mask not supported"
E AssertionError: Head mask not supported
../src/transformers/models/t5/modeling_tf_t5.py:714: AssertionError
============================================================================================================== warnings summary ==============================================================================================================
../../../../../../miniconda3/envs/bart/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:22
/Users/daniel.stancl/miniconda3/envs/bart/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
tests/test_modeling_tf_t5.py: 44 warnings
/var/folders/vs/4jsdk4nx1ds2m48ltfk3nmdc0000gn/T/tmpc35hmpmg.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
/Users/daniel.stancl/Documents/PhD/Projects/test_transformers/transformers/src/transformers/modeling_tf_utils.py:293: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
tf_logger.warn(
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
/Users/daniel.stancl/Documents/PhD/Projects/test_transformers/transformers/src/transformers/modeling_tf_utils.py:302: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
tf_logger.warn("The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.")
-- Docs: https://docs.pytest.org/en/stable/warnings.html
========================================================================================================== short test summary info ===========================================================================================================
FAILED test_modeling_tf_t5.py::TFT5ModelTest::test_headmasking - AssertionError: Head mask not supported
```
My contribution: I'm gonna try to take care of this tomorrow.
<hr>
Reviewer: @jplu | 01-27-2021 21:24:09 | 01-27-2021 21:24:09 | That's an assert we put in T5 because the head mask is not supported. Happy you take care of this!!! |
transformers | 9,858 | closed | Remove redundant `test_head_masking = True` flags in test files | This PR removes redundant `test_head_masking = True` flags from test files as this is set by default.
Reviewer: @LysandreJik | 01-27-2021 21:17:38 | 01-27-2021 21:17:38 | |
transformers | 9,857 | closed | Pin memory in Trainer by default | 01-27-2021 19:02:32 | 01-27-2021 19:02:32 | Could we please go through normal PR review approval cycles? Unless I missed something and there was one.
It looks like my comment on slack was missed where I suggested to use a more specific cl arg name.
I proposed one of:
- dataloader_pin_memory
- dl_pin_memory
But since we already have `dataloader_num_workers`:
```
num_workers=self.args.dataloader_num_workers,
pin_memory=self.args.pin_memory,
```
it should probably be `dataloader_pin_memory`
This is important since there are other ways to pin memory in pytorch.
------------------
This is a general comment - not specific to this PR:
We have this ongoing issue wrt cl arg naming, that we name something and later we realize it's not the best name and then we are concerned with changing the name not to break user's code, so let's think deeply about new cl args names before we add them. Thank you!<|||||>@stas00 It seems like I missed this message and when I opened this PR in the morning, I didn't see any comments and @sgugger had approved the PR. For a final check, I asked @LysandreJik who gave me the green light.
To avoid this in future, I would request if PR specific comments are made on the PR itself so that author & other reviewers can go through them and make sure that everything is resolved before merging.<|||||>Yes, absolutely. I guess it just fell through the cracks.
And let's have PR description, as simple as:
This PR adds `--pin_memory` to trainer DataLoader and it defaults to True.
|
|
transformers | 9,856 | closed | Add head_mask and decoder_head_mask to PyTorch LED | This PR implements `head_mask` and `decoder_head_mask` for PyTorch LED (and Longformer as there's a copy dependency) and it is the follow-up to the open issue #9814.
**Motivation:** This PR is a part of an endeavour to enable the usage of `head_mask` and `decoder_head_mask` for all encoder-decoder transformers following the recent work on BART-like models (#9569).
<hr>
Fixes: https://github.com/huggingface/transformers/issues/9814
Reviewers: @patrickvonplaten @LysandreJik @stas00 | 01-27-2021 17:47:54 | 01-27-2021 17:47:54 | |
transformers | 9,855 | closed | About max_length in generation_utils.py | In `generation_utils.py`, the docstring of the `beam_search` function shows the example of usage.
```
>>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long)
>>> input_ids = input_ids * model.config.decoder_start_token_id
>>> something else that I omit here
>>> outputs = model.beam_search(input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs)
```
The `beam_search` function uses `while cur_len < max_length` to control the length of generated sequence. But the `cur_len` counts the length including the start token which is a special token. When the user sets `max_length = 1`, does it not mean that the user wants the model to generate one token **while not considering the start token** (I am not sure that is it just me or others may think that way too)? But the `cur_len` will be 1 at the beginning because of the start token and the statement below in the source code.
```
batch_beam_size, cur_len = input_ids.shape
```
The control flow will jump out of the `while` loop and not generate any token.
Maybe `while cur_len < max_length` should be changed to `while cur_len <= max_length`. And maybe other functions should also change the corresponding loop control statement if I am right. | 01-27-2021 16:17:00 | 01-27-2021 16:17:00 | Hey @LinjianLi,
note that `max_length` states the maximum length of both generated tokens and input tokens (which is always at least 1). This means that we count the first special token also as an output token (it will be in the final output) and thus should also be included when computing `max_length`<|||||>> Hey @LinjianLi,
>
> note that `max_length` states the maximum length of both generated tokens and input tokens (which is always at least 1). This means that we count the first special token also as an output token (it will be in the final output) and thus should also be included when computing `max_length`
Thanks for your reply! |
transformers | 9,854 | closed | Deprecate model_path in Trainer.train | # What does this PR do?
This PR deprecates `Trainer.train(model_path=xxx)` to be replaced by `Trainer.train(resume_from_checkpoint=xxx)` which (I think) is clearer and better. No breaking change, just a deprecation warning for now. | 01-27-2021 15:56:11 | 01-27-2021 15:56:11 | |
transformers | 9,853 | closed | Fix computation of attention_probs when head_mask is provided. | Remove dead code path when computing `attention_probs` in case of `head_mask` is provided.
Masking was computed on `attention_scores` which is never used / returned afterwards. | 01-27-2021 15:09:29 | 01-27-2021 15:09:29 | Thanks a lot! |
transformers | 9,852 | closed | Adding a new `return_full_text` parameter to TextGenerationPipeline. | # What does this PR do?
For text-generation, it's sometimes used as prompting text.
In that context, prefixing `generated_text` with the actual input
forces the caller to take an extra step to remove it.
The proposed change adds a new parameter (for backward compatibility).
`return_full_text` that enables the caller to prevent adding the prefix.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 01-27-2021 14:40:39 | 01-27-2021 14:40:39 | Don't mind the failing test, you can merge when ready. |
transformers | 9,851 | closed | [GA forks] Test on every push | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-27-2021 14:11:32 | 01-27-2021 14:11:32 | |
transformers | 9,850 | closed | Some model use serve previous version can not do inference in web api. | the model serve in
https://huggingface.co/wptoux/albert-chinese-large-qa
can not do inference by click โcomputeโ button.
because it use transformers 3.0.2 but can not
properly load in current version.
I think model online server should consider its implement transformer version. | 01-27-2021 13:49:59 | 01-27-2021 13:49:59 | The error here seems to be because there's a dissociated tokenizer and model. The tokenizer should be BERT while the model should be ALBERT.
The configuration should reflect this by having a `"tokenizer_class": "BertTokenizer"`.
Pinging @wptoux
An example can be seen with PhoBERT having the model set as RoBERTa and the tokenizer as `PhobertTokenizer`: https://huggingface.co/vinai/phobert-base/blob/main/config.json<|||||>> The error here seems to be because there's a dissociated tokenizer and model. The tokenizer should be BERT while the model should be ALBERT.
>
> The configuration should reflect this by having a `"tokenizer_class": "BertTokenizer"`.
>
> Pinging @wptoux
>
> An example can be seen with PhoBERT having the model set as RoBERTa and the tokenizer as `PhobertTokenizer`: https://huggingface.co/vinai/phobert-base/blob/main/config.json
This library has improved a lot since I release this model, I will update it.<|||||>Glad to hear it @wptoux! Thank you!<|||||>I have fixed the problem, and the web api is working now.
Here is an test example
Context: ๆ็ฝ๏ผ701ๅนดโ762ๅนด12ๆ๏ผ ๏ผๅญๅคช็ฝ๏ผๅท้่ฒๅฑ
ๅฃซ๏ผๅๅทโ่ฐชไปไบบโ๏ผๅไปฃไผๅคง็ๆตชๆผซไธปไน่ฏไบบ๏ผ่ขซๅไบบ่ชไธบโ่ฏไปโ๏ผไธๆ็ซๅนถ็งฐไธบโๆๆโ๏ผไธบไบไธๅฆไธคไฝ่ฏไบบๆๅ้ไธๆ็งๅณโๅฐๆๆโๅบๅซ๏ผๆ็ซไธๆ็ฝๅๅ็งฐโๅคงๆๆโใๅไบฌๅคงๅญฆๆๆๆๅฟๆ่ฏไปท๏ผโๆ็ฝไน่ฏๅผๅธๅฎๅฎ๏ผๅบไน้๏ผๆ็ซไน่ฏๅพทๅๅคฉๅฐ๏ผๆบไบๅ๏ผ็่ณๅคฉไบบๅไธๅข็๏ผๆ
่ฝๅบ็ฅๅ
ฅๅใ
Question: ๅฆไฝ่ฏไปทๆ็ฝ็่ฏ
Answer: ๆ็ฝไน่ฏๅผๅธๅฎๅฎ๏ผๅบไน้<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,849 | closed | Labeled pull requests | 01-27-2021 13:45:35 | 01-27-2021 13:45:35 | ||
transformers | 9,848 | closed | Add XLA test | # What does this PR do?
In the same spirit than for the mixed precision test, this PR adds one for XLA compliancy. | 01-27-2021 13:32:47 | 01-27-2021 13:32:47 | Out of curiosity, how long are those tests for the models that have them?<|||||>few milliseconds, XLA is really fast :) |
transformers | 9,847 | closed | TFBart lables consider both pad token and -100 | For #9770,
1. ```TFBartModels``` use -100 as a masking token for ```decoder_input_ids``` and ```compute_loss``` like other models(```T5```).
2. For legacy, all the ```padding token``` in ```labels``` are replace by ```-100`` token.
Below examples show the same result for ```labels``` with ```-100 token``` or ```padding token```, where previously ```Nan``` are shown in the latter case(#9770).
```
import tensorflow as tf
from transformers import BartTokenizer, TFBartForConditionalGeneration
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = TFBartForConditionalGeneration.from_pretrained("facebook/bart-base")
inputs = tokenizer("My dog is <mask>", return_tensors='tf', truncation=True, max_length=16, padding="max_length")
labels_ids = tokenizer("My dog is cute", return_tensors='tf', truncation=True, max_length=16, padding="max_length").input_ids
## labels padding_token = 1
loss = model(inputs, labels=labels_ids)[0]
print(labels_ids)
print(loss)
## labels padding_token = -100
labels_ids = tf.where(
labels_ids == 1, tf.fill(tf.shape(labels_ids), tf.constant(-100, dtype='int32')), labels_ids
)
loss = model(inputs, labels=labels_ids)[0]
print(labels_ids)
print(loss)
```
```
tf.Tensor(
[[ 0 2387 2335 16 11962 2 1 1 1 1 1 1
1 1 1 1]], shape=(1, 16), dtype=int32)
tf.Tensor(
[2.2291888e-05 4.8874615e-05 3.7192607e-05 7.9230859e-04 6.1941862e+00
1.1058818e+00], shape=(6,), dtype=float32)
tf.Tensor(
[[ 0 2387 2335 16 11962 2 -100 -100 -100 -100 -100 -100
-100 -100 -100 -100]], shape=(1, 16), dtype=int32)
tf.Tensor(
[2.2291888e-05 4.8874615e-05 3.7192607e-05 7.9230859e-04 6.1941862e+00
1.1058818e+00], shape=(6,), dtype=float32)
```
TFBart gives the same result with both -100 and padding token.
However, ```Bart(pytorch) with -100 token in labels```, ```Bart with padding token in labels``` and ```TFBart(tensorflow with -100 or padding token)``` gives three different results. This is noticed but not treated in this PR.
@patrickvonplaten
@jplu
@patil-suraj
| 01-27-2021 13:27:37 | 01-27-2021 13:27:37 | @patrickvonplaten
I merged upstream to the branch!<|||||>You have an error in code quality, could you run `make style` and `make quality` to check it out? Thanks. |
transformers | 9,846 | closed | Adding new parameter to `generate`: `max_time`. | Generation by tokens number is sometimes a bit clunky because we don't
know how many tokens are good enough or even how many tokens are in
the payload (for pipelines users for instance). This leads to hard
to understand behavior.
This PR proposes a new argument `max_time` which is a float of seconds
for the allowed time for `generate` to run on.
Ideally combinations of `max_tokens=None`, `max_time=2` could be used to
generate as many tokens as possible within time budget.
NB: Another possible approach consists of passing a callback to `generate`
putting the caller in charge of the actual decision of when to stop
generating tokens. It opens the door to 'which args should we pass'
to this callback. It's hard to imagine other use-cases for this
early stopping behavior than time (that are not already covered by
parameters of generate)
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @LysandreJik
@jplu
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 01-27-2021 13:20:12 | 01-27-2021 13:20:12 | Continuing a bit the discussion we had offline for others to chime in.
After quite some discussion and thinking, I see the following problem:
- I don't want to clutter `generate()` with if-statements anymore as it's done a bit in this PR, but rather make use of tools like `LogitsProcessor`. Now @Narsil you gave me some very good arguments to why just adding a `LogitsProcessor` that forces to generate EOS is not good enough (Some models don't have the EOS token & we don't always want to have EOS at the end of the sentence). So I would propose the following solution that we should then also use to deprecate `max_length` from the "lower" generate methods like `greedy_search`, `sample`, ...
Analogs to `LogitsProcessor` and `LogitsProcessorList`, we create a new logit called `StoppingCriteria` and `StoppingCriteriaList` which would look as follows:
```python
class StoppingCriteriaList(list):
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> torch.FloatTensor:
stopping = False
for criteria in self:
stopping = stopping or criteria(input_ids, scores)
return stopping
```
and a `MaxTimeStopping` class as follows:
```python
class MaxTimeStopping(StoppingCriteria):
def __init__(self, max_time):
self.start_time = time.time()
self.max_time = max_time
def __call__(self, *args):
if time.time() - self.start_time > self.max_time:
return True
return False
```
The same way we can create a `MaxLengthStopping` class as follows:
```python
class MaxLengthStopping(StoppingCriteria):
def __init__(self, max_length):
self.start_time = time.time()
self.max_length = max_length
def __call__(self, input_ids, *args):
if input_ids.shape[-1] > self.max_length:
return True
return False
```
Then we can add create a `stopping_criteria` list object in generate along side creating the `logits_processor` list object and pass it to the submodules. In each submodule we would then do something like
```
if stopping_criteria(input_ids, scores):
break;
```
I would then also deprecate the `max_length` as an input parameter to `greedy_search` etc and add a `stopping_criteria` list object instead.
This new approach would open the way for more fancy stopping criteria. E.g. at the moment `max_length` defines the number of total tokens (passed tokens + generated tokens) instead of just the generated tokens which is very hard to change in terms of backwards compatibility. Lots of people have complained about that. With this approach, one could easily make a new `MaxGeneratedTokenStopping` class that would then take over.
Another positive effect of this function is that we can easily test & optimize those classes as we've already seen it for the `LogitsProcessor` classes.
This will require a rather big change, so I'd be very glad if @LysandreJik and @sgugger you can give your opinion here before proceeding.<|||||>Thanks for the thoughtful explanation, this makes a lot of sense. I'm very down to continue the modular approach we have with processors, the new `StoppingCriteria` you propose seems like the way to go. It's good that it keeps the extensibility of the generation methods while not complexifying the generate method itself.<|||||>Agreed with both of you, this `StoppingCriteria` class seems like a good idea!<|||||>@patrickvonplaten @LysandreJik
Do you mind a second review ?
I think this PR is actually ready.
The TF code (which was my main concern) doesn't seem to use LogitsProcessor nor to be tested, so I figured leaving `max_time` is ok. I could also simply remove it to make sure I don't break things.<|||||>> Great! Thanks a lot for tackling this PR!
>
> I'm quite happy with the design :-)
>
> Can we:
>
> 1. Add some docstring for the classes and add those classes to the docs? Give it a new section in `docs/source/internal/generation_utils.rst`
Done, I also added the import statements within `src/transformers/__init__.py`. Is there any other place I should think of ?
>
> 2. Deprecate the `max_length` function input argument for all `greedy_search`, `beam_search` and update the docstring and tests using the new `StoppingCriteriaList` instead
This is something harder to do because of some other usages of `max_length`. (see other comment). I think it should belong in another PR, because this one is already a bit large. And it would require other kinds of care (regarding performance at least).
What do you think ?
>
> 3. Change the `class StoppingCriteria` to an abstract class so keep the design as close as possible to the one in `LogitsProcessor...`
Done. shouldn't they actually contain `@abstractmethod` ?
>
> 4. Delete the functionality for TF. If it would be ok for you, I'd like to just add this functionality for PyTorch for now since TF needs a big refactor before adding more features IMO
Ok.
<|||||>@Narsil, sorry for being so slow on this one! After thinking a bit more, I think you're right that `max_time` should not be part of the config. One last thing that we'll have to do IMO is to ensure backwards compatibility for the "sub"-generation methods. See comment above. Please let me know, if this doesn't make sense or if I misunderstood something<|||||>T5 Also passed, but had a OOM crash on my local machine.
```
================================================================================================================================== test session starts ===================================================================================================================================
platform linux -- Python 3.8.5, pytest-6.1.1, py-1.9.0, pluggy-0.13.1
rootdir: /home/nicolas/src/transformers
plugins: forked-1.3.0, xdist-2.1.0
collected 114 items
tests/test_modeling_bart.py ........................................ssss.......................^[[A..............................ssss............. [100%]
============================================================= warnings summary =============================================================
.venv/lib/python3.8/site-packages/tensorflow/python/autograph/utils/testing.py:21
/home/nicolas/src/transformers/.venv/lib/python3.8/site-packages/tensorflow/python/autograph/utils/testing.py:21: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
tests/test_modeling_bart.py::BartModelTest::test_torchscript
tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state
/home/nicolas/src/transformers/.venv/lib/python3.8/site-packages/torch/nn/functional.py:1897: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert padding_idx < weight.size(0), "Padding_idx must be within num_embeddings"
tests/test_modeling_bart.py::BartModelTest::test_torchscript
tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state
tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript
tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_hidden_state
/home/nicolas/src/transformers/src/transformers/models/bart/modeling_bart.py:213: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert attn_weights.size() == (
tests/test_modeling_bart.py::BartModelTest::test_torchscript
tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state
tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript
tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_hidden_state
/home/nicolas/src/transformers/src/transformers/models/bart/modeling_bart.py:220: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert attention_mask.size() == (
tests/test_modeling_bart.py::BartModelTest::test_torchscript
tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state
tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript
tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_hidden_state
/home/nicolas/src/transformers/src/transformers/models/bart/modeling_bart.py:252: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert attn_output.size() == (
tests/test_modeling_bart.py::BartModelTest::test_torchscript
tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state
tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript
tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_attentions
tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_hidden_state
/home/nicolas/src/transformers/src/transformers/models/bart/modeling_bart.py:856: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if input_shape[-1] > 1:
-- Docs: https://docs.pytest.org/en/stable/warnings.html
========================================= 106 passed, 8 skipped, 28 warnings in 562.15s (0:09:22) ==========================================
```<|||||>> T5 Also passed, but had a OOM crash on my local machine.
>
> ```
> ================================================================================================================================== test session starts ===================================================================================================================================
> platform linux -- Python 3.8.5, pytest-6.1.1, py-1.9.0, pluggy-0.13.1
> rootdir: /home/nicolas/src/transformers
> plugins: forked-1.3.0, xdist-2.1.0
> collected 114 items
>
> tests/test_modeling_bart.py ........................................ssss.......................^[[A..............................ssss............. [100%]
>
> ============================================================= warnings summary =============================================================
> .venv/lib/python3.8/site-packages/tensorflow/python/autograph/utils/testing.py:21
> /home/nicolas/src/transformers/.venv/lib/python3.8/site-packages/tensorflow/python/autograph/utils/testing.py:21: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
> import imp
>
> tests/test_modeling_bart.py::BartModelTest::test_torchscript
> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions
> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state
> /home/nicolas/src/transformers/.venv/lib/python3.8/site-packages/torch/nn/functional.py:1897: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
> assert padding_idx < weight.size(0), "Padding_idx must be within num_embeddings"
>
> tests/test_modeling_bart.py::BartModelTest::test_torchscript
> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions
> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state
> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript
> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_attentions
> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_hidden_state
> /home/nicolas/src/transformers/src/transformers/models/bart/modeling_bart.py:213: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
> assert attn_weights.size() == (
>
> tests/test_modeling_bart.py::BartModelTest::test_torchscript
> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions
> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state
> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript
> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_attentions
> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_hidden_state
> /home/nicolas/src/transformers/src/transformers/models/bart/modeling_bart.py:220: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
> assert attention_mask.size() == (
>
> tests/test_modeling_bart.py::BartModelTest::test_torchscript
> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions
> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state
> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript
> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_attentions
> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_hidden_state
> /home/nicolas/src/transformers/src/transformers/models/bart/modeling_bart.py:252: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
> assert attn_output.size() == (
>
> tests/test_modeling_bart.py::BartModelTest::test_torchscript
> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions
> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state
> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript
> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_attentions
> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_hidden_state
> /home/nicolas/src/transformers/src/transformers/models/bart/modeling_bart.py:856: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
> if input_shape[-1] > 1:
>
> -- Docs: https://docs.pytest.org/en/stable/warnings.html
> ========================================= 106 passed, 8 skipped, 28 warnings in 562.15s (0:09:22) ==========================================
> ```
Ok for me then! If T5 tests pass this is good enough. |
transformers | 9,845 | closed | [WIP/ don't merge] T5 gradient checkpointing | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-27-2021 13:10:41 | 01-27-2021 13:10:41 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,844 | closed | [examples/seq2seq] support label smoothing | # What does this PR do?
Add support for label smoothing by adding `prepare_decoder_input_ids_from_labels` method to all seq2seq models which will let us prepare `decoder_input_ids` outside the model.
For context, we need to pass `decoder_input_ids` for label smoothing because we don't pass `labels` to avoid calculating loss twice, which leads to speeds degradation, see #9713.
@sgugger , @patrickvonplaten what do we think about adding `prepare_decoder_input_ids_from_labels` to every seq2seq model, there are already `shift_tokens_right/_shift_right` methods, but the name is a bit confusing IMO to use outside the model. | 01-27-2021 13:08:15 | 01-27-2021 13:08:15 | > I don't know if the shift methods are used for something else in the seq2seq methods, but if this was their only use, we could maybe deprecate them?
those are used for exactly the same reason, `prepare decoder_input_ids` by shifting `labels`, and those are mostly used inside the models, so yeah, think we could deprecate them<|||||>I agree we could remove the `pad_token_id` argument. |
transformers | 9,843 | closed | SQUAD Question Answering example:: RuntimeError: Could not infer dtype of NoneType | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.1
- Platform: Linux
- Python version: 3.6.12
- PyTorch version (GPU?): 1.7.1
- Tensorflow version (GPU?): 2.4.0
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
- @sgugger, @patil-suraj
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library: @sgugger
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples: @patil-suraj
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [X] the official example scripts: (give details below)
```bash
mkdir squad
wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json -O squad/train-v2.0.json
wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json -O squad/dev-v2.0.json
```

```python
import json
from pathlib import Path

def read_squad(path):
    path = Path(path)
    with open(path, 'rb') as f:
        squad_dict = json.load(f)

    contexts = []
    questions = []
    answers = []
    for group in squad_dict['data']:
        for passage in group['paragraphs']:
            context = passage['context']
            for qa in passage['qas']:
                question = qa['question']
                for answer in qa['answers']:
                    contexts.append(context)
                    questions.append(question)
                    answers.append(answer)
    return contexts, questions, answers

train_contexts, train_questions, train_answers = read_squad('squad/train-v2.0.json')
val_contexts, val_questions, val_answers = read_squad('squad/dev-v2.0.json')

def add_end_idx(answers, contexts):
    for answer, context in zip(answers, contexts):
        gold_text = answer['text']
        start_idx = answer['answer_start']
        end_idx = start_idx + len(gold_text)

        # sometimes squad answers are off by a character or two - fix this
        if context[start_idx:end_idx] == gold_text:
            answer['answer_end'] = end_idx
        elif context[start_idx-1:end_idx-1] == gold_text:
            answer['answer_start'] = start_idx - 1
            answer['answer_end'] = end_idx - 1  # When the gold label is off by one character
        elif context[start_idx-2:end_idx-2] == gold_text:
            answer['answer_start'] = start_idx - 2
            answer['answer_end'] = end_idx - 2  # When the gold label is off by two characters

add_end_idx(train_answers, train_contexts)
add_end_idx(val_answers, val_contexts)

from transformers import DistilBertTokenizerFast
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')

train_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True)
val_encodings = tokenizer(val_contexts, val_questions, truncation=True, padding=True)

def add_token_positions(encodings, answers):
    start_positions = []
    end_positions = []
    for i in range(len(answers)):
        start_positions.append(encodings.char_to_token(i, answers[i]['answer_start']))
        end_positions.append(encodings.char_to_token(i, answers[i]['answer_end']))
        # if start position is None, the answer passage has been truncated
        if start_positions[-1] is None:
            start_positions[-1] = tokenizer.model_max_length
        # if end position is None, the 'char_to_token' function points to the space before the correct token -> add + 1
        if end_positions[-1] is None:
            end_positions[-1] = encodings.char_to_token(i, answers[i]['answer_end'] + 1)
    encodings.update({'start_positions': start_positions, 'end_positions': end_positions})

add_token_positions(train_encodings, train_answers)
add_token_positions(val_encodings, val_answers)

import torch

class SquadDataset(torch.utils.data.Dataset):
    def __init__(self, encodings):
        self.encodings = encodings

    def __getitem__(self, idx):
        return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}

    def __len__(self):
        return len(self.encodings.input_ids)

train_dataset = SquadDataset(train_encodings)
val_dataset = SquadDataset(val_encodings)

from transformers import DistilBertForQuestionAnswering, Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=3,              # total number of training epochs
    per_device_train_batch_size=16,  # batch size per device during training
    per_device_eval_batch_size=64,   # batch size for evaluation
    warmup_steps=500,                # number of warmup steps for learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    logging_dir='./logs',            # directory for storing logs
    logging_steps=10,
)

model = DistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased")

trainer = Trainer(
    model=model,                 # the instantiated ๐ค Transformers model to be trained
    args=training_args,          # training arguments, defined above
    train_dataset=train_dataset, # training dataset
    eval_dataset=val_dataset     # evaluation dataset
)

trainer.train()
```
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name) SQUaD
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Ran the example from [squad question answering](https://huggingface.co/transformers/custom_datasets.html#question-answering-with-squad-2-0)
2. getting the following
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-9-fe1badbb2679> in <module>
21 )
22
---> 23 trainer.train()
/media/data2/anaconda/envs/hr/lib/python3.6/site-packages/transformers/trainer.py in train(self, model_path, trial)
871 self.control = self.callback_handler.on_epoch_begin(self.args, self.state, self.control)
872
--> 873 for step, inputs in enumerate(epoch_iterator):
874
875 # Skip past any already trained steps if resuming training
/media/data2/anaconda/envs/hr/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __next__(self)
433 if self._sampler_iter is None:
434 self._reset()
--> 435 data = self._next_data()
436 self._num_yielded += 1
437 if self._dataset_kind == _DatasetKind.Iterable and \
/media/data2/anaconda/envs/hr/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _next_data(self)
473 def _next_data(self):
474 index = self._next_index() # may raise StopIteration
--> 475 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
476 if self._pin_memory:
477 data = _utils.pin_memory.pin_memory(data)
/media/data2/anaconda/envs/hr/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
/media/data2/anaconda/envs/hr/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
<ipython-input-8-a9d5c9a06902> in __getitem__(self, idx)
6
7 def __getitem__(self, idx):
----> 8 return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
9
10 def __len__(self):
<ipython-input-8-a9d5c9a06902> in <dictcomp>(.0)
6
7 def __getitem__(self, idx):
----> 8 return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
9
10 def __len__(self):
RuntimeError: Could not infer dtype of NoneType
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The script should have run without the error.
<!-- A clear and concise description of what you would expect to happen. -->
| 01-27-2021 13:05:56 | 01-27-2021 13:05:56 | I'm not able to reproduce the issue. I went to [this page](https://huggingface.co/transformers/custom_datasets.html), then clicked on "Open in colab" on the top right (chose PyTorch), and then run the question-answering tutorial, and it's working fine for me:

<|||||>Hi @paniabhisek
For QA you could use the official `run_qa.py ` example scripts which now supports `Trainer` and `datasets`. You can find it here
https://github.com/huggingface/transformers/tree/master/examples/question-answering
<|||||>@NielsRogge I ran the code in colab, it's working for me too. But not in conda environment.
@patil-suraj [example-script](https://github.com/huggingface/transformers/tree/master/examples/question-answering) only supports squad 1.1 ? Does it support squad 2.0 ?<|||||>It supports squad V1 and V2. For V2, just add the flag `--version2_with_negative` (on top of `--dataset_nme squad_v2`)<|||||>If you try to call `train_dataset[137]`, it returns an error (`[136]` and `[138]` both work properly). It is because `end_positions.append(encodings.char_to_token(i, answers[i]['answer_end']))` and `end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] + 1))` do not find the correct token; the `end_position[-1]` is None. The code before #9378 should work.<|||||>[#9378-comment](https://github.com/huggingface/transformers/pull/9378#issuecomment-759717949) have worked for me. I was wondering how to use a snippet without an unfamiliar script so I can use my own language model. thanks @kevinthwu .
btw thanks @sgugger I can use the squad 2.0 with the option `--version2_with_negative`.
I'm not closing as the docs are not updated yet.<|||||>> It supports squad V1 and V2. For V2, just add the flag `--version2_with_negative` (on top of `--dataset_nme squad_v2`)
the argument name is '**version_2_with_negative**' (line 444 run_qa.py)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
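For anyone hitting the same `NoneType` error, the diagnosis above (a `None` end position reaching `torch.tensor`) suggests a small change to `add_token_positions`. The following is a sketch of roughly the shape of the fix that later landed in the tutorial, not the official code verbatim; `tokenizer` is passed in explicitly here instead of being read from the enclosing scope:

```python
def add_token_positions(encodings, answers, tokenizer):
    start_positions = []
    end_positions = []
    for i in range(len(answers)):
        # map character offsets to token indices; use answer_end - 1 so we land on the last answer character
        start_positions.append(encodings.char_to_token(i, answers[i]['answer_start']))
        end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1))
        # if the answer was truncated away, point both positions at the last usable index
        if start_positions[-1] is None:
            start_positions[-1] = tokenizer.model_max_length
        if end_positions[-1] is None:
            end_positions[-1] = tokenizer.model_max_length
    encodings.update({'start_positions': start_positions, 'end_positions': end_positions})
```

With this version no `None` values can end up in the encodings, so `__getitem__` never fails when building tensors.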
transformers | 9,842 | closed | Fix model templates | Fixes the style issue with model templates | 01-27-2021 13:00:41 | 01-27-2021 13:00:41 | |
transformers | 9,841 | closed | Multi-TPU training uses just 1 out of 8 cores. | ## Environment info
- `transformers` version: 4.2.2
- Platform: n1-standard-64 Google Cloud
- Python version: 3.7
- PyTorch version (GPU?): 1.7 XLA
- Tensorflow version (GPU?):
- Using GPU in script?: NO, using TPU
- Using distributed or parallel set-up in script?: YES; I try to run it in parallel using all 8 cores with xla_spawn.py setting num_cores to 8 in a V3-8.
### Who can help
@patrickvonplaten, @LysandreJik @sgugger
## Information
Model I am using (Bert, XLNet ...): ALBERT base
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The problem occurs when I try to train with run_mlm_wwm.py through xla_spawn.py. I've checked that when xla_spawn calls run_mlm_wwm.py, xm.xrt_world_size() is 8, as it should be. However, when the Trainer starts to train, its batch size is only 64, but it should be 64 * num_cores = 512. I've printed out the parameters sent by xla_spawn and those received by run_mlm_wwm.py, and they coincide, so I don't understand why in line 690 of the trainer, `total_train_batch_size = self.args.train_batch_size * xm.xrt_world_size()`, the total_train_batch_size is not converted to 512...
This is the full call:
```{bash}
XRT_TPU_CONFIG="tpu_worker;0;10.44.99.146:8470" python -u transformers/examples/xla_spawn.py --num_cores 8 \
transformers/examples/language-modeling/run_mlm_wwm.py \
--model_type albert \
--config_name ./config/albert-base-v2.json \
--tokenizer_name ./tokenizer_2912 \
--train_file ./train_texts_1_percent.txt \
--validation_file ./validation_data/good_texts.csv \
--output_dir ./models/model_1_percent \
--overwrite_output_dir \
--do_train \
--do_eval \
--evaluation_strategy steps \
--per_device_train_batch_size 64 \
--per_device_eval_batch_size 128 \
--gradient_accumulation_steps 8 \
--learning_rate 0.00176 \
--save_steps 1000 \
--logging_steps 1000 \
--overwrite_cache \
--max_seq_length 512 \
--eval_accumulation_steps 10 \
--load_best_model_at_end \
--run_name model_1_percent \
--save_total_limit 20 --tpu_metrics_debug
```
The model starts to train, but it doesn't take into account that it has 8 tpu cores:
```
[INFO|trainer.py:662] 2021-01-27 12:22:50,282 >> ***** Running training *****
[INFO|trainer.py:663] 2021-01-27 12:22:50,282 >> Num examples = 5835032
[INFO|trainer.py:664] 2021-01-27 12:22:50,282 >> Num Epochs = 3
[INFO|trainer.py:665] 2021-01-27 12:22:50,282 >> Instantaneous batch size per device = 64
[INFO|trainer.py:666] 2021-01-27 12:22:50,282 >> Total train batch size (w. parallel, distributed & accumulation) = 512
[INFO|trainer.py:667] 2021-01-27 12:22:50,282 >> Gradient Accumulation steps = 8
[INFO|trainer.py:668] 2021-01-27 12:22:50,282 >> Total optimization steps = 4272
0%| | 3/4272 [04:18<113:20:52, 95.58s/it]
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: Whole Word Masked Language Modelling
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Instantiate a Google Cloud V3-8 TPU and a n1-standard-64 Google Cloud instance.
2. Use any toy text dataset and any tokenizer and model name from the ones available in Transformers (these won't change the problem, so it's not necessary to have your own pretrained tokenizer or own dataset).
3. Try to execute the command I posted above but setting XRT_TPU_CONFIG to the IP address of your TPU.
## Expected behavior
It's expected that xla_spawn.py runs the python file passed to it in a multiprocessing fashion, distributing the batches and model over the TPU cores; however, at some point the xrt_world_size() turns to 1 and it doesn't see all the devices available anymore, but only one. | 01-27-2021 12:28:42 | 01-27-2021 12:28:42 | Hi there. It's just a logging problem in the reporting of the total batch size. If we do the math, from your 5835032 samples, we get 91,172 batches per device, 11,396 batches total (divided by the number of cores) and 1,424 optimization steps (divided by the accumulation steps), which, multiplied by the 3 epochs, gives us the 4,272 steps you see.
So the number of cores is indeed taken into account.<|||||>Ahh, I see, my bad, I didn't calculate the number of steps correctly then (what a Data Scientist :P) Thank You very much @sgugger |
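The arithmetic above can be double-checked in a few lines (the numbers are taken from the log in this thread):

```python
num_examples = 5_835_032
per_device_batch_size = 64
num_cores = 8
grad_accum_steps = 8
num_epochs = 3

effective_batch_size = per_device_batch_size * num_cores * grad_accum_steps  # 4096
steps_per_epoch = num_examples // effective_batch_size                        # 1424
print(steps_per_epoch * num_epochs)                                           # 4272, matching the log
```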
transformers | 9,840 | closed | Fix TF template | # What does this PR do?
This PR fixes the template and a cast issue.
| 01-27-2021 12:14:22 | 01-27-2021 12:14:22 | |
transformers | 9,839 | closed | Run GA on forks | 01-27-2021 11:31:07 | 01-27-2021 11:31:07 | ||
transformers | 9,838 | closed | logging_epochs argument for TrainingArguments | # ๐ Feature request
There is no `logging_epochs` argument in `TrainingArguments`. When someone wants to train with `EvaluationStrategy.EPOCH`, he/she wants to see the logs after each epoch. Currently it is not possible.
## Motivation
Better logging for training
## Your contribution
If I have time, I would like to add it. However, I am not available in a coming couple of weeks.
| 01-27-2021 10:23:04 | 01-27-2021 10:23:04 | Just like `evaluation_strategy` chooses between `'steps'` and `'epoch'`, to maintain consistency I think it is better to introduce either of:
1. a new enumeration `LoggingStrategy` with values
1. `'epoch'` for per-epoch functionality
2. `'steps'` functionality by falling back on `logging_steps`
2. a new bool argument `log_per_epoch` to decide between `epoch` or `steps` functionality and proceed similarly as above
@hasansalimkanmaz If you're still occupied, is it okay if I take a stab at this?<|||||>Feel free to go ahead @tanmay17061 I am still busy with some other stuff. Thanks for your interest.
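A rough sketch of the enum-based option discussed above (all names here are illustrative, not the final `TrainingArguments` API):

```python
from dataclasses import dataclass, field
from enum import Enum

class LoggingStrategy(Enum):
    STEPS = "steps"
    EPOCH = "epoch"

@dataclass
class MyTrainingArguments:  # hypothetical subset of TrainingArguments for illustration
    logging_strategy: LoggingStrategy = field(default=LoggingStrategy.STEPS)
    logging_steps: int = 500

def should_log(args, global_step, epoch_just_ended):
    # per-epoch logging mirrors how evaluation_strategy="epoch" already works
    if args.logging_strategy == LoggingStrategy.EPOCH:
        return epoch_just_ended
    return global_step % args.logging_steps == 0
```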
transformers | 9,837 | closed | Fixing flaky conversational test + flag it as a pipeline test. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 01-27-2021 09:30:29 | 01-27-2021 09:30:29 | |
transformers | 9,836 | closed | [docs] use `versionadded`, `versionchanged` and `deprecated` directive | # ๐ Feature request
## Documentation
Use the `.. versionadded::`, `.. versionchanged::` and `.. deprecated::` directives, so that users know which features were added / changed / deprecated in which version and can navigate the docs easily without having to switch between doc versions.
Ref: https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#directive-versionadded
## Motivation
To be able to know (without going to github) which features are introduced / changed / deprecated / improved in which version just from the docs.
Since `transformers` is widely used in production, this slight change to the docs can give users a bird's-eye view of the features of the library.
Let me know what you think.
| 01-27-2021 09:21:39 | 01-27-2021 09:21:39 | Cool idea! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
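As a concrete illustration, the directive sits directly in a docstring and Sphinx renders it as a version note. A minimal sketch follows; the function and the version numbers are made up for the example:

```python
def generate(input_ids, prefix_allowed_tokens_fn=None, **kwargs):
    """Generate sequences for the given input ids.

    Args:
        prefix_allowed_tokens_fn:
            Constrains generation to the tokens allowed at each step.

            .. versionadded:: 4.1.0

    .. deprecated:: 4.2.0
        ``old_argument`` is deprecated and will be removed in a future release.
    """
    ...
```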
transformers | 9,835 | closed | support Mixed Precision and avoid 'dtype=float32' in implementation | # ๐ Feature request
When using the Longformer model, I found that the dtype of many tensors is assigned directly via `dtype=tf.dtypes.float32` or hard-coded in equations, which makes mixed precision training impossible. I found that other models also have this problem.
Of course, it is not a bug to not support mixed precision training, but because Transformer models are usually very large, it would be appreciated if mixed precision training were supported when implementing new models.
So I suggest using dtype inference instead of direct dtype assignment wherever possible.
| 01-27-2021 08:49:10 | 01-27-2021 08:49:10 | Hello!
Thanks for the feature requests. We are currently working on this, some of them already support mixed precision ๐ <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
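To make the request concrete, here is a minimal illustration of the difference between a hard-coded dtype and an inferred one (a sketch, not actual Longformer code):

```python
import tensorflow as tf

def scale_scores_hardcoded(scores, head_dim):
    # breaks under mixed precision: scores may be float16 while the constant is float32
    return scores / tf.math.sqrt(tf.constant(head_dim, dtype=tf.float32))

def scale_scores_inferred(scores, head_dim):
    # dtype is inferred from the incoming tensor, so float16/bfloat16 inputs keep working
    return scores / tf.math.sqrt(tf.cast(head_dim, dtype=scores.dtype))
```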
transformers | 9,834 | closed | Improved TF inputs | # What does this PR do?
This PR aims to improve the input processing of the TF models. Currently the inputs of each model are processed at least twice (once in the model, once in the main layer) and at most four times for the Seq2Seq models (once in the model, once in the main layer, once in the encoder layer and once in the decoder layer). This is a bit overkill and slows down the performance of a forward pass.
To fix this issue, we introduce a flag in order to know if the incoming inputs are already processed or not, if yes we keep them as they are otherwise we run the input processing.
| 01-27-2021 08:46:59 | 01-27-2021 08:46:59 | As said offline, I feel like this adds unnecessary complexity for no real gain. I don't think this necessarily slows things down, and if it does I'm sure it's by a negligible margin.
As you mentioned offline this also fixes a bug, so if you find a way to integrate this in the `input_processing` method as you've mentioned I may be in favor of this change.<|||||>Ok, I will rethink this to integrate it inside `input_processing`<|||||>@LysandreJik @patrickvonplaten the check is now inside `input_processing` (done for BERT only to show an example). Does-it fits you better?<|||||>Yes it's cleaner! I still don't really like the `already_processed=True`, but I understand why it's necessary.
By the way, is there a reason we re-specify all the inputs for base models after processing the inputs? Can't we just unpack the `inputs` directly in the transformer?
Instead of:
```py
inputs = input_processing(
func=self.call,
config=self.config,
input_ids=input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
training=training,
kwargs_call=kwargs,
)
outputs = self.bert(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
token_type_ids=inputs["token_type_ids"],
position_ids=inputs["position_ids"],
head_mask=inputs["head_mask"],
inputs_embeds=inputs["inputs_embeds"],
output_attentions=inputs["output_attentions"],
output_hidden_states=inputs["output_hidden_states"],
return_dict=inputs["return_dict"],
training=inputs["training"],
)
```
we would have:
```py
inputs = input_processing(
func=self.call,
config=self.config,
input_ids=input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
training=training,
kwargs_call=kwargs,
)
outputs = self.bert(**inputs)
```
which looks cleaner and we there would be no need to mention `already_processed`. Looking at it, I would expect the `input_processing` method to do the full processing for the model inputs, so I don't see why we would need to redefine what inputs we're sending to the model; the selection should already have been made in the `input_processing`.
Please let me know if this has already been discussed or if I'm missing something.<|||||>That's cleaner indeed, but without the `already_processed` argument I don't see how we can know that the input has already been processed or not.
How do you know from:
```python
inputs = input_processing(
func=self.call,
config=self.config,
input_ids=input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
training=training,
kwargs_call=kwargs,
)
```
That the given inputs have already been processed or not without having a flag that gives you this info? :(<|||||>OK, I might have found the solution, need to think a bit more about this, but wait the next push and you will let me know if it looks better :)<|||||>@LysandreJik Since the last push now `input_processing` handles 100% of the process, and now the calls can be like `outputs = self.bert(**inputs)`.
If everyone is ok with this last version I will update accordingly the other models.<|||||>To be honest, I don't really see the point of this PR (but maybe there is something I'm not seeing or misunderstood) - is this PR just to run `input_processing` 1 time instead of possibly 4 times? Or is it also fixing a bug/enabling functionality that didn't exist before? Is that speed-up even noticeable?
I don't think the trade-off between an (I assume) tiny speed-up of the forward pass vs. added complex logic for the user + much more code is worth it here...also this PR would again change all forward functions of all models I think, no? So, we'll run into a bunch of merge conflicts here again (not that big of an issue though)<|||||>This is also for fixing an issue on the inputs that are a list. If the input is a list, the list is recursively processed. I thought you saw the thread we had with @LysandreJik . I'm copy pasting the explanation here.
If we have the input `input_ids=[[[1,2,3]], [[1,1,1]]]` after the first processing we get `{"input_ids": [[1,2,3]], "attention_mask": [[1,1,1]]}`, after the second processing we get `"input_ids": [1,2,3], "attention_mask": [1,1,1]`, after the third processing we get `input_ids=1, attention_mask=1` after the fourth processing we get an error.
So in order to avoid this issue, we should parse only once eveytime the input.<|||||>> This is also for fixing an issue on the inputs that are a list. If the input is a list, the list is recursively processed. I thought you saw the thread we had with @LysandreJik . I'm copy pasting the explanation here.
>
> If we have the input `input_ids=[[[1,2,3]], [[1,1,1]]]` after the first processing we get `{"input_ids": [[1,2,3]], "attention_mask": [[1,1,1]]}`, after the second processing we get `"input_ids": [1,2,3], "attention_mask": [1,1,1]`, after the third processing we get `input_ids=1, attention_mask=1` after the fourth processing we get an error.
>
> So in order to avoid this issue, we should parse only once eveytime the input.
Got it! Thanks for sharing this here! Then yes, adding as little new logic as possible and boilerplate code to fix it is fine with me<|||||>I agree with @patrickvonplaten comments here and I would like to avoid touching all the files for this (however touching all the files for the change `outputs = self.bert(**inputs)` would be welcome as it's more readable).
An option to do it all in the `input_processing` without hurting the readability of all model files is to have `input_processing` return a subclass of dict that we could call `ProcessedInputs`. Then testing for that subclass at the beginning of the function (and directly returning the result in that case) would be enough.<|||||>> An option to do it all in the input_processing without hurting the readability of all model files is to have input_processing return a subclass of dict that we could call ProcessedInputs. Then testing for that subclass at the beginning of the function (and directly returning the result in that case) would be enough.
WHOA!!! I love this idea! This is much better indeed, we get a better readability and a better checking of what has been processed or not. Thank you very much for sharing this idea ! I will close this PR and rethink it accordingly to what you proposed :)<|||||>If there is no bug, I think it may be wise not to spend too much time on it either. Doing the computation 4 times isn't much of an issue as it seems negligible compared to a forward pass' execution time.
Also, knowing how TF becomes annoying in graph mode, I would be very surprised if it could handle conditional statements with subclasses in its graph.<|||||>Now that no issue has been identified, I would put this as a side project. I don't mind checking this on my personal time :)
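The `ProcessedInputs` idea suggested above could look roughly like this (purely illustrative, with a simplified signature; the PR was closed before anything like it was merged):

```python
class ProcessedInputs(dict):
    """Marker type: a dict of model inputs that already went through input_processing."""

def input_processing(func, config, **kwargs):
    # if the inputs were already processed once, return them untouched
    if len(kwargs) == 1 and isinstance(next(iter(kwargs.values())), ProcessedInputs):
        return next(iter(kwargs.values()))
    output = ProcessedInputs()
    # ... the usual unpacking/validation of kwargs would happen here ...
    output.update({k: v for k, v in kwargs.items() if v is not None})
    return output
```

Testing `isinstance(..., ProcessedInputs)` at the top of the function is what lets nested calls (model -> main layer -> encoder/decoder) skip the recursive re-processing of list inputs described earlier in the thread.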
transformers | 9,833 | closed | Mixed Precision support and avoid 'dtype=float32' | # ๐ Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| 01-27-2021 08:28:22 | 01-27-2021 08:28:22 | |
transformers | 9,832 | closed | ImportError: cannot import name 'get_last_checkpoint' | from transformers.trainer_utils import get_last_checkpoint, is_main_process
ImportError: cannot import name 'get_last_checkpoint' | 01-27-2021 08:20:38 | 01-27-2021 08:20:38 | HI there,
what's your transformers version ? `get_last_checkpoint` is available on master, so you should install from source to use it<|||||>hi @yuxuan2015 , the latest stable release of transformers (4.2.2) has no 'get_last_checkpoint' function, so if you installed via package manager you won't be able to use that function. like patil said, you need to install from source<|||||>I solved this error after reinstalling transformers from pip. The version of transformers I installed is 4.3.3<|||||>As mentioned in the `examples/readme.md` [here](https://github.com/huggingface/transformers/tree/master/examples#important-note), to run the examples, always install from source.
Closing this issue. |
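For users stuck on an older release, the helper is small enough to copy locally. Roughly, it scans the output folder for `checkpoint-<step>` subdirectories and returns the one with the highest step (a sketch of its behaviour; check the current `trainer_utils.py` source for the exact version):

```python
import os
import re

_re_checkpoint = re.compile(r"^checkpoint\-(\d+)$")

def get_last_checkpoint(folder):
    checkpoints = [
        path
        for path in os.listdir(folder)
        if _re_checkpoint.search(path) is not None and os.path.isdir(os.path.join(folder, path))
    ]
    if len(checkpoints) == 0:
        return None
    return os.path.join(folder, max(checkpoints, key=lambda x: int(_re_checkpoint.search(x).groups()[0])))
```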
transformers | 9,831 | closed | [Setup.py] update jaxlib | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes failing Ciricle CI because of version mismatch: https://app.circleci.com/pipelines/github/huggingface/transformers/19040/workflows/75599a81-f58c-40c6-8feb-f824d57a1d65/jobs/157385
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-27-2021 07:58:18 | 01-27-2021 07:58:18 | |
transformers | 9,830 | closed | [MT5 Import init] Fix typo | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes a typo
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-27-2021 07:57:58 | 01-27-2021 07:57:58 | Sorry about that! |
transformers | 9,829 | closed | Update run_xnli.py to use Datasets library | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #9754
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-27-2021 07:32:59 | 01-27-2021 07:32:59 | Just tested the script locally and it seems to work great, congrats! We are almost done! The last part would be top adapt the end of the README in the text-classification folder to reflect how to use the new script (since the arguments are a bit different).<|||||>> Just tested the script locally and it seems to work great, congrats! We are almost done! The last part would be top adapt the end of the README in the text-classification folder to reflect how to use the new script (since the arguments are a bit different).
I've changed the script in README from
```
export XNLI_DIR=/path/to/XNLI
python run_xnli.py \
--model_name_or_path bert-base-multilingual-cased \
--language de \
--train_language en \
--do_train \
--do_eval \
--data_dir $XNLI_DIR \
--per_device_train_batch_size 32 \
--learning_rate 5e-5 \
--num_train_epochs 2.0 \
--max_seq_length 128 \
--output_dir /tmp/debug_xnli/ \
--save_steps -1
```
to
```
python run_xnli.py \
--model_name_or_path bert-base-multilingual-cased \
--language de \
--train_language en \
--do_train \
--do_eval \
--per_device_train_batch_size 32 \
--learning_rate 5e-5 \
--num_train_epochs 2.0 \
--max_seq_length 128 \
--output_dir /tmp/debug_xnli/ \
--save_steps -1
```
I've also removed these sentences below from [Fine-tuning on XNLI](https://github.com/huggingface/transformers/blob/master/examples/text-classification/README.md#fine-tuning-on-xnli)
```
The data for XNLI can be downloaded with the following links and should be both saved (and un-zipped) in a $XNLI_DIR directory.
XNLI 1.0
XNLI-MT 1.0
```<|||||>> This is in good shape to be merged, thanks a lot for your work! I just have a few comments on how to simplify things here and there since there is only one task to deal with in the new script.
>
> One question I have is, is the tokenizer the same for the training and evaluation datasets, even if the languages, can be different?
I'm puzzled. Is `Trainer()` class doing the magic under the hood when the languages are different? or is it `AutoTokenizer.from_pretrained`?
<|||||>I missed this is a multinlingual checkpoint, so there is no need for different tokenizers.
@patil-suraj it's good to merge IMO, I'll let you review one last time and merge if you approve.<|||||> @sgugger Yay :)
@patil-suraj let me know if there's anything you would like me to change further
<|||||>Thanks @sgugger @patil-suraj for your helpful comments and guidance. I was jumping in at the deep end when I attempted this PR to be honest, but yay it's merged ๐<|||||>Great job adding this example and thanks a lot for your PR! Don't hesitate to brag a little bit on Twitter about your contribution ;-) |
transformers | 9,828 | closed | [LedFastTokenizer] Correct missing None statement | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
LEDTokenizerFast was not set to None when not being imported, which broke this script, e.g.:
https://colab.research.google.com/github/pytorch/pytorch.github.io/blob/master/assets/hub/huggingface_pytorch-transformers.ipynb.
This PR should fix it.
Ci-Failure is unrelated.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
cc @sgugger @LysandreJik
| 01-27-2021 07:22:47 | 01-27-2021 07:22:47 | Thanks for fixing! |
transformers | 9,827 | closed | I am trying to Fine tune on BartForConditionalGeneration but I end up getting all <pad_tokens>. Can you please help resolve it? | 01-27-2021 06:59:25 | 01-27-2021 06:59:25 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks! |
|
transformers | 9,826 | closed | Delete a needless duplicate condition | # What does this PR do?
Delete a needless duplicate condition in the class `PrefixConstrainedLogitsProcessor` (`src/transformers/generation_logits_process.py`).
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@patrickvonplaten
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-27-2021 06:58:23 | 01-27-2021 06:58:23 | Thank you!
|
transformers | 9,825 | closed | Add tpu_zone and gcp_project in training_args_tf.py | # What does this PR do?
Add ```tpu_zone``` and ```gcp_project``` in ```training_args_tf.py```.
For using TPUs created in a zone different from the VM zone, ```tpu_zone``` must be allocated.
See official Bert repo,
https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/run_pretraining.py#L426
- trainer: @sgugger | 01-27-2021 05:51:37 | 01-27-2021 05:51:37 | @sgugger
I got the error message with ```make style```.
```
kiyoung@medical-ubuntu:~/transformers$ make style
running deps_table_update
updating src/transformers/dependency_versions_table.py
black examples tests src utils
make: black: Command not found
Makefile:42: recipe for target 'style' failed
make: *** [style] Error 127
```<|||||>You need to follow the steps of the [contributing guide](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) do be able to make PRs. In particular you didn't follow the installation part by running `pip install -e ".[dev]"` since you don't have `black` installed.<|||||>@sgugger
Thanks, I did it.<|||||>Thanks for fixing! Now the problem comes from a new release of jax, which has been fixed in master so this is safe to merge.<|||||>This PR introduced a `datasets` submodule. I'm removing it in #9868. |
transformers | 9,824 | closed | [wip] [doc] Performance and Scalability notes | Let's start another doc. I think it works the best to work on these as an issue and not a PR since anybody can read these easily, rather than reading a markdown.
As in the other similar [work-in-progress-doc](https://github.com/huggingface/transformers/issues/9766), let me write the bulk of it out and then you can ask questions / make requests and clarifications.
---------------------------------------------
# Performance and Scalability: How To Fit a Bigger Model and Train It Faster
Quick notes:
This section gives brief ideas on how to make training faster and support bigger models. Later sections will expand, demonstrate and elucidate each of these.
### Faster Training
HW:
- fast connectivity between GPUs
* same node: NVLink
* multiple nodes: ???
SW:
- Data Parallel / Distributed Data Parallel
- fp16 (autocast caching)
### Bigger Models
HW:
- bigger GPUs
SW:
- ZeRO-Offload
- ZeRO-DP
- Pipeline Parallelism
- fp16 (smaller data)
## Hardware
### Multi-GPU Connectivity
If you use multiple GPUs the way cards are inter-connected can have a huge impact on the total training time.
If the GPUs are on the same physical node, you can run:
```
nvidia-smi topo -m
```
and it will tell you how the GPUs are inter-connected.
On a machine with dual-GPU and which are connected with NVLink, you will most likely see something like:
```
GPU0 GPU1 CPU Affinity NUMA Affinity
GPU0 X NV2 0-23 N/A
GPU1 NV2 X 0-23 N/A
```
on a different machine w/o NVLink we may see:
```
GPU0 GPU1 CPU Affinity NUMA Affinity
GPU0 X PHB 0-11 N/A
GPU1 PHB X 0-11 N/A
```
The report includes this Legend:
```
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
```
So the first report, `NV2`, tells us the GPUs are interconnected with 2 NVLinks, and the second report, `PHB`, tells us we have a typical consumer-level PCIe+Bridge setup.
Check what type of connectivity you have on your setup. Some of these will make the communication between cards faster (e.g. NVLink), others slower (e.g. PHB).
Depending on the type of scalability solution used, the connectivity speed could have a major or a minor impact. If the GPUs need to sync rarely, as in DDP, the impact of a slower connection will be less significant. If the GPUs need to send messages to each other often, as in ZeRO-DP, then faster connectivity becomes super important to achieve faster training.
### NVlink
[NVLink](https://en.wikipedia.org/wiki/NVLink) is a wire-based serial multi-lane near-range communications link developed by Nvidia.
Each new generation provides a faster bandwidth, e.g. here is a quote from [Nvidia Ampere GA102 GPU Architecture](https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/ampere/pdf/NVIDIA-ampere-GA102-GPU-Architecture-Whitepaper-V1.pdf):
> Third-Generation NVLinkยฎ
> GA102 GPUs utilize NVIDIAโs third-generation NVLink interface, which includes four x4 links,
> with each link providing 14.0625 GB/sec bandwidth in each direction between two GPUs. Four
> links provide 56.25 GB/sec bandwidth in each direction, and 112.5 GB/sec total bandwidth
> between two GPUs. Two RTX 3090 GPUs can be connected together for SLI using NVLink.
> (Note that 3-Way and 4-Way SLI configurations are not supported.)
So the higher `X` you get in the report of `NVX` in the output of `nvidia-smi topo -m` the better. The generation will depend on your GPU architecture.
Let's compare the execution of a gpt2 language model training over a small sample of wikitext.
The results are:
|type| time secs |
|----|-----|
| w/ NVlink| 101 |
| w/o NVlink | 131 |
You can see that NVLink completes the training ~23% faster.
In the second benchmark we use `NCCL_P2P_DISABLE=1` to tell the GPUs not to use NVLink.
Here is the full benchmark code and outputs:
```
# DDP w/ NVLink
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node 2 \
examples/language-modeling/run_clm.py --model_name_or_path gpt2 --dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 --do_train --output_dir /tmp/test-clm \
--per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69}
# DDP w/o NVLink
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 NCCL_P2P_DISABLE=1 python -m torch.distributed.launch \
--nproc_per_node 2 examples/language-modeling/run_clm.py --model_name_or_path gpt2 --dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 --do_train --output_dir /tmp/test-clm \
--per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69}
```
Hardware: 2x TITAN RTX 24GB each + NVlink with 2 NVLinks (`NV2` in `nvidia-smi topo -m`)
Software: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0`
## Software
### Anatomy of Model's Memory
The components on GPU memory are the following:
- the model weights
- the forward activations saved for gradient computation
- the gradients
- the optimizer state
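To put rough numbers on these components, here is a small sizing sketch for plain fp32 training with Adam (activations are omitted because they depend on batch size and sequence length):

```python
def training_memory_gb(num_params):
    bytes_per_param = 4   # fp32 weights
    bytes_per_grad = 4    # fp32 gradients
    bytes_per_optim = 8   # Adam keeps two fp32 moments per parameter
    return num_params * (bytes_per_param + bytes_per_grad + bytes_per_optim) / 2**30

print(f"gpt2 (~124M params): ~{training_memory_gb(124_000_000):.1f} GB before activations")
print(f"t5-3b (~3B params): ~{training_memory_gb(3_000_000_000):.1f} GB before activations")
```

This is why the optimizer state, not the weights, is usually the first thing that gets offloaded or sharded by solutions like ZeRO.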
### `forward` vs `backward` Execution Speed
For convolutions and linear layers there are 2x flops in the backward compared to the forward, which generally translates into ~2x slower (sometimes more, because sizes in the backward tend to be more awkward). Activations are usually bandwidth-limited, and itโs typical for an activation to have to read more data in the backward than in the forward (e.g. activation forward reads once, writes once, activation backward reads twice, gradOutput and output of the forward, and writes once, gradInput).
### fp16
AMP = Automatic Mixed Precision
If we look at what's happening with FP16 training (mixed precision) we have:
- the model in full precision so no memory saved there
- the forward activations saved for gradient computation are in mixed precision
- the gradients are computed in mixed precision *but* converted to full precision for the update, so no saving there
- the optimizer state is in full precision as all the updates are done in full precision
So the saving only happen for the forward activations saved for the backward computation, and there is a slight overhead because the gradients are properly stored both in half and full precision. (This is probably over-simplified but I think it's enough to explain what follows.)
Now let's look at a simple text-classification fine-tuning on 2 GPUs (I'm giving the command for reference):
```
export BS=16
python -m torch.distributed.launch \
--nproc_per_node 2 examples/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--task_name mrpc \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size $BS \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/mrpc \
--overwrite_output_dir \
--fp16
```
Since the only savings we get are in the model activations saved for the backward pass, it's logical that the bigger those activations are, the bigger the saving will be. If we try different batch sizes, I indeed get (this is measured with nvidia-smi, so not completely reliable as said above, but it is a fair comparison):
| batch size | without --fp16 | with --fp16 | FP16 savings |
|:-:|:-:|:-:|:-:|
| 8 | 4247 | 4163 | 84 |
| 16 | 4971 | 4793 | 178 |
| 32 | 6827 | 6207 | 620 |
| 64 | 10037 | 8061 | 1976 |
So there is only a real memory saving if we train at a high batch size (and it's not half), while at batch sizes lower than 8 you actually get a bigger memory footprint (because of the overhead mentioned above). The gain of FP16 training is that in each of those cases, training with the `--fp16` flag is twice as fast, which does require every tensor to have every dimension be a multiple of 8 (so if your batch size is not a multiple of 8 you won't get that speed-up, and note that the script `finetune_trainer.py` does not pad the tensors to a sequence length that is a multiple of 8).
TL;DR: FP16 with apex or AMP will only give you some memory savings with a reasonably high batch size.
Some amazing tutorials to read on mixed precision:
- @sgugger wrote a great explanation of mixed precision [here](https://docs.fast.ai/callback.fp16.html#A-little-bit-of-theory)
- Aleksey Bilogur's [A developer-friendly guide to mixed precision training with PyTorch](https://spell.ml/blog/mixed-precision-training-with-pytorch-Xuk7YBEAACAASJam)
### fp16 caching
PyTorch's `autocast`, which performs AMP, includes a caching feature that speeds things up by caching fp16-converted values. Here is the full description from this [comment](https://discuss.pytorch.org/t/autocast-and-torch-no-grad-unexpected-behaviour/93475/3):
Autocast maintains a cache of the FP16 casts of model params (leaves). This helps streamline parameter reuse: if the same FP32 param is used in several different FP16list ops, like several matmuls, instead of re-casting the param to FP16 on entering each matmul, the cast will occur on the first matmul, the casted FP16 copy will be cached, and for all later matmuls the FP16 copy will be reused. The cache is maintained only within a particular outermost autocast context. When you exit the autocast context the cache is dropped. For recommended usage, in which autocast wraps the forward pass, and then you exit the context before calling backward(), this means the cache only lasts the duration of the forward pass each iteration, and will be rebuilt next iteration. (The cache of FP16-casted copies MUST be rebuilt each iteration. The FP32 params get updated by the optimizer, so the FP16 copies must be recreated, otherwise the FP16 values will be stale.)
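A tiny sketch of the recommended pattern (on a CPU-only machine autocast is simply disabled here, so the snippet still runs, it just doesn't cast anything):
```python
import torch

linear = torch.nn.Linear(256, 256)
x = torch.randn(8, 256)

with torch.cuda.amp.autocast(enabled=torch.cuda.is_available()):
    # the same fp32 weight is used in two matmuls: when autocast is active it is
    # cast to fp16 once on the first call and the cached copy is reused for the second
    y = linear(linear(x))
# leaving the context drops the cache; after the optimizer updates the fp32
# master weights, fresh fp16 copies are built on the next iteration
y.float().sum().backward()
```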
### DP vs DDP
`DistributedDataParallel` (DDP) is typically faster than `DataParallel` (DP), but it is not always the case:
* while DP is python threads-based, DDP is multiprocess-based - and as such it has no python threads limitations, such as GIL
* on the other hand a slow inter-connectivity between the GPU cards could lead to an actual slower outcome with DDP
Here are the main differences in the inter-GPU communication overhead between the two modes:
[DDP](https://pytorch.org/docs/master/notes/ddp.html):
- At start time the main process replicates the model once from gpu 0 to the rest of the gpus
- Then for each batch:
1. each gpu consumes its own mini-batch of data directly
2. during `backward`, once the local gradients are ready, they are then averaged across all processes
[DP](https://pytorch.org/docs/master/generated/torch.nn.DataParallel.html):
For each batch:
1. gpu 0 reads the batch of data and then sends a mini-batch to each gpu
2. replicates the up-to-date model from gpu 0 to each gpu
3. runs `forward` and sends output from each gpu to gpu 0, computes loss
4. scatters loss from gpu 0 to all gpus, runs `backward`
5. sends gradients from each gpu to gpu 0 and averages those
The only communication DDP performs per batch is sending gradients, whereas DP does 5 different data exchanges per batch.
DP copies data within the process via python threads, whereas DDP copies data via [torch.distributed](https://pytorch.org/docs/master/distributed.html).
Under DP gpu 0 performs a lot more work than the rest of the gpus, thus resulting in under-utilization of gpus.
You can use DDP across multiple machines, but this is not the case with DP.
There are other differences between DP and DDP but they aren't relevant to this discussion.
If you want to go really deep into understanding these 2 modes, this [article](https://www.telesens.co/2019/04/04/distributed-data-parallel-training-using-pytorch-on-aws/) is highly recommended: it has great diagrams, includes multiple benchmarks and profiler outputs on various hardware, and explains all the nuances that you may need to know.
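To make the structural difference tangible, here is a bare-bones sketch of both wrappers (the `gloo` backend and a 2-process spawn are used so it also runs on a machine without GPUs; it is an illustration, not the trainer's actual code path):
```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DataParallel, DistributedDataParallel


def ddp_worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    model = torch.nn.Linear(8, 8)
    ddp_model = DistributedDataParallel(model)   # only gradients are communicated, during backward
    out = ddp_model(torch.randn(4, 8))           # each process consumes its own mini-batch
    out.sum().backward()
    dist.destroy_process_group()


if __name__ == "__main__":
    # DP: a single process with python threads; gpu 0 scatters inputs, replicates
    # the model and gathers outputs on every batch
    dp_model = DataParallel(torch.nn.Linear(8, 8))
    dp_model(torch.randn(4, 8))

    # DDP: one process per device
    mp.spawn(ddp_worker, args=(2,), nprocs=2)
```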
Let's look at an actual benchmark:
|type| time secs |
|----|-----|
| 2:DP w/ NVlink| 110 |
| 2:DDP w/ NVlink| 101 |
| 2:DDP w/o NVlink | 131 |
Analysis:
Here DP is ~10% slower than DDP w/ NVlink, but ~15% faster than DDP w/o NVlink
The real difference will depend on how much data each GPU needs to sync with the others - the more there is to sync, the more a slow link will slow down the total runtime.
Here is the full benchmark code and outputs:
`NCCL_P2P_DISABLE=1` was used to disable the NVLink feature on the corresponding benchmark.
```
# DP
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
python examples/language-modeling/run_clm.py \
--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 110.5948, 'train_samples_per_second': 1.808, 'epoch': 0.69}
# DDP w/ NVlink
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
python -m torch.distributed.launch --nproc_per_node 2 examples/language-modeling/run_clm.py \
--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69}
# DDP w/o NVlink
rm -r /tmp/test-clm; NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 \
python -m torch.distributed.launch --nproc_per_node 2 examples/language-modeling/run_clm.py \
--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69}
```
Hardware: 2x TITAN RTX 24GB each + NVlink with 2 NVLinks (`NV2` in `nvidia-smi topo -m`)
Software: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0`
### Batch Sizes
The best performance is achieved when the tensor's batch size dimension is a multiple of 8. What matters is the final batch size of the tensor that actually gets passed to the GPU for computation.
Examples:
- if you use a DP or DDP on 2 GPUs you want to have a total batch size of at least 16 (2x8), or a higher multiple. If your total batch size is 8, then each GPU will get a mini-batch of 4.
- if you use a Pipeline you want to make sure that after chunking you end up with micro-batches that are multiples of 8. For example if `chunks=3` is used, you want the batch size to be 24 (or a higher multiple of 8). Because if you use a batch size of 16, you will end up with 3 micro-batches of size 6,5,5.
There is no harm in using smaller batch sizes and at times one can hardly squeeze a batch size of 1 before getting OOM, it just won't be as fast as it can be.
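To make the chunking arithmetic above concrete, here is a small helper mirroring a simple even split (the exact rule depends on the pipeline implementation you use):
```python
def micro_batches(batch_size, chunks):
    base, rem = divmod(batch_size, chunks)
    return [base + 1] * rem + [base] * (chunks - rem)

print(micro_batches(16, 3))  # [6, 5, 5] -> none of the micro-batches is a multiple of 8
print(micro_batches(24, 3))  # [8, 8, 8] -> every micro-batch is a multiple of 8
```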
### DataLoader
One of the important requirements to reach great training speed is the ability to feed the GPU at the maximum speed it can handle. By default everything happens in the main process and it might not be able to read the data from disk fast enough, and thus create a bottleneck, leading to GPU under-utilization.
- `DataLoader(pin_memory=True, ...)` ensures that the data gets preloaded into pinned memory on the CPU, which typically leads to much faster transfers from CPU to GPU memory.
- `DataLoader(num_workers=4, ...)` spawns several workers to pre-load data faster. During training, watch the GPU utilization stats and if it's far from 100% experiment with raising the number of workers. Of course, the problem could be elsewhere, so a very big number of workers won't necessarily lead to better performance (see the sketch below).
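A minimal sketch combining both settings (random tensors stand in for a real dataset; tune `num_workers` to your storage and CPU):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(1024, 16), torch.randint(0, 2, (1024,)))
loader = DataLoader(
    dataset,
    batch_size=32,
    pin_memory=True,   # page-locked host memory -> faster (and async) copies to the GPU
    num_workers=4,     # background worker processes pre-load the next batches
)

for x, y in loader:
    if torch.cuda.is_available():
        x = x.to("cuda", non_blocking=True)  # non_blocking pairs with pin_memory
    break
```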
### Faster optimizer
pytorch-nightly introduced `torch.optim._multi_tensor` which should significantly speed up the optimizers for situations with lots of small feature tensors. It should eventually become the default, but if you want to experiment with it sooner and don't mind using the bleeding edge, see: https://github.com/huggingface/transformers/issues/9965
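A sketch of how one might try it out, assuming a nightly build that ships the private `torch.optim._multi_tensor` namespace mentioned above (being private, it can change or disappear at any time):
```python
import torch

try:
    from torch.optim import _multi_tensor as optim_impl   # private, nightly-only
except ImportError:
    from torch import optim as optim_impl                 # fall back to the stock optimizers

model = torch.nn.Linear(64, 64)
optimizer = optim_impl.AdamW(model.parameters(), lr=1e-3)
loss = model(torch.randn(8, 64)).sum()
loss.backward()
optimizer.step()
```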
-----------------
## Credits
It'd be difficult to track and record every contribution, so in order to keep things practical I will try to keep track of major contributors. And I have a huge gratitude to everybody who has ever asked or answered a question on forums/issues/slacks/SO/etc., parts or summaries of which were integrated into this article. Thank you!
The major contributors:
- @sgugger: fp16 section from [here](https://github.com/huggingface/transformers/issues/9742#issuecomment-765488087)
- @moyix: ideas on NVLink testing https://github.com/huggingface/transformers/issues/9371
- @ngimel: multiple insights on pytorch slack/issues
- @mcarilli: pytorch autocast
| 01-27-2021 02:19:52 | 01-27-2021 02:19:52 | The automatic mixed precision and performance tuning recipes may be helpful.
https://pytorch.org/tutorials/recipes/recipes/amp_recipe.html
https://pytorch.org/tutorials/recipes/recipes/tuning_guide.html
<|||||>thank you very much, @mcarilli - this is exactly what I was looking for!<|||||>Going to make it into a real doc here: https://github.com/huggingface/transformers/pull/12258 |
transformers | 9,823 | closed | Allow --arg Value for booleans in HfArgumentParser | # What does this PR do?
The primary reason I dived into this PR is that when launching a training job with SageMaker, bool arguments need to be passed along with a value (e.g. we can't do `--do_train`, we have to do `--do_train True` because the arguments are passed as a dict). This is refused by the current argparser.
This PR changes a little bit the way `HfArgumentParser` handles bool fields in dataclasses. Up until now:
- a bool arg `foo` with `True` as a default gives a flag `--no_foo` that stores `False` in foo
- a bool arg `bar` with no default value or `False` as a default value gives a flag `--bar` that stores `True` in bar
- an optional bool arg `opt` gives the same as a bool if its default is True or False. If the default is None, it gives a flag `--opt` that requires an argument that accepts anything and stores the value as a string (which is obviously a bug)
After this PR, the following happens:
- a bool arg `foo` with `True` as a default gives a flag `--no_foo` that stores `False` in foo, it also gives a flag `--foo` that can be used as is (will store `True` in foo), or by using any truthy/falsy value (`--foo yes`, `--foo True`, `--foo no`...) that will store the result as a proper bool.
- a bool arg `bar` with no default value or `False` gives a flag `--bar` that can be used as is (will store `True` in bar), or by using any truthy/falsy value (`--bar yes`, `--bar True`, `--bar no`...) that will store the result as a proper bool.
- an optional bool arg `opt` gives the same as a bool if its default is True or False. If the default is None, it gives a flag `--opt` that requires an argument that accepts a truthy value and stores the value as a proper bool.
In all cases above, when a truthy value is expected but something else is passed (that is not `true`, `false`, `yes`, `no`, `1`, `0`, `t`, `f`, `y`, `n`), an error is raised.
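For illustration, a sketch of how such flags could then be exercised (the dataclass and its field names here are made up for the example):
```python
from dataclasses import dataclass, field
from transformers import HfArgumentParser


@dataclass
class ExampleArguments:
    do_train: bool = field(default=False)   # gets --do_train
    use_cache: bool = field(default=True)   # gets --use_cache and --no_use_cache


parser = HfArgumentParser(ExampleArguments)
for argv in (["--do_train"], ["--do_train", "True"], ["--use_cache", "false"], ["--no_use_cache"]):
    (args,) = parser.parse_args_into_dataclasses(args=argv)
    print(argv, "->", args)
```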
So no breaking changes at all and all bool values can be used with an argument so that sagemaker is happy. Tests are updated and improved to check the behaviors summarized above are all correct. | 01-27-2021 01:56:04 | 01-27-2021 01:56:04 | |
transformers | 9,822 | closed | Fix auto-resume training from checkpoint | # What does this PR do?
This fixes a few minor issues with training auto-resume, as discussed [here](https://github.com/huggingface/transformers/pull/9776#issuecomment-767841895)
1. `checkpoints = [path for path in content if _re_checkpoint.search(path) is not None and os.path.isdir(path)]` was returning empty. I changed `os.path.isdir(path)` to `os.path.isdir(os.path.join(folder, path))` and now it returns a list of the checkpoint folders as expected.
2. Similarly, the `get_last_checkpoint` function was returning the basename of the checkpoint folder, not the full path, which seems to be expected based on the updates to the example scripts. I changed the last line of the function to `return os.path.join(folder, max(checkpoints, key=lambda x: int(_re_checkpoint.search(x).groups()[0])))`
3. After I made those updates, it was resuming from the oldest checkpoint, not the newest. I noticed the checkpoint regex was only capturing the final digit in the directory name. I changed it to `_re_checkpoint = re.compile(r"^" + PREFIX_CHECKPOINT_DIR + r"\-(\d+)$")` with the `+` inside the capture group, and now `get_last_checkpoint` is giving me the newest checkpoint as expected (the three fixes are combined in the sketch below).
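Putting the three fixes together, the helper would look roughly like this (a sketch reconstructed from the snippets above, not necessarily the exact upstream code):
```python
import os
import re

PREFIX_CHECKPOINT_DIR = "checkpoint"
_re_checkpoint = re.compile(r"^" + PREFIX_CHECKPOINT_DIR + r"\-(\d+)$")


def get_last_checkpoint(folder):
    content = os.listdir(folder)
    checkpoints = [
        path
        for path in content
        if _re_checkpoint.search(path) is not None and os.path.isdir(os.path.join(folder, path))
    ]
    if len(checkpoints) == 0:
        return None
    # pick the checkpoint with the highest step number and return its full path
    return os.path.join(folder, max(checkpoints, key=lambda x: int(_re_checkpoint.search(x).groups()[0])))
```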
## Who can review?
- trainer: @sgugger | 01-27-2021 00:31:55 | 01-27-2021 00:31:55 | Oh you need to run `make style` on your branch for the styling test to pass. Let me know if you run into any issue doing that!<|||||>Sorry! I'm pretty rusty on the software dev stuff - college was 16 years ago. I think I've fixed it now. |
transformers | 9,821 | closed | [trainer] renaming cl args/ trainer attributes to be clear per-gpu vs total | As we started discussing here https://github.com/huggingface/transformers/issues/9801#issuecomment-767825869 perhaps we could have a design session where we look at all of the trainer cl args (and their class attribute counterparts) and see which of them contain ambiguity wrt per-gpu vs total (and perhaps other important renames where we find things are confusing).
The intention is to make the API more intuitive and minimize the number of times we introduce breaking changes, but to attempt to do that in one go as much as possible.
One such item we started to discuss is `--max_steps`, then @sgugger mentioned `--num_train_epochs` and there are probably others.
I also proposed to potentially entertain creating a back-compat module to minimize the breaking changes pain where it's possible - renames fall perfectly into this category. I wrote:
> In some previous projects for such things we also had a back-compat mode, which once enabled supported a whole bunch of old ways until the user was ready to make the shift to the new code. Surely a rename of a cl arg could be easily supported by such a feature. So here, instead of a deprecation cycle per item the approach is to keep anything old around but only if it's loaded from a helper module. So that the main code remains clean of deprecated things. This was in a different programming environment where it was developer, so I will have to think how to do the same here.
@LysandreJik, @patrickvonplaten, @sgugger
| 01-26-2021 21:45:29 | 01-26-2021 21:45:29 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>I'd love to get feedback on that. Thank you!<|||||>As it doesn't seem to resonate as a real need, I'm closing this one. |
transformers | 9,820 | closed | Add a flag for find_unused_parameters | # What does this PR do?
This PR adds a flag to control whether `find_unused_parameters` is set to `True` or not in DDP training, while keeping the current behavior as default to avoid any breaking change.
Fixes #9802 | 01-26-2021 21:45:03 | 01-26-2021 21:45:03 | |
transformers | 9,819 | closed | Add head_mask and decoder_head_mask to FSMT | This PR implements `head_mask` and `decoder_head_mask` for FSMT and it is the follow-up to the open issue #9814.
**Motivation:** This PR is a part of an endeavour to enable the usage of `head_mask` and `decoder_head_mask` for all encoder-decoder transformers following the recent work on BART-like models (#9569).
<hr>
Fixes: https://github.com/huggingface/transformers/issues/9814
Reviewer: @stas00 | 01-26-2021 21:39:06 | 01-26-2021 21:39:06 | I know that one can add, for example, a line like this
```
# Copied from transformers.models.bart.modeling_bart.BartAttention with Bart->FSMT
```
before `Attention` module in FSMT. However, this does not copy only additions, but the whole module from BART, which is, in this case, undesirable, I guess, as these modules are a little bit different. But maybe there is another way I am not aware of.<|||||>@LysandreJik, @patrickvonplaten - how can we make sure fsmt gets tracked and synced with all the bart-family changes? while the tokenizer is different, the model is ~95% identical.<|||||>as @stancld said, we can do that with some statements of the following kind:
```
# Copied from transformers.models.bart.modeling_bart.BartAttention with Bart->FSMT
```
The difference between the BART and FSMT implementation of the targeted object must only be the "BART" occurrences that change to "FSMT". @sgugger can tell you more about it.<|||||>Thank you, @LysandreJik
I think this is really a question to @patrickvonplaten - who I remember was planning to refactor FSMT to match what he did for Bart. So if this is still planned, Patrick perhaps you could add this item to the agenda - keeping FSMT in sync with the Bart-family (modeling only - tokenizer is similar to xlm).
So the currently proposed solution can't be used, since Bart diverged since FSMT forked it.
It might help to treat FSMT as Bart with the main difference of it having a dual vocab and no tied weights - and a few layers that are different - but identical otherwise. (again for the model only).<|||||>> I think this is really a question to @patrickvonplaten - who I remember was planning to refactor FSMT to match what he did for Bart. So if this is still planned, Patrick perhaps you could add this item to the agenda - keeping FSMT in sync with the Bart-family (modeling only - tokenizer is similar to xlm).
Yes, the FSMT / ProphetNet refactor is still on my ToDo List (think next week is reasonable). After the refactor I'll try to add as many # Copied from statements to keep the models in sync. Nevertheless, this PR can be merged as it is now!
Great work @stancld |
transformers | 9,818 | closed | When resuming training from checkpoint, Trainer loads model | # What does this PR do?
Trainer was not reloading model when resuming training from a checkpoint, which was confusing for users (see #9099) and also was preventing the recent auto-reload from checkpoint to fully work.
This isn't a breaking change (if users were passing a model with the checkpoint already loaded, it is just loaded twice). | 01-26-2021 21:17:36 | 01-26-2021 21:17:36 | If I may add, under `def train():`, I think the initialisation of `self._globalstep_last_logged = 0` should be `self._globalstep_last_logged=self.state.global_step`, to ensure that the first logging of the loss is correct when you later divide by `self.state.global_step-self._globalstep_last_logged`? |
transformers | 9,817 | closed | [docs] expand install instructions | This PR: expands the "install from source" section in the instruction file to:
- clarify that the user is not installing the release version but the bleeding edge
- expand how to update it
- give a shortcut for doing it all in one command without needing to keep the checkout folder around
@LysandreJik, @sgugger | 01-26-2021 20:49:52 | 01-26-2021 20:49:52 | |
transformers | 9,816 | closed | Setup logging with a stdout handler | # What does this PR do?
Explicitly add stdout as a handler for the logging configuration in the example scripts, otherwise no logs are reported when training on sagemaker. Also consistently sets the level of the logging outside of the config method, as otherwise it does not work (probably a bug in the logging module).
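A sketch of the resulting setup (the format string is only illustrative; the key parts are the explicit stdout handler and setting the level on the logger itself rather than in `basicConfig`):
```python
import logging
import sys

logger = logging.getLogger(__name__)

logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
    datefmt="%m/%d/%Y %H:%M:%S",
    handlers=[logging.StreamHandler(sys.stdout)],  # make sure logs go to stdout
)
logger.setLevel(logging.INFO)  # level set outside the config call, as described above
logger.info("this line now shows up on stdout")
```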
| 01-26-2021 20:24:47 | 01-26-2021 20:24:47 | |
transformers | 9,815 | closed | Fix a bug in run_glue.py (#9812) | # What does this PR do?
It seems the `if` statement for `label_to_id` is wrong.
There should be `not` before `is_regression`.
Fixes #9812
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj | 01-26-2021 19:15:59 | 01-26-2021 19:15:59 | Thanks for the fix!<|||||>Thank you for your quick response! |
transformers | 9,814 | closed | Missing head_mask and decoder_head_mask arguments in encoder-decoder models | # 🚀 Feature request
Following the PRs #9569, #9634 and #9639, there are other encoder-decoder models which either do not support the `head_mask` and `decoder_head_mask` input arguments at all, or can only be provided with a single `head_mask` argument used for head masking in both the encoder and the decoder. It would therefore be nice to make this feature uniform across all the encoder-decoder models.
<hr>
**Models:**
| Model | Pytorch | TensorFlow | PR | Copy dependency |
| ------ | :------: | :---------: | :--: | :-----: |
| BERTGeneration | ❌ | ❌ | - | - |
| EncoderDecoderModel | ❌ | ❌ | - | - |
| FSMT | ✅ | ❌ | #9819 | - |
| LED | ✅ | ❌ | PT - #9856 ; TF - #9988 | - |
| ProphetNet | ❌ | ❌ | #9964 | - |
| Longformer | ✅ | ❌ | PT - #9856; TF - #9988 | LED |
## Your contribution
I'm happy to add this feature in the following days, both for PyTorch and TensorFlow models. (Likely in shorter PRs in order not to create large, overwhelming PRs)
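For reference, a sketch of what the interface already looks like on a BART checkpoint after #9569; the point of this issue is to expose the same two arguments on the models listed above (1.0 keeps a head, 0.0 masks it):
```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
inputs = tokenizer("Masking one encoder head", return_tensors="pt")

cfg = model.config
head_mask = torch.ones(cfg.encoder_layers, cfg.encoder_attention_heads)
head_mask[0, 0] = 0.0  # mask the first head of the first encoder layer
decoder_head_mask = torch.ones(cfg.decoder_layers, cfg.decoder_attention_heads)

outputs = model(**inputs, head_mask=head_mask, decoder_head_mask=decoder_head_mask)
```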
<hr>
Reviewers: @patrickvonplaten, @jplu, @sgugger, @LysandreJik, @stas00 . | 01-26-2021 19:06:17 | 01-26-2021 19:06:17 | |
transformers | 9,813 | closed | ADD BORT | Hi,
this is a "clean" follow-up PR to the first attempt of adding Bort to Transformers (see #9112).
As Bort is based on the BERT architecture, there's no need to define dedicated model classes, such as `BortModel`. This is done in the main Bort configuration via:
```json
"model_type": "bert"
```
Bort uses the same vocab as RoBERTa, so the tokenizer instance is also configured in the model configuration:
```json
"tokenizer_class": "RobertaTokenizer"
```
Basic integration tests and a (hopefully verbose) conversion script are also included in this PR. | 01-26-2021 18:46:05 | 01-26-2021 18:46:05 | > Thanks for adding this new model! When referencing other pages in the documentation, it's better to use `:doc:` instead of a hard link, as it will then work in all versions of the documentation (which don't have the same base url).
Sorry that was my bad! I copied it from DialoGPT -> Updated it there as well<|||||>Ah sorry @patrickvonplaten, the `model_doc/` should be removed as the pages are in the same folder. That should resolve the build doc error.<|||||>Great job @stefan-it |
transformers | 9,812 | closed | `label_to_id` in `run_glue.py` seems to have a wrong `if` statement | ## Environment info
- `transformers` version: 4.3.0.dev0
- Platform: Linux-4.4.0-179-generic-x86_64-with-glibc2.10
- Python version: 3.8.0
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): Bert, xlm-roberta-large
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
It seems the `if` statement for `label_to_id` is wrong.
https://github.com/huggingface/transformers/blob/eba418ac5df71d08927efb7e3b738833998162ff/examples/text-classification/run_glue.py#L316-L333
Regarding `and is_regression` in L320, shouldn't it be `and not is_regression`?
I inserted `logging.info` to check the True/False as below:
```python
label_to_id = None
logger.info("--- label_to_id if statement check ---")
logger.info(f"{model.config.label2id != PretrainedConfig(num_labels=num_labels).label2id}")
logger.info(f"{data_args.task_name is not None}")
logger.info(f"{is_regression}")
if (
model.config.label2id != PretrainedConfig(num_labels=num_labels).label2id
and data_args.task_name is not None
and is_regression
):
logger.info("loading model.config.label2id")
# Some have all caps in their config, some don't.
label_name_to_id = {k.lower(): v for k, v in model.config.label2id.items()}
```
Then I got:
```python
01/27/2021 03:23:24 - INFO - __main__ - --- label_to_id if statement check ---
01/27/2021 03:23:24 - INFO - __main__ - False
01/27/2021 03:23:24 - INFO - __main__ - True
01/27/2021 03:23:24 - INFO - __main__ - False
100%|██████████| 4/4 [00:00<00:00, 6.02ba/s]
100%|██████████| 1/1 [00:00<00:00, 20.86ba/s]
100%|██████████| 2/2 [00:00<00:00, 12.80ba/s]
01/27/2021 03:23:25 - INFO - __main__ - Sample 2619 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'idx': 2916, 'input_ids': [0, 581, 172337, 5180, 3542, 39958, 1257, 678, 117303, 1010, 22230, 1810, 150, 592, 2363, 7225, 26548, 2022, 112478, 6, 4, 16454, 3912, 37967, 111, 60525, 1810, 150, 592, 747, 125682, 7, 26548, 4049, 6, 5, 2, 2, 84607, 26420, 5180, 3542, 39958, 1257, 678, 117303, 1010, 22230, 1810, 150, 592, 2363, 7225, 26548, 2022, 112478, 6, 4, 16454, 10, 3912, 9, 22469, 94309, 1363, 31330, 47, 70, 29685, 6, 5, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'label': 1, 'sentence1': 'The proceedings were taken up with prosecutors outlining their case against Amrozi , reading 33 pages of documents outlining allegations against him .', 'sentence2': 'Proceedings were taken up with prosecutors outlining their case against Amrozi , reading a 33-page accusation letter to the court .'}.
```
When `is_regression` is False (when the task is `classification`), `model.config.label2id` is never used for `label_to_id`.
If I'm not mistaken, wouldn't this behave differently from what is intended?
I am sorry that I could not find an appropriate task/model combination to show when all other conditions would be true.
Thank you in advance. | 01-26-2021 18:36:33 | 01-26-2021 18:36:33 | Yes, there should be a not here. Do you want to open a PR since you found the problem and its fix?<|||||>Thanks, I'd love to open a PR! Please wait a minute.<|||||>I've opened a PR to fix this issue, and all checks have passed.
I would be grateful if you could check it when you have time. |