Record fields (from the dataset viewer header): repo (string, 1 value), number (int, 1 to 25.3k), state (string, 2 values), title (string), body (string), created_at (string), closed_at (string), comments (string).
transformers
5,700
closed
How to visualize the output of the encoder using t-SNE plots?
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and TensorFlow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details I was thinking of just mean or max pooling (along the sequence length dimension) the output of the encoder and visualizing that, but I was wondering if there were better ways of doing so.
07-12-2020 22:57:23
07-12-2020 22:57:23
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
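Although the thread above went stale, a minimal sketch of the mean-pooling plus t-SNE idea floated in the question may be useful (hedged: it assumes `bert-base-uncased`, scikit-learn, and matplotlib; the sentences are placeholders, not from the issue):

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# placeholder sentences - any corpus of interest would go here
sentences = ["The car is cheap.", "I love this movie.", "The stock market fell."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
hidden = outputs[0]  # encoder output: (batch, seq_len, hidden_size)

# mean-pool over the sequence length, ignoring padding positions
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# project to 2D with t-SNE; perplexity must stay below the number of samples
coords = TSNE(n_components=2, perplexity=2).fit_transform(embeddings.numpy())
plt.scatter(coords[:, 0], coords[:, 1])
for i, sentence in enumerate(sentences):
    plt.annotate(sentence, coords[i])
plt.show()
```

Swapping the masked mean for a masked max over the same dimension gives the max-pooling variant the question also mentions.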
transformers
5,699
closed
Add beta 1 and beta 2 option in `TrainingArguments` for `AdamW` optimizer.
I want to set the Adam optimizer's beta 2 to 0.98 because I want to train a new RoBERTa LM, and the paper says that it improves stability. The default is 0.999 and it cannot be set in `TrainingArguments`. Could you please add the option to specify beta 1 and beta 2 for AdamW in the `TrainingArguments`? `adam_epsilon` can already be specified. If you want me to, I can provide a PR. What do you think?
07-12-2020 19:04:31
07-12-2020 19:04:31
I'm not sure we would like to add this to the `TrainingArguments`. If we add all possible params this could quickly explode. Note that you can instantiate your own optimizer and pass it here: https://github.com/huggingface/transformers/blob/7096e47513127d4f072111a7f58f109842a2b6b0/src/transformers/trainer.py#L158 Also pinging @julien-c here.<|||||>Well - my argument for this change is that adam_epsilon can already be set, so beta 1 and beta 2 should be settable as well, especially because the RoBERTa paper suggests a setting other than the default. A second argument is that it is not that easy to instantiate your own optimizer because there is a dependency on `model`. See here: https://github.com/huggingface/transformers/blob/7096e47513127d4f072111a7f58f109842a2b6b0/src/transformers/trainer.py#L326-L335 <|||||>Closing this in favor of #5592
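A minimal sketch of the workaround suggested above: build the optimizer (and scheduler) yourself with the RoBERTa-style betas and hand the pair to the `Trainer`, rather than extending `TrainingArguments`. This assumes the `optimizers` argument pointed to by the linked trainer.py line; `model`, `train_dataset`, the learning rate, and the step counts are placeholders:

```python
from transformers import AdamW, Trainer, TrainingArguments, get_linear_schedule_with_warmup

training_args = TrainingArguments(output_dir="./out", num_train_epochs=3)

# model and train_dataset are assumed to be defined elsewhere (placeholders)
# RoBERTa-style Adam hyperparameters: beta2 = 0.98 instead of the default 0.999
optimizer = AdamW(model.parameters(), lr=6e-4, betas=(0.9, 0.98), eps=1e-6, weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=500_000
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    optimizers=(optimizer, scheduler),
)
trainer.train()
```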
transformers
5,698
closed
Create README.md
07-12-2020 13:40:08
07-12-2020 13:40:08
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5698?src=pr&el=h1) Report > Merging [#5698](https://codecov.io/gh/huggingface/transformers/pull/5698?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0befb513278f6e42b722be340dbc667e0ba2718e&el=desc) will **decrease** coverage by `0.25%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5698/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5698?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5698 +/- ## ========================================== - Coverage 78.26% 78.01% -0.26% ========================================== Files 146 146 Lines 25998 25998 ========================================== - Hits 20348 20283 -65 - Misses 5650 5715 +65 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5698?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5698/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5698/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5698/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5698/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5698/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (ø)` | | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5698/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5698/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5698/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5698?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5698?src=pr&el=footer). Last update [0befb51...3e4d8eb](https://codecov.io/gh/huggingface/transformers/pull/5698?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,697
closed
How can I evaluate on GLUE without fine-tuning BERT, just training the remaining layers?
# ❓ Questions & Help ## Details
07-12-2020 09:56:25
07-12-2020 09:56:25
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
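Although this thread also went stale, the question has a common answer that is independent of `run_glue.py`: freeze the pretrained encoder and leave only the classification head trainable. A hedged sketch (model name and label count are illustrative):

```python
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# freeze every parameter of the BERT encoder; only the classifier head keeps gradients
for param in model.bert.parameters():
    param.requires_grad = False

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # expected: ['classifier.weight', 'classifier.bias']
```

The frozen model can then be trained and evaluated on a GLUE task as usual; only the head's weights are updated.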
transformers
5,696
closed
Update README.md
07-12-2020 09:36:15
07-12-2020 09:36:15
transformers
5,695
closed
Update README.md
07-12-2020 09:25:38
07-12-2020 09:25:38
transformers
5,694
closed
[Don't merge] Run make style on templates
When running `make style` locally, these files get modified. I noticed that `black` and `isort` seem to have conflicts on these files. Is there a solution? ``` > make style black --line-length 119 --target-version py35 examples templates tests src utils reformatted /Users/canwenxu/transformers/src/transformers/__init__.py reformatted /Users/canwenxu/transformers/templates/adding_a_new_example_script/run_xxx.py reformatted /Users/canwenxu/transformers/templates/adding_a_new_example_script/utils_xxx.py All done! ✨ 🍰 ✨ 3 files reformatted, 339 files left unchanged. isort --recursive examples templates tests src utils Fixing /Users/canwenxu/transformers/templates/adding_a_new_example_script/run_xxx.py Fixing /Users/canwenxu/transformers/templates/adding_a_new_example_script/utils_xxx.py Fixing /Users/canwenxu/transformers/src/transformers/__init__.py ```
07-12-2020 08:14:02
07-12-2020 08:14:02
It seems to be an isort version problem
transformers
5,693
closed
__init__() missing 1 required positional argument: 'logits'
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Language I am using the model on (English, Chinese ...): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. python ./examples/text-classification/run_glue.py --model_name_or_path bert-base-uncased --task_name $TASK_NAME --do_train --do_eval --data_dir $GLUE_DIR/$TASK_NAME --max_seq_length 128 --per_device_eval_batch_size=2 --per_device_train_batch_size=2 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/$TASK_NAME/ <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> File "./examples/text-classification/run_glue.py", line 246, in <module> main() File "./examples/text-classification/run_glue.py", line 173, in main model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None File "/work/vnhh/anaconda3/lib/python3.6/site-packages/transformers/trainer.py", line 499, in train tr_loss += self._training_step(model, inputs, optimizer) File "/work/vnhh/anaconda3/lib/python3.6/site-packages/transformers/trainer.py", line 622, in _training_step outputs = model(**inputs) File "/work/vnhh/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/work/vnhh/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 153, in forward return self.gather(outputs, self.output_device) File "/work/vnhh/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 165, in gather return gather(outputs, output_device, dim=self.dim) File "/work/vnhh/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather res = gather_map(outputs) File "/work/vnhh/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map return type(out)(map(gather_map, zip(*outputs))) TypeError: __init__() missing 1 required positional argument: 'logits' ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> It should be able to run and finish training ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-4.4.0-165-generic-x86_64-with-debian-stretch-sid - Python version: 3.6.5 - PyTorch version (GPU?): 1.3.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> -tensorboardX: 1.9.0
07-12-2020 07:12:05
07-12-2020 07:12:05
i faced the same error yesterday. Installing version 3.0.1 fixed the issue for me.<|||||>Installing one or two older versions can fix this. However, I will leave it here so that they know this bug exists in their newest version.<|||||>It appears that the CircleCI doesn't run gpu tests (or just multiple gpu?), all sub-tests `test_multigpu_data_parallel_forward` fail., e.g.: `tests/test_modeling_flaubert.py::FlaubertModelTest::test_multigpu_data_parallel_forward`. ``` pytest --disable-warnings -n 1 tests/test_modeling_bert.py::BertModelTest::test_multigpu_data_parallel_forward ====================================================================== test session starts ======================================================================= platform linux -- Python 3.7.5, pytest-5.4.3, py-1.9.0, pluggy-0.13.1 rootdir: /mnt/nvme1/code/huggingface/transformers-tests-1 plugins: hypothesis-5.5.4, filter-subpackage-0.1.1, arraydiff-0.3, flaky-3.6.1, ipynb-1.1.1.dev0, cov-2.10.0, astropy-header-0.1.2, forked-1.2.0, doctestplus-0.5.0, openfiles-0.4.0, remotedata-0.3.2, xdist-1.32.0 gw0 [1] F [100%] ============================================================================ FAILURES ============================================================================ _______________________________________________________ BertModelTest.test_multigpu_data_parallel_forward ________________________________________________________ [gw0] linux -- Python 3.7.5 /home/stas/anaconda3/envs/main/bin/python self = <tests.test_modeling_bert.BertModelTest testMethod=test_multigpu_data_parallel_forward> @require_multigpu def test_multigpu_data_parallel_forward(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() # some params shouldn't be scattered by nn.DataParallel # so just remove them if they are present. blacklist_non_batched_params = ["head_mask"] for k in blacklist_non_batched_params: inputs_dict.pop(k, None) # move input tensors to cuda:O for k, v in inputs_dict.items(): if torch.is_tensor(v): inputs_dict[k] = v.to(0) for model_class in self.all_model_classes: model = model_class(config=config) model.to(0) model.eval() # Wrap model in nn.DataParallel model = torch.nn.DataParallel(model) with torch.no_grad(): > _ = model(**self._prepare_for_class(inputs_dict, model_class)) tests/test_modeling_common.py:807: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/modules/module.py:550: in __call__ result = self.forward(*input, **kwargs) /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py:156: in forward return self.gather(outputs, self.output_device) /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py:168: in gather return gather(outputs, output_device, dim=self.dim) /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py:68: in gather res = gather_map(outputs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ outputs = [BaseModelOutputWithPooling(last_hidden_state=tensor([[[ 1.0115e+00, 1.4145e+00, -5.7332e-01, ..., -4.6471e-01, ... 
0.1111, -0.0592, -0.1177, 0.0074, -0.0155, -0.1015]], device='cuda:1'), hidden_states=None, attentions=None)] def gather_map(outputs): out = outputs[0] if isinstance(out, torch.Tensor): return Gather.apply(target_device, dim, *outputs) if out is None: return None if isinstance(out, dict): if not all((len(out) == len(d) for d in outputs)): raise ValueError('All dicts must have the same number of keys') return type(out)(((k, gather_map([d[k] for d in outputs])) for k in out)) > return type(out)(map(gather_map, zip(*outputs))) E TypeError: __init__() missing 1 required positional argument: 'pooler_output' /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py:63: TypeError ==================================================================== short test summary info ===================================================================== FAILED tests/test_modeling_bert.py::BertModelTest::test_multigpu_data_parallel_forward - TypeError: __init__() missing 1 required positional argument: 'pooler_... ================================================================= 1 failed, 4 warnings in 5.44s ================================================================== ``` ``` pytest --disable-warnings -n 1 tests/test_modeling_flaubert.py::FlaubertModelTest::test_multigpu_data_parallel_forward ====================================================================== test session starts ======================================================================= platform linux -- Python 3.7.5, pytest-5.4.3, py-1.9.0, pluggy-0.13.1 rootdir: /mnt/nvme1/code/huggingface/transformers-tests-1 plugins: hypothesis-5.5.4, filter-subpackage-0.1.1, arraydiff-0.3, flaky-3.6.1, ipynb-1.1.1.dev0, cov-2.10.0, astropy-header-0.1.2, forked-1.2.0, doctestplus-0.5.0, openfiles-0.4.0, remotedata-0.3.2, xdist-1.32.0 gw0 [1] F [100%] ============================================================================ FAILURES ============================================================================ _____________________________________________________ FlaubertModelTest.test_multigpu_data_parallel_forward ______________________________________________________ [gw0] linux -- Python 3.7.5 /home/stas/anaconda3/envs/main/bin/python self = <tests.test_modeling_flaubert.FlaubertModelTest testMethod=test_multigpu_data_parallel_forward> @require_multigpu def test_multigpu_data_parallel_forward(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() # some params shouldn't be scattered by nn.DataParallel # so just remove them if they are present. 
blacklist_non_batched_params = ["head_mask"] for k in blacklist_non_batched_params: inputs_dict.pop(k, None) # move input tensors to cuda:O for k, v in inputs_dict.items(): if torch.is_tensor(v): inputs_dict[k] = v.to(0) for model_class in self.all_model_classes: model = model_class(config=config) model.to(0) model.eval() # Wrap model in nn.DataParallel model = torch.nn.DataParallel(model) with torch.no_grad(): > _ = model(**self._prepare_for_class(inputs_dict, model_class)) tests/test_modeling_common.py:807: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/modules/module.py:550: in __call__ result = self.forward(*input, **kwargs) /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py:156: in forward return self.gather(outputs, self.output_device) /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py:168: in gather return gather(outputs, output_device, dim=self.dim) /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py:68: in gather res = gather_map(outputs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ outputs = [MaskedLMOutput(loss=None, logits=tensor([[[-0.0008, 0.3751, -0.0050, ..., 0.0933, -0.1563, 0.0494], [-0....0, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]]], device='cuda:1'), hidden_states=None, attentions=None)] def gather_map(outputs): out = outputs[0] if isinstance(out, torch.Tensor): return Gather.apply(target_device, dim, *outputs) if out is None: return None if isinstance(out, dict): if not all((len(out) == len(d) for d in outputs)): raise ValueError('All dicts must have the same number of keys') return type(out)(((k, gather_map([d[k] for d in outputs])) for k in out)) > return type(out)(map(gather_map, zip(*outputs))) E TypeError: __init__() missing 1 required positional argument: 'logits' /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py:63: TypeError ==================================================================== short test summary info ===================================================================== FAILED tests/test_modeling_flaubert.py::FlaubertModelTest::test_multigpu_data_parallel_forward - TypeError: __init__() missing 1 required positional argument: ... ================================================================= 1 failed, 4 warnings in 5.54s ============================================================= ```<|||||>Digging deeper it appears that `torch.nn.parallel.scatter_gather.gather` can't gather outputs that are `dataclasses` - it gets a list of outputs that are `dataclasses` and completely breaks them down into just one value. This pytorch hack fixes the problem for the failing tests. Swap the gather function for this one (including import): ``` # torch/nn/parallel/scatter_gather.py import dataclasses def gather(outputs, target_device, dim=0): r""" Gathers tensors from different GPUs on a specified device (-1 means the CPU). 
""" def gather_map(outputs): out = outputs[0] if dataclasses.is_dataclass(out): outputs = [dataclasses.asdict(out) for out in outputs] out = outputs[0] if isinstance(out, torch.Tensor): return Gather.apply(target_device, dim, *outputs) if out is None: return None if isinstance(out, dict): if not all((len(out) == len(d) for d in outputs)): raise ValueError('All dicts must have the same number of keys') return type(out)(((k, gather_map([d[k] for d in outputs])) for k in out)) return type(out)(map(gather_map, zip(*outputs))) # Recursive function calls like this create reference cycles. # Setting the function to None clears the refcycle. try: res = gather_map(outputs) finally: gather_map = None return res ``` It converts the dataclass output into a dict and then it works - at least the tests do, I haven't tried OP's example. What I added is: ``` import dataclasses ``` and ``` if dataclasses.is_dataclass(out): outputs = [dataclasses.asdict(out) for out in outputs] out = outputs[0] ``` I filed a bug report with pytorch: https://github.com/pytorch/pytorch/issues/41327 <|||||>My pytorch tweak fixes the transformers tests, but when trying to use it on OP's use - it fails elsewhere: ``` export TASK_NAME=CoLA export GLUE_DIR=/tmp/glue_data/ python ./examples/text-classification/run_glue.py --model_name_or_path bert-base-uncased --task_name $TASK_NAME --do_train --do_eval --data_dir $GLUE_DIR/$TASK_NAME --max_seq_length 128 --per_device_eval_batch_size=2 --per_device_train_batch_size=2 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/$TASK_NAME/ ``` ``` ... File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 98, in <listcomp> outputs = [dataclasses.asdict(out) for out in outputs] File "/home/stas/anaconda3/envs/main/lib/python3.7/dataclasses.py", line 1045, in asdict return _asdict_inner(obj, dict_factory) File "/home/stas/anaconda3/envs/main/lib/python3.7/dataclasses.py", line 1052, in _asdict_inner value = _asdict_inner(getattr(obj, f.name), dict_factory) File "/home/stas/anaconda3/envs/main/lib/python3.7/dataclasses.py", line 1086, in _asdict_inner return copy.deepcopy(obj) File "/home/stas/anaconda3/envs/main/lib/python3.7/copy.py", line 161, in deepcopy y = copier(memo) File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/tensor.py", line 44, in __deepcopy__ raise RuntimeError("Only Tensors created expl RuntimeError: Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment ``` So that conversion from dataclass from dict didn't work elsewhere. Needs more digging. <|||||>@vanh17, until this is sorted out, you may choose to run on a single gpu which I tested to work. You can accomplish that by adding to your command line: ``` env CUDA_VISIBLE_DEVICES=0 python ./examples/text-classification/run_glue.py ... ``` change 0 to whichever GPU you want it to be run on.<|||||>I think this is related to https://github.com/huggingface/transformers/pull/5685 When used in a `nn.DataParallel` setup a model should be instantiated with `return_tuple=True`. It would be nice to check if there is a way for a model to know that it's being part of a `nn.DataParallel` so it can setup this argument automatically. If someone wants to give it a look.... cc @sgugger <|||||>I can look at this when I'm back next week. In the meantime, merging #5685 will fix the issue.<|||||>> merging #5685 will fix the issue. I verified that the `run_glue.py` on dual gpu work after this merge. 
Is there a CirleCI config that supports dual gpu tests? edit: multigpu tests still fail as before. I forgot to back out the pytorch hack.<|||||>So, if with n_gpu > 1, it works w/o returning outputs wrapped in a model's output dataclass, why do we need to ever return a dataclass and not *always* a tuple regardless of n_gpu's value? same goes for the suggestion by @thomwolf - only with `nn.DataParallel`. https://github.com/huggingface/transformers/pull/5685 just moved the problem elsewhere, since it's not possible to rely on a model to return an output dataclass and the behavior is different depending on the hardware setup.<|||||>Always returning tuples require user to know which output is at which position (and it changes depending on the parameters you pass to the model) so having something self-documenting was a feature users asked for a long time. <|||||>I totally understand that and this is great. But if a user codes for that API relying on outputs being a dataclass, and their code is then run in multi-gpu env it will break. Are we on the same page now? I can see 2 solutions that lead to a consistent API: 1. getting pytorch to support not only dict outputs but also dataclass in `gather` https://github.com/pytorch/pytorch/issues/41327 2. re-encapsulate the tuple into the original output dataclass when it returns from pytorch to transformers and before it is passed back to the user. There will be an additional small overhead. But we don't really have a proxy to insert such manipulation, so probably this is not feasible at the moment. <|||||>I updated my earlier comment - multigpu tests still fail after @sgugger's commit as before - so only part of the problem has been worked around. I forgot to back out the proposed pytorch hack so it looked like it worked, but it is not.<|||||>wrt the change https://github.com/huggingface/transformers/pull/5685, won't this be fitting: ``` # Our model outputs do not work with DataParallel, so forcing return tuple. - if self.args.n_gpu > 1: + if isinstance(model, nn.DataParallel): inputs["return_tuple"] = True ``` as @thomwolf suggested. But perhaps practically they are covering the same cases. I'm digging for where else this is needed to make the tests work. <|||||>OK, to make the common tests work, this is needed: ``` diff --git a/tests/test_modeling_common.py b/tests/test_modeling_common.py index 0021f23c..683b7913 100644 --- a/tests/test_modeling_common.py +++ b/tests/test_modeling_common.py @@ -803,6 +803,7 @@ class ModelTesterMixin: # Wrap model in nn.DataParallel model = torch.nn.DataParallel(model) + inputs_dict["return_tuple"] = True with torch.no_grad(): _ = model(**self._prepare_for_class(inputs_dict, model_class)) ``` yikes. PR for both: https://github.com/huggingface/transformers/pull/5733 Let me know if you prefer a separate PR for each.<|||||>Also why does the `return_tuple` param defaults to `None` and not `False` in most models, whereas in some it's `False`. It probably should be `False` everywhere, no? Same applies to `output_hidden_states` and `output_attentions` `forward` params - sometimes they default to `None` and other times `False`. Probably should be `False` everywhere. <|||||>I think we can find a work-around on this in the meantime by allowing our output data classes to accepts list/tuple as inputs to the first argument and spread these over the other arguments in `__post_init__`. 
I'll try to make a PR on this.<|||||>> I think we can find a work-around on this in the meantime by allowing our output data classes to accepts list/tuple as inputs to the first argument and spread these over the other arguments in `__post_init__`. I'll try to make a PR on this. To me, it is now working with this workaround (fine-tuning LMs). But, shall I get concerned about the reliability of the results?<|||||>> shall I get concerned about the reliability of the results? If you're referring to https://github.com/huggingface/transformers/pull/5685 commit, there is no reason to be concerned. There was no "functional" change per se, this is really sorting out the API - trying to make it consistent.<|||||>I also ran into a similar problem when running the script from `examples/question-answering` using two GPUs from the master branch: ``` python run_squad.py \ --model_type bert \ --model_name_or_path bert-base-uncased \ --do_train \ --do_eval \ --do_lower_case \ --train_file $SQUAD_DIR/train-v1.1.json \ --predict_file $SQUAD_DIR/dev-v1.1.json \ --per_gpu_train_batch_size 12 \ --per_gpu_eval_batch_size=16 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 320 \ --doc_stride 128 \ --output_dir $SQUAD_DIR/bert-base-uncased-squad_v1 ``` The error looks like below: ``` File "run_squad.py", line 821, in <module> main() File "run_squad.py", line 764, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_squad.py", line 202, in train outputs = model(**inputs) File "/home/qqcao/work/transformers/.env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/qqcao/work/transformers/.env/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 156, in forward return self.gather(outputs, self.output_device) File "/home/qqcao/work/transformers/.env/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 168, in gather return gather(outputs, output_device, dim=self.dim) File "/home/qqcao/work/transformers/.env/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather res = gather_map(outputs) File "/home/qqcao/work/transformers/.env/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map return type(out)(map(gather_map, zip(*outputs))) TypeError: __init__() missing 2 required positional arguments: 'start_logits' and 'end_logits' ``` I have to roll back to version 3.0.0. Do you have any ETA when this will get fixed? Thanks.<|||||>@csarron, this should fix it. ``` --- a/examples/question-answering/run_squad.py +++ b/examples/question-answering/run_squad.py @@ -199,6 +199,9 @@ def train(args, train_dataset, model, tokenizer): {"langs": (torch.ones(batch[0].shape, dtype=torch.int64) * args.lang_id).to(args.device)} ) + if isinstance(model, torch.nn.DataParallel): + inputs["return_tuple"] = True + outputs = model(**inputs) # model outputs are always tuple in transformers (see doc) loss = outputs[0] ``` It appears that this will now need to be added **everywhere** before model is invoked, and users will need to do that too should they code their own and intend to use `DataParallel`. Surely, there must be a better way. I suppose that when this neat `dataclass` feature was added it wasn't tested on `nn.DataParallel`. Perhaps best to back it out, figure out for pytorch to support `dataclasses` in scatter/gather and then put it back in with perhaps a monkeypatch for older pytorch versions. 
https://github.com/pytorch/pytorch/issues/41327 p.s. Note that the project's scripts/modules don't consistently `import torch.nn as nn`, so sometimes it's `torch.nn.DataParallel`, whereas other times `nn.DataParallel`.<|||||>Got same problem here.<|||||>@sgugger came up with a transparent solution for this issue: https://github.com/huggingface/transformers/pull/5941<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
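Condensing the workaround discussed above into user code, a hedged sketch: when wrapping a model in `nn.DataParallel`, ask it for plain tuples so that `gather` has nothing to reconstruct (this assumes the `return_tuple` forward argument used in the thread; on later versions the equivalent switch is `return_dict=False`):

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased").to(0)
model = torch.nn.DataParallel(model)

inputs = tokenizer(["a cheap car", "an expensive car"], padding=True, return_tensors="pt")
inputs = {k: v.to(0) for k, v in inputs.items()}

# nn.DataParallel's gather cannot rebuild ModelOutput dataclasses, so force tuple outputs
outputs = model(**inputs, return_tuple=True)
logits = outputs[0]
```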
transformers
5,692
closed
rename the functions to match the rest of the test convention
no functional change
07-12-2020 07:05:37
07-12-2020 07:05:37
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5692?src=pr&el=h1) Report > Merging [#5692](https://codecov.io/gh/huggingface/transformers/pull/5692?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0befb513278f6e42b722be340dbc667e0ba2718e&el=desc) will **decrease** coverage by `0.16%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5692/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5692?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5692 +/- ## ========================================== - Coverage 78.26% 78.09% -0.17% ========================================== Files 146 146 Lines 25998 25998 ========================================== - Hits 20348 20304 -44 - Misses 5650 5694 +44 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5692?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (-2.51%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5692?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5692?src=pr&el=footer). Last update [0befb51...921cabc](https://codecov.io/gh/huggingface/transformers/pull/5692?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,691
closed
Cannot import EvalPrediction from transformers
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Language I am using the model on (English, Chinese ...): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. python3 ./examples/text-classification/run_glue.py --model_name_or_path bert-base-uncased --task_name $TASK_NAME --do_train --do_eval --data_dir $GLUE_DIR/$TASK_NAME --max_seq_length 128 --per_device_eval_batch_size=2 --per_device_train_batch_size=2 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/$TASK_NAME/cd ../.. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> It should run normally but then give error: cannot import module EvalPrediction ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.5.1 - Platform: Linux-4.4.0-165-generic-x86_64-with-debian-stretch-sid - Python version: 3.6.5 - PyTorch version (GPU?): 1.2.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in>
07-12-2020 06:14:06
07-12-2020 06:14:06
Hi @vanh17, your transformers version is old; `EvalPrediction` is not available in 2.5.1. You can install transformers from source and then run the examples.
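For reference, once a recent enough version is installed, `EvalPrediction` is typically consumed through a `compute_metrics` callback along these lines (a sketch; the accuracy metric is only an example):

```python
import numpy as np
from transformers import EvalPrediction

def compute_metrics(p: EvalPrediction) -> dict:
    # p.predictions holds the raw logits, p.label_ids the gold labels
    preds = np.argmax(p.predictions, axis=1)
    return {"accuracy": float((preds == p.label_ids).mean())}

# then passed to the Trainer as: Trainer(..., compute_metrics=compute_metrics)
```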
transformers
5,690
closed
How can I predict missing letters in a sentence, like "I want to b _ _ the car because it is cheap."?
Hi, I am new to NLP and I want to predict missing letters in a sentence. Here is an example: ```text I want to b _ _ the car because it is cheap. ```
07-12-2020 01:22:02
07-12-2020 01:22:02
I am not sure how to predict letters, but you can use BERT to predict words. <|||||>I'd try to train a character-level model. Some of the Reformer models are pretrained in a char-level setting, if I remember correctly: https://huggingface.co/models?search=reformer In the future however, this question is more suited to [discuss.huggingface.co](https://discuss.huggingface.co)
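A minimal sketch of the word-level (not letter-level) prediction mentioned in the first reply, using the fill-mask pipeline with BERT (model choice is illustrative):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT can only propose whole words for the [MASK] slot, not individual letters
for candidate in fill_mask("I want to [MASK] the car because it is cheap."):
    print(candidate["score"], candidate["sequence"])
```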
transformers
5,689
closed
Is Write With Transformer open source?
I want to use Write With Transformer with my own model. My first thought was that I could just download it and modify the source to point to my model, but I can't find WWT anywhere in the repo. Do I have to publish my model and then make a request for you to add it?
07-11-2020 22:36:58
07-11-2020 22:36:58
If you add your model to the hub, you'll get the inference widget & API that you can use for demos or integration into your product: https://huggingface.co/docs<|||||>We don't currently have short-term plans to open source Write With Transformer's frontend. Following up on what @clmnt said, we have an option (currently in private beta) for GPU acceleration of the inference API which would let you build similar (fast!) applications.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,688
closed
doc improvements
a few documentation improvements - one variable name consistency rename, and the rest are small tweaks with one clarification.
07-11-2020 19:21:08
07-11-2020 19:21:08
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5688?src=pr&el=h1) Report > Merging [#5688](https://codecov.io/gh/huggingface/transformers/pull/5688?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **decrease** coverage by `0.24%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5688/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5688?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5688 +/- ## ========================================== - Coverage 77.01% 76.76% -0.25% ========================================== Files 128 146 +18 Lines 21615 25983 +4368 ========================================== + Hits 16646 19945 +3299 - Misses 4969 6038 +1069 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5688?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (+0.11%)` | :arrow_up: | | [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `74.01% <ø> (+5.16%)` | :arrow_up: | | [src/transformers/benchmark/benchmark\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3MucHk=) | `86.04% <ø> (+0.68%)` | :arrow_up: | | [src/transformers/benchmark/benchmark\_args\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdGYucHk=) | `87.50% <ø> (ø)` | | | [src/transformers/benchmark/benchmark\_args\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `89.13% <ø> (-7.75%)` | :arrow_down: | | [src/transformers/benchmark/benchmark\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `61.53% <ø> (ø)` | | | [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.50% <ø> (-3.60%)` | :arrow_down: | | [src/transformers/configuration\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | | | [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.18% <ø> (+0.32%)` | :arrow_up: | | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <ø> (+0.41%)` | :arrow_up: | | ... and [109 more](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5688?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5688?src=pr&el=footer). Last update [dc31a72...2242bb2](https://codecov.io/gh/huggingface/transformers/pull/5688?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,687
closed
Making ONNX conversion directly load the model and tokenizer + adding tests
This is a proposal for an update in the ONNX conversion: - First, I investigated issue #4906 as I was having the same issue. This is due to a dependency on this [commit](https://github.com/pytorch/pytorch/commit/96989a2a114de9b77e7dd9495d62c4a8a549b40d) from the ONNX team available from version 1.5.0 of PyTorch, I therefore added it to the extra requirements and added to the messages to make it more obvious (along with keras2onnx for TF) - The bigger part of this PR aims to remove the dependency of the script for specific pipelines. I believe this dependency is not needed as a conversion to ONNX simply requires a model, and a tokenizer. There are a few advantages to doing it this way: 1. This would solve #4788, and help greatly towards #5075. With this update, the conversion to ONNX no longer requires a given pipeline, and therefore any model can be converted (I have tested for instance with T5) 2. It is maybe clearer since the elements of the pipeline are not exported onto ONNX. Let me know your thoughts on that one. - I added some fast-running integration testing of the script to the existing tests. - I made the onnx export compatible with the ModelOutput refactor (I believe previously it wasn't?) A question I have is whether I should add a message or add the possibility for the user to provide a pipeline even if it is not used in order to make it back-compatible?
07-11-2020 17:16:42
07-11-2020 17:16:42
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5687?src=pr&el=h1) Report > Merging [#5687](https://codecov.io/gh/huggingface/transformers/pull/5687?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **increase** coverage by `0.89%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5687/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5687?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5687 +/- ## ========================================== + Coverage 77.01% 77.90% +0.89% ========================================== Files 128 146 +18 Lines 21615 25983 +4368 ========================================== + Hits 16646 20243 +3597 - Misses 4969 5740 +771 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5687?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (+0.11%)` | :arrow_up: | | [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `74.01% <ø> (+5.16%)` | :arrow_up: | | [src/transformers/benchmark/benchmark\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3MucHk=) | `86.04% <ø> (+0.68%)` | :arrow_up: | | [src/transformers/benchmark/benchmark\_args\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdGYucHk=) | `87.50% <ø> (ø)` | | | [src/transformers/benchmark/benchmark\_args\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `89.13% <ø> (-7.75%)` | :arrow_down: | | [src/transformers/benchmark/benchmark\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `61.53% <ø> (ø)` | | | [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.50% <ø> (-3.60%)` | :arrow_down: | | [src/transformers/configuration\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | | | [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `100.00% <ø> (+7.14%)` | :arrow_up: | | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <ø> (+0.41%)` | :arrow_up: | | ... and [108 more](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5687?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5687?src=pr&el=footer). Last update [dc31a72...977fc15](https://codecov.io/gh/huggingface/transformers/pull/5687?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I added a bit to this. The wrapper model is now recursive, which lets models such as T5 export properly. Additionally, thanks to the ModelOutput refactor, now output names can be automatically extracted!<|||||>Any news on this @mfuntowicz ? I'm waiting to have this merged to attempt to make a T5 -> ONNX -> ONNX.js script.<|||||>I'm adding bunch of people to the PR as I would like to get their feeling on these changes 😊 <|||||>Thanks for the review @mfuntowicz and I definitely understand the need to get a consensus on this. To sum up why I think it might be a good idea to remove the dependency on pipelines is because at the moment the pipeline is only used as a way to load a model and a tokenizer. I came into this issue while trying to export T5 (and it seems that a few people also had that issue in the thread for #5518). Doing it directly without passing by the pipelines is closer in my opinion to what the script actually does (grabbing a model and a tokenizer), makes every ONNX-compatible model exportable by default, and avoids confusions such as that choosing a different pipeline for the same model would change the output ONNX model. As for the other elements, thanks a lot for noting them, I completely agree with the points, and didn't know about the return_tuples parameter, that should save a lot of weirdness in the code! Although I guess this might lose the named outputs part, but I'm sure there's a cleaner way to do it.<|||||>If I may add some motivation to such PR, some tasks like MultiChoice questions are not managed by pipelines. Therefore we had to perform the conversion ourselves, and it appeared that depending of the model, there are some bugs in pytorch to onnx method (from pytorch lib) in the order input have to be provided (https://github.com/microsoft/onnxruntime/issues/4292, I know you have fixed a similar issue in this repo but for pipeline tasks only). Now, we are using some other model that have not such bug, we need to manage both cases, etc. Being able to rely on the lib code (well tested / documented) instead of having to maintain our own code would be a big improvement and may directly increase the number of teams putting Transformer based models in prod (onnx runtime providing big perf boost).<|||||>Hey @mfuntowicz ! Just wanted to check in to see whether I should adjust the PR with your comments or whether the team prefers to keep the pipelines as is.<|||||>Hi @abelriboulot, do you think your PR can solve https://github.com/huggingface/transformers/issues/6503 as well?<|||||>Hi @Zhen-hao, I took a look at your issue and I think it might. I haven't heard back from huggingface on this PR, so I think I might make a separate package to easily convert huggingface models to ONNX. If you're interested I'll keep you in the loop!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>In case anyone is looking for it, the package is [there](https://github.com/abelriboulot/onnxt5)! 
Hope it helps.<|||||>Is there any way to convert Helsinki-NLP/opus-mt-en-ROMANCE to onnx format ?<|||||>@patil-suraj was working on a library that might help with that, he might know better!
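To illustrate the idea behind the PR, loading a model plus tokenizer and exporting them without going through a pipeline, here is a hedged sketch built directly on `torch.onnx.export` (this is not the PR's implementation; the opset, axis names, and the `torchscript=True` trick for forcing tuple outputs are illustrative choices):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# torchscript=True makes the model return plain tuples, which the tracer handles cleanly
model = AutoModel.from_pretrained("bert-base-uncased", torchscript=True)
model.eval()

dummy = tokenizer("sample input for tracing", return_tensors="pt")

torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state", "pooler_output"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "last_hidden_state": {0: "batch", 1: "sequence"},
    },
    opset_version=11,
)
```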
transformers
5,686
closed
[Fix] github actions CI by reverting #5138
On GitHub Actions, even when the tests fail, the job is green, presumably because of my artifacts change (#5318). I am not clear on why this happens, but I have verified that it does not happen on CircleCI. In the screenshot below, there is a green check mark, but two tests have also failed: ![image](https://user-images.githubusercontent.com/6045025/87226810-6103b400-c364-11ea-875a-dcbd7c1e49ca.png)
07-11-2020 14:53:03
07-11-2020 14:53:03
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5686?src=pr&el=h1) Report > Merging [#5686](https://codecov.io/gh/huggingface/transformers/pull/5686?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **increase** coverage by `1.10%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5686/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5686?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5686 +/- ## ========================================== + Coverage 77.01% 78.11% +1.10% ========================================== Files 128 146 +18 Lines 21615 25983 +4368 ========================================== + Hits 16646 20297 +3651 - Misses 4969 5686 +717 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5686?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (+0.11%)` | :arrow_up: | | [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `74.01% <ø> (+5.16%)` | :arrow_up: | | [src/transformers/benchmark/benchmark\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3MucHk=) | `86.04% <ø> (+0.68%)` | :arrow_up: | | [src/transformers/benchmark/benchmark\_args\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdGYucHk=) | `87.50% <ø> (ø)` | | | [src/transformers/benchmark/benchmark\_args\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `89.13% <ø> (-7.75%)` | :arrow_down: | | [src/transformers/benchmark/benchmark\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `61.53% <ø> (ø)` | | | [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.50% <ø> (-3.60%)` | :arrow_down: | | [src/transformers/configuration\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | | | [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.18% <ø> (+0.32%)` | :arrow_up: | | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <ø> (+0.41%)` | :arrow_up: | | ... and [107 more](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5686?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5686?src=pr&el=footer). Last update [dc31a72...7c84917](https://codecov.io/gh/huggingface/transformers/pull/5686?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Did you try to use `2>&1 | tee output.txt` instead of just `| tee output.txt` ? Don't know at all if this would work instead, but might be worth trying giving the first answer in this stack overflow: https://stackoverflow.com/questions/418896/how-to-redirect-output-to-a-file-and-stdout<|||||>@patrickvonplaten I think that will just make the logfile include stderr, and the job might still succeed for the same reason it is succeeding now. Do you know how to test locally? <|||||>Gunna try this for the 7pm run tonight and then we can be more aggressive later.
transformers
5,685
closed
Fix Trainer in DataParallel setting
FYI, the new output types seem to break data parallel; see the comment on #5671. This is because of the line ``` return type(out)(map(gather_map, zip(*outputs))) ``` in `scatter_gather`, which tries to reconstruct an output of the same type as ours (and fails since it does not provide the necessary arguments). There is no way to fix our `ModelOutput` to work with this AFAICT. However, we have the `return_tuple` argument to fix the issue :-)
07-11-2020 12:27:33
07-11-2020 12:27:33
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5685?src=pr&el=h1) Report > Merging [#5685](https://codecov.io/gh/huggingface/transformers/pull/5685?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7fad617dc1fc681a7f5da5e0172c8b83f4bf0024&el=desc) will **decrease** coverage by `0.20%`. > The diff coverage is `25.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5685/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5685?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5685 +/- ## ========================================== - Coverage 78.11% 77.91% -0.21% ========================================== Files 146 146 Lines 25983 25987 +4 ========================================== - Hits 20297 20247 -50 - Misses 5686 5740 +54 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5685?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <25.00%> (-0.12%)` | :arrow_down: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `44.56% <0.00%> (-46.35%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `63.55% <0.00%> (-31.78%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.94% <0.00%> (-6.02%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.92% <0.00%> (-1.97%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5685?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5685?src=pr&el=footer). Last update [7fad617...05ec8f6](https://codecov.io/gh/huggingface/transformers/pull/5685?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
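To make the failure described above concrete, a minimal, self-contained reproduction of why `type(out)(map(gather_map, zip(*outputs)))` cannot rebuild a dataclass output (the class below is a stand-in, not the real `ModelOutput`):

```python
from dataclasses import dataclass

@dataclass
class FakeOutput:
    loss: float
    logits: float

out = FakeOutput(loss=0.5, logits=1.0)

# dict outputs survive this reconstruction pattern...
d = {"loss": 0.5, "logits": 1.0}
rebuilt = type(d)((k, v) for k, v in d.items())  # fine

# ...but a dataclass receives the whole iterator as its single positional
# argument, leaving 'logits' unfilled - the error reported in #5693:
try:
    type(out)(map(lambda pair: pair, zip([out.loss], [out.logits])))
except TypeError as err:
    print(err)  # __init__() missing 1 required positional argument: 'logits'
```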
transformers
5,684
closed
fix incorrect docstring on bart summarization example
Change summarization example for BART from ``` inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt') ``` to ``` inputs = tokenizer.encode_plus(ARTICLE_TO_SUMMARIZE, max_length=1024, return_tensors='pt') ```
07-11-2020 12:07:13
07-11-2020 12:07:13
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5684?src=pr&el=h1) Report > Merging [#5684](https://codecov.io/gh/huggingface/transformers/pull/5684?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7fad617dc1fc681a7f5da5e0172c8b83f4bf0024&el=desc) will **decrease** coverage by `0.15%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5684/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5684?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5684 +/- ## ========================================== - Coverage 78.11% 77.96% -0.16% ========================================== Files 146 146 Lines 25983 25983 ========================================== - Hits 20297 20258 -39 - Misses 5686 5725 +39 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5684?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5684/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.80% <ø> (ø)` | | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5684/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `44.56% <0.00%> (-46.35%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5684/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `63.55% <0.00%> (-31.78%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5684/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `82.95% <0.00%> (-3.01%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5684/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.92% <0.00%> (-1.97%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5684/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5684/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5684/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5684?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5684?src=pr&el=footer). Last update [7fad617...05ce6f7](https://codecov.io/gh/huggingface/transformers/pull/5684?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Ah, I am sorry, it's my mistake, I mixed up in my environment with the previous release version 2.9.0 which doesn't have `__call__` function implemented on the base tokenizer class.
transformers
5,683
closed
Add Microsoft's CodeBERT
@guoday
07-11-2020 12:04:18
07-11-2020 12:04:18
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5683?src=pr&el=h1) Report > Merging [#5683](https://codecov.io/gh/huggingface/transformers/pull/5683?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7fad617dc1fc681a7f5da5e0172c8b83f4bf0024&el=desc) will **decrease** coverage by `0.75%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5683/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5683?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5683 +/- ## ========================================== - Coverage 78.11% 77.36% -0.76% ========================================== Files 146 146 Lines 25983 25983 ========================================== - Hits 20297 20102 -195 - Misses 5686 5881 +195 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5683?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5683/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5683/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `63.55% <0.00%> (-31.78%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5683/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.71% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5683/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5683/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5683/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5683?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5683?src=pr&el=footer). Last update [7fad617...cbd3d7a](https://codecov.io/gh/huggingface/transformers/pull/5683?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>> @guoday Hi, @JetRunner . Thanks a lot. It look great.
transformers
5,682
closed
What is the decoder_input for encoder-decoder transformer in training time?
https://datascience.stackexchange.com/questions/76261/whats-the-input-dimension-for-transformer-decoder-during-training Is the link's answer right? Thank you very much!
07-11-2020 10:48:07
07-11-2020 10:48:07
The spirit of the answer is right. There is a lot more detail in this [blogpost](https://sshleifer.github.io/blog_v2/jupyter/2020/03/12/bart.html)<|||||>In the future, this would be a great Q for our forums: https://discuss.huggingface.co/ since it doesn't directly involve issues with the library. <|||||>Thank you again.
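To make that concrete, a minimal toy sketch of what the decoder receives during training (teacher forcing); the start token id and the exact shift-right convention are illustrative and vary by model:

```python
import torch

# Toy illustration, not tied to a specific model: with teacher forcing the
# decoder input is the target sequence shifted one position to the right,
# starting from a decoder start token, while the unshifted targets are the labels.
decoder_start_token_id = 2                      # illustrative value, model-specific
labels = torch.tensor([[31, 47, 99, 1]])        # target token ids, ending in EOS
decoder_input_ids = torch.cat(
    [torch.full((1, 1), decoder_start_token_id, dtype=labels.dtype), labels[:, :-1]], dim=-1
)
print(decoder_input_ids)                        # tensor([[ 2, 31, 47, 99]])
# At step t the decoder attends to decoder_input_ids[:, :t + 1] and is trained
# to predict labels[:, t].
```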
transformers
5,681
closed
[pipelines] Update fill mask pipeline to remove special tokens in the output
Small fix to remove the special tokens from the output of the fill mask pipeline.
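For illustration only (not necessarily how this PR implements it), one way to drop the special tokens when decoding the candidate sequences is to pass `skip_special_tokens=True`:

```python
# Illustrative sketch of the intended behaviour, not the PR's actual code.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
ids = tokenizer.encode("I found a bug in Firefox")        # includes <s> ... </s>
print(tokenizer.decode(ids))                               # keeps the special tokens
print(tokenizer.decode(ids, skip_special_tokens=True))     # drops <s> and </s>
```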
07-11-2020 10:08:26
07-11-2020 10:08:26
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5681?src=pr&el=h1) Report > Merging [#5681](https://codecov.io/gh/huggingface/transformers/pull/5681?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7fad617dc1fc681a7f5da5e0172c8b83f4bf0024&el=desc) will **decrease** coverage by `0.20%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5681/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5681?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5681 +/- ## ========================================== - Coverage 78.11% 77.91% -0.21% ========================================== Files 146 146 Lines 25983 25983 ========================================== - Hits 20297 20244 -53 - Misses 5686 5739 +53 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5681?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5681/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.36% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5681/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `44.56% <0.00%> (-46.35%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5681/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `63.55% <0.00%> (-31.78%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5681/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.44% <0.00%> (-6.52%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5681/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.92% <0.00%> (-1.97%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5681/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5681/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5681/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5681?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5681?src=pr&el=footer). Last update [7fad617...c2fbb71](https://codecov.io/gh/huggingface/transformers/pull/5681?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). 
<|||||>Hi Thomas, I found the output is something like this with the latest version of `transformers`: ```json [ { "sequence": "<s>if (x is not None) and(x>1)</s>", "score": 0.7236990928649902, "token": 8, "token_str": "Ġand" }, { "sequence": "<s>if (x is not None) &(x>1)</s>", "score": 0.10633797943592072, "token": 359, "token_str": "Ġ&" }, { "sequence": "<s>if (x is not None)and(x>1)</s>", "score": 0.021604137495160103, "token": 463, "token_str": "and" }, { "sequence": "<s>if (x is not None) AND(x>1)</s>", "score": 0.02122747339308262, "token": 4248, "token_str": "ĠAND" }, { "sequence": "<s>if (x is not None) if(x>1)</s>", "score": 0.016991324722766876, "token": 114, "token_str": "Ġif" } ] ``` However, when using `2.9.1`, I get: ```python {'sequence': '<s> if (x is not None) and (x>1)</s>', 'score': 0.6049249172210693, 'token': 8} {'sequence': '<s> if (x is not None) or (x>1)</s>', 'score': 0.30680200457572937, 'token': 50} {'sequence': '<s> if (x is not None) if (x>1)</s>', 'score': 0.02133703976869583, 'token': 114} {'sequence': '<s> if (x is not None) then (x>1)</s>', 'score': 0.018607674166560173, 'token': 172} {'sequence': '<s> if (x is not None) AND (x>1)</s>', 'score': 0.007619690150022507, 'token': 4248} ``` The output sequence of `2.9.1` is way much cleaner. Can this PR fix this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,680
closed
How to produce customized attention mask for BertModel?
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> The attention mask for BertModel needs a tensor sized (batch, seq-length). But what if I need to customize the attention for each token, just like UniLM, or some diagonal attention like GPT?
07-11-2020 09:05:48
07-11-2020 09:05:48
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Is there any help on this?<|||||>@null-id I also want to know how to customize the query-key attention mask. Did you solve the problem?
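For anyone else hitting this, a hedged sketch of one possible route (not an official answer, and worth verifying against the version you have installed): BertModel's mask-extension logic also accepts a 3D attention mask of shape (batch, from_seq_len, to_seq_len), which lets you control, per token, which positions it may attend to.

```python
import torch
from transformers import BertModel, BertTokenizer

# Hedged sketch: build a custom per-token attention pattern as a 3D mask and
# pass it in place of the usual (batch, seq_len) padding mask. Verify that your
# installed version accepts 3D masks before relying on this.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("hello world, this is a test", return_tensors="pt")
seq_len = inputs["input_ids"].shape[1]

# Causal (lower-triangular) pattern: token i may only attend to positions <= i.
causal_mask = torch.tril(torch.ones(1, seq_len, seq_len))

outputs = model(input_ids=inputs["input_ids"], attention_mask=causal_mask)
```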
transformers
5,679
closed
Pipeline model type check
Add model type check for pipelines. https://github.com/huggingface/transformers/issues/5678
07-11-2020 08:27:33
07-11-2020 08:27:33
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5679?src=pr&el=h1) Report > Merging [#5679](https://codecov.io/gh/huggingface/transformers/pull/5679?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7fad617dc1fc681a7f5da5e0172c8b83f4bf0024&el=desc) will **decrease** coverage by `1.44%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5679/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5679?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5679 +/- ## ========================================== - Coverage 78.11% 76.67% -1.45% ========================================== Files 146 146 Lines 25983 25998 +15 ========================================== - Hits 20297 19934 -363 - Misses 5686 6064 +378 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5679?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.96% <100.00%> (+0.59%)` | :arrow_up: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `19.81% <0.00%> (-79.28%)` | :arrow_down: | | [...rc/transformers/data/datasets/language\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `34.69% <0.00%> (-57.15%)` | :arrow_down: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `44.56% <0.00%> (-46.35%)` | :arrow_down: | | [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `50.74% <0.00%> (-35.83%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `63.55% <0.00%> (-31.78%)` | :arrow_down: | | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `60.00% <0.00%> (-25.72%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `16.66% <0.00%> (-21.30%)` | :arrow_down: | | [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `32.00% <0.00%> (-17.10%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `74.43% <0.00%> (-11.53%)` | :arrow_down: | | ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5679?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5679?src=pr&el=footer). Last update [7fad617...81471f4](https://codecov.io/gh/huggingface/transformers/pull/5679?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This is a reasonable change, however next time on Pipelines please wait for review from @mfuntowicz, @LysandreJik or I before merging (especially as it can impact the inference API)<|||||>> This is a reasonable change, however next time on Pipelines please wait for review from @mfuntowicz, @LysandreJik or I before merging > > > > (especially as it can impact the inference API) Ok, I wasn't aware of that. Is there some written guideline about these requirements? I'm a little confused from time to time.
transformers
5,678
closed
Weird output when using unexpected model type for pipelines
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): CodeBERT Language I am using the model on (English, Chinese ...): Code The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: This is the right code and right outputs: ```python from transformers import RobertaConfig, RobertaTokenizer, RobertaForMaskedLM, pipeline model = RobertaForMaskedLM.from_pretrained('microsoft/codebert-base-mlm') tokenizer = RobertaTokenizer.from_pretrained('microsoft/codebert-base-mlm') CODE = "if (x is not None) <mask> (x>1)" fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer) outputs = fill_mask(CODE) print(outputs) ``` Output: ```python [{'sequence': '<s>if (x is not None) and(x>1)</s>', 'score': 0.7236990928649902, 'token': 8, 'token_str': 'Ġand'}, {'sequence': '<s>if (x is not None) &(x>1)</s>', 'score': 0.10633797943592072, 'token': 359, 'token_str': 'Ġ&'}, {'sequence': '<s>if (x is not None)and(x>1)</s>', 'score': 0.021604137495160103, 'token': 463, 'token_str': 'and'}, {'sequence': '<s>if (x is not None) AND(x>1)</s>', 'score': 0.02122747339308262, 'token': 4248, 'token_str': 'ĠAND'}, {'sequence': '<s>if (x is not None) if(x>1)</s>', 'score': 0.016991324722766876, 'token': 114, 'token_str': 'Ġif'}] ``` But if we load the model with `RobertaModel` and proceed with the same pipeline: ```python from transformers import RobertaConfig, RobertaTokenizer, RobertaModel, pipeline model = RobertaModel.from_pretrained('microsoft/codebert-base-mlm') tokenizer = RobertaTokenizer.from_pretrained('microsoft/codebert-base-mlm') CODE = "if (x is not None) <mask> (x>1)" fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer) outputs = fill_mask(CODE) print(outputs) ``` Then the output makes no sense at all: ```python [{'sequence': '<s>if (x is not None) real(x>1)</s>', 'score': 0.9961338043212891, 'token': 588, 'token_str': 'Ġreal'}, {'sequence': '<s>if (x is not None)n(x>1)</s>', 'score': 1.70519979292294e-05, 'token': 282, 'token_str': 'n'}, {'sequence': '<s>if (x is not None) security(x>1)</s>', 'score': 1.5919968063826673e-05, 'token': 573, 'token_str': 'Ġsecurity'}, {'sequence': '<s>if (x is not None) Saturday(x>1)</s>', 'score': 1.5472969607799314e-05, 'token': 378, 'token_str': 'ĠSaturday'}, {'sequence': '<s>if (x is not None) here(x>1)</s>', 'score': 1.543204598419834e-05, 'token': 259, 'token_str': 'Ġhere'}] ``` - `transformers` version: 3.0.1 - Platform: Colab - Python version: Doesn't matter - PyTorch version (GPU?): Doesn't matter - Tensorflow version (GPU?): Doesn't matter - Using GPU in script?: Doesn't matter - Using distributed or parallel set-up in script?: Doesn't matter
07-11-2020 06:52:41
07-11-2020 06:52:41
I'm working on a fix now.<|||||>This bug occurs irrespective `transformer` version I checked it for 2.8.0, 2.90 and 3.0.1 Pipeline returns incorrect output only when the model and tokenizer classes are used to initialize the pipeline. If you use model and tokernizer parameters as path instead in form of string. The output is fine. Following snippet demonstrates this : ``` from transformers import RobertaModel, RobertaTokenizer, RobertaConfig from transformers import pipeline MODEL_PATH = 'roberta-base' model = RobertaModel.from_pretrained(MODEL_PATH) tokenizer = RobertaTokenizer.from_pretrained(MODEL_PATH) fill_from_path = pipeline( 'fill-mask', model=MODEL_PATH, tokenizer=MODEL_PATH ) fill_from_model = pipeline( 'fill-mask', model=model, tokenizer=tokenizer ) seq = 'I found a bug in <mask>' print(fill_from_path(seq)) print(fill_from_model(seq)) ``` The output is the following. You can see the first output is fine where we used the model paths, but the second output where we provided the model and tokenizer classes has a problem. ``` [{'sequence': '<s> I found a bug in Firefox</s>', 'score': 0.051126863807439804, 'token': 30675}, {'sequence': '<s> I found a bug in Gmail</s>', 'score': 0.027283240109682083, 'token': 29004}, {'sequence': '<s> I found a bug in Photoshop</s>', 'score': 0.024683473631739616, 'token': 35197}, {'sequence': '<s> I found a bug in Java</s>', 'score': 0.021543316543102264, 'token': 24549}, {'sequence': '<s> I found a bug in Windows</s>', 'score': 0.018485287204384804, 'token': 6039}] [{'sequence': '<s> I found a bug in real</s>', 'score': 0.9705745577812195, 'token': 588}, {'sequence': '<s> I found a bug in here</s>', 'score': 0.00013350950030144304, 'token': 259}, {'sequence': '<s> I found a bug in within</s>', 'score': 6.807789031881839e-05, 'token': 624}, {'sequence': '<s> I found a bug in San</s>', 'score': 6.468965875683352e-05, 'token': 764}, {'sequence': '<s> I found a bug in 2015</s>', 'score': 6.282260437728837e-05, 'token': 570}] ```<|||||>@ashutosh-dwivedi-e3502 Try changing this line `model = RobertaModel.from_pretrained(MODEL_PATH)` into `model = AutoModelForMaskedLM.from_pretrained(MODEL_PATH)`<|||||>@JuhaKiili That fixes it. Output with `model = AutoModelForMaskedLM.from_pretrained(MODEL_PATH)` is : ``` Some weights of RobertaForMaskedLM were not initialized from the model checkpoint at roberta-base and are newly initialized: ['lm_head.decoder.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. /Users/asdwivedi/.virtualenvs/test-demo-TklxO9OB/lib/python3.8/site-packages/transformers/modeling_auto.py:796: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models. warnings.warn( Some weights of RobertaForMaskedLM were not initialized from the model checkpoint at roberta-base and are newly initialized: ['lm_head.decoder.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
[{'sequence': '<s>I found a bug in Firefox</s>', 'score': 0.05709619075059891, 'token': 30675, 'token_str': 'ĠFirefox'}, {'sequence': '<s>I found a bug in Gmail</s>', 'score': 0.03430333733558655, 'token': 29004, 'token_str': 'ĠGmail'}, {'sequence': '<s>I found a bug in WordPress</s>', 'score': 0.028388172388076782, 'token': 33398, 'token_str': 'ĠWordPress'}, {'sequence': '<s>I found a bug in Java</s>', 'score': 0.02571324072778225, 'token': 24549, 'token_str': 'ĠJava'}, {'sequence': '<s>I found a bug in Python</s>', 'score': 0.01953786611557007, 'token': 31886, 'token_str': 'ĠPython'}] [{'sequence': '<s>I found a bug in Firefox</s>', 'score': 0.05709619075059891, 'token': 30675, 'token_str': 'ĠFirefox'}, {'sequence': '<s>I found a bug in Gmail</s>', 'score': 0.03430333733558655, 'token': 29004, 'token_str': 'ĠGmail'}, {'sequence': '<s>I found a bug in WordPress</s>', 'score': 0.028388172388076782, 'token': 33398, 'token_str': 'ĠWordPress'}, {'sequence': '<s>I found a bug in Java</s>', 'score': 0.02571324072778225, 'token': 24549, 'token_str': 'ĠJava'}, {'sequence': '<s>I found a bug in Python</s>', 'score': 0.01953786611557007, 'token': 31886, 'token_str': 'ĠPython'}] ```
transformers
5,677
closed
[WIP] Added indexes in grouped entity NER
Based on [issue #5676](https://github.com/huggingface/transformers/issues/5676). Any application that requires users to locate grouped named entities needs some sort of index. This feature is present in the standard NER pipeline and should exist in the grouped-entity NER pipeline as well. This is a very short addition to the pipeline and a relevant use case for many developers.
07-11-2020 04:35:20
07-11-2020 04:35:20
I think this would be a nice addition as well. Most of the tests are failing because they were not adapted to your addition. Do you mind adapting them? PS: would you mind changing `indexes` to `indices`? That's what we try to use in the repository for the plural of index :)<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5677?src=pr&el=h1) Report > Merging [#5677](https://codecov.io/gh/huggingface/transformers/pull/5677?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7fad617dc1fc681a7f5da5e0172c8b83f4bf0024&el=desc) will **increase** coverage by `0.39%`. > The diff coverage is `80.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5677/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5677?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5677 +/- ## ========================================== + Coverage 78.11% 78.51% +0.39% ========================================== Files 146 146 Lines 25983 26326 +343 ========================================== + Hits 20297 20669 +372 + Misses 5686 5657 -29 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5677?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/commands/train.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy90cmFpbi5weQ==) | `0.00% <ø> (ø)` | | | [src/transformers/benchmark/benchmark\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `65.03% <50.00%> (+3.49%)` | :arrow_up: | | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <100.00%> (ø)` | | | [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `81.88% <100.00%> (+7.87%)` | :arrow_up: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: | | [src/transformers/testing\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `72.72% <0.00%> (-3.75%)` | :arrow_down: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `97.41% <0.00%> (-1.69%)` | :arrow_down: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `89.21% <0.00%> (-0.51%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.13% <0.00%> (-0.22%)` | :arrow_down: | | ... 
and [35 more](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5677?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5677?src=pr&el=footer). Last update [7fad617...516926a](https://codecov.io/gh/huggingface/transformers/pull/5677?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Made changes suggested by @LysandreJik, then rebased. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,676
closed
Add indexes to grouped entity NER pipeline
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> **There should be indexes in the output of the grouped entity NER pipeline** The standard NER pipeline from transformers outputs entities that contain the word, score, entity type, and index. The following snippet demonstrates the normal behavior of the NER pipeline with the default `grouped_entities=False` option. ```python from transformers import pipeline nlp_without_grouping = pipeline("ner") sequence = "Hugging Face Inc. is a company based in New York City." print(nlp_without_grouping(sequence)) [ {'word': 'Hu', 'score': 0.9992662668228149, 'entity': 'I-ORG', 'index': 1}, {'word': '##gging', 'score': 0.9808881878852844, 'entity': 'I-ORG', 'index': 2}, {'word': 'Face', 'score': 0.9953625202178955, 'entity': 'I-ORG', 'index': 3}, {'word': 'Inc', 'score': 0.9993382096290588, 'entity': 'I-ORG', 'index': 4}, {'word': 'New', 'score': 0.9990268349647522, 'entity': 'I-LOC', 'index': 11}, {'word': 'York', 'score': 0.9988483190536499, 'entity': 'I-LOC', 'index': 12}, {'word': 'City', 'score': 0.9991773366928101, 'entity': 'I-LOC', 'index': 13} ] ``` However, the NER pipeline with `grouped_entities=True` outputs only word, score, and entity type. Here's the code snippet and output. There's also the problem of 'New York City' being duplicated, but I will address that in a new issue. ```python from transformers import pipeline nlp_with_grouping = pipeline("ner", grouped_entities=True) sequence = "Hugging Face Inc. is a company based in New York City." print(nlp_with_grouping(sequence)) [ {'entity_group': 'I-ORG', 'score': 0.9937137961387634, 'word': 'Hugging Face Inc'}, {'entity_group': 'I-LOC', 'score': 0.9990174969037374, 'word': 'New York City'}, {'entity_group': 'I-LOC', 'score': 0.9990174969037374, 'word': 'New York City'} ] ``` I believe that the grouped entities returned should also include the tokens of the entities. Sample output would look as such ```python [ {'entity_group': 'I-ORG', 'score': 0.9930560886859894, 'word': 'Hugging Face Inc', 'indexes': [1, 2, 3, 4]}, {'entity_group': 'I-LOC', 'score': 0.998809814453125, 'word': 'New York City', 'indexes': [11, 12, 13]}, {'entity_group': 'I-LOC', 'score': 0.998809814453125, 'word': 'New York City', 'indexes': [11, 12, 13]} ] ``` ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> **Any application that requires users to locate grouped named entities would require some sort of index.** This feature is present in the standard NER pipeline and should also exist in the grouped entity NER pipeline as well. In my case, I am trying to append the type to the text right after the named entity ("Apple" would become "Apple \<I-ORG\>") so I need to be able to locate the named entity within my phrase. ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? 
Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md --> I have been able to fix this by adding two lines to `group_sub_entities` function https://github.com/huggingface/transformers/blob/7fad617dc1fc681a7f5da5e0172c8b83f4bf0024/src/transformers/pipelines.py#L1042 ```python def group_sub_entities(self, entities: List[dict]) -> dict: """ Returns grouped sub entities """ # Get the first entity in the entity group entity = entities[0]["entity"] scores = np.mean([entity["score"] for entity in entities]) tokens = [entity["word"] for entity in entities] indexes = [entity["index"] for entity in entities] # my added line entity_group = { "entity_group": entity, "score": np.mean(scores), "word": self.tokenizer.convert_tokens_to_string(tokens), "indexes": indexes # my added line } return entity_group ```
07-11-2020 03:51:52
07-11-2020 03:51:52
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I am facing the same issue. Has this been fixed?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Fixed by https://github.com/huggingface/transformers/pull/8781
transformers
5,675
closed
Deepset model not loading using default code
# 🐛 Bug ## Information Model I am using (Bert): Bert Language I am using the model on (English): English The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run this script below ``` from transformers import AutoTokenizer, AutoModelForQuestionAnswering tokenizer = AutoTokenizer.from_pretrained("deepset/bert-large-uncased-whole-word-masking-squad2") model = AutoModelForQuestionAnswering.from_pretrained("deepset/bert-large-uncased-whole-word-masking-squad2") ``` 2. Notice this error stack: ``` --------------------------------------------------------------------------- OSError Traceback (most recent call last) <ipython-input-29-2a5e47891fb0> in <module> ----> 1 tokenizer = AutoTokenizer.from_pretrained("deepset/bert-large-uncased-whole-word-masking-squad2") 2 model = AutoModelForQuestionAnswering.from_pretrained("deepset/bert-large-uncased-whole-word-masking-squad2") 3 4 reg_tokenizer = RegexpTokenizer(r'\w+') ~/anaconda3/lib/python3.7/site-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 107 return RobertaTokenizer.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) 108 elif 'bert' in pretrained_model_name_or_path: --> 109 return BertTokenizer.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) 110 elif 'openai-gpt' in pretrained_model_name_or_path: 111 return OpenAIGPTTokenizer.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) ~/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py in from_pretrained(cls, *inputs, **kwargs) 280 281 """ --> 282 return cls._from_pretrained(*inputs, **kwargs) 283 284 ~/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 344 pretrained_model_name_or_path, ', '.join(s3_models), 345 pretrained_model_name_or_path, --> 346 list(cls.vocab_files_names.values()))) 347 348 # Get files from url, cache, or disk depending on the case OSError: Model name 'deepset/bert-large-uncased-whole-word-masking-squad2' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). We assumed 'deepset/bert-large-uncased-whole-word-masking-squad2' was a path or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url. ``` ## Expected behavior I switched to a conda environment and reinstalled transformers package with conda. Before, having used just pip install, this segment of code was working. Now, it no longer even finds the relevant model. The expected behavior is to load this specific model. 
## Environment info - `transformers` version: 2.11.0 - Platform: Linux-3.10.0-1062.12.1.el7.x86_64-x86_64-with-redhat-7.8-Maipo - Python version: 3.6.8 - PyTorch version (GPU?): 1.5.1 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: n/a - Using distributed or parallel set-up in script?: no
07-10-2020 23:57:23
07-10-2020 23:57:23
Looks like the inference on the model hub works if it helps: https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2?text=Where+do+I+live%3F&context=My+name+is+Wolfgang+and+I+live+in+Berlin
transformers
5,674
closed
Can't get BART to generate EOS token.
# 🐛 Bug I finetuned Bart on a few seq2seq tasks. It seems to learn the right thing, but it never seems to stop generating text unless I set `max_length`, i.e. it never generates the EOS token on its own. This seems to be the case for the pretrained model as well: if I run the example [here](https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration), it always produces very long summaries even if the input text is quite small. To make the bug easy to reproduce, I set up a toy task below where I am finetuning Bart to just repeat the input. ## Information Model I am using (Bert, XLNet ...): Bart Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Create a dataset for seq2seq training where the target is a copy of the source, e.g. below I create a dataset of sentences with `4,000` examples in the training set. ```python import nlp d = nlp.load_dataset('snli') d = list(set(d['train']['premise'][:20000])) import os folder = '/tmp/bartz2' if os.path.exists(folder): os.system('rm -rf %s' % folder) os.mkdir(folder) N = 4000 f = open(os.path.join(folder, 'train.source'), 'w') f.write('\n'.join(d[:N])) f = open(os.path.join(folder, 'val.source'), 'w') f.write('\n'.join(d[N:])) f.close() f = open(os.path.join(folder, 'test.source'), 'w') f.write('\n'.join(d[N:])) f.close() f = open(os.path.join(folder, 'train.target'), 'w') f.write('\n'.join(d[:N])) f.close() f = open(os.path.join(folder, 'val.target'), 'w') f.write('\n'.join(d[N:])) f.close() f = open(os.path.join(folder, 'test.target'), 'w') f.write('\n'.join(d[N:])) f.close() ``` 2. Finetune bart on this dataset, i.e. ```sh ./finetune.sh --data_dir /tmp/bartz2/ --train_batch_size=4 --eval_batch_size=4 --output_dir=copybart --num_train_epochs 3 --model_name_or_path facebook/bart-large --n_val=1000 --n_test=1000 --task=translation ``` Caveat: I commented out `--fp16` in `finetune.sh` 3. Generate text with the finetuned model ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained(f'/home/marcotcr/work/transformers/examples/seq2seq/copybart//best_tfmr') tokenizer = AutoTokenizer.from_pretrained(f'/home/marcotcr/work/transformers/examples/seq2seq/copybart/best_tfmr') model.to('cuda'); for text in d[N+1:N+10]: print(text) inputs = tokenizer([text], max_length=1024, return_tensors='pt', truncation=True).to('cuda') a = model.generate(**inputs, early_stopping=True, num_beams=4, max_length=100, early_stoppy=True) dec = tokenizer.batch_decode(a, skip_special_tokens=True, clean_up_tokenization_spaces=False) print(dec[0]) print() ``` Output (first few lines): ``` A black man walks away with a large basket full of items on his head. A black man walks away with a large basket full of items on his head of a woman-on-shoreline-a-chosie-centre-of items in a basket full-items on their head.A fellow-side of the basket.- A person is doing a bicycle trick over the rocks while a bystander gives a thumbs up. A person is doing a bicycle trick over the rocks while a bystander gives a thumbs up.A couple of people are afoot in a bicycle trunk over the Rocks while a passerer gives an thumbs up to the bystander is a thumb up.T-up. 
Female with long black hair and wearing eyeglasses, a blue shirt, sitting next to a male in a striped shirt, at a table, cutting into a chocolate cake with a single red candle in it. Female with long black hair and wearing eyeglasses, a blue shirt, sitting next to a male in a striped shirt, at a table, cutting into a chocolate cake with a single red candle in it.The cake is a blue shirts, a purple-colored tissue- ``` ## Expected behavior I would expect it to emit the EOS token after copying the sentence. It seems to learn the right behavior up to that point (it copies the sentence), and I saw this in my real seq2seq tasks as well (i.e. it did the right thing but failed to stop). This behavior is also present in `test_generations.txt`. I also tried training on a dataset where the target sentences ended in `</s>`, but that didn't change it. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 (pulled from master this morning) - Platform: Linux-5.3.0-61-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.8 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
07-10-2020 21:40:42
07-10-2020 21:40:42
Hmm, as far as I know the pretrained BART models do produce the EOS token and can actually generate pretty short summaries. For example if you look into the test `test_cnn_summarization_same_as_fairseq` you can see that the summaries are actually quite small (some are much smaller than `max_length`) Also you could try to replace this line: ```python a = model.generate(**inputs, early_stopping=True, num_beams=4, max_length=100, early_stoppy=True) ``` with ```python a = model.generate(**inputs, early_stopping=True, num_beams=4, max_length=100, length_penalty=2.0, no_repeat_ngram_size=3) ``` Also pinging @sshleifer here.<|||||>That test helped me figure it out, thanks. `min_length` is set to some default value, I'm guessing around `50`. If I set `min_length=0`, it works as expected. It sounds from the documentation that `0` should be the default value, so I guess there is a bug either in the documentation or in the code: > min_length – (optional) int The min length of the sequence to be generated. Between 0 and infinity. Default to 0. I'd close this, but I'm leaving it open just in case you guys want to fix this mismatch. Thanks,<|||||>Interesting! Usually `config.min_length` should be set to 0 by default. Not sure why it is not set to `0` in your case. Thanks for the feedback though!<|||||>There is a bigger problem somewhere I suspect. This started happening around generation_utils.py time, and is happening in the blenderbot PR as well. Also see #5656 We should be able to generate EOS with min_length=50. I'll try to take a look later in the week if the mystery remains unsolved. Does early_stopping=True/False make any difference?<|||||>No early_stopping=True/False is not making any difference. And setting config.min_length = 0 is still not working for my case as the fine-tuned model still producing truncated outputs.<|||||>@tromedlov22 you also tried BART? This issue is about BART.<|||||>@marcotcr the confusion is caused by the fact that by default we use `config.task_specific_params['summarization']`. The way to override is to save the desired config locally. ```python In [8]: AutoConfig.from_pretrained('facebook/bart-large').task_specific_params['summarization'] Out[8]: {'early_stopping': True, 'length_penalty': 2.0, 'max_length': 142, 'min_length': 56, 'no_repeat_ngram_size': 3, 'num_beams': 4} ``` We should probably add a `logger.info` statement that we are using task specific params. <|||||>Same issue here. I've been using BART-large-cnn (and -xsum) to finetune on my dataset. By setting `max_tokens=620`, and `min_tokens=0, 300, 500, and 600`, the BART still produces truncated (incomplete) sentences, which does not make sense to me. Any workaround/solution to this? @sshleifer Thanks!
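Following up on the `task_specific_params` point above, a minimal sketch of both workarounds (illustrative only; the output directory name is made up):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large")

inputs = tokenizer(["A short sentence to copy."], return_tensors="pt")

# Per-call override (the fix confirmed earlier in this thread): drop the
# summarization-style minimum length so the model is free to emit EOS early.
out = model.generate(**inputs, num_beams=4, max_length=100, min_length=0, early_stopping=True)
print(tokenizer.batch_decode(out, skip_special_tokens=True))

# Config-level alternative: edit the defaults that downstream scripts copy from
# task_specific_params before saving the checkpoint.
model.config.task_specific_params["summarization"]["min_length"] = 0
model.save_pretrained("bart-large-no-min-length")  # hypothetical output dir
```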
transformers
5,673
closed
Document model outputs
This PR adds proper documentation for all model outputs introduced in #5226. There are a few fixes to some docstrings for proper sphinx formatting, and a tiny change in the function that generates docstrings so it references `ModelOutput` types by their full names (since they are not in the init).
07-10-2020 21:10:30
07-10-2020 21:10:30
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5673?src=pr&el=h1) Report > Merging [#5673](https://codecov.io/gh/huggingface/transformers/pull/5673?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/201d23f2854c7a13d3c32df4947af9fd7365c2cd&el=desc) will **decrease** coverage by `0.48%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5673/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5673?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5673 +/- ## ========================================== - Coverage 77.34% 76.85% -0.49% ========================================== Files 146 146 Lines 25948 25949 +1 ========================================== - Hits 20070 19944 -126 - Misses 5878 6005 +127 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5673?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `82.04% <ø> (ø)` | | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.39% <ø> (ø)` | | | [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `81.55% <ø> (ø)` | | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <ø> (ø)` | | | [src/transformers/modeling\_outputs.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vdXRwdXRzLnB5) | `100.00% <ø> (ø)` | | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <100.00%> (+0.05%)` | :arrow_up: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `19.81% <0.00%> (-79.28%)` | :arrow_down: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: | | [...rc/transformers/data/datasets/language\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `34.69% <0.00%> (-57.15%)` | :arrow_down: | | [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `50.74% <0.00%> (-35.83%)` | :arrow_down: | | ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5673?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5673?src=pr&el=footer). Last update [223084e...858c827](https://codecov.io/gh/huggingface/transformers/pull/5673?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I am so glad you noticed since this is the bit I spent the most time on :-)
transformers
5,672
closed
Added first description of the model
Added a general description, information about the tags, and some example usage code.
07-10-2020 18:41:09
07-10-2020 18:41:09
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5672?src=pr&el=h1) Report > Merging [#5672](https://codecov.io/gh/huggingface/transformers/pull/5672?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/201d23f2854c7a13d3c32df4947af9fd7365c2cd&el=desc) will **decrease** coverage by `0.12%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5672/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5672?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5672 +/- ## ========================================== - Coverage 77.34% 77.22% -0.13% ========================================== Files 146 146 Lines 25948 25948 ========================================== - Hits 20070 20038 -32 - Misses 5878 5910 +32 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5672?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5672/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5672/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.51%)` | :arrow_down: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5672/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5672?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5672?src=pr&el=footer). Last update [223084e...4c2abf8](https://codecov.io/gh/huggingface/transformers/pull/5672?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,671
closed
Deprecate old past arguments
As discussed internally, the previous arguments `past`, `decoder_cached_states` and `decoder_past_key_value_states` are deprecated and replaced by either `past_key_values` or `decoder_past_key_values`. This also fixes the mentions of those arguments in the input docstrings; the output docstrings already refer to the correct arg (this was done in #5226). In passing, replace `DeprecationWarning` in the other deprecated args with `FutureWarning`, since that is the right way to do it.
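For illustration, a hedged before/after sketch of what the rename means for callers (argument names follow this PR's description; GPT-2 is just an example model):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("Hello, my dog", return_tensors="pt")
outputs = model(input_ids, use_cache=True)
cache = outputs[1]  # cached key/value states from the first forward pass

next_ids = tokenizer.encode(" is", return_tensors="pt")
# Old, now deprecated (emits a FutureWarning): model(next_ids, past=cache)
# New name per this PR:
outputs = model(next_ids, past_key_values=cache)
```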
07-10-2020 18:07:28
07-10-2020 18:07:28
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5671?src=pr&el=h1) Report > Merging [#5671](https://codecov.io/gh/huggingface/transformers/pull/5671?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/201d23f2854c7a13d3c32df4947af9fd7365c2cd&el=desc) will **decrease** coverage by `0.14%`. > The diff coverage is `68.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5671/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5671?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5671 +/- ## ========================================== - Coverage 77.34% 77.20% -0.15% ========================================== Files 146 146 Lines 25948 25981 +33 ========================================== - Hits 20070 20059 -11 - Misses 5878 5922 +44 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5671?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `82.04% <ø> (ø)` | | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.39% <ø> (ø)` | | | [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.82% <ø> (ø)` | | | [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `81.55% <ø> (ø)` | | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `89.71% <ø> (ø)` | | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <ø> (ø)` | | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.78% <ø> (ø)` | | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.80% <60.00%> (-0.64%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.11% <60.00%> (-0.55%)` | :arrow_down: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.37% <71.42%> (-1.54%)` | :arrow_down: | | ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5671?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5671?src=pr&el=footer). 
Last update [201d23f...f51c43e](https://codecov.io/gh/huggingface/transformers/pull/5671?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>i was able to use gpt2 using trainer 10-12 hours ago but i am getting an error now, i think the "past" variable replacing is not consistent with the trainer.py class that's why i am getting this error(now i am working with 3.0.1) ` TypeError Traceback (most recent call last) <ipython-input-15-3435b262f1ae> in <module> ----> 1 trainer.train() ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py in train(self, model_path) 497 continue 498 --> 499 tr_loss += self._training_step(model, inputs, optimizer) 500 501 if (step + 1) % self.args.gradient_accumulation_steps == 0 or ( ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py in _training_step(self, model, inputs, optimizer) 620 inputs["mems"] = self._past 621 --> 622 outputs = model(**inputs) 623 loss = outputs[0] # model outputs are always tuple in transformers (see doc) 624 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: --> 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs) 151 replicas = self.replicate(self.module, self.device_ids[:len(inputs)]) 152 outputs = self.parallel_apply(replicas, inputs, kwargs) --> 153 return self.gather(outputs, self.output_device) 154 155 def replicate(self, module, device_ids): ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py in gather(self, outputs, output_device) 163 164 def gather(self, outputs, output_device): --> 165 return gather(outputs, output_device, dim=self.dim) 166 167 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py in gather(outputs, target_device, dim) 66 # Setting the function to None clears the refcycle. 67 try: ---> 68 res = gather_map(outputs) 69 finally: 70 gather_map = None ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py in gather_map(outputs) 61 return type(out)(((k, gather_map([d[k] for d in outputs])) 62 for k in out)) ---> 63 return type(out)(map(gather_map, zip(*outputs))) 64 65 # Recursive function calls like this create reference cycles. TypeError: __init__() missing 1 required positional argument: 'logits' `<|||||>This is not linked to this PR, as the Trainer never uses past. It seems linked to the model output PR (#5226). You need to instantiate your model by passing `return_tuple=True` to avoid the new behavior, or by adding it to your config like this: ``` config.return_tuple = True ``` <|||||>Then why the trainer that works in 3.0.1 does not work after this PR merge. I am new to this library and trying to understand it would be very helpful if you explained a bit. Thanks On Sat, Jul 11, 2020, 6:10 PM Sylvain Gugger <[email protected]> wrote: > This is not linked to this PR, as the Trainer never uses past. It seems > linked to the model output PR (#5226 > <https://github.com/huggingface/transformers/issues/5226>) will push a > quick fix soon. > > — > You are receiving this because you commented. 
<|||||>I think you mistake the PR that caused the problem, #5226 was merged just a bit before.
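For anyone hitting the same `TypeError`, here is a minimal sketch of the two workarounds quoted above, using GPT-2 as a stand-in model (whether the kwarg is accepted this way depends on the library version):

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Option 1: pass the flag through from_pretrained so the config picks it up.
model = GPT2LMHeadModel.from_pretrained("gpt2", return_tuple=True)

# Option 2: set it explicitly on the config before building the model.
config = GPT2Config.from_pretrained("gpt2")
config.return_tuple = True
model = GPT2LMHeadModel.from_pretrained("gpt2", config=config)
```

With tuple outputs restored, the `DataParallel` gather inside the `Trainer` (which fails on the new model-output objects in the traceback above) should work again.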
transformers
5,670
closed
[WIP] add DeFormer (ACL 2020) example
Hi there, I'm one of the authors of the [DeFormer paper](https://www.aclweb.org/anthology/2020.acl-main.411/), and I'd like to adapt the [DeFormer codebase](https://github.com/StonyBrookNLP/deformer) to this awesome transformers library. To get the adaptation done, I put a few high-level todos in the README (also here). I plan to get a working example by/before August but don't have a precise timeline yet. Let me hear what you think. Thanks. - [ ] use HF preprocessing (use HF nlp library) - [ ] convert original TF DeFormer to HF version - [ ] convert pre-trained checkpoints - [ ] compare and test accuracy for SQuAD, RACE, and BoolQ - [ ] prepare demo and upload to model cards
07-10-2020 17:56:35
07-10-2020 17:56:35
Hi Qingqing, good to see you here! Yeah, I checked your official code repo and found it isn't based on 🤗's Transformers. It'll be nice if you can reimplement it and add Deformer in our library! ~~@thomwolf Re. `🤗nlp`, should we use `🤗nlp` in `transformers` right away? Since it would make `🤗nlp` a dependency of `transformers/examples`.~~ Never mind, it already is.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,669
closed
[squad] add version tag to squad cache
This diff is to add a version number to the SQuAD cache file so that cached SQuADv1.1 features are not mistakenly read when you request SQuADv2. Addresses #5668
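For illustration, a versioned cache file name could be built along the following lines (a sketch only; the exact naming scheme used by the library may differ):

```python
import os

def squad_cache_file(data_dir, mode, tokenizer_name, max_seq_length, version_2_with_negative):
    # Encoding the SQuAD version in the name keeps v1.1 and v2.0 caches separate.
    version = "v2" if version_2_with_negative else "v1"
    return os.path.join(data_dir, f"cached_{mode}_{tokenizer_name}_{max_seq_length}_{version}")
```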
07-10-2020 17:39:01
07-10-2020 17:39:01
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5669?src=pr&el=h1) Report > Merging [#5669](https://codecov.io/gh/huggingface/transformers/pull/5669?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **increase** coverage by `0.52%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5669/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5669?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5669 +/- ## ========================================== + Coverage 77.01% 77.53% +0.52% ========================================== Files 128 145 +17 Lines 21615 25367 +3752 ========================================== + Hits 16646 19668 +3022 - Misses 4969 5699 +730 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5669?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (+0.11%)` | :arrow_up: | | [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `74.01% <ø> (+5.16%)` | :arrow_up: | | [src/transformers/benchmark/benchmark\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3MucHk=) | `86.04% <ø> (+0.68%)` | :arrow_up: | | [src/transformers/benchmark/benchmark\_args\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdGYucHk=) | `87.50% <ø> (ø)` | | | [src/transformers/benchmark/benchmark\_args\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `89.13% <ø> (-7.75%)` | :arrow_down: | | [src/transformers/benchmark/benchmark\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `61.53% <ø> (ø)` | | | [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.84% <ø> (-3.26%)` | :arrow_down: | | [src/transformers/configuration\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | | | [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.18% <ø> (+0.32%)` | :arrow_up: | | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <ø> (+0.41%)` | :arrow_up: | | ... and [118 more](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5669?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5669?src=pr&el=footer). Last update [bfacb2e...77a74f7](https://codecov.io/gh/huggingface/transformers/pull/5669?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,668
closed
SquadDataset should use version number in cache file name
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): N/A Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: * [x] my own modified scripts: simple example code given below The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce 1. Load `SquadDataset` with `args.version_2_with_negative = False`. You will see the progress bars for creating cached features. 2. Load `SquadDataset` with `args.version_2_with_negative = True`. Rather than seeing it create a new cache for the v2 dataset, you will see it automatically use the already made cache file for v1. Example code: ``` from transformers import AutoTokenizer from transformers import SquadDataset from transformers import SquadDataTrainingArguments tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') args = SquadDataTrainingArguments() # FIXME: Change this path to your local SQuAD dataset path args.data_dir = os.path.expanduser("~/.torch/nlp/SQuAD") args.version_2_with_negative = False squadv1 = SquadDataset(args, tokenizer) args.version_2_with_negative = True squadv2 = SquadDataset(args, tokenizer) ``` ## Expected behavior Separate cache files should be created for the v1.1 and v2 versions of SQuAD ## Environment info - `transformers` version: 3.0.2 - Platform: Linux-5.0.0-1035-azure-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.7 - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: N/A - Using distributed or parallel set-up in script?: no
07-10-2020 17:31:41
07-10-2020 17:31:41
closes by your own #5669 Thanks for your contribution :)
transformers
5,667
closed
pytorch_model.bin file is different after uploading to HuggingFace Models
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): `distilroberta-base` Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behaviour: 1. Save my model with `.save_pretrained()` 2. Download the models `pytorch_model.bin` 3. Check the `diff` between `pytorch_model.bin` _before_ uploading and _after_ downloading, it is not the same. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> I first noticed that a model I have trained produced different outputs when I loaded it from a local directory compared to uploading it to https://huggingface.co/models and downloading it. Unfortunately, the error is a little hard to reproduce as you need access to my saved model. I have uploaded it [here](https://drive.google.com/file/d/1C8okSoS4tJHtZllQ8qIJbmXRLb8ITL6N/view?usp=sharing). With that model downloaded, I compare its outputs before/after uploading: _Before uploading_ (e.g. loading the model from a local directory) ```python import torch from scipy.spatial.distance import cosine from transformers import AutoModel, AutoTokenizer # Load the model tokenizer = AutoTokenizer.from_pretrained("declutr-small") model = AutoModel.from_pretrained("declutr-small") # Prepare some text to embed text = [ "A smiling costumed woman is holding an umbrella.", "A happy woman in a fairy costume holds an umbrella.", ] inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt") # Embed the text with torch.no_grad(): sequence_output, _ = model(**inputs, output_hidden_states=False) # Mean pool the token-level embeddings to get sentence-level embeddings embeddings = torch.sum( sequence_output * inputs["attention_mask"].unsqueeze(-1), dim=1 ) / torch.clamp(torch.sum(inputs["attention_mask"], dim=1, keepdims=True), min=1e-9) # Compute a semantic similarity via the cosine distance semantic_sim = 1 - cosine(embeddings[0], embeddings[1]) print(semantic_sim) # => ~0.83 ``` _After uploading_ (e.g. 
loading the model from https://huggingface.co/models) ```python # Load the model tokenizer = AutoTokenizer.from_pretrained("johngiorgi/declutr-small", force_download=True) model = AutoModel.from_pretrained("johngiorgi/declutr-small", force_download=True) # Prepare some text to embed text = [ "A smiling costumed woman is holding an umbrella.", "A happy woman in a fairy costume holds an umbrella.", ] inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt") # Embed the text with torch.no_grad(): sequence_output, _ = model(**inputs, output_hidden_states=False) # Mean pool the token-level embeddings to get sentence-level embeddings embeddings = torch.sum( sequence_output * inputs["attention_mask"].unsqueeze(-1), dim=1 ) / torch.clamp(torch.sum(inputs["attention_mask"], dim=1, keepdims=True), min=1e-9) # Compute a semantic similarity via the cosine distance semantic_sim = 1 - cosine(embeddings[0], embeddings[1]) print(semantic_sim) # => ~0.99, NOT the same as the local model! ``` The embeddings must be different, as their semantic similarity is. After some more digging, I realized that the `pytorch_model.bin` of the local model and the uploaded then downloaded model are not the same, which I checked with `diff`. I tried everything I could think of, deleting my `transformers` cache folder, deleting the model from https://huggingface.co/models and re-uploading. I also tried uploading it from both macOS/Linux. The error persists. Does anyone have any clue how this could happen? It's such a frustrating error b/c it essentially passes silently until you look at your models outputs. ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> I expect that the outputs of my model to be identical when I load it from a local directory, and when I upload it and then download it from https://huggingface.co/models. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-4.15.0-109-generic-x86_64-with-debian-buster-sid - Python version: 3.7.7 - PyTorch version (GPU?): 1.5.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No. - Using distributed or parallel set-up in script?: No.
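As a side note, when chasing this kind of discrepancy it can be more informative to compare the two checkpoints tensor-by-tensor than to run a byte-level `diff`; a minimal sketch (the paths are placeholders for the local and downloaded checkpoints):

```python
import torch

local = torch.load("declutr-small/pytorch_model.bin", map_location="cpu")
remote = torch.load("downloaded/pytorch_model.bin", map_location="cpu")

# Flag every parameter that is missing on one side or whose values differ.
for name in sorted(set(local) | set(remote)):
    if name not in local or name not in remote:
        print(f"{name}: present in only one checkpoint")
    elif not torch.equal(local[name], remote[name]):
        print(f"{name}: values differ")
```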
07-10-2020 17:30:13
07-10-2020 17:30:13
Very strange, I just tried the code and it returns `0.8289950489997864`. Maybe you try to re-download the model with: ```python # Load the model tokenizer = AutoTokenizer.from_pretrained("johngiorgi/declutr-small", force_download=True) model = AutoModel.from_pretrained("johngiorgi/declutr-small", force_download=True) ``` Hopefully this helps :)<|||||>Hi @stefan-it, I did try with `force_download=True` a bunch of times but it never worked. I updated the example to include that. Hmm, mind sending the exact code you used to get the `0.8289950489997864`? I had a collegue run the code on their own system and like me they got the wrong answer of: `0.9928748607635498`. I tried on two machines (linux and mac), both produce `0.9928748607635498`. Just to be sure, I deleted my conda environment, made a new one and reinstalled `transformers`. Then I deleted the default cache dir (for me this was at `~/.cache/torch/transformers/`). Finally, I tried the code again this time with `force_download=True`. No beans, exactly the same issue and the `semantic_sim` is ~0.99: ```bash In [1]: import torch ...: from scipy.spatial.distance import cosine ...: ...: from transformers import AutoModel, AutoTokenizer In [2]: tokenizer = AutoTokenizer.from_pretrained("johngiorgi/declutr-small", force_download=True, cache_dir="./declutr-small") ...: model = AutoModel.from_pretrained("johngiorgi/declutr-small", force_download=True, cache_dir="./declutr-small") Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 547/547 [00:00<00:00, 277kB/s] Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 798k/798k [00:00<00:00, 4.71MB/s] Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 456k/456k [00:00<00:00, 3.83MB/s] Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 239/239 [00:00<00:00, 157kB/s] Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 54.0/54.0 [00:00<00:00, 35.4kB/s] Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 547/547 [00:00<00:00, 403kB/s] Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 331M/331M [00:14<00:00, 23.2MB/s] In [3]: # Prepare some text to embed ...: text = [ ...: "A smiling costumed woman is holding an umbrella.", ...: "A happy woman in a fairy costume holds an umbrella.", ...: 
] ...: inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt") ...: ...: # Embed the text ...: with torch.no_grad(): ...: sequence_output, _ = model(**inputs, output_hidden_states=False) ...: ...: # Mean pool the token-level embeddings to get sentence-level embeddings ...: embeddings = torch.sum( ...: sequence_output * inputs["attention_mask"].unsqueeze(-1), dim=1 ...: ) / torch.clamp(torch.sum(inputs["attention_mask"], dim=1, keepdims=True), min=1e-9) ...: ...: # Compute a semantic similarity via the cosine distance ...: semantic_sim = 1 - cosine(embeddings[0], embeddings[1]) ...: print(semantic_sim) # => ~0.99, NOT the same as the local model! 0.992874801158905 ```<|||||>@JohnGiorgi I just ran: ```python import torch from scipy.spatial.distance import cosine from transformers import AutoTokenizer, AutoModel # Load the model tokenizer = AutoTokenizer.from_pretrained("johngiorgi/declutr-small", force_download=True) model = AutoModel.from_pretrained("johngiorgi/declutr-small", force_download=True) # Prepare some text to embed text = [ "A smiling costumed woman is holding an umbrella.", "A happy woman in a fairy costume holds an umbrella.", ] inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt") # Embed the text with torch.no_grad(): sequence_output, _ = model(**inputs, output_hidden_states=False) # Mean pool the token-level embeddings to get sentence-level embeddings embeddings = torch.sum( sequence_output * inputs["attention_mask"].unsqueeze(-1), dim=1 ) / torch.clamp(torch.sum(inputs["attention_mask"], dim=1, keepdims=True), min=1e-9) # Compute a semantic similarity via the cosine distance semantic_sim = 1 - cosine(embeddings[0], embeddings[1]) print(semantic_sim) ``` I re-downloaded the model and it still returns 0.8289950489997864 😅<|||||>That is maddening, I literally copy-pasted that code and I get `0.992874801158905` 😢 Thanks anyways for confirming it works somewhere at least! <img width="1440" alt="image" src="https://user-images.githubusercontent.com/8917831/87212687-d1232300-c2ed-11ea-9039-3b05077be964.png">
transformers
5,666
closed
How do you connect Convolutional layers to Transformers?
# ❓ Questions & Help Hi everyone! Where can I find code example that makes clear how to connect Convolutional layers to Transformers and how it needs to be shaped in order to make such a connection? I have a bit of a hard time figuring it out. Thank you for your support.
07-10-2020 17:13:21
07-10-2020 17:13:21
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,665
closed
[AutoModels] Fix config params handling of all PT and TF AutoModels
As shown in #5474, currently, a command like: ```python from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained('bert-base-uncased', is_decoder=True) ``` fails because `is_decoder` is carried on as a model init argument even though it should *only* be used as a config init argument. This PR fixes one `AutoModelFor...` class for this, but the same fix still has to be applied to the other `AutoModelFor...` classes. Pinging @LysandreJik @sgugger @thomwolf - are you guys ok with this change (bug fix) in general?
07-10-2020 15:45:53
07-10-2020 15:45:53
Isn't the canonical way: ``` config, kwargs = AutoConfig.from_pretrained(pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs) ``` in the test?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5665?src=pr&el=h1) Report > Merging [#5665](https://codecov.io/gh/huggingface/transformers/pull/5665?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ce374ba87767d551f720242d5e64bfa976531079&el=desc) will **decrease** coverage by `1.11%`. > The diff coverage is `55.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5665/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5665?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5665 +/- ## ========================================== - Coverage 78.43% 77.32% -1.12% ========================================== Files 146 146 Lines 26002 26002 ========================================== - Hits 20395 20105 -290 - Misses 5607 5897 +290 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5665?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5665/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `63.03% <50.00%> (ø)` | | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5665/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `74.41% <60.00%> (ø)` | | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5665/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5665/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5665/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5665/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5665?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5665?src=pr&el=footer). Last update [ce374ba...3bb7966](https://codecov.io/gh/huggingface/transformers/pull/5665?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>> Isn't the canonical way: > > ``` > config, kwargs = AutoConfig.from_pretrained(pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs) > ``` > > in the test? Oh yeah, that's much cleaner. We should probably update all AutoModels in PT and TF with this then, no?<|||||>I think that's correct, and the way it was always meant to be :raised_eyebrow: <|||||>> We should probably update all AutoModels in PT and TF with this then, no? Yes, I agree.
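To make the canonical pattern above concrete, a sketch of what the split looks like from the user's side, reusing the example from the PR description:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# `is_decoder` is consumed by the config; anything the config does not recognize
# comes back in `unused_kwargs` and can be forwarded to the model.
config, unused_kwargs = AutoConfig.from_pretrained(
    "bert-base-uncased", is_decoder=True, return_unused_kwargs=True
)
model = AutoModelForCausalLM.from_pretrained(
    "bert-base-uncased", config=config, **unused_kwargs
)
```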
transformers
5,664
closed
[PyTorch] Load and run a model CPU which was traced and saved on GPU
# ❓ Questions & Help ## Details I am trying to trace/save openai_gpt on GPU and use that model on a CPU and facing issues. Is this possible to do? I have attached the link to the question posted on the discuss forum as well. ### Sample Script ```python from transformers import OpenAIGPTTokenizer, OpenAIGPTModel import torch tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt') model = OpenAIGPTModel.from_pretrained('openai-gpt') inputs = torch.tensor([tokenizer.encode("Hello, my dog is cute")]) outputs = model(inputs) print(outputs) print("To CUDA:") inputs = inputs.to("cuda") model = model.to("cuda") traced_model = torch.jit.trace(model, (inputs,)) torch.jit.save(traced_model, "openai_gpt_cuda.pt") print(traced_model.graph) print("\n") print("Load model onto CPU") loaded = torch.jit.load("openai_gpt_cuda.pt", map_location=torch.device("cpu")) inputs = inputs.to("cpu") print("\n") print(loaded.graph) outputs = loaded(inputs) print(outputs) ``` ### Error seen ``` Traceback (most recent call last): File "gpt.py", line 23, in <module> outputs = loaded(inputs) File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select The above operation failed in interpreter. Traceback (most recent call last): Serialized File "code/__torch__/torch/nn/modules/module/___torch_mangle_147.py", line 35 position_ids = torch.arange(_20, dtype=4, layout=0, device=torch.device("cuda:0"), pin_memory=False) input0 = torch.view(torch.unsqueeze(position_ids, 0), [-1, _19]) _21 = torch.add((_14).forward(input, ), (_13).forward(input0, ), alpha=1) ~~~~~~~~~~~~ <--- HERE input1 = torch.add(_21, CONSTANTS.c0, alpha=1) _22 = (_12).forward(input1, ) /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/functional.py(1484): embedding /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/sparse.py(114): forward /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(516): _slow_forward /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(530): __call__ /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_openai.py(433): forward /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(516): _slow_forward /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(530): __call__ /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/jit/__init__.py(1034): trace_module /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/jit/__init__.py(882): trace gpt.py(14): <module> Serialized File "code/__torch__/torch/nn/modules/module/___torch_mangle_0.py", line 8, in forward def forward(self: __torch__.torch.nn.modules.module.___torch_mangle_0.Module, input: Tensor) -> Tensor: position_embeds = torch.embedding(self.weight, input, -1, False, False) ~~~~~~~~~~~~~~~ <--- HERE return position_embeds The above operation failed in interpreter. Traceback (most recent call last): ``` https://discuss.huggingface.co/t/pytorch-trace-on-cpu-and-use-on-gpu/181/3
07-10-2020 15:35:56
07-10-2020 15:35:56
@vdantu Thanks for reporting the issue. The problem arises in `modeling_openai.py`when the user do not provide the `position_ids` function argument thus leading to the inner `position_ids` being created during the forward call. This is fine in classic PyTorch because `forward` is actually evaluated at each call. When it comes to tracing, this is an issue, because the device specified in the forward to actually create the tensor will be hardcoded and you can actually see it in the generated graph: ```python %input.1 : Tensor = aten::view(%input_ids.1, %64) %140 : Device = prim::Constant[value="cuda:0"]() %position_ids.1 : Tensor = aten::arange(%59, %67, %45, %140, %70) %73 : Tensor = aten::unsqueeze(%position_ids.1, %45) ``` Above you can see `%140` is a constant which value is actually set to `"cuda:0"` and then, it is reused to create the `%position_ids.1` tensor through `aten::arange(..., %140, ...)` which of course leads to the error you're seeing. I'll have a fix to generate the `position_ids` buffer correctly registered at the Module initialisation and not during forward, so it should be correctly handled by the `map_location` parameter while exporting.<|||||>The above PR should fix the issue, below is the output of the code you provided. If you want to give it a try, let us know if it works on your end too 👍 ```python (pytorch) mfuntowicz@brutasse:~/Workspace/transformers$ python test.py ftfy or spacy is not installed using BERT BasicTokenizer instead of SpaCy & ftfy. Some weights of OpenAIGPTModel were not initialized from the model checkpoint at openai-gpt and are newly initialized: ['position_ids'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. cpu cpu (tensor([[[ 7.3001e-02, -1.2431e+00, 7.9122e-01, ..., 1.6806e+00, -4.3945e-01, 1.1449e+00], [-3.6239e-01, -8.3647e-01, 1.2019e+00, ..., 1.5575e+00, -8.4237e-04, 1.0779e+00], [-1.0138e+00, -7.1014e-01, 6.3509e-01, ..., 1.6684e+00, -4.6458e-01, 1.5093e+00], [-6.1989e-01, -2.9500e-01, 9.9504e-01, ..., 2.0421e+00, 4.2680e-01, 2.1920e+00], [-5.2932e-01, -1.7606e-02, 7.4836e-01, ..., 2.2980e+00, 3.4807e-01, 2.7045e+00], [-1.4679e-01, -9.8566e-02, 1.3909e+00, ..., 1.9108e+00, 6.0797e-01, 2.1617e+00]]], grad_fn=<ViewBackward>),) To CUDA: /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:176: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! w = w / math.sqrt(v.size(-1)) /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:179: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! 
b = self.bias[:, :, : w.size(-2), : w.size(-1)] graph(%self.1 : __torch__.transformers.modeling_openai.OpenAIGPTModel, %input_ids : Long(1:6, 6:1)): %4489 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %4490 : __torch__.transformers.modeling_openai.___torch_mangle_139.Block = prim::GetAttr[name="11"](%4489) %4463 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %4464 : __torch__.transformers.modeling_openai.___torch_mangle_127.Block = prim::GetAttr[name="10"](%4463) %4437 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %4438 : __torch__.transformers.modeling_openai.___torch_mangle_115.Block = prim::GetAttr[name="9"](%4437) %4411 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %4412 : __torch__.transformers.modeling_openai.___torch_mangle_103.Block = prim::GetAttr[name="8"](%4411) %4385 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %4386 : __torch__.transformers.modeling_openai.___torch_mangle_91.Block = prim::GetAttr[name="7"](%4385) %4359 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %4360 : __torch__.transformers.modeling_openai.___torch_mangle_79.Block = prim::GetAttr[name="6"](%4359) %4333 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %4334 : __torch__.transformers.modeling_openai.___torch_mangle_67.Block = prim::GetAttr[name="5"](%4333) %4307 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %4308 : __torch__.transformers.modeling_openai.___torch_mangle_55.Block = prim::GetAttr[name="4"](%4307) %4281 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %4282 : __torch__.transformers.modeling_openai.___torch_mangle_43.Block = prim::GetAttr[name="3"](%4281) %4255 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %4256 : __torch__.transformers.modeling_openai.___torch_mangle_31.Block = prim::GetAttr[name="2"](%4255) %4229 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %4230 : __torch__.transformers.modeling_openai.___torch_mangle_19.Block = prim::GetAttr[name="1"](%4229) %4203 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %4204 : __torch__.transformers.modeling_openai.Block = prim::GetAttr[name="0"](%4203) %4178 : __torch__.torch.nn.modules.dropout.Dropout = prim::GetAttr[name="drop"](%self.1) %4177 : __torch__.torch.nn.modules.sparse.___torch_mangle_0.Embedding = prim::GetAttr[name="positions_embed"](%self.1) %4175 : __torch__.torch.nn.modules.sparse.Embedding = prim::GetAttr[name="tokens_embed"](%self.1) %4173 : Tensor = prim::GetAttr[name="position_ids"](%self.1) %458 : int = prim::Constant[value=0]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:438:0 %459 : int = aten::size(%input_ids, %458) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:438:0 %460 : Long() = prim::NumToTensor(%459) %4020 : int = aten::Int(%460) %461 : int = prim::Constant[value=1]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:438:0 %462 : int = aten::size(%input_ids, %461) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:438:0 %463 : Long() = prim::NumToTensor(%462) %4021 : int = aten::Int(%463) %470 : int = aten::Int(%463) %464 : int = aten::Int(%463) 
%465 : int = prim::Constant[value=-1]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:439:0 %466 : int[] = prim::ListConstruct(%465, %464) %input.1 : Long(1:6, 6:1) = aten::view(%input_ids, %466) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:439:0 %468 : int = prim::Constant[value=0]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:447:0 %469 : Long(1:512, 512:1) = aten::unsqueeze(%4173, %468) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:447:0 %471 : int = prim::Constant[value=1]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:447:0 %input.2 : Long(1:512) = aten::select(%469, %471, %470) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:447:0 %4638 : Tensor = prim::CallMethod[name="forward"](%4175, %input.1) %4639 : Tensor = prim::CallMethod[name="forward"](%4177, %input.2) %481 : int = prim::Constant[value=1]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:479:0 %482 : Float(1:4608, 6:768, 768:1) = aten::add(%4638, %4639, %481) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:479:0 %483 : Long() = prim::Constant[value={0}]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:479:0 %484 : int = prim::Constant[value=1]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:479:0 %input.3 : Float(1:4608, 6:768, 768:1) = aten::add(%482, %483, %484) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:479:0 %4640 : Tensor = prim::CallMethod[name="forward"](%4178, %input.3) %489 : int = prim::Constant[value=-1]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:482:0 %490 : int = aten::size(%4640, %489) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:482:0 %491 : Long() = prim::NumToTensor(%490) %4022 : int = aten::Int(%491) %4641 : Tensor = prim::CallMethod[name="forward"](%4204, %4640) %4642 : Tensor = prim::CallMethod[name="forward"](%4230, %4641) %4643 : Tensor = prim::CallMethod[name="forward"](%4256, %4642) %4644 : Tensor = prim::CallMethod[name="forward"](%4282, %4643) %4645 : Tensor = prim::CallMethod[name="forward"](%4308, %4644) %4646 : Tensor = prim::CallMethod[name="forward"](%4334, %4645) %4647 : Tensor = prim::CallMethod[name="forward"](%4360, %4646) %4648 : Tensor = prim::CallMethod[name="forward"](%4386, %4647) %4649 : Tensor = prim::CallMethod[name="forward"](%4412, %4648) %4650 : Tensor = prim::CallMethod[name="forward"](%4438, %4649) %4651 : Tensor = prim::CallMethod[name="forward"](%4464, %4650) %4652 : Tensor = prim::CallMethod[name="forward"](%4490, %4651) %4023 : int[] = prim::ListConstruct(%4020, %4021, %4022) %4024 : Float(1:4608, 6:768, 768:1) = aten::view(%4652, %4023) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:495:0 %4025 : (Float(1:4608, 6:768, 768:1)) = prim::TupleConstruct(%4024) return (%4025) Load model onto CPU graph(%self.1 : __torch__.transformers.modeling_openai.OpenAIGPTModel, %input_ids.1 : Tensor): %78 : Tensor = prim::Constant[value={0}]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:479:0 %47 : int = prim::Constant[value=0]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:438:0 %53 : int = prim::Constant[value=1]() # 
/home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:438:0 %61 : int = prim::Constant[value=-1]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:439:0 %3 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %4 : __torch__.transformers.modeling_openai.___torch_mangle_139.Block = prim::GetAttr[name="11"](%3) %6 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %7 : __torch__.transformers.modeling_openai.___torch_mangle_127.Block = prim::GetAttr[name="10"](%6) %9 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %10 : __torch__.transformers.modeling_openai.___torch_mangle_115.Block = prim::GetAttr[name="9"](%9) %12 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %13 : __torch__.transformers.modeling_openai.___torch_mangle_103.Block = prim::GetAttr[name="8"](%12) %15 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %16 : __torch__.transformers.modeling_openai.___torch_mangle_91.Block = prim::GetAttr[name="7"](%15) %18 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %19 : __torch__.transformers.modeling_openai.___torch_mangle_79.Block = prim::GetAttr[name="6"](%18) %21 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %22 : __torch__.transformers.modeling_openai.___torch_mangle_67.Block = prim::GetAttr[name="5"](%21) %24 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %25 : __torch__.transformers.modeling_openai.___torch_mangle_55.Block = prim::GetAttr[name="4"](%24) %27 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %28 : __torch__.transformers.modeling_openai.___torch_mangle_43.Block = prim::GetAttr[name="3"](%27) %30 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %31 : __torch__.transformers.modeling_openai.___torch_mangle_31.Block = prim::GetAttr[name="2"](%30) %33 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %34 : __torch__.transformers.modeling_openai.___torch_mangle_19.Block = prim::GetAttr[name="1"](%33) %36 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name="h"](%self.1) %37 : __torch__.transformers.modeling_openai.Block = prim::GetAttr[name="0"](%36) %39 : __torch__.torch.nn.modules.dropout.Dropout = prim::GetAttr[name="drop"](%self.1) %41 : __torch__.torch.nn.modules.sparse.___torch_mangle_0.Embedding = prim::GetAttr[name="positions_embed"](%self.1) %43 : __torch__.torch.nn.modules.sparse.Embedding = prim::GetAttr[name="tokens_embed"](%self.1) %45 : Tensor = prim::GetAttr[name="position_ids"](%self.1) %48 : int = aten::size(%input_ids.1, %47) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:438:0 %49 : Tensor = prim::NumToTensor(%48) # :0:0 %51 : int = aten::Int(%49) %54 : int = aten::size(%input_ids.1, %53) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:438:0 %55 : Tensor = prim::NumToTensor(%54) # :0:0 %57 : int = aten::Int(%55) %59 : int = aten::Int(%55) %63 : int = aten::Int(%55) %64 : int[] = prim::ListConstruct(%61, %63) %input.1 : Tensor = aten::view(%input_ids.1, %64) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:439:0 %67 : Tensor = aten::unsqueeze(%45, %47) # 
/home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:447:0 %input0.1 : Tensor = aten::select(%67, %53, %59) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:447:0 %72 : Tensor = prim::CallMethod[name="forward"](%43, %input.1) # :0:0 %75 : Tensor = prim::CallMethod[name="forward"](%41, %input0.1) # :0:0 %76 : Tensor = aten::add(%72, %75, %53) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:479:0 %input1.1 : Tensor = aten::add(%76, %78, %53) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:479:0 %82 : Tensor = prim::CallMethod[name="forward"](%39, %input1.1) # :0:0 %84 : int = aten::size(%82, %61) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:482:0 %85 : Tensor = prim::NumToTensor(%84) # :0:0 %87 : int = aten::Int(%85) %91 : Tensor = prim::CallMethod[name="forward"](%37, %82) # :0:0 %92 : Tensor = prim::CallMethod[name="forward"](%34, %91) # :0:0 %97 : Tensor = prim::CallMethod[name="forward"](%31, %92) # :0:0 %98 : Tensor = prim::CallMethod[name="forward"](%28, %97) # :0:0 %99 : Tensor = prim::CallMethod[name="forward"](%25, %98) # :0:0 %104 : Tensor = prim::CallMethod[name="forward"](%22, %99) # :0:0 %105 : Tensor = prim::CallMethod[name="forward"](%19, %104) # :0:0 %106 : Tensor = prim::CallMethod[name="forward"](%16, %105) # :0:0 %111 : Tensor = prim::CallMethod[name="forward"](%13, %106) # :0:0 %112 : Tensor = prim::CallMethod[name="forward"](%10, %111) # :0:0 %113 : Tensor = prim::CallMethod[name="forward"](%7, %112) # :0:0 %116 : Tensor = prim::CallMethod[name="forward"](%4, %113) # :0:0 %120 : int[] = prim::ListConstruct(%51, %57, %87) %121 : Tensor = aten::view(%116, %120) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:495:0 %123 : (Tensor) = prim::TupleConstruct(%121) return (%123) (tensor([[[ 7.3003e-02, -1.2431e+00, 7.9122e-01, ..., 1.6806e+00, -4.3945e-01, 1.1449e+00], [-3.6239e-01, -8.3647e-01, 1.2019e+00, ..., 1.5575e+00, -8.4937e-04, 1.0779e+00], [-1.0138e+00, -7.1013e-01, 6.3510e-01, ..., 1.6684e+00, -4.6459e-01, 1.5093e+00], [-6.1989e-01, -2.9499e-01, 9.9504e-01, ..., 2.0421e+00, 4.2680e-01, 2.1920e+00], [-5.2932e-01, -1.7599e-02, 7.4836e-01, ..., 2.2980e+00, 3.4806e-01, 2.7045e+00], [-1.4679e-01, -9.8562e-02, 1.3909e+00, ..., 1.9108e+00, 6.0796e-01, 2.1617e+00]]], grad_fn=<ViewBackward>),) ```<|||||>That's very interesting @mfuntowicz ! I think we will probably have multiple of such failures - I would guess for all models that use `position_ids`. Also as a rule, should one never create a tensor on the fly, but always register a buffer for that? @mfuntowicz @sshleifer <|||||>Awesome, thanks for fixing this. I will test this fix. What release of transformers will this change be reflected in? I was testing with transformers 3.0.2. @patrickvonplaten : I think you are right. I remember seeing this with bert base uncased as well. It would definitely be useful to have this fix across all models. <|||||>Looking for easier solutions than changing all the code: 1) Can we just trace the thing correctly by passing `traced_model = torch.jit.trace(model, (inputs,position_ids))`? and then document the correct way to trace (maybe we can add `jit_inputs` or reuse `dummy_inputs`?) 2) how much faster is the model afterwards? 3) We should add a save/load test to test_modeling_common.py if we want to support. 
The current test_torch_script just traces and then runs forward, and can clearly pass with many unregistered buffers.<|||||>@sshleifer : Thanks for the response. In the example script I pasted above, do you see any errors in the way I am tracing and using the traced model? Please let me know if that needs to be changed. <|||||>@vdantu I don't know a ton about jit, but you could try: ```python traced_model = torch.jit.trace(model, (inputs,position_ids)) ``` and see if that fixes the error.<|||||>@mfuntowicz : Which version of transformers (pypi package) will these changes be available with? I am used to testing the models throug pypi package or through torch.hub. What is the recommended way to get and test these fixes?<|||||>We'll release a new version in the coming weeks, in the meantime you can install from source: `pip install git+https://github.com/huggingface/transformers`<|||||>PyTorch doesn't currently support tracing passed devices correctly: https://github.com/pytorch/pytorch/issues/31141#issuecomment-675506630 I stumbled on two problematic lines while tracing GPT-2: 1. https://github.com/huggingface/transformers/blob/1d6e71e1167dea9e026391ec5a1a2d7ec33d22af/src/transformers/modeling_gpt2.py#L554 2. https://github.com/huggingface/transformers/blob/1d6e71e1167dea9e026391ec5a1a2d7ec33d22af/src/transformers/modeling_gpt2.py#L582 Because of this, tracing the model on one device and then using it on another device doesn't work<|||||>I'm also seeing this issue when trying to trace DistilBert - https://github.com/huggingface/transformers/blob/eb3bd73ce35bfef56eeb722d697f2d39a06a8f8d/src/transformers/modeling_distilbert.py#L105 Looks to be the same issue.<|||||>Observing the same issue while trying to load traced DistilBERT on cpu.<|||||>@eugeneware @kavinsabharwal I managed to get DistilBert work through making similar changes with the PR listed in this issue. However, there is a torch script warning saying seq_length set as constant after doing the trace. Add this in the constructor of embedding ``` # position_ids (1, len position emb) is contiguous in memory and exported when serialized self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1))) ``` and change `position_ids` as following: ``` position_ids = self.position_ids[:, :seq_length] ``` To save the sake of accuracy, I finally decided to add `position_ids` as one of the inputs that passed to the model. And everything seemed working now. Just a workkaround to this problem. This change is verified working on the ``` model = DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased', return_dict=False, torchscript=True) ``` @vdantu I ran some training test on the model and it seemed performing fine. The way making position_ids as input should be a safe bet to get away from the warning.<|||||>This issue has been stale for 1 month.
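To summarize the buffer-vs-forward-allocation point raised above, a small self-contained sketch (toy modules, not the library's actual code):

```python
import torch
from torch import nn

class WithRegisteredBuffer(nn.Module):
    """Position ids created at init and registered as a buffer:
    torch.jit.load(..., map_location=...) relocates them with the rest of the module."""

    def __init__(self, max_positions=512):
        super().__init__()
        self.register_buffer("position_ids", torch.arange(max_positions).expand((1, -1)))

    def forward(self, input_ids):
        return self.position_ids[:, : input_ids.size(1)]

class CreatedInForward(nn.Module):
    """Creating the tensor inside forward bakes the tracing device into the traced graph,
    which is what breaks GPU-traced models when loaded on CPU."""

    def forward(self, input_ids):
        return torch.arange(input_ids.size(1), device=input_ids.device).unsqueeze(0)
```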
transformers
5,663
closed
Request for Support to Adapt a Model (Human Dignity Observatory: Non-Profit Project)
Dear @huggingface community, we at the Knowmad Institut need your contribution and support for the Observatory of Human Dignity that we have developed. You can access the observatory here: (https://bit.ly/MQHHRRES) We need help adapting the Hugging Face tools: we want to extract the sentiment of the tweets' content and, based on the data we have already filtered one by one, identify the tweets that really describe human rights violations, for example by rating them from 1 to 5, where 1 looks the least like a human rights violation and 5 most closely matches the data we already have. We thank the community and the @huggingface team for their support in disseminating this request. |<center>[![FB18.png](https://knowmadinstitut.org/wp-content/uploads/2020/04/Black-and-Red-Geometric-Technology-Keynote-Presentation-1.png)](https://bit.ly/MQHHRRES) </center>| <div class="center"> <blockquote class="twitter-tweet"><p lang="en" dir="ltr">Dear <a href="https://twitter.com/huggingface?ref_src=twsrc%5Etfw">@huggingface</a> community from the Knowmad Institut we need your contribution and support for the Observatory of Human Dignity that we have developed.<br><br>You can access to the observatory here: <a href="https://t.co/3TgKwjs3PJ">https://t.co/3TgKwjs3PJ</a><br><br>1/3 <a href="https://t.co/ly4Z139qHb">pic.twitter.com/ly4Z139qHb</a></p>&mdash; Knowmad Institut (@KnowmadInstitut) <a href="https://twitter.com/KnowmadInstitut/status/1280904300831612935?ref_src=twsrc%5Etfw">July 8, 2020</a></blockquote> </div>
07-10-2020 15:34:59
07-10-2020 15:34:59
Hi @KnowmadInstitut you should post this on the forum as well at https://discuss.huggingface.co/<|||||>> Hi @KnowmadInstitut you should post this on the forum as well at https://discuss.huggingface.co/ Thank you so much for the guidance, I'll get right on it. :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,662
closed
[WIP - don't merge][TF generate] Make tf generate compatible with tf.function
The TF generate function should be cleaned up so that it can be used with tf.function.
07-10-2020 15:19:04
07-10-2020 15:19:04
is this still being worked on?<|||||>I won't be able to take a look in the next ~2 weeks. Feel free to continue the PR :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@patrickvonplaten can I ask whether this change is still being worked on? Will we be able to get an example of greedy search that is tf.function compatible? Thanks.<|||||>Thank you for all the amazing work! This library is too good to be true, and this would be a really good feature to have if and when possible!
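Not the library's `generate`, but as a rough illustration, a greedy decoding step can be wrapped in `tf.function` along the lines of the sketch below; note that the growing sequence length forces a retrace at every step, which is exactly the kind of problem this PR would need to solve (e.g. via static shapes and padding):

```python
import tensorflow as tf
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = TFGPT2LMHeadModel.from_pretrained("gpt2")

@tf.function
def greedy_step(input_ids):
    logits = model(input_ids)[0]                       # (batch, seq_len, vocab_size)
    next_token = tf.argmax(logits[:, -1, :], axis=-1, output_type=tf.int32)
    return tf.concat([input_ids, next_token[:, tf.newaxis]], axis=-1)

input_ids = tf.constant([tokenizer.encode("The weather is")], dtype=tf.int32)
for _ in range(5):
    input_ids = greedy_step(input_ids)  # retraced whenever the input shape changes

print(tokenizer.decode(input_ids[0].numpy()))
```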
transformers
5,661
closed
Create Model card for RoBERTa-hindi-guj-san
07-10-2020 14:18:23
07-10-2020 14:18:23
transformers
5,660
closed
"How to train a new language model from scratch" colab stuck at training
Hello, I am following the tutorial: https://huggingface.co/blog/how-to-train At the command `trainer.train()` it gets stuck (nothing is displayed except "Using deprecated `--per_gpu_train_batch_size` argument"). Any idea?
07-10-2020 13:49:22
07-10-2020 13:49:22
Hi @iggygeek, I'm not sure what the exact problem is. Can you provide more details: environment info, the transformers and torch versions, and ideally your code (script or colab)?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
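One quick thing to check in this situation is whether training is actually progressing but simply not printing anything; turning on logging usually makes that visible. A sketch (the argument values are placeholders):

```python
import logging
from transformers import TrainingArguments

logging.basicConfig(level=logging.INFO)  # surface the Trainer's progress messages

training_args = TrainingArguments(
    output_dir="./output",                 # placeholder path
    per_device_train_batch_size=16,        # replaces the deprecated --per_gpu_train_batch_size
    logging_steps=50,                      # log the loss regularly so a long run is visibly alive
    num_train_epochs=1,
)
```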
transformers
5,659
closed
[Longformer] fix longformer global attention output
This PR fixes the attention probs that are output when Longformer uses global attention and `output_attentions=True` is set. Thanks a million to @k141303 for the very clean issue + perfect proposed solution in https://github.com/huggingface/transformers/issues/5646 .
07-10-2020 13:31:21
07-10-2020 13:31:21
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5659?src=pr&el=h1) Report > Merging [#5659](https://codecov.io/gh/huggingface/transformers/pull/5659?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **decrease** coverage by `0.13%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5659/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5659?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5659 +/- ## ========================================== - Coverage 77.01% 76.87% -0.14% ========================================== Files 128 145 +17 Lines 21615 25369 +3754 ========================================== + Hits 16646 19502 +2856 - Misses 4969 5867 +898 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5659?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (+0.11%)` | :arrow_up: | | [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `74.01% <ø> (+5.16%)` | :arrow_up: | | [src/transformers/benchmark/benchmark\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3MucHk=) | `86.04% <ø> (+0.68%)` | :arrow_up: | | [src/transformers/benchmark/benchmark\_args\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdGYucHk=) | `87.50% <ø> (ø)` | | | [src/transformers/benchmark/benchmark\_args\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `89.13% <ø> (-7.75%)` | :arrow_down: | | [src/transformers/benchmark/benchmark\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `61.53% <ø> (ø)` | | | [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.84% <ø> (-3.26%)` | :arrow_down: | | [src/transformers/configuration\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | | | [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.18% <ø> (+0.32%)` | :arrow_up: | | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <ø> (+0.41%)` | :arrow_up: | | ... and [118 more](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5659?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5659?src=pr&el=footer). Last update [bfacb2e...ee88c2f](https://codecov.io/gh/huggingface/transformers/pull/5659?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Global and local attention probs now always have the same output shape. This is both more consistent in terms of the output signature for the user and solves the multi-GPU issue.<|||||>Pinging @thomwolf @sshleifer @LysandreJik @sgugger for notification -> more details can be found in issue: https://github.com/huggingface/transformers/issues/5646.
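For anyone landing here from the issue, a minimal sketch of pulling the per-layer attention probabilities out of Longformer when some tokens have global attention (the model name and input string are just for illustration, and the exact shape convention depends on the installed transformers version):

```python
import torch
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
# output_attentions=True makes the model return the attention probs as the last output
model = LongformerModel.from_pretrained("allenai/longformer-base-4096", output_attentions=True)

inputs = tokenizer("Global attention on the first token.", return_tensors="pt")
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # give the <s> token global attention

outputs = model(**inputs, global_attention_mask=global_attention_mask)
attentions = outputs[-1]  # tuple with one tensor per layer
for layer_idx, attn in enumerate(attentions):
    print(layer_idx, attn.shape)
```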
transformers
5,658
closed
Create README.md - Model card
Model card for sentence-transformers/bert-base-nli-max-tokens
07-10-2020 11:50:49
07-10-2020 11:50:49
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5658?src=pr&el=h1) Report > Merging [#5658](https://codecov.io/gh/huggingface/transformers/pull/5658?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2e6bb0e9c37655a03adaa3238dd6d4645fba8dc1&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5658/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5658?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5658 +/- ## ======================================= Coverage 78.26% 78.26% ======================================= Files 145 145 Lines 25366 25366 ======================================= Hits 19852 19852 Misses 5514 5514 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5658?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5658?src=pr&el=footer). Last update [2e6bb0e...8bdceb7](https://codecov.io/gh/huggingface/transformers/pull/5658?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,657
closed
Create README.md - Model card
Model card for sentence-transformers/bert-base-nli-cls-token
07-10-2020 11:37:46
07-10-2020 11:37:46
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5657?src=pr&el=h1) Report > Merging [#5657](https://codecov.io/gh/huggingface/transformers/pull/5657?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2e6bb0e9c37655a03adaa3238dd6d4645fba8dc1&el=desc) will **decrease** coverage by `1.37%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5657/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5657?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5657 +/- ## ========================================== - Coverage 78.26% 76.88% -1.38% ========================================== Files 145 145 Lines 25366 25366 ========================================== - Hits 19852 19503 -349 - Misses 5514 5863 +349 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5657?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5657/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5657/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5657/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5657/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.75%)` | :arrow_up: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5657/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.32% <0.00%> (+31.77%)` | :arrow_up: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5657/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5657?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5657?src=pr&el=footer). Last update [2e6bb0e...e839f19](https://codecov.io/gh/huggingface/transformers/pull/5657?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,656
closed
Truncated Outputs by t5 fine-tuned models
I fine-tuned t5-small over CNN/DM dataset using the finetune_t5.sh script. The outputs produced by the saved fine-tuned model is okayish but it's getting cut i.e., producing incomplete sentence at the end. Example : Artcile: (CNN)The only thing crazier than a guy in snowbound Massachusetts boxing up the powdery white stuff and offering it for sale online? People are actually buying it. For $89, self-styled entrepreneur Kyle Waring will ship you 6 pounds of Boston-area snow in an insulated Styrofoam box -- enough for 10 to 15 snowballs, he says.Kyle Waring died last week. But not if you live in New England or surrounding states. "We will not ship snow to any states in the northeast!" says Waring's website, ShipSnowYo.com. "We're in the business of expunging snow!" His website and social media accounts claim to have filled more than 133 orders for snow -- more than 30 on Tuesday alone, his busiest day yet. With more than 45 total inches, Boston has set a record this winter for the snowiest month in its history. Most residents see the huge piles of snow choking their yards and sidewalks as a nuisance, but Waring saw an opportunity. According to Boston.com, it all started a few weeks ago, when Waring and his wife were shoveling deep snow from their yard in Manchester-by-the-Sea, a coastal suburb north of Boston. He joked about shipping the stuff to friends and family in warmer states, and an idea was born. His business slogan: "Our nightmare is your dream!" At first, ShipSnowYo sold snow packed into empty 16.9-ounce water bottles for $19.99, but the snow usually melted before it reached its destination. So this week, Waring began shipping larger amounts in the Styrofoam cubes, which he promises will arrive anywhere in the U.S. in less than 20 hours. He also has begun selling a 10-pound box of snow for $119. Many of his customers appear to be companies in warm-weather states who are buying the snow as a gag, he said. Whether Waring can sustain his gimmicky venture into the spring remains to be seen. But he has no shortage of product. "At this rate, it's going to be July until the snow melts," he told Boston.com. "But I've thought about taking this idea and running with it for other seasonal items. Maybe I'll ship some fall foliage." Summary produced by t5-small fine-tuned over CNN/DM : Kyle Waring will ship you 6 pounds of snow in an insulated Styrofoam box for $89 . The self-styled entrepreneur says he will not ship snow to any states in the northeast . Waring's website and social media accounts claim to have filled more than 133 orders for snow . "We're in the business of expunging snow!" Waring says . He has begun selling a 10-pound box of snow for $119 . His business slogan: "Our nightmare is your At first I thought this might be because the model hasn't converged as I just ran for 1 epoch but it's producing similar truncated outputs even for t5-small fine-tuned over cnn/dm for 5 epochs.Also this problem is not related to min_length or max_length parameters I think, as it produced similar outputs for all combinations of those two parameters. Tried changing --max_source_length, --max_target_length, --val_max_target_length, --test_max_target_length(these 4 parameters are present in finetune.py) parameter's values too from their default values before fine-tuning but no use. What might be the reason for this truncation? Is this a problem of the fine-tuning code used to fine-tune pretrained models as pre-trained models don't produce this kind of outputs.
07-10-2020 10:50:31
07-10-2020 10:50:31
Hi, what are the arguments for `.generate` method ? you can control the generation length using `max_length` and `min_length` parameter. And if you want to see if there's something wrong with the fine-tuning code, then take the default t5-small model (it's already trained for summerization) and generate summaries using it and compare with your model. This should give you some idea.<|||||>The arguments for the .generate method are (input_ids=input_ids, attention_mask=attention_mask, early_stopping= True, length_penalty = 2.0, max_length = 142, min_length = 56, no_repeat_ngram_size = 3, num_beams = 4), where input_ids and attention_mask are the corresponding tensors obtained through tokenizer.encode(). The config.json file of my saved t5-small model fine-tuned on cnn/dm is same as the default t5-small model(I cross-checked that). I set the parameters I'm passing while running run_eval.py the same for both fine-tuned t5-small and default t5-small but the former produces truncated outputs whereas the later produces complete outputs. Yeah I can control the generation length by the min_length and max_length parameters but in default t5-small model whatever be the above two parameters it always produced complete sentences whose length are within that range but in case of the fine-tuned model its giving truncated outputs for all combinations of these two parameters. That's why I strongly felt that there was some problem with the fine-tuning code.<|||||>cc @sshleifer <|||||>Since the default --max_source_length is 1024 and some articles in CNN are bigger than that, thought that the truncation of the input sentences was messing up the fine-tuned model and tried fine-tuning t5-small over xsum. The xsum articles are relatively smaller and none of them exceeds 1024 tokens. Used --max_target_length=60 -- val_max_target_length=60 --test_max_target_length=100 in finetune.py as they are mentioned as reasonable setting for XSUM. Ran the script finetune_t5.sh for xsum, i.e., python finetune.py \ --data_dir=xsum \ --model_name_or_path=t5-small \ --learning_rate=3e-5 \ --train_batch_size=8 \ --eval_batch_size=4 \ --output_dir=xsum_results \ --max_source_length=1024 \ --val_check_interval=0.1 --n_val=200 \ --do_train --do_predict \ $@ The outputs produced by the best_tfmr model for the test.souce dataset of xsum is still truncated as given by the test_generations.txt Eg. Article : (1st article of test.source dataset of xsum) The London trio are up for best UK act and best album, as well as getting two nominations in the best song category."We got told like this morning 'Oh I think you're nominated'", said Dappy."And I was like 'Oh yeah, which one?' And now we've got nominated for four awards. I mean, wow!"Bandmate Fazer added: "We thought it's best of us to come down and mingle with everyone and say hello to the cameras. 
And now we find we've got four nominations."The band have two shots at the best song prize, getting the nod for their Tynchy Stryder collaboration Number One, and single Strong Again.Their album Uncle B will also go up against records by the likes of Beyonce and Kanye West.N-Dubz picked up the best newcomer Mobo in 2007, but female member Tulisa said they wouldn't be too disappointed if they didn't win this time around."At the end of the day we're grateful to be where we are in our careers."If it don't happen then it don't happen - live to fight another day and keep on making albums and hits for the fans."Dappy also revealed they could be performing live several times on the night.The group will be doing Number One and also a possible rendition of the War Child single, I Got Soul.The charity song is a re-working of The Killers' All These Things That I've Done and is set to feature artists like Chipmunk, Ironik and Pixie Lott.This year's Mobos will be held outside of London for the first time, in Glasgow on 30 September.N-Dubz said they were looking forward to performing for their Scottish fans and boasted about their recent shows north of the border."We just done Edinburgh the other day," said Dappy."We smashed up an N-Dubz show over there. We done Aberdeen about three or four months ago - we smashed up that show over there! Everywhere we go we smash it up!" Output :(output produced by the best_tfmr for the 1st article of test.source dataset of xsum) N-Dubz have announced they have been nominated for the UK's best song prize. They have been told 'Oh yeah, which one?' - and now they've got four nominations. "We're going to be the best newcomer Similarly, all the others outputs too are truncated at the end similar to the previous case(fine-tuned over CNN/DM) . Once the default model goes under fine-tuning, it's unable to finish the summary with EOS token but getting cut abruptly. <|||||>I think part of the problem may be that t5 tokenizer is not adding EOS token. @patrickvonplaten ```python ipdb> tok_bart = AutoTokenizer.from_pretrained('facebook/bart-large-cnn') ipdb> tok_bart('sentence') {'input_ids': [0, 19530, 4086, 2], 'attention_mask': [1, 1, 1, 1]} ipdb> tok_t5 = AutoTokenizer.from_pretrained('t5-small') ipdb> tok_t5('sentence') {'input_ids': [7142], 'attention_mask': [1]} ``` So maybe the model is training on targets without EOS, and eventually learns to stop generating it?<|||||>Thanks a lot for checking this @sshleifer! Yeah, I agree - I think T5 should add the EOS token to the end. Is there a reason why T5 does not add the EOS token? @thomwolf @mfuntowicz @n1t0 ?<|||||>Yes, in case of T5 we manually need to add ` </s>` at the end of text. I think this same issue is causing [this](https://discuss.huggingface.co/t/generate-very-short-summaries/277/5) <|||||>@patil-suraj /others Have you ran clean experiments with and without adding `<s>`? I don't want to merge #5866 this without more evidence that it is helpful, and [my first experiment](https://github.com/huggingface/transformers/pull/5866) did not result in any change. To those of you on many of these related issues, sorry for spamming. <|||||>Hi @sshleifer, in all of my T5 experiments I didn't use the bos token `<s>` at all, all of those experiments gave expected results (even better in some cases). But `</s>` is very important, without it the model generates really weird text, and its very easy to forget. So adding `</s>` automatically is really important. `<s>` won't matter<|||||>Ok, I'll merge the change. 
You won't need to add it anymore.<|||||>@tromedlov22 Did you ever figure out what the issue was? I have the same problem; it doesn't seem to be an issue with the tokenizer adding eos since it's doing that. <|||||>#5866 This solved my issue. If you're still facing the issue, post your sample output, maybe along with the input and the hyper-params you're using.<|||||>hey guys, I am facing the same issue with truncation. My input: When I first entered high school I was very nervous as it was a new school for me and it was a big adjustment</s>. I was overwhelmed with work and mentally wasn't staying optimistic as I found it hard to manage my time and make friends. I felt like I wasn't good enough, and this caused me to treat myself like I wasn't worthy of being at such a place</s>. In terms of behavior to others, I would say it made me more shy while still adapting to the new environment</s>. Output: when I first entered high school I was very nervous as it was a new school for me. I felt like I wasn't good enough to manage my time and make friends. it made me more shy while still adapting to Generate args: tokens_input, min_length=0, max_length=50, num_beams=4, early_stopping=True, no_repeat_ngram_size=3, num_return_sequences=2,
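For reference, a minimal sketch of appending `</s>` by hand, which is only needed on older versions whose T5 tokenizer does not add EOS automatically (the source/target strings and length limits below are placeholders, not values from this thread):

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")

# placeholder source/target pair
source = "summarize: The quick brown fox jumped over the lazy dog near the river bank."
target = "A fox jumped over a dog."

# older tokenizer versions do not append </s>, so add it manually
if not source.endswith(tokenizer.eos_token):
    source = source + " " + tokenizer.eos_token
if not target.endswith(tokenizer.eos_token):
    target = target + " " + tokenizer.eos_token

source_enc = tokenizer(source, max_length=512, truncation=True, return_tensors="pt")
target_enc = tokenizer(target, max_length=64, truncation=True, return_tensors="pt")
print(source_enc["input_ids"][0][-1].item() == tokenizer.eos_token_id)  # True when EOS is present
```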
transformers
5,655
closed
Create model card
Create model card for T5-small fine-tuned on SQUAD v2
07-10-2020 10:35:52
07-10-2020 10:35:52
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5655?src=pr&el=h1) Report > Merging [#5655](https://codecov.io/gh/huggingface/transformers/pull/5655?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2e6bb0e9c37655a03adaa3238dd6d4645fba8dc1&el=desc) will **decrease** coverage by `0.46%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5655/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5655?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5655 +/- ## ========================================== - Coverage 78.26% 77.80% -0.47% ========================================== Files 145 145 Lines 25366 25366 ========================================== - Hits 19852 19735 -117 - Misses 5514 5631 +117 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5655?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5655/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5655/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5655/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5655/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.50%)` | :arrow_up: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5655/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.32% <0.00%> (+31.77%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5655?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5655?src=pr&el=footer). Last update [2e6bb0e...98c16fd](https://codecov.io/gh/huggingface/transformers/pull/5655?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,654
closed
❓ Difficulties to reproduce BART results on CNN/DM by fine-tuning bart-large
# ❓ Help I'm trying to fine-tune BART on CNN/DM by myself (so, starting from the `facebook/bart-large` checkpoint). However, I can't reproduce the results so far... The BART authors report an R1 score of `44.16` in their paper, but my best checkpoint so far is only `42.53`. It's not an issue with the eval script, as I can reproduce the authors' results from the checkpoint `facebook/bart-large-cnn`. I get a score of `44.09` using this checkpoint. I tried several hyper-parameters: the ones provided in the example folder, but also the ones used in the fairseq repo. It doesn't change anything... --- I'm a bit at a loss on how to reproduce these fine-tuning scores... Could anyone fine-tune BART successfully using the `transformers` repo? If yes, can you share your parameters? Any help would be greatly appreciated! @sshleifer
07-10-2020 10:33:19
07-10-2020 10:33:19
Are the outputs produced by your best-checkpoint after fine-tuning producing proper outputs? or are the truncated at the end? I did fine-tune t5-small on CNN/DM but the best-checkpoint was producing outputs which were truncated in the end(for sample output, I just raised an issue, refer to that) and this was leading to reduced R1 scores too. Just wanted to know if you faced the same issue or if not what might be the reason for it, as I couldn't find why. Thanks.<|||||>@cola I haven't tried finetuning bart-large. Could take a pass if you have a command you are running that I can reproduce. Without code, I can speculate on ideas but I can't check if you are already doing them, so sorry if this is useless: (1) @tromedlov22 's idea reminds me that you should make sure you set config.task_specific_params ```python def use_task_specific_params(model, task): # update config with summarization specific params task_specific_params = model.config.task_specific_params if task_specific_params is not None: model.config.update(task_specific_params.get(task, {})) use_task_specific_params(model, 'summarization') ``` (2) Another idea, I suspect the authors checked rouge every epoch and stopped at the best validation rouge, (roughly what `finetune.py`) and that might help results. For reference, the params I see are: ``` {'early_stopping': True, 'length_penalty': 2.0, 'max_length': 142, 'min_length': 56, 'no_repeat_ngram_size': 3, 'num_beams': 4}} ``` (3) IIRC, authors use `label_smoothing_cross_entropy` do you? (4) for cnn, truncation parameters matter on the target side. (5) if you are purely interested in reproducing finetuning performance, I would experiment with xsum since it trains 30% faster than cnn (shorter targets). (and make sure to use `AutoConfig.from_pretrained('facebook/bart-large-xsum')` params) You could also use wandb and then share your logs, which would allow me to give better advice.<|||||>@tromedlov22 Thanks for the answer. I checked but the answer seems fine, not truncated at the end. I guess we are having different problem. @sshleifer Thanks for the very detailed answer ! I can't give you a one-command for reproducing, I modified the example code to add missing details from the Fairseq repo, such as `label-smoothing` ! --- > (3) IIRC, authors use label_smoothing_cross_entropy do you? Yes I do > Another idea, I suspect the authors checked rouge every epoch and stopped at the best validation rouge, (roughly what finetune.py) and that might help results. Indeed I'm saving only at the end of training. I will try that. > (5) if you are purely interested in reproducing finetuning performance, I would experiment with xsum since it trains 30% faster than cnn (shorter targets). (and make sure to use AutoConfig.from_pretrained('facebook/bart-large-xsum') params) You could also use wandb and then share your logs, which would allow me to give better advice. Thanks for the advice ! > (4) for cnn, truncation parameters matter on the target side. What do you mean ?<|||||>That would be a very useful PR @cola ! <|||||>I could improve a my results by using early-stopping, thank you very much for the idea @sshleifer ! Now I have **43.68** as R1. Almost 44.16 from the paper ! I'm trying to find what can cause this small difference, and I would love to hear your opinion about this : I'm training with batch-size 1 (I can't fit more in my 16Gb memory). The authors fine-tuned it with batch-size 2 (with 32Gb memory). Can it come from here ? 
Does the layer batch-normalization act differently with single-samples batch for example ?<|||||>I'm in a similar place with machine translation. The things I know to be different from fairseq are: - [ ] (probably only matters for MT) their dataloader creates 1 batch for every N tokens. - [ ] dropout, attention_dropout (need to be set through config) - [ ] weight_decay = 0.1 - [ ] adam_betas - [ ] lr_scheduler=polynomial_decay - [ ] warmup_updates - [ ] Did you figure out whether update_freq is the same as `gradient_accumulation_steps`? if you have all those squared away, the only other thing I can think of is that the embeddings (we use `model.model.shared` , they don't) somehow become untied or get different gradients. Let me know if any of these have mattered, cause I'm trying to prioritize what to implement in `transformers`<|||||>Here is what I did so far : - [ ] (probably only matters for MT) their dataloader creates 1 batch for every N tokens. - [x] dropout, attention_dropout (need to be set through config) - [x] weight_decay = 0.1 - [ ] adam_betas - [x] lr_scheduler=polynomial_decay - [x] warmup_updates - [ ] Did you figure out whether update_freq is the same as gradient_accumulation_steps? Implementing the first one seems complicated, so I didn't try. Thanks for the help, the detailed list of things to try is awesome ! So far I'm satisfied with the results, it's really close to the paper's results. Maybe some tiny difference in the code is responsible for the difference ? If I have more time I will try the other things I didn't try so far :)<|||||>I am having similar problems with this myself. @Colanim do you know which if your above changes had the largest impact so I can begin with those? @sshleifer I think there is a bug with `label_smoothed_nll_loss`. I have tried using it with current master and I am getting infinite losses because the `bs` term is zero and this is the denominator in line 45 (`return loss / bs, nll_loss / bs`). <|||||>wowo great catch this line I wrote is broken in so many ways: ```python bs = pad_mask.long().sum() # pad mask has 1 where labels.eq(pad_token_id). This is num pad tokens in the batch.... ``` I would delete the denominator if I were you. In my experience: warmup_updates can help a lot, as well as playing with gradient_accumulation_batches. (more for MT, lower -> better). But interested in @Colanim 's experience. BTW, thanks to @stas00 you can now pass `--dropout`, `--attention_dropout`, `--decoder_layerdrop`, and `--encoder_layerdrop` through the command line. <|||||>@Colanim can you rerun evaluation on your 43.68 R1 model? I hope that #6526 might have helped close the gap! It doesn't help for bart-large-cnn, but it does help bart-large-xsum.<|||||>Will try as soon as I can ! I have to find my checkpoint... ^^<|||||>What command are you using @Colanim ? I get OOM even with BS=1 on a 32GB v100 GPU. 
@sshleifer ``` python finetune.py \ --data_dir=data/cnn_dm/ \ --output_dir=${RESULTS_DIR} \ --learning_rate=3e-5 \ --fp16 \ --gpus 8 \ --do_train \ --do_predict \ --n_val 1000 \ --val_check_interval 0.1 \ --train_batch_size=1 --gradient_accumulation_steps=4 \ --eval_batch_size=1 \ --max_steps 20000 --warmup_steps=500 \ --eval_max_gen_length=142 --max_source_length=1042 --max_target_length=56 \ --sortish_sampler \ --lr_scheduler polynomial \ --label_smoothing 0.1 \ --weight_decay 0.01 \ --dropout 0.1 --attention_dropout 0.1 --gradient_clip_val=0.1 --early_stop_callback=1 ``` and initializing model without autoconfig as ``` config = BartConfig(**json.load(open(args.config_path, "r"))) model = BartForConditionalGeneration(config) tokenizer = BartTokenizer.from_pretrained( 'facebook/bart-large-cnn') # Downloads vocab and merges file automatically ```<|||||>+ `Try --num_sanity_val_steps=0 --eval_beams 2` + Cola is starting with `model= BartForConditionalGeneration.from_pretrained('facebook/bart-large')` this will do better than random init.<|||||>That works initially but fails after ~15k steps - what eval_max_gen_length are you using? not sure if you froze embeds as mentioned in #6711 for BART CNN/DM as well. ``` Traceback (most recent call last): File "finetune.py", line 446, in <module> main(args) File "finetune.py", line 421, in main logger=logger, File "/workspace/bart/lightning_base.py", line 369, in generic_train trainer.fit(model) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn result = fn(self, *args, **kwargs) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1058, in fit results = self.accelerator_backend.spawn_ddp_children(model) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/accelerators/ddp_backend.py", line 123, in spawn_ddp_children results = self.ddp_train(local_rank, mp_queue=None, model=model, is_master=True) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/accelerators/ddp_backend.py", line 224, in ddp_train results = self.trainer.run_pretrain_routine(model) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1239, in run_pretrain_routine self.train() File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 394, in train self.run_training_epoch() File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 516, in run_training_epoch self.run_evaluation(test_mode=False) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 582, in run_evaluation eval_results = self._evaluate(self.model, dataloaders, max_batches, test_mode) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 331, in _evaluate output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 661, in evaluation_forward output = model(*args) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 577, in __call__ result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/overrides/data_parallel.py", line 174, in forward output = self.module.validation_step(*inputs[0], **kwargs[0]) File "finetune.py", line 175, in validation_step return self._generative_step(batch) File "finetune.py", line 218, in _generative_step 
max_length=self.eval_max_length, File "/opt/conda/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context return func(*args, **kwargs) File "/workspace/bart/generation_utils.py", line 469, in generate model_specific_kwargs=model_specific_kwargs, File "/workspace/bart/generation_utils.py", line 648, in _generate_beam_search outputs = self(**model_inputs) # (batch_size * num_beams, cur_len, vocab_size) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 577, in __call__ result = self.forward(*input, **kwargs) File "/workspace/bart/modeling_bart.py", line 1037, in forward return_dict=return_dict, File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 577, in __call__ result = self.forward(*input, **kwargs) File "/workspace/bart/modeling_bart.py", line 909, in forward return_dict=return_dict, File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 577, in __call__ result = self.forward(*input, **kwargs) File "/workspace/bart/modeling_bart.py", line 570, in forward output_attentions=output_attentions, File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 577, in __call__ result = self.forward(*input, **kwargs) File "/workspace/bart/modeling_bart.py", line 443, in forward x = self.activation_fn(self.fc1(x)) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 577, in __call__ result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 87, in forward return F.linear(input, self.weight, self.bias) File "/opt/conda/lib/python3.6/site-packages/torch/nn/functional.py", line 1676, in linear output = input.matmul(weight.t()) RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 15.78 GiB total capacity; 14.56 GiB already allocated; 11.44 MiB free; 14.79 GiB reserved in total by PyTorch) ```<|||||>Definitely use `--freeze_embeds`. I have never seen it hurt metrics. I have actually never tried to finetune on cnn_dm, but interested to hear your results!<|||||>Still OOMs even with eval_beams=1. #7004 works for me, will confirm when I have it working e2e <|||||>Unfortunately I'm not working with BART anymore these days... I didn't try more experiments<|||||>Hi, @Colanim , could you share you eval script that get a score of 44.09 with facebook/bart-large-cnn? Thanks!<|||||>Basically I use `nlp` package to get the `cnn_dm` data, then run generation with : ``` preds = model.generate(samples['article'], num_beams=4, length_penalty=2, max_length=142, min_length=56, early_stopping=True, no_repeat_ngram_size=3) ``` and save the predictions and gold in text files. Then use the `files2rouge` package to get ROUGE scores. Also don't forget to tokenize the predictions and gold with `StanFord CoreNLP` !<|||||>Hi, @Colanim I tried to reproduce the paper's results from the checkpoint facebook/bart-large-cnn, but somehow my rouge1 score is only 42.62. I tried the following steps, could you help me to find out what's wrong? Thanks! 
**infer:** ``` from transformers import BartTokenizer, BartForConditionalGeneration tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn') model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn') source_pwd='./test.source' input_sents=open(source_pwd,'r',encoding='utf8').readlines() with open('./test.pred','w',encoding='utf8') as out: inputs = tokenizer(input_sents, max_length=1024, return_tensors='pt',truncation=True,padding=True) summary_ids = model.generate(inputs['input_ids'], num_beams=4, length_penalty=2,max_length=142, min_length=56,early_stopping=True,no_repeat_ngram_size=3) for summary_id in summary_ids: out.write(tokenizer.decode(summary_id, skip_special_tokens=True, clean_up_tokenization_spaces=False).strip()+'\n') ``` **eval:** cat test.target | java edu.stanford.nlp.process.PTBTokenizer -ioFileList -preserveLines > test.target.tokenized cat test.pred | java edu.stanford.nlp.process.PTBTokenizer -ioFileList -preserveLines > test.pred.tokenized files2rouge test.pred.tokenized test.target.tokenized <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> @Cola I haven't tried finetuning bart-large. Could take a pass if you have a command you are running that I can reproduce. Without code, I can speculate on ideas but I can't check if you are already doing them, so sorry if this is useless: > > (1) > @tromedlov22 's idea reminds me that you should make sure you set config.task_specific_params > > ```python > def use_task_specific_params(model, task): > # update config with summarization specific params > task_specific_params = model.config.task_specific_params > if task_specific_params is not None: > model.config.update(task_specific_params.get(task, {})) > use_task_specific_params(model, 'summarization') > ``` > > (2) > Another idea, I suspect the authors checked rouge every epoch and stopped at the best validation rouge, (roughly what `finetune.py`) and that might help results. > > For reference, the params I see are: > > ``` > {'early_stopping': True, > 'length_penalty': 2.0, > 'max_length': 142, > 'min_length': 56, > 'no_repeat_ngram_size': 3, > 'num_beams': 4}} > ``` > > (3) IIRC, authors use `label_smoothing_cross_entropy` do you? > (4) for cnn, truncation parameters matter on the target side. > (5) if you are purely interested in reproducing finetuning performance, I would experiment with xsum since it trains 30% faster than cnn (shorter targets). (and make sure to use `AutoConfig.from_pretrained('facebook/bart-large-xsum')` params) You could also use wandb and then share your logs, which would allow me to give better advice. Hi @sshleifer, I'm trying to test the best fine-tuned SUMM model on CNNDM dataset. But seems like I need to use args.use_task_specific_params, but can't use it by simply add --task_specific_params. Is there a solution for that?
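On the `label_smoothed_nll_loss` bug mentioned earlier in this thread, a possible sketch of the corrected loss — normalizing by the number of non-pad target tokens instead of the broken `bs` denominator is my assumption of the intended behavior, not the exact code from `examples/`:

```python
import torch

def label_smoothed_nll_loss(lprobs, target, epsilon, pad_token_id):
    # lprobs: (batch * seq_len, vocab) log-probabilities; target: (batch * seq_len,) token ids
    target = target.unsqueeze(-1)
    pad_mask = target.eq(pad_token_id)
    safe_target = target.clamp(min=0)            # avoid gathering at negative/pad positions
    nll_loss = -lprobs.gather(dim=-1, index=safe_target)
    smooth_loss = -lprobs.sum(dim=-1, keepdim=True)
    nll_loss.masked_fill_(pad_mask, 0.0)
    smooth_loss.masked_fill_(pad_mask, 0.0)
    num_tokens = (~pad_mask).sum()               # normalize by non-pad tokens, not batch size
    eps_i = epsilon / lprobs.size(-1)
    loss = (1.0 - epsilon) * nll_loss.sum() + eps_i * smooth_loss.sum()
    return loss / num_tokens, nll_loss.sum() / num_tokens
```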
transformers
5,653
closed
AutoTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details ``` > from transformers import AutoTokenizer, AutoModelWithLMHead > tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext") I0710 17:52:53.548153 139925919450880 tokenization_utils_base.py:1167] Model name 'hfl/chinese-roberta-wwm-ext' not found in model shortcut name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). Assuming 'hfl/chinese-roberta-wwm-ext' is a path, a model identifier, or url to a directory containing tokenizer files. I0710 17:52:59.942922 139925919450880 tokenization_utils_base.py:1254] loading file https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-roberta-wwm-ext/vocab.json from cache at None I0710 17:52:59.943219 139925919450880 tokenization_utils_base.py:1254] loading file https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-roberta-wwm-ext/merges.txt from cache at None I0710 17:52:59.943420 139925919450880 tokenization_utils_base.py:1254] loading file https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-roberta-wwm-ext/added_tokens.json from cache at /home/ubuntu/.cache/torch/transformers/23740a16768d945f44a24590dc8f5e572773b1b2868c5e58f7ff4fae2a721c49.3889713104075cfee9e96090bcdd0dc753733b3db9da20d1dd8b2cd1030536a2 I0710 17:52:59.943602 139925919450880 tokenization_utils_base.py:1254] loading file https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-roberta-wwm-ext/special_tokens_map.json from cache at /home/ubuntu/.cache/torch/transformers/6f13f9fe28f96dd7be36b84708332115ef90b3b310918502c13a8f719a225de2.275045728fbf41c11d3dae08b8742c054377e18d92cc7b72b6351152a99b64e4 I0710 17:52:59.943761 139925919450880 tokenization_utils_base.py:1254] loading file https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-roberta-wwm-ext/tokenizer_config.json from cache at /home/ubuntu/.cache/torch/transformers/5bb5761fdb6c8f42bf7705c27c48cffd8b40afa8278fa035bc81bf288f108af9.1ade4e0ac224a06d83f2cb9821a6656b6b59974d6552e8c728f2657e4ba445d9 I0710 17:52:59.943786 139925919450880 tokenization_utils_base.py:1254] loading file https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-roberta-wwm-ext/tokenizer.json from cache at None Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ubuntu/anaconda3/envs/deeplearning/lib/python3.6/site-packages/transformers/tokenization_auto.py", line 217, in from_pretrained return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/home/ubuntu/anaconda3/envs/deeplearning/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1140, in from_pretrained return cls._from_pretrained(*inputs, **kwargs) File "/home/ubuntu/anaconda3/envs/deeplearning/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1288, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File 
"/home/ubuntu/anaconda3/envs/deeplearning/lib/python3.6/site-packages/transformers/tokenization_roberta.py", line 171, in __init__ **kwargs, File "/home/ubuntu/anaconda3/envs/deeplearning/lib/python3.6/site-packages/transformers/tokenization_gpt2.py", line 167, in __init__ with open(vocab_file, encoding="utf-8") as vocab_handle: TypeError: expected str, bytes or os.PathLike object, not NoneType ``` Does it support `hfl/chinese-roberta-wwm-ext` now? Or what should i do. Hope for help, thx! @julien-c <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
07-10-2020 10:11:08
07-10-2020 10:11:08
I also got the same issue. Maybe you can try `BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")` It works for me. <|||||>> I also got the same issue. > Maybe you can try `BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")` > It works for me. Yes!! I succeeded, thank you very much for your help!
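Spelling the workaround out: the checkpoint uses a BERT-style vocabulary even though "roberta" is in its name, so loading it with the BERT classes is the safe choice (whether newer `AutoTokenizer` releases resolve it correctly may vary by version):

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")
model = BertModel.from_pretrained("hfl/chinese-roberta-wwm-ext")

inputs = tokenizer("这是一个测试句子。", return_tensors="pt")
outputs = model(**inputs)
print(outputs[0].shape)  # (1, seq_len, hidden_size)
```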
transformers
5,652
closed
Create README.md - Model card for sentence-transformers/bert-base-nli-mean-tokens
Model card for https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens
07-10-2020 09:21:41
07-10-2020 09:21:41
Thanks for sharing! Note that we don't currently have automated deployment on ExBERT (cc @bhoov) ➡️ [model page](https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens)
transformers
5,651
closed
T5 fp16 overflow in forward (T5DenseReluDense)
# 🐛 Bug Using `AutoModelWithLMHead.from_pretrained("t5-base")` for fine-tuning, after 34 iterations I get a nan loss from the forward method. After debugging it, I found that the source of the nan is an overflow that happens in `T5DenseReluDense` when running `h = self.wo(h)`. The result of this forward is a tensor that has `inf` in one of its values, which later on causes the nan loss. I looked into this calculation with fp32 and saw that this `inf` is caused by a value of 66246.3906, which is over the maximum value of 65504 in fp16. This issue only happens with fp16 (opt_level="O1"); with opt_level="O0" everything is fine. ## Information Model I am using (Bert, XLNet ...): T5 Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce I don't have step-by-step instructions, because I would need to upload my entire dataset for that. I have a pickle of the vector `h` and the weights of `self.wo` that cause the overflow in `T5DenseReluDense`; I can upload it if it might help. ## Expected behavior Get a numeric loss. ## Environment info - `transformers` version: 3.0.2 - Platform: Linux-5.3.0-1030-aws-x86_64-with-debian-buster-sid - Python version: 3.6.10 - PyTorch version (GPU?): 1.5.1 (True) - Tensorflow version (GPU?): not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
07-10-2020 07:14:12
07-10-2020 07:14:12
See: #4586
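For anyone hitting the same overflow, a small diagnostic sketch: it registers forward hooks on the feed-forward output projections so the first layer producing non-finite activations gets reported during an fp16 forward/training step. The module-name matching (`.wo`) is an assumption about T5's parameter naming, and this is a diagnostic, not a fix.

```python
import torch
from transformers import T5Model

model = T5Model.from_pretrained("t5-base")

def make_hook(name):
    def hook(module, inputs, output):
        # report inf/nan values coming out of this module
        if isinstance(output, torch.Tensor) and not torch.isfinite(output).all():
            print(f"non-finite activations in {name}")
    return hook

# hook the T5DenseReluDense output projections
for name, module in model.named_modules():
    if name.endswith(".wo"):
        module.register_forward_hook(make_hook(name))

# then run the usual (fp16) forward/training step; offending layers will be printed
```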
transformers
5,650
closed
Wrong answers from Longformer model even on simple questions
I am using the pretrained model `allenai/longformer-large-4096-finetuned-triviaqa`, however upon inspecting it on my system and the demo on the huggingface website, the outputs seem off even for very simple examples and samples from the dataset. 1. [Example 1](https://www.dropbox.com/s/9h3dcqpwq0n1b05/download%20%283%29.png?dl=0) 2. [Example 2](https://www.dropbox.com/s/40e93m2odix8x1p/download%20%284%29.png?dl=0) 3. [Example 3](https://www.dropbox.com/s/s5t1k6jluyzfs33/download%20%286%29.png?dl=0) 4. [Example 4](https://www.dropbox.com/s/oyps5a5gr2e4c25/download%20%287%29.png?dl=0) Other models for QA (like `bert-large-uncased-whole-word-masking`) get such simple examples right.
07-10-2020 04:57:46
07-10-2020 04:57:46
Yeah this is related to a bug, see: https://github.com/huggingface/transformers/pull/4615 cc @mfuntowicz @julien-c - we should refactor the squad preprocessing in pipelines to make Longformer work.<|||||>Hi @patrickvonplaten: are there any updates with respect to this?<|||||>We will probably start working on a fix in ~2 weeks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
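Until the pipeline preprocessing is fixed, querying the checkpoint directly is the safer route. A sketch roughly following the documented `LongformerForQuestionAnswering` example linked above (the question/context strings are placeholders):

```python
import torch
from transformers import LongformerTokenizer, LongformerForQuestionAnswering

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")
model = LongformerForQuestionAnswering.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")

question = "Who was Jim Henson?"
context = "Jim Henson was a nice puppet."
encoding = tokenizer(question, context, return_tensors="pt")

start_logits, end_logits = model(**encoding)[:2]
all_tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
answer_tokens = all_tokens[torch.argmax(start_logits): torch.argmax(end_logits) + 1]
print(tokenizer.convert_tokens_to_string(answer_tokens))
```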
transformers
5,649
closed
Bugs due to design choices in LongformerTokenizer
I am using the transformers library from source (version: 3.0.2). ```python import transformers from transformers import * longformer_tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096") longformer_tokenizer.tokenize("This is a sample sentence for the tokenizer.") ``` The output I get is ``` ['This', 'Ġis', 'Ġa', 'Ġsample', 'Ġsentence', 'Ġfor', 'Ġthe', 'Ġtoken', 'izer']``` The design choice here is to use the `Ġ` as a start of every new word (except for the first word). This is in contrast with other tokenizers which insert `##` tokens for suffixes of broken words. Due to this slightly different tokenization quirk, many things could break, one of which is the following piece of code in `squad.py`: https://github.com/huggingface/transformers/blob/02a0b43014ac333a169e99d76aaba023a316e384/src/transformers/data/processors/squad.py#L106-L112 The `doc_tokens` from the processor are whitespace separated tokens, which are to be further tokenized using this code. But since each word is treated individually, and `LongformerTokenizer` doesn't insert `Ġ` for the first token, there is a problem. The resulting `all_doc_tokens` can not be correctly converted to original string using `tokenizer.convert_tokens_to_string` because it is missing the `Ġ` at the start.
07-10-2020 03:48:56
07-10-2020 03:48:56
Thanks for this issue. Longformer was trained on trivia_qa by default and not squad, so the model is not by default compatible with `squad` and needs some special post processing as shown in the example of this model, here: https://huggingface.co/transformers/model_doc/longformer.html#longformerforquestionanswering This is also related to: https://github.com/huggingface/transformers/pull/4615 <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
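A small illustration of the round-trip problem described in the issue, and of `add_prefix_space=True` as one way around it when tokenizing word by word. The constructor option is inherited from the RoBERTa-style byte-level BPE tokenizer; exact behavior may vary across transformers versions.

```python
from transformers import LongformerTokenizer

tok = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
tok_prefixed = LongformerTokenizer.from_pretrained(
    "allenai/longformer-base-4096", add_prefix_space=True
)

words = ["This", "is", "a", "sample", "sentence"]
tokens = [t for w in words for t in tok.tokenize(w)]
tokens_prefixed = [t for w in words for t in tok_prefixed.tokenize(w)]

print(tok.convert_tokens_to_string(tokens))           # words run together: no leading "Ġ" anywhere
print(tok.convert_tokens_to_string(tokens_prefixed))  # spaces preserved (including a leading space)
```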
transformers
5,648
closed
Classification accuracy on validation set didn't improve while fine-tuning BERT
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> I was fine-tuning BERT with BertForSequenceClassification on my own dataset. 0.8 of the whole dataset was used as training set and the others as validation set. The training loss decreased during the process. However, the accuracy on validation set was always around 0.5, which is similar to random guessing. And the accuracy on validation didn't improve a lot after fine-tuning. For example, from epochs 1-3, accuracy was from 0.48-0.52. So I was wondering whether this problem was caused by my dataset itself or if I did something wrong while fine-tuning it? Does anybody have any ideas on this? By the way, before this, I was fine-tuning BERT on another dataset and it did improve the classification accuracy a lot . <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
07-10-2020 03:12:27
07-10-2020 03:12:27
Hello! This is an interesting question, but is kind of out of scope for the Github issues. We just opened a forum at [discuss.huggingface.co](https://discuss.huggingface.co). Do you think you could ask your question over there? Thank you.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,647
closed
T5 TorchScript (Trace) Conversion
# ❓ Questions & Help How can we correctly set inputs for t5 TorchScript? ## Details <!-- Description of your issue --> ```python from transformers import T5Model import torch tokens_tensor = torch.ones(1, 10, dtype=torch.long) model = T5Model.from_pretrained("t5-small", torchscript=True) model.eval() scripted_model = torch.jit.trace(model, (tokens_tensor, tokens_tensor)) ``` Error: ``` ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds ```
07-10-2020 03:09:26
07-10-2020 03:09:26
Hey @gyin-ai - can you specify your version? I cannot reproduce the error on master.<|||||>In master, the above example works for me but it doesn't work for T5ForConditionalGeneration ``` from transformers import T5ForConditionalGeneration import torch tokens_tensor = torch.ones(1, 10, dtype=torch.long) model = T5ForConditionalGeneration.from_pretrained("t5-small", torchscript=True) model.eval() scripted_model = torch.jit.trace(model, (tokens_tensor, tokens_tensor)) ``` It fails with the same error ```ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds```<|||||>Sadly, I don't have a good answer here :-/ The problem is that `decoder_input_ids` is not the second argument -> so that's why your function does not work. This PR would make it possible to run your code: #6268 , but it does not really solve the problem because one might want to use `input_embeds` instead of `input_ids` and she/he would run into the same problem. It would allow for torchtrace for the most general case though... i guess since usually one passes `input_ids` and `decoder_input_ids`, we could merge the PR...What do you think? @LysandreJik <|||||>``` import torch from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large", torchscript=True) model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", torchscript=True) tokenized_dict = tokenizer( ["please answer the following question: what is the boiling point of nitrogen",], ["-320.4F",], return_tensors="pt" ) input_tuple = (tokenized_dict['input_ids'], tokenized_dict['attention_mask']) traced_model = torch.jit.trace(model, input_tuple) torch.jit.save(traced_model, "flan-t5-large.pt") ``` I was trying to trace `google/flan-t5-large` model in torchScript. But I'm facing following exception: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Input In [29], in <cell line: 13>() 7 tokenized_dict = tokenizer( 8 ["please answer the following question: what is the boiling point of nitrogen",], ["-320.4F",], 9 return_tensors="pt" 10 ) 11 input_tuple = (tokenized_dict['input_ids'], tokenized_dict['attention_mask']) ---> 13 traced_model = torch.jit.trace(model, input_tuple) 14 torch.jit.save(traced_model, "flan-t5-large.pt") File ~/Library/Python/3.9/lib/python/site-packages/torch/jit/_trace.py:759, in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit) 756 return func 758 if isinstance(func, torch.nn.Module): --> 759 return trace_module( 760 func, 761 {"forward": example_inputs}, 762 None, 763 check_trace, 764 wrap_check_inputs(check_inputs), 765 check_tolerance, 766 strict, 767 _force_outplace, 768 _module_class, 769 ) 771 if ( 772 hasattr(func, "__self__") 773 and isinstance(func.__self__, torch.nn.Module) 774 and func.__name__ == "forward" 775 ): 776 return trace_module( 777 func.__self__, 778 {"forward": example_inputs}, (...) 
785 _module_class, 786 ) File ~/Library/Python/3.9/lib/python/site-packages/torch/jit/_trace.py:976, in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit) 972 argument_names = get_callable_argument_names(func) 974 example_inputs = make_tuple(example_inputs) --> 976 module._c._create_method_from_trace( 977 method_name, 978 func, 979 example_inputs, 980 var_lookup_fn, 981 strict, 982 _force_outplace, 983 argument_names, 984 ) 985 check_trace_method = module._c._get_method(method_name) 987 # Check the trace against new traces created from user-specified inputs File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1182, in Module._slow_forward(self, *input, **kwargs) 1180 recording_scopes = False 1181 try: -> 1182 result = self.forward(*input, **kwargs) 1183 finally: 1184 if recording_scopes: File ~/Library/Python/3.9/lib/python/site-packages/transformers/models/t5/modeling_t5.py:1660, in T5ForConditionalGeneration.forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict) 1657 decoder_attention_mask = decoder_attention_mask.to(self.decoder.first_device) 1659 # Decode -> 1660 decoder_outputs = self.decoder( 1661 input_ids=decoder_input_ids, 1662 attention_mask=decoder_attention_mask, 1663 inputs_embeds=decoder_inputs_embeds, 1664 past_key_values=past_key_values, 1665 encoder_hidden_states=hidden_states, 1666 encoder_attention_mask=attention_mask, 1667 head_mask=decoder_head_mask, 1668 cross_attn_head_mask=cross_attn_head_mask, 1669 use_cache=use_cache, 1670 output_attentions=output_attentions, 1671 output_hidden_states=output_hidden_states, 1672 return_dict=return_dict, 1673 ) 1675 sequence_output = decoder_outputs[0] 1677 # Set device for model parallelism File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1182, in Module._slow_forward(self, *input, **kwargs) 1180 recording_scopes = False 1181 try: -> 1182 result = self.forward(*input, **kwargs) 1183 finally: 1184 if recording_scopes: File ~/Library/Python/3.9/lib/python/site-packages/transformers/models/t5/modeling_t5.py:949, in T5Stack.forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 947 else: 948 err_msg_prefix = "decoder_" if self.is_decoder else "" --> 949 raise ValueError(f"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds") 951 if inputs_embeds is None: 952 assert self.embed_tokens is not None, "You have to initialize the model with valid token embeddings" ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds How should I trace the T5 model? Can you provide an example? Thanks ```<|||||>@dhrubo-os have you fixed it? I am also seeing the same issue<|||||>@dhrubo-os this can be fixed; we just need to pass the inputs as below ``` traced_token_predictor = torch.jit.trace(model, [ input_ids["input_ids"], input_ids["attention_mask"], decoder_input_ids["input_ids"] ]) ``` since the model's second argument is attention_mask, it's taking decoder_input_ids as None
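Pulling the workaround together, a hedged end-to-end sketch: it assumes a transformers version where `decoder_input_ids` is the third positional argument of `T5ForConditionalGeneration.forward`, and it traces a single forward pass rather than the autoregressive `generate()` loop.

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small", torchscript=True)
model.eval()

enc = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
dec = tokenizer("Das Haus ist wunderbar.", return_tensors="pt")

# forward(input_ids, attention_mask, decoder_input_ids, ...): trace with three positional inputs
traced = torch.jit.trace(model, (enc["input_ids"], enc["attention_mask"], dec["input_ids"]))
torch.jit.save(traced, "t5-small-traced.pt")
```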
transformers
5,646
closed
Can't get (global) attention probs using Longformer
# 🐛 Bug ## Information Model I am using **Longformer**: Language I am using the model on Japanese: The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Set config.output_attentions=True 2. Use global attention (sum(global_attention_mask)>0) The following is the minimum code to reproduce the error. ~~~python3:test.py import torch from transformers import AutoModel, AutoTokenizer, AutoConfig if __name__ == '__main__': config = AutoConfig.from_pretrained("allenai/longformer-base-4096", output_attentions=True) model = AutoModel.from_pretrained("allenai/longformer-base-4096", config=config) tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096") token_ids = [[ tokenizer.cls_token_id, 10, 11, 12, tokenizer.sep_token_id, 21, 22, 23, tokenizer.sep_token_id ]] global_attention_mask = [[1,1,1,1,1,0,0,0,0]] logit, *_, attention_probs = model( torch.LongTensor(token_ids), global_attention_mask=torch.LongTensor(global_attention_mask) ) print(attention_probs[0].size()) ~~~ ~~~bash $ python3 test.py Traceback (most recent call last): File "test_longformer.py", line 16, in <module> global_attention_mask=torch.LongTensor(global_attention_mask) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/transformers/modeling_longformer.py", line 1004, in forward output_hidden_states=output_hidden_states, File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/transformers/modeling_longformer.py", line 695, in forward layer_outputs = layer_module(hidden_states, attention_mask, output_attentions,) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/transformers/modeling_longformer.py", line 658, in forward self_attn_outputs = self.attention(hidden_states, attention_mask, output_attentions=output_attentions,) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/transformers/modeling_longformer.py", line 642, in forward self_outputs = self.self(hidden_states, attention_mask, output_attentions,) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/transformers/modeling_longformer.py", line 435, in forward attn_probs = attn_probs.view(batch_size, self.num_heads, max_num_global_attn_indices, seq_len) RuntimeError: shape '[1, 12, 5, 512]' is invalid for input of size 3182592 ~~~ <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The model can output attention probs for each attention head. ~~~bash $ python3 test.py torch.Size([1, 12, 4096, 5]) ~~~ It would seem to work if I rewrite the target line as follows. https://github.com/huggingface/transformers/blob/02a0b43014ac333a169e99d76aaba023a316e384/src/transformers/modeling_longformer.py#L435 ~~~python3 #attn_probs = attn_probs.view(batch_size, self.num_heads, max_num_global_attn_indices, seq_len) attn_probs = attn_probs[:,:,:,:max_num_global_attn_indices] attn_probs = attn_probs.permute(0, 2, 1, 3) ~~~ ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version:3.0.2 - Platform:Ubuntu 18.04.4 LTS - Python version:Python 3.6.9 :: Anaconda, Inc. - PyTorch version (GPU?):1.5.1 (Yes) - Tensorflow version (GPU?): - Using GPU in script?:Yes - Using distributed or parallel set-up in script?:Yes
07-10-2020 02:12:45
07-10-2020 02:12:45
Hey @k141303, Thanks a lot for the issue - I can reproduce!<|||||>Thanks a lot for your very clean issue + proposed solution. It makes it very easy to find the error and fix it :-) BTW, in cases like this issue when you see a clear fix to the bug, Pull Requests are very welcome as well!<|||||>Hi, @patrickvonplaten I also thought this was the solution, but it turned out to create a new bug. ## To reproduce Steps to reproduce the behavior: 1. Set config.output_attentions=True 1. Use global attention (sum(global_attention_mask)>0) 1. **Use multiple GPUs** 1. **`max_num_global_attn_indices` is different in the batch** I confirmed it with the following code. (Apply the above solution by overriding.) ~~~python import math import torch from torch.nn import functional as F from transformers import LongformerModel, AutoTokenizer, AutoConfig from transformers.modeling_longformer import LongformerSelfAttention class MyLongformerSelfAttention(LongformerSelfAttention): def forward( self, hidden_states, attention_mask=None, output_attentions=False, ): attention_mask = attention_mask.squeeze(dim=2).squeeze(dim=1) # is index masked or global attention is_index_masked = attention_mask < 0 is_index_global_attn = attention_mask > 0 is_global_attn = any(is_index_global_attn.flatten()) hidden_states = hidden_states.transpose(0, 1) # project hidden states query_vectors = self.query(hidden_states) key_vectors = self.key(hidden_states) value_vectors = self.value(hidden_states) seq_len, batch_size, embed_dim = hidden_states.size() assert ( embed_dim == self.embed_dim ), f"hidden_states should have embed_dim = {self.embed_dim}, but has {embed_dim}" # normalize query query_vectors /= math.sqrt(self.head_dim) query_vectors = query_vectors.view(seq_len, batch_size, self.num_heads, self.head_dim).transpose(0, 1) key_vectors = key_vectors.view(seq_len, batch_size, self.num_heads, self.head_dim).transpose(0, 1) # attn_probs = (batch_size, seq_len, num_heads, window*2+1) attn_scores = self._sliding_chunks_query_key_matmul( query_vectors, key_vectors, self.one_sided_attn_window_size ) # values to pad for attention probs remove_from_windowed_attention_mask = (attention_mask != 0).unsqueeze(dim=-1).unsqueeze(dim=-1) # cast to fp32/fp16 then replace 1's with -inf float_mask = remove_from_windowed_attention_mask.type_as(query_vectors).masked_fill( remove_from_windowed_attention_mask, -10000.0 ) # diagonal mask with zeros everywhere and -inf inplace of padding diagonal_mask = self._sliding_chunks_query_key_matmul( float_mask.new_ones(size=float_mask.size()), float_mask, self.one_sided_attn_window_size ) # pad local attention probs attn_scores += diagonal_mask assert list(attn_scores.size()) == [ batch_size, seq_len, self.num_heads, self.one_sided_attn_window_size * 2 + 1, ], f"attn_probs should be of size ({batch_size}, {seq_len}, {self.num_heads}, {self.one_sided_attn_window_size * 2 + 1}), but is of size {attn_scores.size()}" # compute local attention probs from global attention keys and contact over window dim if is_global_attn: # compute global attn indices required through out forward fn ( max_num_global_attn_indices, is_index_global_attn_nonzero, is_local_index_global_attn_nonzero, is_local_index_no_global_attn_nonzero, ) = self._get_global_attn_indices(is_index_global_attn) # calculate global attn probs from global key global_key_attn_scores = self._concat_with_global_key_attn_probs( query_vectors=query_vectors, key_vectors=key_vectors, max_num_global_attn_indices=max_num_global_attn_indices, 
is_index_global_attn_nonzero=is_index_global_attn_nonzero, is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero, is_local_index_no_global_attn_nonzero=is_local_index_no_global_attn_nonzero, ) # concat to attn_probs # (batch_size, seq_len, num_heads, extra attention count + 2*window+1) attn_scores = torch.cat((global_key_attn_scores, attn_scores), dim=-1) # free memory del global_key_attn_scores attn_probs_fp32 = F.softmax(attn_scores, dim=-1, dtype=torch.float32) # use fp32 for numerical stability attn_probs = attn_probs_fp32.type_as(attn_scores) # free memory del attn_probs_fp32 # softmax sometimes inserts NaN if all positions are masked, replace them with 0 attn_probs = torch.masked_fill(attn_probs, is_index_masked.unsqueeze(-1).unsqueeze(-1), 0.0) # apply dropout attn_probs = F.dropout(attn_probs, p=self.dropout, training=self.training) value_vectors = value_vectors.view(seq_len, batch_size, self.num_heads, self.head_dim).transpose(0, 1) # compute local attention output with global attention value and add if is_global_attn: # compute sum of global and local attn attn_output = self._compute_attn_output_with_global_indices( value_vectors=value_vectors, attn_probs=attn_probs, max_num_global_attn_indices=max_num_global_attn_indices, is_index_global_attn_nonzero=is_index_global_attn_nonzero, is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero, ) else: # compute local attn only attn_output = self._sliding_chunks_matmul_attn_probs_value( attn_probs, value_vectors, self.one_sided_attn_window_size ) assert attn_output.size() == (batch_size, seq_len, self.num_heads, self.head_dim), "Unexpected size" attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous() # compute value for global attention and overwrite to attention output # TODO: remove the redundant computation if is_global_attn: global_attn_output = self._compute_global_attn_output_from_hidden( hidden_states=hidden_states, max_num_global_attn_indices=max_num_global_attn_indices, is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero, is_index_global_attn_nonzero=is_index_global_attn_nonzero, is_local_index_no_global_attn_nonzero=is_local_index_no_global_attn_nonzero, is_index_masked=is_index_masked, ) # get only non zero global attn output nonzero_global_attn_output = global_attn_output[ is_local_index_global_attn_nonzero[0], :, is_local_index_global_attn_nonzero[1] ] # overwrite values with global attention attn_output[is_index_global_attn_nonzero[::-1]] = nonzero_global_attn_output.view( len(is_local_index_global_attn_nonzero[0]), -1 ) attn_output = attn_output.transpose(0, 1) if output_attentions: if is_global_attn: # With global attention, return global attention probabilities only # batch_size x num_heads x max_num_global_attention_tokens x sequence_length # which is the attention weights from tokens with global attention to all tokens # It doesn't not return local attention # In case of variable number of global attantion in the rows of a batch, # attn_probs are padded with -10000.0 attention scores #attn_probs = attn_probs.view(batch_size, self.num_heads, max_num_global_attn_indices, seq_len) attn_probs = attn_probs[:,:,:,:max_num_global_attn_indices] attn_probs = attn_probs.permute(0, 2, 1, 3) else: # without global attention, return local attention probabilities # batch_size x num_heads x sequence_length x window_size # which is the attention weights of every token attending to its neighbours attn_probs = attn_probs.permute(0, 2, 1, 3) outputs = 
(attn_output, attn_probs) if output_attentions else (attn_output,) return outputs class MyLongformerModel(LongformerModel): def __init__(self, config): super().__init__(config) for i, layer in enumerate(self.encoder.layer): layer.attention.self = MyLongformerSelfAttention(config, i) self.init_weights() if __name__ == '__main__': config = AutoConfig.from_pretrained("allenai/longformer-base-4096", output_attentions=True) model = MyLongformerModel.from_pretrained("allenai/longformer-base-4096", config=config) tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096") token_ids = [[ tokenizer.cls_token_id, 10, 11, 12, tokenizer.sep_token_id, 21, 22, 23, tokenizer.sep_token_id ]]*2 global_attention_mask = [[1,1,1,1,1,0,0,0,0], [1,1,1,1,1,1,1,0,0]] device = torch.device("cuda" if torch.cuda.is_available() else "cpu") n_gpu = torch.cuda.device_count() model.to(device) if n_gpu > 1: model = torch.nn.DataParallel(model) print(f"DEVICE:{device} N_GPU:{n_gpu}") logit, *_, attention_probs = model( torch.LongTensor(token_ids), global_attention_mask=torch.LongTensor(global_attention_mask) ) print(attention_probs[0].size()) ~~~ ~~~bash username@34dcdd033731:~/Python/temp$ python3 test_longformer.py DEVICE:cuda N_GPU:4 Traceback (most recent call last): File "test_longformer.py", line 194, in <module> global_attention_mask=torch.LongTensor(global_attention_mask) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 156, in forward return self.gather(outputs, self.output_device) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 168, in gather return gather(outputs, output_device, dim=self.dim) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather res = gather_map(outputs) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map return type(out)(map(gather_map, zip(*outputs))) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map return type(out)(map(gather_map, zip(*outputs))) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in gather_map return Gather.apply(target_device, dim, *outputs) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 68, in forward return comm.gather(inputs, ctx.dim, ctx.target_device) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/cuda/comm.py", line 165, in gather return torch._C._gather(tensors, dim, destination) RuntimeError: Gather got an input of invalid size: got [1, 12, 512, 7], but expected [1, 12, 512, 5] ~~~ I think there are some solutions. For example: - Share `max_num_global_attn_indices` between GPUs. - Define `max_num_global_attn_indices` in config. I'm sorry I can't suggest a specific solution.<|||||>Thanks for the notification - will take a look next week :-) <|||||>Sorry, I forgot that today is Friday. Have a good weekend :-) ## For those facing the same problem. The following is an idea for a temporary solution to the problem. It might be helpful. 
https://github.com/huggingface/transformers/blob/02a0b43014ac333a169e99d76aaba023a316e384/src/transformers/modeling_longformer.py#L435 ↓↓↓ ~~~python #attn_probs = attn_probs.view(batch_size, self.num_heads, max_num_global_attn_indices, seq_len) attn_probs = attn_probs[:,:,:,:max_num_global_attn_indices] attn_probs = F.pad( attn_probs, (0, seq_len-max_num_global_attn_indices), "constant", 0.0, ) attn_probs = attn_probs.permute(0, 2, 1, 3) ~~~ ~~~bash $ python3 test.py DEVICE:cuda N_GPU:4 torch.Size([2, 12, 512, 512]) ~~~<|||||>@k141303 - thanks a lot for your proposed solution. Padding to the sequence length is actually a very clean solution. Since we are only returning global attention probs, I think logically it makes also sense to pad the other values with 0.0 since they weren't attended to for global attention => so we'll go for this here. Instead of `seq_len` we will pad to `window_size` so that local and global attention always have the same output dimension. I think this has a slight advantage in that the output signature is more consistent. <|||||>So in this case the output would be: ```python torch.Size([1, 12, 512, 513]) ``` which is the same as if only local attention would have been used.<|||||>@patrickvonplaten It seems that the code causing the error in commit 02a0b43 (fixed by commit 7096e47) was reintroduced at some point. The code of current commit df53643 looks like 02a0b43 instead of 7096e47.<|||||>Also, I wonder if the output is correct. Add the following lines right after the minimum code of @k141303. print(attention_probs[0][0,0,:5,:].sum(dim=1)) print(attention_probs[0][0,0,:,:5].sum(dim=0)) This shows that: 1. For each head (showing only for the first), all the rows with global attention do not sum to 1. 1. For each head (showing only for the first), all the columns with global attention do not sum to 1. Therefore neither the rows nor the column of the attention matrices can be `the attention weights from tokens with global attention to all tokens`. As far as I understand from the code, the columns are actually the attention weights from all tokens to the tokens with global attention, but this is not really useful, is it? For instance, it would be more useful to know where `CLS` puts attention instead of knowing which tokens pay attention to `CLS`. <|||||>@patrickvonplaten I think that the global attention that should be returned is a computation intermediate of the function `_compute_global_attn_output_from_hidden`. It is called `global_attn_probs` (or `global_attn_probs_float` before the dropouts are applied). If only global attention is to be returned, you could consider returning this intermediate together with the attention output of `_compute_global_attn_output_from_hidden`. If you assign it to `attn_probs` in the function `forward` then you are almost done (otherwise you have to recompute it). The dimension of this intermediate are `(H,G,L)` where `H` is the number of attention heads, `G` is the number of tokens with global attention and `L` is the text length (a multiple of `attention_window`, which I will write `W` for short). If you want the output to have dimensions `(H,L,W)` to be congruent with the local attention, you would have to transpose it before padding. This may be very confusing because the rows of the local attention would sum to 1, whereas the the first `G` columns of the global attention would sum to 1 and all the others would sum to 0. 
Since the dimensions of global attention are intrinsically different from those of local attention, it's probably better to leave them as `(H,G,L)`. You could output a tuple with local attention `(H,L,W)` and global attention `(H,G,L)` instead of a single tensor. Unfortunately reconstituting full attention matrices `(H,L,L)` is a no-go: you need Longformers precisely because this does not fit in memory.<|||||>Hey @gui11aume , good point! I guess, either way we do it, it's not perfect for Longformer.... I think the cleanest solution would actually be to add a new output type called `global_attentions` and output both `attentions` and `global_attentions`. This is more or less the same idea as outputting two tuples that you proposed. Opened an issue about it here: -> Feel free to open a PR if you want :-) It's not of very high prio for me at the moment - so I thought it might be a good issue to tackle for people that work with Longformer. If no one is interested in opening a PR, I'll eventually do it :-) <|||||>I didn't want to do a PR earlier because I wasn't sure about the interface you want. Having a separate field `global_attentions` is much cleaner. I should be able to propose something soon and I'll continue the discussion on issue #7514.
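A small shape sketch of the two-field interface discussed above; every size below is a made-up example, not a value from the library:
```python
import torch

# B=batch, H=heads, L=sequence length, W=local window (2*one_sided_window+1),
# G=number of tokens with global attention; all values here are illustrative.
B, H, L, W, G = 2, 12, 512, 513, 5
local_attentions = torch.zeros(B, H, L, W)   # every token over its local window
global_attentions = torch.zeros(B, H, G, L)  # every global token over the full sequence
```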
transformers
5,645
closed
enable easy checkout switch
Allow having multiple repository checkouts without needing to remember to rerun `pip install -e .[dev]` when switching between checkouts and running tests. This change makes the test suite automatically do the right thing. Note that `python -m pytest` automatically adds `.` to the path, so normally most packages get tested against the local checkout automatically. However, since this project keeps its sources under the `src/` sub-directory, that mechanism doesn't help here; a sketch of the idea follows.
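A minimal sketch of the idea, assuming a `conftest.py` at the repository root (the actual PR diff may differ):
```python
# conftest.py at the repository root -- prepend the local src/ checkout to sys.path so
# the test suite always imports this checkout's `transformers`, not an installed copy.
import sys
from os.path import abspath, dirname, join

sys.path.insert(0, abspath(join(dirname(__file__), "src")))
```
With something like this in place, `python -m pytest` picks up `src/transformers` from whichever checkout the tests are run in.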
07-10-2020 01:02:51
07-10-2020 01:02:51
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5645?src=pr&el=h1) Report > Merging [#5645](https://codecov.io/gh/huggingface/transformers/pull/5645?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02a0b43014ac333a169e99d76aaba023a316e384&el=desc) will **increase** coverage by `0.97%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5645/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5645?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5645 +/- ## ========================================== + Coverage 78.17% 79.14% +0.97% ========================================== Files 145 145 Lines 25366 25366 ========================================== + Hits 19829 20076 +247 + Misses 5537 5290 -247 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5645?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5645/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+6.51%)` | :arrow_up: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5645/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.32% <0.00%> (+31.77%)` | :arrow_up: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5645/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5645?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5645?src=pr&el=footer). Last update [02a0b43...a348729](https://codecov.io/gh/huggingface/transformers/pull/5645?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Not seeing any activity, I'm not sure the purpose of this feature is clear, so I'd try to clarify: If you just use one repo check out and then just switch git branches then running `pip install -e .[dev]` once is sufficient for an error-free work-flow. Problems: - if you also want to have a normal `transformers` installed and used - you can't, you will have to constantly rerun `pip install -e .` , unless you use a different virtual environment - this is very error-prone - forgetting to switch - you can't have more than one check out - you again have to re-run `pip install -e .` - so easy to forget and waste time figuring out why code modifications have no effect. Solution: - let's point python path to `/full/path/to/checktout-dir/src/`and now you will never again need to remember to run `pip install -e .` to run the test suite against. And you can still have "normal" `transformers` installed for normal use. It doesn't interfere with anybody's current work flow. I at the very least have two checkouts - one remote master, which I can run tests against any moment, after just `git pull` and then the forked master and its branches, where development is done. 
I typically have several checkouts for different branches in my dev environments, since I find it's often simpler to manage than switching branches all the time. Thank you. <|||||>And I see that `examples` needs the same solution (added). The other workaround is to run tests with: ``` PYTHONPATH=`pwd`/src:$PYTHONPATH python -m pytest ... ``` but this is far from easy to use routinely. And finally, removing the intermediary `src` dir and making `transformers` a top-level dir would fix this problem as well for the `python -m pytest` situation, but not for other kinds of invocation.<|||||>Could someone with write access rerun this CI check - the failure has nothing to do with my PR. https://app.circleci.com/pipelines/github/huggingface/transformers/9527/workflows/73306d70-4190-48cd-b24a-b73619cd2002/jobs/64665/steps Thank you. --- Thank you to the kind soul who triggered a re-run.
transformers
5,644
closed
FlaubertForTokenClassification
Implement `FlaubertForTokenClassification` as a subclass of `XLMForTokenClassification`. Based on an item from https://github.com/huggingface/transformers/projects/17
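A rough sketch of the subclassing pattern described above (the real module layout and config wiring may differ):
```python
from transformers import FlaubertConfig, FlaubertModel, XLMForTokenClassification


class FlaubertForTokenClassification(XLMForTokenClassification):
    config_class = FlaubertConfig

    def __init__(self, config):
        super().__init__(config)
        # swap the XLM backbone for the Flaubert one; the classification head and
        # forward() are inherited unchanged from XLMForTokenClassification
        self.transformer = FlaubertModel(config)
        self.init_weights()
```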
07-10-2020 00:16:20
07-10-2020 00:16:20
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5644?src=pr&el=h1) Report > Merging [#5644](https://codecov.io/gh/huggingface/transformers/pull/5644?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0befb513278f6e42b722be340dbc667e0ba2718e&el=desc) will **decrease** coverage by `1.02%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5644/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5644?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5644 +/- ## ========================================== - Coverage 78.26% 77.24% -1.03% ========================================== Files 146 146 Lines 25998 26005 +7 ========================================== - Hits 20348 20088 -260 - Misses 5650 5917 +267 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5644?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (ø)` | | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `74.41% <ø> (ø)` | | | [src/transformers/modeling\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `85.18% <100.00%> (+0.81%)` | :arrow_up: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5644?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5644?src=pr&el=footer). Last update [0befb51...4bb7577](https://codecov.io/gh/huggingface/transformers/pull/5644?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks! 
Can you also add the model to the common tests (by adding it to the [all_model_classes](https://github.com/huggingface/transformers/blob/b2747af5434e5a5d8ab1d7e2789699d20d7a4ab8/tests/test_modeling_flaubert.py#L316)) and in the [documentation file](https://github.com/huggingface/transformers/blob/master/docs/source/model_doc/flaubert.rst)? This looks great to me otherwise.<|||||>> Can you also add the model to the common tests (by adding it to the [all_model_classes](https://github.com/huggingface/transformers/blob/b2747af5434e5a5d8ab1d7e2789699d20d7a4ab8/tests/test_modeling_flaubert.py#L316)) I tried that originally, but they don't support `*TokenClassification` - its outputs are different, so most tests break. Note that `XLMForTokenClassification` isn't being tested in the common tests. This PR was really monkeyseemonkeydo. Perhaps merging this and then work on `XLMForTokenClassification` common tests first? and then the subclass will be easy. > and in the documentation file? done.<|||||>I think there is a bug in `XLMForTokenClassification` - if I do this fix: ``` --- a/src/transformers/modeling_xlm.py +++ b/src/transformers/modeling_xlm.py @@ -1079,7 +1079,7 @@ class XLMForTokenClassification(XLMPreTrainedModel): sequence_output = self.dropout(sequence_output) logits = self.classifier(sequence_output) - outputs = (logits,) + outputs[2:] # add hidden states and attention if they are here + outputs = (logits,) + outputs[1:] # add hidden states and attention if they are here if labels is not None: loss_fct = CrossEntropyLoss() # Only keep active parts of the loss ``` I can now add `XLMForTokenClassification` to `all_model_classes` - and 99% of it now passes. It looks like that line of code was copied from `BertForTokenClassification`, but for XLM it appears to need to be `outputs[1:]` <|||||>Oh, my fork was outdated - I see you have just fixed this bug. OK, adding FlaubertForTokenClassification to all_model_classes should work now.
transformers
5,643
closed
Help with Using TFXLNet on custom embeddings
# 🐛 Bug ## Information Hi, I am working on implementing the multimodal bitransformer: https://arxiv.org/pdf/1909.02950.pdf I have already gotten this working using your implementation of TFBertModel, but I want to try using TFXLNetModel in place of BERT to see if it makes an improvement. What I've done for using TFBertModel is extract the word_embeddings and pass the word embeddings with token_type_ids = 0 and image embeddings with token_type_ids = 1. ## To reproduce I have the following definition of my model ` class BERT(transformers.TFXLNetModel): def __init__(self, config, *inputs, **kwargs): super(BERT, self).__init__(config, *inputs, **kwargs) self.call = tf.function(self.call) class MyModel(tf.keras.Model): def __init__(self): super(MyModel, self).__init__() self.resnet = tf.keras.applications.ResNet152V2(include_top=False, weights='imagenet', input_shape=(224, 224, 3)) self.bert = BERT.from_pretrained('xlnet-base-cased') self.text_embedding = self.bert.get_input_embeddings().weights[0] self.pooling = layers.AveragePooling2D(pool_size=(2, 2), padding='same') self.reshape = layers.Reshape((4 * 4, 2048)) # 4 is from 7//2 + 1 self.W_ns = [layers.Dense(self.bert.config.hidden_size) for _ in range(self.reshape.target_shape[0])] self.concat = layers.Concatenate(axis=1) self.dropout = layers.Dropout(0.1) self.denseout = layers.Dense(1, activation='sigmoid') def call(self, inputs): text, image = inputs # handle image image = tf.keras.applications.resnet_v2.preprocess_input(image) image_emb = self.resnet(image) image_emb = self.pooling(image_emb) image_emb = self.reshape(image_emb) image_embeds = [self.W_ns[i](image_emb[:, i]) for i in range(self.reshape.target_shape[0])] image_emb = tf.keras.backend.stack(image_embeds, axis=1) # handle text text_emb = tf.gather(self.text_embedding, text) # concat and feed to bert concat_emb = self.concat([text_emb, image_emb]) seg_ids = np.concatenate((np.zeros(max_len, dtype=np.int64), np.ones(self.reshape.target_shape[0], dtype=np.int64))) print('input shapes to xlnet', concat_emb.shape, seg_ids.shape) bert_encodings = self.bert(inputs={'inputs_embeds': concat_emb, 'token_type_ids': seg_ids})[0] doc_encoding = tf.squeeze(bert_encodings[:, 0:1, :], axis=1) doc_encoding = self.dropout(doc_encoding) output = self.denseout(doc_encoding) return output ` In the line that prints "input shapes to xlnet", I get (None, 116, 768) for the inputs_embeds and (116,) for the token_type_ids, which I expect because I have 100 word embeddings and 16 image embeddings. 
When I call fit() on this model, it gives the error: > ValueError: in converted code: > > <ipython-input-12-b6bccd3c83e0>:50 call * > bert_encodings = self.bert(inputs={'inputs_embeds': concat_emb, > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py:891 __call__ > outputs = self.call(cast_inputs, *args, **kwargs) > /homes/awl27/python36env/lib/python3.6/site-packages/transformers/modeling_tf_xlnet.py:824 call * > outputs = self.transformer(inputs, **kwargs) > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py:842 __call__ > outputs = call_fn(cast_inputs, *args, **kwargs) > /homes/awl27/python36env/lib/python3.6/site-packages/transformers/modeling_tf_xlnet.py:530 call * > token_type_ids = tf.transpose(token_type_ids, perm=(1, 0)) if token_type_ids is not None else None > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/ops/array_ops.py:1780 transpose_v2 > return transpose(a=a, perm=perm, name=name, conjugate=conjugate) > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/ops/array_ops.py:1870 transpose > ret = transpose_fn(a, perm, name=name) > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_array_ops.py:11455 transpose > "Transpose", x=x, perm=perm, name=name) > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py:793 _apply_op_helper > op_def=op_def) > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/framework/func_graph.py:548 create_op > compute_device) > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:3429 _create_op_internal > op_def=op_def) > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1773 __init__ > control_input_ops) > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1613 _create_c_op > raise ValueError(str(e)) > > ValueError: Dimension must be 1 but is 2 for 'transformer/transpose_1' (op: 'Transpose') with input shapes: [116], [2]. > ## Expected behavior I expected this to work just like TFBertModel did. If I just change the definition in the BERT class to use TFBertModel instead of TFXLNetModel, it works fine. ## Environment info PyTorch version: 1.5.0 Is debug build: No CUDA used to build PyTorch: 10.2 OS: Ubuntu 16.04.6 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609 CMake version: version 3.5.1 Python version: 3.6 Is CUDA available: No CUDA runtime version: 10.0.130 GPU models and configuration: GPU 0: Tesla K40c Nvidia driver version: 418.87.00 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.3 Versions of relevant libraries: [pip3] numpy==1.16.6 [pip3] torch==1.5.0 [pip3] torchtext==0.5.0 [pip3] torchvision==0.6.0 [conda] Could not collect
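A hedged guess from the traceback above: the transpose fails on a rank-1 `token_type_ids` being permuted with `perm=(1, 0)`, which needs a rank-2 tensor, so the segment ids likely need a `(batch, seq_len)` shape. A minimal sketch, with a made-up batch size:
```python
import numpy as np
import tensorflow as tf

max_len, n_image_tokens, batch_size = 100, 16, 8   # batch size is a made-up example
seg_ids = np.concatenate((np.zeros(max_len, dtype=np.int32),
                          np.ones(n_image_tokens, dtype=np.int32)))
token_type_ids = tf.tile(seg_ids[None, :], [batch_size, 1])   # shape (batch_size, 116)
```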
07-09-2020 23:24:15
07-09-2020 23:24:15
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,642
closed
Improvements to PretrainedConfig documentation
Preview is [here](https://59194-155220641-gh.circle-artifacts.com/0/docs/_build/html/main_classes/configuration.html)
07-09-2020 22:08:17
07-09-2020 22:08:17
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5642?src=pr&el=h1) Report > Merging [#5642](https://codecov.io/gh/huggingface/transformers/pull/5642?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/760f726e516752d27142346d8552682d3f6f0532&el=desc) will **increase** coverage by `0.89%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5642/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5642?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5642 +/- ## ========================================== + Coverage 76.87% 77.77% +0.89% ========================================== Files 145 145 Lines 25364 25366 +2 ========================================== + Hits 19499 19728 +229 + Misses 5865 5638 -227 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5642?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.45% <100.00%> (+0.05%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.71% <0.00%> (-1.51%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5642?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5642?src=pr&el=footer). Last update [760f726...56b5942](https://codecov.io/gh/huggingface/transformers/pull/5642?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,641
closed
Multiple Mask Tokens
For those wishing to [MASK] several tokens, here it is. My question, however, relates to the output. I added "top_k" assuming I'd be able to return multiple sentences, but that was not the case. I am not sure how exactly I can achieve this. ``` import torch from transformers import BertTokenizer, BertModel, BertForMaskedLM tokenizer = BertTokenizer.from_pretrained('bert-base-cased') input_tx = "[CLS] [MASK] [MASK] [MASK] of the United States mismangement of the Coronavirus is its distrust of science. [SEP]" tokenized_text = tokenizer.tokenize(input_tx) indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) top_k = 10 tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([[0]*25]) model = BertForMaskedLM.from_pretrained('bert-base-cased') outputs = model(tokens_tensor, token_type_ids=segments_tensors) predictions = outputs[0] predicted_index = [torch.argmax(predictions[0, i]).item() for i in range(0,24)] predicted_token = [tokenizer.convert_ids_to_tokens([predicted_index[x]])[0] for x in range(1,24)] print(predicted_token) ``` `Output: 'The', 'main', 'cause', 'of', 'the', 'United', 'States', 'mi', '##sman', '##gement', 'of', 'the', 'Co', '##rona', '##virus', 'is', 'its', 'di', '##st', '##rust', 'of', 'science', '`
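A minimal sketch (not an answer from the thread) of one way to get several candidates per `[MASK]`: take `torch.topk` over the vocabulary logits at each masked position, reusing the variables from the snippet above.
```python
# Reuses tokenized_text, predictions, top_k and tokenizer from the snippet above.
mask_positions = [i for i, tok in enumerate(tokenized_text) if tok == "[MASK]"]
for pos in mask_positions:
    top_ids = torch.topk(predictions[0, pos], k=top_k).indices.tolist()
    print(pos, tokenizer.convert_ids_to_tokens(top_ids))
```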
07-09-2020 21:42:18
07-09-2020 21:42:18
Hi! This is a very good question. We just opened a forum on [discuss.huggingface.co](https://discuss.huggingface.co/) to discuss exactly those kinds of questions. Do you think you could go over there and ask it? Thanks a lot!<|||||>Sure, I'll do that now! @LysandreJik <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,640
closed
Cleanup bart caching logic
Previously, we had a helper function that checked 4 possible cases to determine whether we should (a) combine a cached attention mask with a new one, (b) just use the cached one, or (c) just use the new/passed one. This PR consolidates that logic into 3 branches and deletes the helper function, which was only called once; a sketch of the resulting branching follows.
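A sketch of the consolidated branching described above; the function name, variable names and concatenation axis are assumptions, not the actual code in `modeling_bart.py`:
```python
import torch

def merge_attention_masks(cached_mask, passed_mask):
    """Illustrative three-branch version of the old four-case helper."""
    if cached_mask is None:
        return passed_mask                        # (c) only a new mask was passed
    if passed_mask is None:
        return cached_mask                        # (b) only the cached mask exists
    return torch.cat([cached_mask, passed_mask], dim=1)  # (a) combine cached and new
```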
07-09-2020 21:17:10
07-09-2020 21:17:10
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5640?src=pr&el=h1) Report > Merging [#5640](https://codecov.io/gh/huggingface/transformers/pull/5640?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7fad617dc1fc681a7f5da5e0172c8b83f4bf0024&el=desc) will **decrease** coverage by `0.12%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5640/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5640?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5640 +/- ## ========================================== - Coverage 78.11% 77.99% -0.13% ========================================== Files 146 146 Lines 25983 25975 -8 ========================================== - Hits 20297 20259 -38 - Misses 5686 5716 +30 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5640?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5640/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.74% <100.00%> (-0.06%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5640/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5640/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5640/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5640/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5640?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5640?src=pr&el=footer). Last update [7fad617...94aac5d](https://codecov.io/gh/huggingface/transformers/pull/5640?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,639
closed
test suite fails due to pytorch bug in torch.seed
# 🐛 Bug ## Information This is on a dual-gpu machine. Almost all tests/test_modeling_reformer.py sub-tests fail with: ``` def cb(): for i in range(device_count()): default_generator = torch.cuda.default_generators[i] > default_generator.manual_seed(seed) E RuntimeError: Overflow when unpacking long ``` when run after any test_multigpu_data_parallel_forward sub-test, e.g.: `python -m pytest -n 1 --dist=loadfile -v tests/test_modeling_electra.py::ElectraModelTest::test_multigpu_data_parallel_forward tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_attention_outputs` **The failure gets triggered here:** ``` transformers/modeling_reformer.py:1102: in _init_attention_seed self.attention_seed = int((torch.seed() % sys.maxsize)) ``` I reduced the failing sequence of code to this: ``` # test.py import torch print(f"Torch version: {torch.__version__}") x = torch.tensor(data=[[1,2],[3,4]], dtype=torch.long, device=None) x = x.to('cuda:0') seed = torch.seed() ``` ``` $ python tests/test.py Torch version: 1.5.1 Traceback (most recent call last): File "tests/test.py", line 10, in <module> seed = torch.seed() File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/random.py", line 45, in seed torch.cuda.manual_seed_all(seed) File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/cuda/random.py", line 111, in manual_seed_all _lazy_call(cb) File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/cuda/__init__.py", line 99, in _lazy_call callable() File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/cuda/random.py", line 109, in cb default_generator.manual_seed(seed) RuntimeError: Overflow when unpacking long ``` It fails about 75% of time. It happens after moving the tensor to gpu. This seems to be related to this [pytorch bug](https://github.com/pytorch/pytorch/issues/33546), albeit somewhat different sequence of code. 
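A hedged workaround sketch (not from the thread) for running until the upstream fix is available; it assumes the caller controls the seeding call:
```python
import torch

# torch.seed() draws a 64-bit seed and pushes it to every CUDA generator, which is the
# call that overflows here; drawing a bounded seed and setting it explicitly avoids it.
seed = int(torch.randint(0, 2**31 - 1, (1,)).item())
torch.manual_seed(seed)
```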
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-4.15.0-109-generic-x86_64-with-debian-buster-sid - Python version: 3.7.5 - PyTorch version (GPU?): 1.5.1 (True) - Tensorflow version (GPU?): 2.0.1 (False) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no Full trace of failing sub-tests (one of them): ``` python -m pytest -n 1 --dist=loadfile -v tests/test_modeling_electra.py::ElectraModelTest::test_multigpu_data_parallel_forward tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_attention_outputs ====================================================================== test session starts ======================================================================= platform linux -- Python 3.7.5, pytest-5.4.3, py-1.9.0, pluggy-0.13.1 -- /home/stas/anaconda3/envs/main/bin/python cachedir: .pytest_cache hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/mnt/nvme1/code/huggingface/transformers-FlaubertForTokenClassification/.hypothesis/examples') rootdir: /mnt/nvme1/code/huggingface/transformers-FlaubertForTokenClassification plugins: hypothesis-5.5.4, filter-subpackage-0.1.1, arraydiff-0.3, flaky-3.6.1, ipynb-1.1.1.dev0, cov-2.10.0, astropy-header-0.1.2, forked-1.2.0, doctestplus-0.5.0, openfiles-0.4.0, remotedata-0.3.2, xdist-1.32.0 [gw0] linux Python 3.7.5 cwd: /mnt/nvme1/code/huggingface/transformers-FlaubertForTokenClassification [gw0] Python 3.7.5 (default, Oct 25 2019, 15:51:11) -- [GCC 7.3.0] gw0 [2] scheduling tests via LoadFileScheduling tests/test_modeling_electra.py::ElectraModelTest::test_multigpu_data_parallel_forward [gw0] [ 50%] PASSED tests/test_modeling_electra.py::ElectraModelTest::test_multigpu_data_parallel_forward tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_attention_outputs [gw0] [100%] FAILED tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_attention_outputs ============================================================================ FAILURES ============================================================================ _______________________________________________________ ReformerLocalAttnModelTest.test_attention_outputs ________________________________________________________ [gw0] linux -- Python 3.7.5 /home/stas/anaconda3/envs/main/bin/python self = <tests.test_modeling_reformer.ReformerLocalAttnModelTest testMethod=test_attention_outputs> def test_attention_outputs(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() seq_len = getattr(self.model_tester, "seq_length", None) decoder_seq_length = getattr(self.model_tester, "decoder_seq_length", seq_len) encoder_seq_length = getattr(self.model_tester, "encoder_seq_length", seq_len) decoder_key_length = getattr(self.model_tester, "key_length", decoder_seq_length) encoder_key_length = getattr(self.model_tester, "key_length", encoder_seq_length) chunk_length = getattr(self.model_tester, "chunk_length", None) if chunk_length is not None and hasattr(self.model_tester, "num_hashes"): encoder_seq_length = encoder_seq_length * self.model_tester.num_hashes for model_class in self.all_model_classes: inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = False model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): > outputs = model(**self._prepare_for_class(inputs_dict, model_class)) tests/test_modeling_common.py:149: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/modules/module.py:550: in __call__ result = self.forward(*input, **kwargs) ../transformers-XLMForTokenClassification/src/transformers/modeling_reformer.py:1623: in forward output_attentions=output_attentions, /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/modules/module.py:550: in __call__ result = self.forward(*input, **kwargs) ../transformers-XLMForTokenClassification/src/transformers/modeling_reformer.py:1371: in forward output_attentions, ../transformers-XLMForTokenClassification/src/transformers/modeling_reformer.py:1267: in forward output_attentions=output_attentions, /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/modules/module.py:550: in __call__ result = self.forward(*input, **kwargs) ../transformers-XLMForTokenClassification/src/transformers/modeling_reformer.py:1138: in forward self._init_attention_seed() ../transformers-XLMForTokenClassification/src/transformers/modeling_reformer.py:1102: in _init_attention_seed self.attention_seed = int((torch.seed() % sys.maxsize)) /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/random.py:45: in seed torch.cuda.manual_seed_all(seed) /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/cuda/random.py:111: in manual_seed_all _lazy_call(cb) /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/cuda/__init__.py:99: in _lazy_call callable() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ def cb(): for i in range(device_count()): default_generator = torch.cuda.default_generators[i] > default_generator.manual_seed(seed) E RuntimeError: Overflow when unpacking long /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/cuda/random.py:109: RuntimeError ======================================================================== warnings summary ======================================================================== /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15 /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses import imp /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/graphql/type/directives.py:55 /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/graphql/type/directives.py:55: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working assert isinstance(locations, collections.Iterable), 'Must provide locations for directive.' 
/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/graphql/type/directives.py:62 /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/graphql/type/directives.py:62: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working assert isinstance(args, collections.Mapping), '{} args must be a dict with argument names as keys.'.format(name) /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/graphql/type/typemap.py:1 /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/graphql/type/typemap.py:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working from collections import OrderedDict, Sequence, defaultdict -- Docs: https://docs.pytest.org/en/latest/warnings.html ==================================================================== short test summary info ===================================================================== FAILED tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_attention_outputs - RuntimeError: Overflow when unpacking long ============================================================ 1 failed, 1 passed, 4 warnings in 5.57s ======================================= ``` ```
07-09-2020 19:16:36
07-09-2020 19:16:36
A fix has just been applied here: https://github.com/pytorch/pytorch/commit/5edd9aa95a8a73e940185f8448e7db05394ce6fe - will re-test with the nightly build.<|||||>I confirmed that this pytorch bug has been fixed in the nightly build and the tests no longer fail.
transformers
5,638
closed
Create README.md
Create model card for T5-small fine-tuned on SQUAD v1.1
07-09-2020 19:04:33
07-09-2020 19:04:33
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5638?src=pr&el=h1) Report > Merging [#5638](https://codecov.io/gh/huggingface/transformers/pull/5638?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b25f7802de2749a5f8c3430437eceabf9e6384b8&el=desc) will **increase** coverage by `0.23%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5638/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5638?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5638 +/- ## ========================================== + Coverage 77.52% 77.75% +0.23% ========================================== Files 145 145 Lines 25364 25364 ========================================== + Hits 19663 19723 +60 + Misses 5701 5641 -60 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5638?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5638/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5638/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (-2.26%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5638/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5638/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5638?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5638?src=pr&el=footer). Last update [b25f780...689146c](https://codecov.io/gh/huggingface/transformers/pull/5638?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,637
closed
Add forum link in the docs
07-09-2020 19:02:21
07-09-2020 19:02:21
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5637?src=pr&el=h1) Report > Merging [#5637](https://codecov.io/gh/huggingface/transformers/pull/5637?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b25f7802de2749a5f8c3430437eceabf9e6384b8&el=desc) will **decrease** coverage by `0.64%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5637/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5637?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5637 +/- ## ========================================== - Coverage 77.52% 76.88% -0.65% ========================================== Files 145 145 Lines 25364 25364 ========================================== - Hits 19663 19500 -163 - Misses 5701 5864 +163 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5637?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5637?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5637?src=pr&el=footer). Last update [b25f780...d6ab752](https://codecov.io/gh/huggingface/transformers/pull/5637?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,636
closed
Should check that torch TPU is available
fix #5634
07-09-2020 17:43:14
07-09-2020 17:43:14
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5636?src=pr&el=h1) Report > Merging [#5636](https://codecov.io/gh/huggingface/transformers/pull/5636?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b9d8af07e66764bbf4213e1ce443fcdfa927ca46&el=desc) will **not change** coverage. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5636/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5636?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5636 +/- ## ======================================= Coverage 77.66% 77.66% ======================================= Files 145 145 Lines 25364 25364 ======================================= Hits 19700 19700 Misses 5664 5664 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5636?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5636/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <ø> (ø)` | | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5636/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.91% <100.00%> (ø)` | | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5636/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (-2.26%)` | :arrow_down: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5636/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+10.71%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5636?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5636?src=pr&el=footer). Last update [3cc23ee...a09ad90](https://codecov.io/gh/huggingface/transformers/pull/5636?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,635
closed
[WIP][Examples] Adding more examples and more introductory tutorials
07-09-2020 16:46:46
07-09-2020 16:46:46
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5635?src=pr&el=h1) Report > Merging [#5635](https://codecov.io/gh/huggingface/transformers/pull/5635?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8edfaaa81b9995cedea2f8805e4c18c2b6cb5bfc&el=desc) will **decrease** coverage by `0.58%`. > The diff coverage is `18.18%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5635/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5635?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5635 +/- ## ========================================== - Coverage 78.29% 77.71% -0.59% ========================================== Files 146 146 Lines 26607 26344 -263 ========================================== - Hits 20832 20473 -359 - Misses 5775 5871 +96 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5635?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `83.85% <18.18%> (-9.99%)` | :arrow_down: | | [...c/transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `9.90% <0.00%> (-76.24%)` | :arrow_down: | | [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `17.22% <0.00%> (-72.24%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `25.82% <0.00%> (-62.38%)` | :arrow_down: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-21.63%)` | :arrow_down: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-19.18%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `80.98% <0.00%> (-11.99%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-6.24%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: | | ... and [49 more](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5635?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5635?src=pr&el=footer). Last update [8edfaaa...67859e5](https://codecov.io/gh/huggingface/transformers/pull/5635?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Ooops, the rebase made the diff unreadable on this PR. Opening a new PR from this branch.
transformers
5,634
closed
T5 has no module ```torch_xla``` when using T5 fine-tuned on SQUADv2
# 🐛 Bug ## Information I get this error: ``` ModuleNotFoundError: No module named 'torch_xla'``` Full error message: ``` 2 3 tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-squadv2") ----> 4 model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-squadv2") 5 6 def get_answer(question, context): 1 frames /usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 796 797 if hasattr(config, "xla_device") and config.xla_device: --> 798 import torch_xla.core.xla_model as xm 799 800 model = xm.send_cpu_data_to_device(model, xm.xla_device()) ModuleNotFoundError: No module named 'torch_xla' ``` ## To reproduce ```python from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-squadv2") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-squadv2") def get_answer(question, context): input_text = "question: %s context: %s </s>" % (question, context) features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask']) return tokenizer.decode(output[0]) context = "Manuel have created RuPERTa-base with the support of HF-Transformers and Google" question = "Who has supported Manuel?" get_answer(question, context) ``` I used this example code a few weeks ago and had no problem...
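For reference, a minimal sketch of the kind of guard that avoids this import error, continuing from the `modeling_utils.py` snippet quoted above (the `importlib` check here is only an illustration of the idea behind the fix, not the actual patch):

```python
import importlib.util

# Only attempt the XLA import when torch_xla is actually installed;
# otherwise keep the model on CPU/GPU as usual.
if getattr(config, "xla_device", False) and importlib.util.find_spec("torch_xla") is not None:
    import torch_xla.core.xla_model as xm

    model = xm.send_cpu_data_to_device(model, xm.xla_device())
```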
07-09-2020 16:32:26
07-09-2020 16:32:26
Indeed, it seems this model was once trained or initialized on TPU. Thanks for letting us know, I'm patching it in #5636.<|||||>Yes, it was trained on TPU<|||||>It should be fixed on master now, can you try pulling from master and running your code?<|||||>I already did it and it works!!! Thank you!!
transformers
5,633
closed
More explicit error when failing to tensorize overflowing tokens
07-09-2020 16:27:54
07-09-2020 16:27:54
transformers
5,632
closed
Fixed use of memories in XLNet (caching for language generation + warning when loading improper memoryless model)
The default XLNet model is loaded with 0 memory length, which is an issue both at training time (improper performance) and inference time (as there's no caching speed-up since it doesn't return former attentions). As discussed with @LysandreJik , this PR introduces a warning that in the future, the default XLNet model will have 1024 memory length, in accordance with [the original paper](https://arxiv.org/abs/1906.08237). It also fixes the re-use of cached memory, which was broken similarly to TransfoXL (#4752).
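For illustration, a minimal sketch of how a user can opt into a non-zero memory length today (the value 1024 follows the paper; the prompt and sampling settings are arbitrary):

```python
from transformers import XLNetLMHeadModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
# The released config ships with mem_len=0; overriding it enables the memory mechanism.
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased", mem_len=1024)

inputs = tokenizer("Today is a beautiful day and", return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=50, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0]))
```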
07-09-2020 15:59:50
07-09-2020 15:59:50
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5632?src=pr&el=h1) Report > Merging [#5632](https://codecov.io/gh/huggingface/transformers/pull/5632?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa5423b1695cd24856bcff47214172e0f540d924&el=desc) will **decrease** coverage by `0.07%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5632/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5632?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5632 +/- ## ========================================== - Coverage 77.79% 77.72% -0.08% ========================================== Files 145 145 Lines 25355 25364 +9 ========================================== - Hits 19726 19715 -11 - Misses 5629 5649 +20 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5632?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbmV0LnB5) | `94.33% <100.00%> (+0.33%)` | :arrow_up: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `91.39% <100.00%> (+0.04%)` | :arrow_up: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.96% <100.00%> (+0.10%)` | :arrow_up: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.50%)` | :arrow_up: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5632?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5632?src=pr&el=footer). Last update [fa5423b...25fae1b](https://codecov.io/gh/huggingface/transformers/pull/5632?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Also the slow test of XLNet will have to be adapted when doing this change no?<|||||>So in order: a) b) I would really like to to have proper GPT-2 style caching in XLNet. This would require changing the outputs of the XLNet `forward` to add `presents` outputs that contain the K/V pairs, like in GPT-2. That probably counts as BC-breaking, right @LysandreJik ? c) `offset = 2` is definitely a bit random. I was mistaken at first, as I thought it would be proper autoregressive generation, but `offset=1` is what should be used in this case. 
After manually checking outputs I had the impression that `offset = 2` was slightly better (mostly it goes less into repetitive generation loops) at a negligible computation time cost; but I agree that `offset = 1` is more principled and I don't have a strong opinion on that choice. d) The slow tests run with the default model, which still has `mem_length = 0` and no caching, so it doesn't make a difference yet. <|||||>> So in order: > > a) b) I would really like to to have proper GPT-2 style caching in XLNet. This would require changing the outputs of the XLNet `forward` to add `presents` outputs that contain the K/V pairs, like in GPT-2. That probably counts as BC-breaking, right @LysandreJik ? > > c) `offset = 2` is definitely a bit random. I was mistaken at first, as I thought it would be proper autoregressive generation, but `offset=1` is what should be used in this case. After manually checking outputs I had the impression that `offset = 2` was slightly better (mostly it goes less into repetitive generation loops) at a negligible computation time cost; but I agree that `offset = 1` is more principled and I don't have a strong opinion on that choice. > > d) The slow tests run with the default model, which still has `mem_length = 0` and no caching, so it doesn't make a difference yet. a)b) I think would count as a feature enhancement. We would make the `past` variable optional so no backward breaking here IMO. But I agree it would definitely be better to this in a new PR.<|||||>I've added a comment. I'm preparing another PR for proper caching and merging this one.
transformers
5,631
closed
Correct extension for model summary links
07-09-2020 14:50:01
07-09-2020 14:50:01
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5631?src=pr&el=h1) Report > Merging [#5631](https://codecov.io/gh/huggingface/transformers/pull/5631?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5c82bf6831b49e1e6029d09488081d5d98a272e9&el=desc) will **decrease** coverage by `0.18%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5631/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5631?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5631 +/- ## ========================================== - Coverage 77.05% 76.87% -0.19% ========================================== Files 145 145 Lines 25364 25364 ========================================== - Hits 19545 19499 -46 - Misses 5819 5865 +46 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5631?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.50%)` | :arrow_up: | | [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+1.80%)` | :arrow_up: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+2.73%)` | :arrow_up: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.55% <0.00%> (+11.22%)` | :arrow_up: | | [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <0.00%> (+17.09%)` | :arrow_up: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.96% <0.00%> (+21.29%)` | :arrow_up: | | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `85.71% <0.00%> (+25.71%)` | :arrow_up: | | ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5631?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5631?src=pr&el=footer). Last update [5c82bf6...2bd8a57](https://codecov.io/gh/huggingface/transformers/pull/5631?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,630
closed
How can I fine-tune on custom model
# ❓ Questions & Help ## Details I would like to use TFBert as the encoder and then add additional layers on top of it (with a custom model class), so that I can fine-tune all layers down to the encoder. Specifically, I would like to build BERT-BiLSTM-CRF for the NER task. Is there a way to do it? Thank you for your answer.
07-09-2020 14:26:39
07-09-2020 14:26:39
You can try this: https://github.com/huggingface/transformers/pull/3009/commits/489dd7608c5b3d4acaf997a2b4fbccc3d7144cf3, but there is no BiLSTM layer; you can add it yourself.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
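For what it's worth, a rough sketch of such a custom head in TensorFlow 2, assuming `TFBertModel` as the encoder; `NUM_LABELS` and `MAX_LEN` are hypothetical constants and the CRF layer is omitted (a plain per-token classifier is used instead):

```python
import tensorflow as tf
from transformers import TFBertModel

NUM_LABELS = 9   # hypothetical number of NER tags
MAX_LEN = 128    # hypothetical maximum sequence length

encoder = TFBertModel.from_pretrained("bert-base-cased")

input_ids = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="attention_mask")

# Contextual token embeddings from BERT, shape (batch, seq_len, hidden_size).
sequence_output = encoder(input_ids, attention_mask=attention_mask)[0]

# BiLSTM over the BERT outputs, then a per-token classifier (a CRF would normally go here).
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True))(sequence_output)
logits = tf.keras.layers.Dense(NUM_LABELS)(x)

model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=logits)
# The BERT encoder layers are trainable by default, so compiling and fitting this model
# fine-tunes everything down to the encoder.
```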
transformers
5,629
closed
Fixed TextGenerationPipeline on torch + GPU
Fixes #5622 .
07-09-2020 13:48:38
07-09-2020 13:48:38
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5629?src=pr&el=h1) Report > Merging [#5629](https://codecov.io/gh/huggingface/transformers/pull/5629?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa5423b1695cd24856bcff47214172e0f540d924&el=desc) will **decrease** coverage by `0.21%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5629/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5629?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5629 +/- ## ========================================== - Coverage 77.79% 77.58% -0.22% ========================================== Files 145 145 Lines 25355 25357 +2 ========================================== - Hits 19726 19672 -54 - Misses 5629 5685 +56 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5629?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.24% <100.00%> (+0.08%)` | :arrow_up: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.01%)` | :arrow_down: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5629?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5629?src=pr&el=footer). Last update [fa5423b...dfeeffa](https://codecov.io/gh/huggingface/transformers/pull/5629?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@LysandreJik if you wanna take a look, it's just a short bugfix (essentially adding the `if self.framework == "pt", generated_sequence = generated_sequence.cpu()` line)<|||||>LGTM!
transformers
5,628
closed
Support for Polyencoder and other retriever based models
# ❓ Questions & Help Is there any way I can load the Polyencoder and other retriever-based models from ParlAI in huggingface/transformers? As of now, there seems to be no support for loading huggingface/transformers models in ParlAI other than GPT. Polyencoder: https://arxiv.org/abs/1905.01969 ParlAI implementation: https://github.com/facebookresearch/ParlAI/blob/master/parlai/agents/transformer/polyencoder.py
07-09-2020 13:26:14
07-09-2020 13:26:14
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,627
closed
Model doc failed
As the title says, the model doc links fail on the page https://huggingface.co/transformers/model_summary.html, e.g. https://huggingface.co/model_doc/distilbert
07-09-2020 13:17:42
07-09-2020 13:17:42
This has been fixed in master, https://huggingface.co/transformers/master/model_summary.html will have the proper links.<|||||>This has now been fixed for the stable version as well!
transformers
5,626
closed
doc: fix apparent copy-paste error in docstring
07-09-2020 13:09:28
07-09-2020 13:09:28
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5626?src=pr&el=h1) Report > Merging [#5626](https://codecov.io/gh/huggingface/transformers/pull/5626?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa5423b1695cd24856bcff47214172e0f540d924&el=desc) will **decrease** coverage by `0.23%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5626/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5626?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5626 +/- ## ========================================== - Coverage 77.79% 77.56% -0.24% ========================================== Files 145 145 Lines 25355 25355 ========================================== - Hits 19726 19667 -59 - Misses 5629 5688 +59 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5626?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/commands/train.py](https://codecov.io/gh/huggingface/transformers/pull/5626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy90cmFpbi5weQ==) | `0.00% <ø> (ø)` | | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.20% <0.00%> (-2.76%)` | :arrow_down: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5626?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5626?src=pr&el=footer). Last update [fa5423b...959b687](https://codecov.io/gh/huggingface/transformers/pull/5626?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,625
closed
Cannot reproduce roberta-large on SQuAD
## Information Using the same hyper-parameters as the [paper](https://arxiv.org/abs/1907.11692), I fine-tuned roberta-large on SQuAD1.1, resulting in the disappointing results below. I suspect the reason might be that the gradient normalization is different from the [official implementation](), although it hasn't been released yet. ``` {'exact': 0.21759697256385999, 'f1': 7.113439302309792, ' total': 10570, 'HasAns_exact': 0.21759697256385999, 'HasAns_f1': 7.113439302309792, 'HasAns_total': 10570, 'best_exact': 0.21759697256385999, 'best_exact_thresh': 0.0, 'best_f1': 7.113439302309792, 'best_f1_thresh': 0.0} ``` ## To reproduce ``` python3 -m torch.distributed.launch --nproc_per_node=4 ./examples/question-answering/run_squad.py \ --model_type roberta \ --model_name_or_path roberta-large \ --do_train \ --do_eval \ --train_file $SQUAD_DIR/train-v1.1.json \ --predict_file $SQUAD_DIR/dev-v1.1.json \ --learning_rate 1.5e-5 \ --weight_decay 0.01 \ --max_grad_norm 0.0 \ --num_train_epochs 2 \ --warmup_steps 222 \ --adam_betas '(0.9, 0.98)' \ --adam_epsilon 1e-6 \ --max_seq_length 512 \ --doc_stride 128 \ --output_dir ./examples/models/finetuned_squad1.1/ \ --per_gpu_eval_batch_size=8 \ --per_gpu_train_batch_size=2 \ --gradient_accumulation_steps=6 \ --threads 8 \ --overwrite_cache \ ```
07-09-2020 13:08:35
07-09-2020 13:08:35
And also, it seems that the GLUE results showcased in https://huggingface.co/roberta-large refer to the paper instead of fine-tuning from your own script.<|||||>@ZheyuYe Is this hyperparameter ` --adam_betas '(0.9, 0.98)' \` available in a modified Transformers v3.0.2 `run_squad.py`, or an earlier version? It is not appearing as an available hyperparameter in the current master version. I was able to fine-tune RoBERTa large on SQuAD 2.0 as late as 29June20: ["ahotrod/roberta_large_squad2"](https://huggingface.co/ahotrod/roberta_large_squad2) with satisfactory results.<|||||>@ahotrod Since `betas` is available in the current AdamW, I added this flag `adam_betas` to match the optimizer hyper-parameters, as in https://github.com/ZheyuYe/transformers/blob/efc022060195dca384a95546c6134667696f957f/examples/question-answering/run_squad.py#L98-L101 Thanks for providing these useful hyperparameters, I am going to re-fine-tune `roberta-large` on SQuAD 2.0. I noticed that `warmup_steps = 1642` was selected with a total of `Total optimization steps = 8211`, so that is `warmup_ratio = 0.2`? The other thing that confuses me is why you would choose `do_lower_case` when RoBERTa was pretrained with a cased setting https://github.com/pytorch/fairseq/issues/1429.<|||||>@ZheyuYe I'm guessing the `do_lower_case` was overlooked when I started with a script from another model fine-tuning. However, I believe I've read that `do_lower_case` has no effect with newer models running the latest `run_squad.py`. If I was fine-tuning RoBERTa_large again I'd leave it out. I have fine-tuned RoBERTa_large a bunch of times, varying the hyperparameters: #epochs, learning rate, warmup ratio, etc. Plus I switched mid-stream from using 2x NVIDIA 1080Ti GPUs to a single 24GB NVIDIA RTX Titan. Compared to the original RoBERTa paper's **Table 10 Hyperparameters**, for this particular run I bumped epochs from 2 to 3, and increased the warmup ratio to 0.2 with good success. It may be a result of RoBERTa not being that dependent on the warmup ratio used, I don't know for sure. With my configuration, this fine-tuning script produced the best results. Check out the tensorboard loss & learning rate graphs, script, etc. at [https://huggingface.co/ahotrod/roberta_large_squad2#list-files](https://huggingface.co/ahotrod/roberta_large_squad2#list-files) As quoted many times, "Your mileage may vary" ;-] Have fun with it! Hope you beat my results.<|||||>I am closing this issue since I have got competitive results, although there is still a gap from the paper's.
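As a rough sketch of what such an `adam_betas` option boils down to (this uses PyTorch's `torch.optim.AdamW` directly and is only illustrative, not the actual patch in the linked fork; the values mirror the command above):

```python
from torch.optim import AdamW
from transformers import RobertaForQuestionAnswering

model = RobertaForQuestionAnswering.from_pretrained("roberta-large")

# betas=(0.9, 0.98) is the setting the RoBERTa paper recommends for stability.
optimizer = AdamW(
    model.parameters(),
    lr=1.5e-5,
    betas=(0.9, 0.98),
    eps=1e-6,
    weight_decay=0.01,
)
```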
transformers
5,624
closed
Inference widgets for self-hosted models?
# 🚀 Feature request I'm loving the new [huggingface](https://huggingface.co/bert-base-uncased?) dataset browsing & hosted model interfaces 🤯 So firstly a huge thank you to everyone <3 This is a question/feature request ~ - Can we use inference widgets for self-hosted models? I see that there is a [serving.py](https://github.com/huggingface/transformers/blob/master/src/transformers/commands/serving.py) ( transformer-cli ) but nothing about widgets as far as I can see. If this is possible I would love an example on how; if not, will it be in the future? ## Motivation Inference widgets would be nice to have during model testing & demos ## Your contribution I was planning on doing something similar using `huggingface -> spacy(displacy) -> streamlit` (for NER)
07-09-2020 12:09:48
07-09-2020 12:09:48
do you mind sending an email to clement [at] huggingface [dot] co explaining your usecase?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I have created a simple package that demos how to self-host Hugging Face NER models with an UI akin to the Inference api~ Self-Hosted inference for NER: ![image](https://user-images.githubusercontent.com/15624271/93560907-264e7580-f9be-11ea-8c57-8f878e95cb3a.png) available @ https://github.com/cceyda/lit-NER ❤️ using HuggingFace+Torchserve+Displacy+Streamlit <|||||>looks good @cceyda! Do you have a publicly hosted demo somewhere?<|||||>@julien-c my main objective was to provide an easy way to have an interface API&AI for self-hosted models vs public~ but why not do both 😄 so I put up a demo [here](https://share.streamlit.io/cceyda/lit-ner/public/lit_ner.py) (I recently got access to streamlit sharing beta 🥳 ) You can enter the model name of any NER model hosted at [huggingface ❤️ ](https://huggingface.co/models?filter=token-classification&search=ner) like so: ![image](https://user-images.githubusercontent.com/15624271/95163741-7f524200-07e3-11eb-8c41-7714b8ed3ac8.png) **_OR_** even your custom self hosted model by using the example torchserve recipe I made [lit_ner/serve.py](https://github.com/cceyda/lit-NER/blob/master/examples/serve.ipynb) (currently there is no security setup) BTW, There are some problems with the current NER pipeline (which I provide a fix PR for [here](https://github.com/huggingface/transformers/pull/5970)) Example error: ![image](https://user-images.githubusercontent.com/15624271/95163217-5aa99a80-07e2-11eb-8a85-4144f7deb636.png) At my local with changes from the PR: ![image](https://user-images.githubusercontent.com/15624271/95163363-ab20f800-07e2-11eb-9184-8c300ea7c46b.png) PS: this can be easily expanded to other pipelines and also highly customizable 😉 will polish it more as soon as I have some time<|||||>That's neat! Do you mind if I tweet it? (what's your Twitter handle :-)<|||||>@julien-c of course I would like it very much. I have also written a [blog post](https://cceyda.github.io/blog/huggingface/torchserve/streamlit/ner/2020/10/09/huggingface_streamlit_serve.html) about it, my first blog post! 🥳 My [twitter](https://twitter.com/ceyda_cinarel) is so unused it is embarrassing 😆 until now I used it just for following the news, but in the future I will be using it for sharing my blog post notifications. hoping that someone will be reading it 🤞 😄 <|||||>[Tweet is up](https://twitter.com/julien_c), I'll close this issue now, thanks again for sharing
transformers
5,623
closed
Predictor in Streamlit Docker eating all memory and OOM
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): DistilBert Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Training on my datatset and creation of a model on Colab 2. Saving the said model locally 3. Loading the predictor in my app, with Streamlit library 4. Deployment of said app in a Docker container This is my app, which use a predictor with Streamlit. ``` import pprint import re import ktrain import numpy as np import pandas as pd import streamlit as st import trafilatura from googlesearch import search from ktrain import text, predictor st.title(':crystal_ball: schemaPredictor :crystal_ball:') add_selectbox = st.selectbox( 'How would you like to predict?', ('Text', 'Url')) predictor = ktrain.load_predictor('./tmp/schema_mapping') def get_prob(p): i = 0 for x in p: if x > i: i = x return i if add_selectbox == "Text": body = st.text_area('Insert your text here, as clean as possible.') if st.button("Predict"): st.success(":crystal_ball: " + predictor.predict(body) + " :crystal_ball:") st.success("With a probability of " + "{:.1%}".format(get_prob(predictor.predict_proba(body)))) elif add_selectbox == "Url": body = st.text_input('Insert your url here') if st.button("Predict"): page = body downloaded = trafilatura.fetch_url(page) result = trafilatura.extract(downloaded, include_tables=False, include_formatting=False, include_comments=False) st.success(":crystal_ball: " + predictor.predict(result) + " :crystal_ball:") st.success("With a probability of " + "{:.1%}".format(get_prob(predictor.predict_proba(result)))) ``` ## Expected behavior If I run this app locally, without a Docker container, but in a conda env it works differently, it still takes memory at each iteration, but when it gets to around 10/11gb it frees memory to use it again. That on my 12gb ram laptop. So I expected this to happen in my container too, but what happens in a container is that at each 'predict' it takes some memory, but it goes on untill it runs OOM. I tried with a Docker cointainer with 4 CPU, 12 gb of RAM and 1 gb SWAP ## Environment info Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 2020-07-09 11:56:21.400037: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2020-07-09 11:56:21.404641: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3800005000 Hz 2020-07-09 11:56:21.404943: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55dce5a7cca0 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices: 2020-07-09 11:56:21.405087: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2020-07-09 11:56:21.406403: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory 2020-07-09 11:56:21.406441: E tensorflow/stream_executor/cuda/cuda_driver.cc:351] failed call to cuInit: UNKNOWN ERROR (303) 2020-07-09 11:56:21.406509: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (f8630e8d0e49): /proc/driver/nvidia/version does not exist Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 2.11.0 - Platform: Linux-4.19.76-linuxkit-x86_64-with-debian-10.4 - Python version: 3.7.7 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.1.0 (False) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> #
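Not a fix for the leak itself, but one pattern worth checking in this setup: caching the predictor so Streamlit does not reload it on every rerun. A minimal sketch assuming Streamlit's `st.cache`; whether it changes the per-predict memory growth is untested here:

```python
import ktrain
import streamlit as st

@st.cache(allow_output_mutation=True)
def load_predictor():
    # Load the ktrain predictor once and reuse it across Streamlit reruns.
    return ktrain.load_predictor('./tmp/schema_mapping')

predictor = load_predictor()
```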
07-09-2020 11:57:58
07-09-2020 11:57:58
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,622
closed
TextGenerationPipeline breaks when used with device=0
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): model-agnostic (breaks with GPT2 and XLNet) Language I am using the model on (English, Chinese ...): English The problem arises when using: [x] my own modified scripts: (give details below) The tasks I am working on is: [x] my own task or dataset: plain old language generation ## To reproduce Steps to reproduce the behavior: ``` #!/usr/bin/env python3 import random from transformers import pipeline, XLNetLMHeadModel import torch import time random.seed(0) torch.manual_seed(0) generator = pipeline("text-generation", model="xlnet-base-cased", tokenizer="xlnet-base-cased", device=0) output_to_check = generator("Today is a beautiful day and I, ", offset=offset, do_sample=True, top_k=50, max_len=100) ``` ## Expected behavior What should happen : text generation What actually happens : ``` Traceback (most recent call last): File "/home/teven/dev_transformers/perso/transformers/generation_script.py", line 15, in <module> output_to_check = generator("Today is a beautiful day and I, ", offset=offset, do_sample=True, top_k=50, max_len=100) File "/home/teven/dev_transformers/perso/transformers/src/transformers/pipelines.py", line 692, in __call__ generated_sequence = generated_sequence.numpy().tolist() TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first. ``` Just missing a conversion before the `.numpy()` call ## Environment info - `transformers` version: 3.0.2 - Platform: Linux-5.3.0-62-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
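A sketch of the missing step (the follow-up fix in #5629 adds essentially this conversion inside the pipeline; the tensor below is just a stand-in for the GPU output of `generate()`):

```python
import torch

generated_sequence = torch.tensor([[17, 23, 42]])  # stand-in for model.generate() output
if generated_sequence.is_cuda:
    # A CUDA tensor cannot be converted with .numpy() directly; copy it to host memory first.
    generated_sequence = generated_sequence.cpu()
tokens = generated_sequence.numpy().tolist()
```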
07-09-2020 09:46:42
07-09-2020 09:46:42
If that's not the case, we should make sure that the pipelines run on GPU in the GPU CI (fast and slow), to catch things like this.
transformers
5,621
closed
Add freshly trained `codegram/calbert-base-uncased`
Trained from the rewrite mentioned in #5599, just finished training last night. The model card now reflects both models, with tested code examples and links to Exbert.
07-09-2020 07:44:44
07-09-2020 07:44:44
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5621?src=pr&el=h1) Report > Merging [#5621](https://codecov.io/gh/huggingface/transformers/pull/5621?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa5423b1695cd24856bcff47214172e0f540d924&el=desc) will **decrease** coverage by `1.09%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5621/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5621?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5621 +/- ## ========================================== - Coverage 77.79% 76.70% -1.10% ========================================== Files 145 145 Lines 25355 25355 ========================================== - Hits 19726 19448 -278 - Misses 5629 5907 +278 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5621?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5621?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5621?src=pr&el=footer). Last update [fa5423b...72f9aec](https://codecov.io/gh/huggingface/transformers/pull/5621?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,620
closed
Fix re-tokenization (ignoring is_pretokenized=True) when passing a pretokenized batch to both batch_encode_plus and tokenizer.__call__ methods
# Bug > Fix unexpected behavior when passing already tokenized tokens (ignoring `is_pretokenized=True`) using `batch_encode_plus` and `self.__call__()` ```python tokenizer = BertTokenizer.from_pretrained("bert-base-cased") batch_sentences = ['The rapid expansion of the current COVID - 19 pandemic.', 'Hospitals around the globe have had to implement drastic changes'] batch_tokenized = [tokenizer.tokenize(x) for x in batch_sentences] ``` Correct output when passing either a single string or batch ✅ - Applies to types: - `str` : one sentence - `List[str]` : batch of sentences ```python # also applies when using batch_encode_plus inputs = tokenizer(batch_sentences, add_special_tokens=False) for ids in inputs['input_ids']: print(tokenizer.decode(ids)) ... "The rapid expansion of the current COVID - 19 pandemic." "Hospitals around the globe have had to implement drastic changes" ``` Incorrect output when passing either a sequence of tokens or in batch ❌ - Applies to types: - `List[str]` : one sequence of string tokens - `List[List[str]]` : batch of sequences of string tokens ```python # also applies when using batch_encode_plus inputs = tokenizer(batch_tokenized, add_special_tokens=False, is_pretokenized=True) for ids in inputs['input_ids']: print(tokenizer.decode(ids)) ... "The rapid expansion of the current CO # # VI # # D - 19 pan # # de # # mic." "Hospital # # s around the globe have had to implement drastic changes" ``` ## Cause of issue > The problem is when the sequences provided are a list of strings (pretokenized | batch of tokens) as we can observe in the output above, and the second condition in `get_input_ids()` completely disregards the truth that `is_pretokenized=True` by *re-tokenizing* a previously tokenized input! ```python ... def get_input_ids(text): if isinstance(text, str): tokens = self.tokenize(text, **kwargs) return self.convert_tokens_to_ids(tokens) elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], str): if is_pretokenized: # If the user set is_pretokenized=True, then the input is a batch of token string sequences. # The expected behavior is then to convert tokens to ids and not to re-tokenize - ``self.tokenize()`` tokens = list(itertools.chain(*(self.tokenize(t, is_pretokenized=True, **kwargs) for t in text))) return self.convert_tokens_to_ids(tokens) else: return self.convert_tokens_to_ids(text) ... ``` ## Fix > All I needed to do is flip the behavior. Easy fix! ```python ... if is_pretokenized: # If already tokenized then, convert string tokens to token_ids return self.convert_tokens_to_ids(text) else: # Otherwise, tokenize to string tokens before converting to token_ids tokens = list(itertools.chain(*(self.tokenize(t, is_pretokenized=True, **kwargs) for t in text))) return self.convert_tokens_to_ids(tokens) ... ```
07-09-2020 07:32:29
07-09-2020 07:32:29
transformers
5,619
closed
Should t5-small generate coherent text as summaries without finetuning?
I am following the summarization example and if I run `run_eval.py` for `t5-small` and `xsum` without finetuning I still get coherent, new (similar to source but not the same) and meaningful texts as summaries. The doc does not mention that it was pretrained on any kind of summarization task.
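For reference, a minimal sketch of the same kind of check done directly with `generate()` (assuming the `summarize:` task prefix; the article text and generation settings are arbitrary):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

article = "Hospitals around the globe have had to implement drastic changes ..."  # any XSUM article
# 'summarize:' is the task prefix T5 uses for summarization-style generation.
inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True, max_length=512)

summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=60, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```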
07-09-2020 07:27:28
07-09-2020 07:27:28
Hi @marton-avrios can you share with me the example? Let me try them out as well and see if I find any edge cases where it is not coherent (subjectively). Can you also point out the dataset? It will be appreciated if you can share the example reference, dataset link, or your example source code.<|||||>Just go to `examples/seq2seq`, follow the instructions for obtaining the XSUM dataset and run ``` python run_eval.py t5-small xsum/val.source t5_val_generations.txt \ --reference_path xsum/val.target \ --score_path xsum_rouge.json \ --task summarization \ --n_obs 100 \ --device cuda \ --fp16 \ --bs 32 ``` It creates relatively coherent text in `t5_val_generations.txt`, which I would not expect from a model without any finetuning.<|||||>Ahh that one. I think it's pretrained already although I'm not sure which pretraining dataset. I think your doubt is that we should need a bit of training iteration for different datasets to make the model good? Both are news datasets so I won't be too surprised that we don't need additional iterations. I think XSum highlights a single-sentence, shorter summary than CNN/Daily Mail, so the label is different.<|||||>Ah, so they are already finetuned versions. I thought that `t5-small` and the other `t5-*` models were only trained on denoising tasks.<|||||>I ran t5-small on the CNN/DM test dataset and the outputs produced are meaningful complete sentences but not close to summaries, which is expected because they are just pre-trained on a large corpus and not fine-tuned on summarization datasets explicitly. No, they are just pretrained versions and not fine-tuned ones. In case you want to fine-tune them on the CNN/DM or XSum dataset, you can run the finetune_t5.sh script and use the saved model to produce outputs again. You will surely find the fine-tuned ones performing better.<|||||>I see, that explains the observation well then. When you run the t5 model, you will be warned with the following: ``` Some weights of T5ForConditionalGeneration were not initialized from the model checkpoint at t5-base and are newly initialized: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` So I guess you still need to train them for the summarization task @marton-avrios <|||||>I also apologize for the misinformation. I thought they were pretrained on the CNN/DailyMail dataset as that is the impression I get from this [doc](https://huggingface.co/transformers/task_summary.html)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,618
closed
Generate up to max_target_length sequences
* Modifies the generate() call to allow for generation of sequences up to and including max_target_length number of tokens. * Previous to this commit, implementation caps generation at 20 tokens and may result in poor performance. * See related recent generation_utils.py commit: https://github.com/huggingface/transformers/blob/c4d4e8bdbd25d9463d41de6398940329c89b7fb6/src/transformers/generation_utils.py#L139
07-09-2020 07:16:33
07-09-2020 07:16:33
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5618?src=pr&el=h1) Report > Merging [#5618](https://codecov.io/gh/huggingface/transformers/pull/5618?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa5423b1695cd24856bcff47214172e0f540d924&el=desc) will **decrease** coverage by `0.91%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5618/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5618?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5618 +/- ## ========================================== - Coverage 77.79% 76.88% -0.92% ========================================== Files 145 145 Lines 25355 25355 ========================================== - Hits 19726 19495 -231 - Misses 5629 5860 +231 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5618?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.50%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5618?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5618?src=pr&el=footer). Last update [fa5423b...db13af1](https://codecov.io/gh/huggingface/transformers/pull/5618?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@sshleifer <|||||>This should depend on `config.max_length`, no? Config is here: https://s3.amazonaws.com/models.huggingface.co/bert/t5-small-config.json and we have the line: https://github.com/huggingface/transformers/blob/353b8f1e7a7361c0afd9e391381bc226b4a5ca8f/examples/seq2seq/finetune.py#L66 so I think we should continue to let the config determine the generation length. 
The cost of this proposal is that people often set `max_target_length` shorter than optimal to make training run faster for training data, but leave `val_max_target_length` long to get a more accurate approximation of Rouge.<|||||>Were your summaries getting truncated like #5656 ?<|||||>That seems like a better solution. I had been using a different prefix, without an associated config, so the max_length must have defaulted to 20.
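For reference, a minimal sketch of the mechanism discussed here: generation length comes from `max_length`, which falls back to the model config (and ultimately to a default of 20) when nothing is passed. The model name and length values below are illustrative assumptions, not taken from the PR.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")

# Override the budget for a single call (illustrative value).
ids_short = model.generate(inputs["input_ids"], max_length=60)

# Or change the default that every later generate() call will use.
model.config.max_length = 142  # assumption: a summarization-style budget
ids_long = model.generate(inputs["input_ids"])
print(tokenizer.decode(ids_long[0], skip_special_tokens=True))
```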
transformers
5,617
closed
Update README.md
07-09-2020 06:20:10
07-09-2020 06:20:10
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5617?src=pr&el=h1) Report > Merging [#5617](https://codecov.io/gh/huggingface/transformers/pull/5617?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa5423b1695cd24856bcff47214172e0f540d924&el=desc) will **decrease** coverage by `0.31%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5617/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5617?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5617 +/- ## ========================================== - Coverage 77.79% 77.48% -0.32% ========================================== Files 145 145 Lines 25355 25355 ========================================== - Hits 19726 19647 -79 - Misses 5629 5708 +79 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5617?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5617/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `44.56% <0.00%> (-46.35%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5617/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `63.55% <0.00%> (-31.78%)` | :arrow_down: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5617/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5617/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.44% <0.00%> (-6.52%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5617/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.92% <0.00%> (-1.97%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5617/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5617/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5617/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5617?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5617?src=pr&el=footer). Last update [fa5423b...b25f69f](https://codecov.io/gh/huggingface/transformers/pull/5617?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,616
closed
fix 404
07-09-2020 03:56:32
07-09-2020 03:56:32
transformers
5,615
closed
🐛 Bart Tokenization difference between 2.11.0 and 3.0.2
# 🐛 Bug Running this code : ```python from transformers import BartTokenizer tokenizer = BartTokenizer.from_pretrained("facebook/bart-large") print(tokenizer.batch_encode_plus(["This is an example"])) ``` in `transformers` `2.11.0` and `3.0.2` gives different results. `transformers` `2.11.0` : > {'input_ids': [[0, 152, 16, 41, 1246, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 1]]} `transformers` `3.0.2` : > {'input_ids': [[0, 713, 16, 41, 1246, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 1]]} --- Colab for reproducing : * [`2.11.0`](https://colab.research.google.com/drive/1qwYkcZoD1JtuLLDjABngJFoDUD06RiXm?usp=sharing) * [`3.0.2`](https://colab.research.google.com/drive/1qUWcCUYInpa9Lwy2Ur-t3N3hImF1grCT?usp=sharing) --- Is it from the refactoring of `generation_utils.py` ?
07-09-2020 01:48:34
07-09-2020 01:48:34
This line : https://github.com/huggingface/transformers/blob/b42586ea560a20dcadb78472a6b4596f579e9043/src/transformers/tokenization_utils.py#L1709 was changed to : https://github.com/huggingface/transformers/blob/b0892fa0e8df02d683e05e625b3903209bff362d/src/transformers/tokenization_utils.py#L505 --- In `2.11.0`, if `add_special_tokens` was `True` (which was the default value), then the RoBERTa tokenizer would automatically add the prefix space. In `3.0.2`, `add_special_tokens` is still `True` by default, but is not passed to `tokenize()` anymore. The RoBERTa tokenizer does not add a prefix space, which leads to the difference observed. --- So the fixed code for `3.0.2` is : ```python from transformers import BartTokenizer tokenizer = BartTokenizer.from_pretrained("facebook/bart-large") print(tokenizer.batch_encode_plus(["This is an example"], add_prefix_space=True)) ``` --- _Not closing yet as I would like to know whether this is an expected breaking change or not._<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,614
closed
[WIP] Test TF Flaubert + Add {XLM, Flaubert}{TokenClassification, MultipleC…
…hoice} models and tests The remaining TF tests pass with TF2.3. Waiting to unpin TF before merge.
07-09-2020 00:30:02
07-09-2020 00:30:02
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5614?src=pr&el=h1) Report > Merging [#5614](https://codecov.io/gh/huggingface/transformers/pull/5614?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8a8ae27617e3c4dafb34bcbbaadf4ceee28583bd&el=desc) will **increase** coverage by `0.31%`. > The diff coverage is `98.61%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5614/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5614?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5614 +/- ## ========================================== + Coverage 78.49% 78.81% +0.31% ========================================== Files 146 146 Lines 26335 26396 +61 ========================================== + Hits 20671 20803 +132 + Misses 5664 5593 -71 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5614?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (ø)` | | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.48% <ø> (ø)` | | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.19% <94.73%> (+62.37%)` | :arrow_up: | | [src/transformers/modeling\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `86.61% <100.00%> (+1.43%)` | :arrow_up: | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `92.96% <100.00%> (+11.98%)` | :arrow_up: | | [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `91.00% <100.00%> (+0.98%)` | :arrow_up: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.94% <0.00%> (-3.76%)` | :arrow_down: | | ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/5614/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5614?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5614?src=pr&el=footer). Last update [8a8ae27...dd85766](https://codecov.io/gh/huggingface/transformers/pull/5614?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,613
closed
doc fixes
a few minor doc improvements.
07-08-2020 23:39:54
07-08-2020 23:39:54
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5613?src=pr&el=h1) Report > Merging [#5613](https://codecov.io/gh/huggingface/transformers/pull/5613?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7d0ef0042042899b67867a4e2962d8e97fb5c6f5&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5613/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5613?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5613 +/- ## ======================================= Coverage 76.88% 76.88% ======================================= Files 145 145 Lines 25355 25355 ======================================= + Hits 19494 19495 +1 + Misses 5861 5860 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5613?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5613/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5613?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5613?src=pr&el=footer). Last update [7d0ef00...f052965](https://codecov.io/gh/huggingface/transformers/pull/5613?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,612
closed
Did the run_language_model support TPU?
# ❓ Questions & Help I try to using own dataset on tpu with running run_language_model.py, the command is I use below: python examples/xla_spawn.py --num_cores 8 examples/language-modeling/run_language_modeling.py --model_name_or_path hfl/chinese-bert-wwm --output_dir model/tpu --train_data_file /Language_masked_model/data/toy_MLM_data.txt --line_by_line --mlm --block_size 512 --do_train --evaluate_during_training --per_device_train_batch_size 10 --tpu_num_cores 8 --debug --num_train_epochs 1 --save_steps 20000 **No errors but I assume it not use TPU,** I mentor the usage of TPU, get info below: Cloud TPU Monitoring Results (Sample 20 ): TPU type: TPU v3 Utilization of TPU Matrix Units (higher is better): 0.000% Cloud TPU Monitoring Results (Sample 21 ): TPU type: TPU v3 Utilization of TPU Matrix Units (higher is better): 0.000% Cloud TPU Monitoring Results (Sample 22 ): TPU type: TPU v3 Number of TPU cores: 1 (Replica count = 8, num cores per replica = 1) TPU idle time (lower is better): 0.027% Utilization of TPU Matrix Units (higher is better): 0.039% Step time: 11.1ms (avg), 11.1ms (min), 11.1ms (max) Infeed percentage: 0.000% (avg), 0.000% (min), 0.000% (max) Cloud TPU Monitoring Results (Sample 23 ): TPU type: TPU v3 Utilization of TPU Matrix Units (higher is better): 0.000% Cloud TPU Monitoring Results (Sample 24 ): TPU type: TPU v3 Utilization of TPU Matrix Units (higher is better): 0.000% **My question is did run_language_model.py support TPU?** tpu: V3.8 on Google Cloud Platform tensorflow==2.2.0 torch==1.7.0a0+12b5bdc torch-xla==1.6+5430aca I use offical docker on XLA (gcr.io/tpu-pytorch/xla:nightly_3.6) repo
07-08-2020 21:32:15
07-08-2020 21:32:15
I totally agree with you that the transformers team needs to address this issue from a long time ago. I am also struggle to run token classification using TPU. Google gives TPUv3-8 as a part of google collab for only 9$ which equivalent to 8xV100 GPU. Yet until now, we can't run transformers using TPU. This should be a top priority for the transformers team. at least we need only one running example using token classification NER. I managed to do it using XLA but its nowhere near TPU performance.<|||||>Hi, thank you for opening this issue. @lai-agent-t, did you complete training on the TPU, or did you stop beforehand? If you stopped, was the tokenization process already finished? @NLPPower, three NER scripts are available in this repository: NER with Trainer, with TFTrainer, and with Pytorch Lightning. All three support TPU. Did you get bad performance/slow training when using those scripts?<|||||>I'm NOT stop beforehand, I updated the num_train_epochs latter to 10 and I trained 6 epochs and it takes me almost 2 hours with only 3000 sentences <|||||>I see, thanks. In your TPU environment, do you mind running the following (please make sure you have transformers installed from source)? ```py from transformers.file_utils import is_torch_tpu_available print(is_torch_tpu_available()) ``` Thank you! <|||||>> Hi, thank you for opening this issue. > > @lai-agent-t, did you complete training on the TPU, or did you stop beforehand? If you stopped, was the tokenization process already finished? > > @NLPPower, three NER scripts are available in this repository: NER with Trainer, with TFTrainer, and with Pytorch Lightning. All three support TPU. Did you get bad performance/slow training when using those scripts? I struggled to run NER classifier using ALBERT model in TPU using TensorFlow . XLA with PyTorch will not give you a great performance compared to pure TF. Plus it doesn't support fp16 which could cut the fine-tuning time by 4x times . I tested fp16 using v100 and i was able to exceed the performance of PyTorch using TPU where i used docker and TF nightly. to confirm my finding please have a look at the Performance Evaluation table at the bottom of this page. https://github.com/allenai/tpu_pretrain you can see that TPU in TF is almost 4x-6x faster than Pytorhc + XLA in TPU. If you can just create a simple example in google colab where transformer was able to run in TPU in TF for token classification task ( NER ) i will be more than happy, because i struggled to do it since two weeks and there is also couple of folks here who struggled to do it. This should be high priority for transformer team because TPU access can give researcher a powerful resource for almost free using kaggle and google colab. Please have a look also at this project which is the closest thing i could find to run NER in TPU using distributed strategy in top of keras. 
https://github.com/kyzhouhzau/NLPGNN/tree/master/tests/NER/NER_EN<|||||>I'm sure my tork_tpu is available, because I test the example case you put on the tpu case: python examples/xla_spawn.py --num_cores 8 \ examples/text-classification/run_glue.py --model_name_or_path bert-base-cased \ --task_name mnli \ --data_dir ./data/glue_data/MNLI \ --output_dir ./models/tpu \ --overwrite_output_dir \ --do_train \ --do_eval \ --num_train_epochs 1 \ --save_steps 20000 it works without any error, but the Utilization of TPU Matrix Units (higher is better) is 5% and it stable So, I'm feel confuse is run_language_model.py support TPU?<|||||>same here, is there any update?<|||||>> same here, is there any update? I have change to tensorflow 2.0 instead of pytorch ...<|||||>any updates on pytorch?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,611
closed
IndexError: index out of range in self
# 🐛 Bug ## Information The model I am using Bert ('bert-large-uncased') and I am facing two issues related to this model The language I am using the model on English The problem arises when using: When I am trying to encode a large sentence ( sentence length 500 words ), I am getting this error : `IndexError: index out of range in self` I tried to set max_words length as 400, still getting same error : Data I am using can be downloaded like this : ``` from sklearn.datasets import fetch_20newsgroups import re categories = ['alt.atheism', 'soc.religion.christian','comp.graphics', 'sci.med'] twenty_train = fetch_20newsgroups(subset='train',categories=categories, shuffle=True, random_state=42) print("\n".join(twenty_train.data[0].split("\n")[:3])) X_tratado = [] for email in range(0, len(twenty_train.data)): # Remover caracteres especiais texto = re.sub(r'\\r\\n', ' ', str(twenty_train.data[email])) texto = re.sub(r'\W', ' ', texto) # Remove caracteres simples de uma letra texto = re.sub(r'\s+[a-zA-Z]\s+', ' ', texto) texto = re.sub(r'\^[a-zA-Z]\s+', ' ', texto) # Substitui multiplos espaços por um unico espaço texto = re.sub(r'\s+', ' ', texto, flags=re.I) # Remove o 'b' que aparece no começo texto = re.sub(r'^b\s+', '', texto) # Converte para minúsculo texto = texto.lower() X_tratado.append(texto) dr = {} dr ['text'] = X_tratado dr ['labels'] = twenty_train.target ``` Now I am using bert model to encode the sentences : ``` from transformers import BertModel, BertConfig, BertTokenizer import torch tokenizer = BertTokenizer.from_pretrained('bert-large-uncased') model = BertModel.from_pretrained('bert-large-uncased') inputs = tokenizer(datar[7], return_tensors="pt") outputs = model(**inputs) features = outputs[0][:,0,:].detach().numpy().squeeze() ``` Which is giving this error : ``` --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-41-5dcf440b245f> in <module> 5 model = BertModel.from_pretrained('bert-large-uncased') 6 inputs = tokenizer(datar[7], return_tensors="pt") ----> 7 outputs = model(**inputs) 8 features = outputs[0][:,0,:].detach().numpy().squeeze() ~/tfproject/tfenv/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) ~/tfproject/tfenv/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states) 751 752 embedding_output = self.embeddings( --> 753 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds 754 ) 755 encoder_outputs = self.encoder( ~/tfproject/tfenv/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) ~/tfproject/tfenv/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds) 177 if inputs_embeds is None: 178 inputs_embeds = self.word_embeddings(input_ids) --> 179 position_embeddings = self.position_embeddings(position_ids) 
180 token_type_embeddings = self.token_type_embeddings(token_type_ids) 181 ~/tfproject/tfenv/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) ~/tfproject/tfenv/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input) 112 return F.embedding( 113 input, self.weight, self.padding_idx, self.max_norm, --> 114 self.norm_type, self.scale_grad_by_freq, self.sparse) 115 116 def extra_repr(self): ~/tfproject/tfenv/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1722 # remove once script supports set_grad_enabled 1723 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 1724 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1725 1726 IndexError: index out of range in self ``` The second issue I am facing, When I am using this bert model to encode many sentences, It seems Bert is not using GPU : ![Screenshot 2020-07-09 at 12 45 14 AM](https://user-images.githubusercontent.com/17107749/86960748-9c905980-c17d-11ea-8d1e-bb72141cbf37.png) How to accelerate GPU while using bert model? ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: '3.0.0' - Platform: Ubuntu 18.04.4 LTS - Python version: python3.7 - PyTorch version (GPU?): - Tensorflow version (GPU?): '2.2.0 - Using GPU in script?: - Using distributed or parallel set-up in script?:
07-08-2020 19:19:57
07-08-2020 19:19:57
how did you solve this problem. Can you share your solution.<|||||>Most likely there is mismatch between vocabulary size of tokenizer and bert model ( in bert config). Try setting vocab size of your tokenizer in bert config while initializing your model.<|||||>@zhunipingan I had to trim the length of the sentence to 200 After it worked fine.<|||||>HI @monk1337, the error here is because you've called the model with a sequence that is longer than 512 tokens. BERT-like models have a fixed limit in sequence length, which is often 512 or 1024. For your second question, indeed your model is not on your GPU. With PyTorch, you have to cast your model to the device you want it to run it, so you would have to do something like: ```py from transformers import BertModel, BertConfig, BertTokenizer import torch tokenizer = BertTokenizer.from_pretrained('bert-large-uncased') model = BertModel.from_pretrained('bert-large-uncased') inputs = tokenizer(datar[7], return_tensors="pt") model.cuda() inputs = {k: v.cuda() for k, v in inputs.items()} outputs = model(**inputs) features = outputs[0][:,0,:].detach().numpy().squeeze() ``` Please note I've also cast the input tensors on GPU, as the model inputs need to be on the same device as the model. I recommend looking at the[ CUDA part of the 60 minute blitz tutorial for PyTorch on the PyTorch website ](https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#cuda-tensors)to get an understanding of the CUDA semantics. Closing this for now, let me know if you have other issues.<|||||>Anyone can help? I’m not sure this is a bug or not. I need to deploy the AWS elastic inference for our service. The Elastic Inference requires using CPU to load and run models. but our code runs well on GPUs, but CPU. as the simple code below ``` ###CPUs returns index out of range in self error import numpy as np import torch import torch.nn as nn sinusoid_table = torch.FloatTensor(torch.Size([50 + 1, 512])) pos_emb = nn.Embedding.from_pretrained(sinusoid_table, freeze=True) positions = torch.arange(200).expand(1, 200).contiguous()+1 positions=positions a= pos_emb(positions) print(a) ###on GPUs this runs well import torch import torch.nn as nn device = torch.device(‘cuda:0’) sinusoid_table = torch.FloatTensor(torch.Size([50 + 1, 512])).to(device) pos_emb = nn.Embedding.from_pretrained(sinusoid_table, freeze=True).to(device) positions = torch.arange(200).expand(1, 200).contiguous()+1 positions=positions.to(device) a= pos_emb(positions) print(a) ``` I highly appreciate your helps. Thank you.<|||||>This doesn't seem like a `transformers` issue, but more of a PyTorch issue? You're not using `transformers` in your script.<|||||>> Most likely there is mismatch between vocabulary size of tokenizer and bert model ( in bert config). Try setting vocab size of your tokenizer in bert config while initializing your model. Thanks very much. It works for me after making vocab_size larger in bert config.<|||||>Thanks a lot for your help here...I am still having troubles running a similar code. Did you managed to run it in the end? Would you mind sharing how you embedded the vocab_size part? 
``` classifier = pipeline('sentiment-analysis', model = "cardiffnlp/twitter-roberta-base-sentiment") df = ( df .assign(sentiment = lambda x: x['Content'].apply(lambda s: classifier(s))) .assign( label = lambda x: x['sentiment'].apply(lambda s: (s[0]['label'])), score = lambda x: x['sentiment'].apply(lambda s: (s[0]['score'])) ) ) ```<|||||>>Most likely there is mismatch between vocabulary size of tokenizer and bert model ( in bert config). Try setting vocab size of your tokenizer in bert config while initializing your model. Do you know how can I do this? I tried by using: configuration = BertConfig(vocab_size=30_522) BertModel(config=configuration).from_pretrained('bert-base-cased') but it does not work ... I am a bit confused since it looks to me that my model is not accepting values higher than 29000... How is this possible? <|||||>> Hi, I met the same problem as you did. You can try `model.config.vocab_size` to find the vacob_size of your model. If your pretrained model is 'bert-base-cased', vacob_size will be 28996. But for 'bert-base-uncased', it's 30522. I'm not sure if it will work for you. (I don't think we can reset vocab_size for pretrained model.<|||||>Thanks, that's It actually. I Also realised It too late... So much time Lost :-D<|||||>> Most likely there is mismatch between vocabulary size of tokenizer and bert model ( in bert config). Try setting vocab size of your tokenizer in bert config while initializing your model. Thanks for pointing out so precisely, though I am wondering how you came to know, I mean the process... Did you debugged in the stack trace till its root or you are contributor to transformers or torch libraries, so it came naturally to you? I faced this issue while implementing XLM-RoBERTa. Here is how I fixed this: xlmr_tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-large') config = XLMRobertaConfig() config.vocab_size = xlmr_tokenizer.vocab_size # setting both to have same vocab size<|||||>please how do i set the vocab set to exceed 1024<|||||>> HI @monk1337, the error here is because you've called the model with a sequence that is longer than 512 tokens. BERT-like models have a fixed limit in sequence length, which is often 512 or 1024. @LysandreJik Is there anyway we can change the limit? Trying to process a large document. I am using `facebook/bart-large-cnn` Thanks.<|||||>> > HI @monk1337, the error here is because you've called the model with a sequence that is longer than 512 tokens. BERT-like models have a fixed limit in sequence length, which is often 512 or 1024. > > @LysandreJik Is there anyway we can change the limit? Trying to process a large document. I am using `facebook/bart-large-cnn` > > Thanks. Try using the Longformer transformer. The pre-trained ones on huggingface can process up to 16k tokens. I used it for my dissertation where I was processing large documents<|||||>> > > HI @monk1337, the error here is because you've called the model with a sequence that is longer than 512 tokens. BERT-like models have a fixed limit in sequence length, which is often 512 or 1024. > > > > > > @LysandreJik Is there anyway we can change the limit? Trying to process a large document. I am using `facebook/bart-large-cnn` > > Thanks. > > Try using the Longformer transformer. The pre-trained ones on huggingface can process up to 16k tokens. I used it for my dissertation where I was processing large documents Ah, thanks! Will try it.
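For reference, a small sketch combining the two fixes that come up in this thread — truncating inputs to the model's position limit and checking that the tokenizer and model vocabularies match. The input text and the 512-token limit are the usual BERT assumptions.

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertModel.from_pretrained("bert-large-uncased")

long_text = "a very long document " * 400  # illustrative input that exceeds 512 tokens

# Fix 1: never feed more tokens than the position embeddings can index.
inputs = tokenizer(long_text, truncation=True, max_length=512, return_tensors="pt")
outputs = model(**inputs)
print(outputs[0].shape)  # (1, 512, hidden_size)

# Fix 2: if you build a model from a fresh config, its embedding table must cover the tokenizer.
assert model.config.vocab_size >= tokenizer.vocab_size
```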
transformers
5,610
closed
create model cards for qg models
cc @julien-c , @danielduckworth
07-08-2020 17:45:57
07-08-2020 17:45:57
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5610?src=pr&el=h1) Report > Merging [#5610](https://codecov.io/gh/huggingface/transformers/pull/5610?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/40d98ebf50c4662bcd6dce6395bbed0b2142ea52&el=desc) will **increase** coverage by `1.23%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5610/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5610?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5610 +/- ## ========================================== + Coverage 76.88% 78.11% +1.23% ========================================== Files 145 145 Lines 25351 25351 ========================================== + Hits 19491 19804 +313 + Misses 5860 5547 -313 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5610?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5610/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5610/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5610/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5610?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5610?src=pr&el=footer). Last update [40d98eb...d2b586a](https://codecov.io/gh/huggingface/transformers/pull/5610?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Looks great!<|||||>This is really excellent work @patil-suraj and thanks for the thorough documentation.
transformers
5,609
closed
Duplicate grouped entities when using 'ner' pipeline
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): 'ner' pipeline Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Have transformers 3.0.2 installed 2. Run the below code ```python from transformers import pipeline nlp = pipeline('ner', grouped_entities=True) nlp('Welcome to New York') ``` ## Expected behavior We should receive `[{'entity_group': 'I-LOC', 'score': 0.9984402656555176, 'word': 'New York'}`, but instead the output has duplicated 'New York': `[{'entity_group': 'I-LOC', 'score': 0.9984402656555176, 'word': 'New York'}, {'entity_group': 'I-LOC', 'score': 0.9984402656555176, 'word': 'New York'}]`. ### The Cause of the Issue According to Me After reading 3.0.2, I noticed that lines 1047-1049 were added. I think this was done to fix a prior issue that caused the last named entity in the sequence to be occasionally omitted when `grouped_entities=True`. Long story short, I think this snippet was a patch that only shifted the problem from being an occasional named entity omission to an occasional named entity duplicate. The for-loop that precedes this snippet is inconsistent in that sometimes the last named entity gets successfully added anyway (e.g. if the `if` clause on 1025 (first iteration) or 1032 is entered on the last iteration). In this case, there is a duplicate entry upon the calling of the new code at 1047. On the converse, the last named entity won’t be added if the `else` clause in line 1041 is entered on the last iteration. In this case, the final named entity correctly gets added after the new code snippet is run. In short, there is a duplicate (I think) if (i) there is only one recognized named entity or (ii) the last named entity is one such that the tokenizer cut it up into multiple tokens. Otherwise, there is no duplicate. nlp(‘Welcome to Dallas’) -> duplicate 'Dallas' because 'Dallas' is the only named entity nlp(‘HuggingFace is not located in Dallas’) -> no duplicate because there are multiple entities and the final one 'Dallas' is not tokenized into multiple tokens nlp(‘HuggingFace is located in New York City’) -> duplicate ‘New York City’ because the final named entity 'New York City' is tokenized into multiple tokens ## Environment info - `transformers` version: 3.0.2 - Platform: Linux-5.3.0-1031-azure-x86_64-with-glibc2.10 - Python version: 3.8.1 - PyTorch version (GPU?): 1.5.1 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no
07-08-2020 16:21:09
07-08-2020 16:21:09
Can you check whether this still occurs after recently merged #4987? <|||||>Thanks for the response. Is there a special repo I have to pull from or can I just update transformers. Assuming the latter, I just re-ran `pip install --upgrade transformers`. After doing this, the bug persists.<|||||>No, you would have to install from source as explained in the readme.<|||||>Just cloned the repo (as directed in readme) and noticed that the issue was resolved! Any estimation when the next update will be released?<|||||>I was still having problems similar to issues #5077 #4816 #5377 After some debugging these are the possible reasons & fixes for wrong groupings: Looking for feedback from maintainers on my [WIP] PR https://github.com/huggingface/transformers/pull/5970 - [ ] [Bug Fix] add an option `ignore_subwords` to ignore subsequent ##wordpieces in predictions. Because some models train on only the first token of a word and not on the subsequent wordpieces (BERT NER default). So it makes sense doing the same thing at inference time. - The simplest fix is to just group the subwords with the first wordpiece. - [TODO] how to handle ignored scores? just set them to 0 and calculate zero invariant mean ? - [TODO] handle different wordpiece_prefix ## ? possible approaches: get it from tokenizer? but currently most tokenizers dont have a wordpiece_prefix property? have an _is_subword(token) - [ ] [Bug Fix] Shouldn't group entities that are both 'B' even if they are same type - (B-type1 B-type1) != (B-type1 I-type1) - [ ] [Feature add] added option to `skip_special_tokens`. Cause It was harder to remove them after grouping. - [ ] [Additional Changes] remove B/I prefix on returned grouped_entities - [ ] [Feature Request/TODO] Return indexes? - [ ] [Bug TODO] can't use fast tokenizer with grouped_entities ('BertTokenizerFast' object has no attribute 'convert_tokens_to_string') <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
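Until a fixed version is installed, one possible workaround (a sketch, assuming the spurious entry is an exact duplicate as described above) is to drop identical dicts from the pipeline output:

```python
from transformers import pipeline

nlp = pipeline("ner", grouped_entities=True)
entities = nlp("Welcome to New York")

# Keep the first occurrence of each grouped entity, preserving order.
deduped = []
for entity in entities:
    if entity not in deduped:
        deduped.append(entity)
print(deduped)
```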
transformers
5,608
closed
Is there an implementation of BERT architecture in PyTorch that I can modify here?
Hello Team, Firstly, thanks for this amazing repo. I am doing my own research and I want access to a native implementation of BERT in PyTorch so I can modify the architecture and play with it by including a few of my own modules. Is that possible with the codebase in HuggingFace repo here? Thanks
07-08-2020 16:20:11
07-08-2020 16:20:11
Hi! Yes, you can modify the BERT architecture as you please, it's self contained. It's in the [modeling_bert.py](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py) file.<|||||>Thanks @LysandreJik , can you also confirm if this implementation supports multi gpu training? <|||||>This implementation is a PyTorch model, so it supports everything a PyTorch model can do :) GPU, Multi-GPU, TPU, you name it.<|||||>Hello @LysandreJik and team, I am looking at the script `run_language_modeling.py` at https://github.com/huggingface/transformers/tree/master/examples/language-modeling . I saw that the example uses WikiText-2 dataset for example. If I want to fine-tune BERT on my own dataset, how should the dataset be structured? Should I mask the words myself or is there some DataLoader that will do that? I downloaded the WikiText data and I saw an example chunk of text is ``` = Robert <unk> = Robert <unk> is an English film , television and theatre actor . He had a guest @-@ starring role on the television series The Bill in 2000 . This was followed by a starring role in the play Herons written by Simon Stephens , which was performed in 2001 at the Royal Court Theatre . He had a guest role in the television series Judge John <unk> in 2002 . In 2004 <unk> landed a role as " Craig " in the episode " Teddy 's Story " of the television series The Long Firm ; he starred alongside actors Mark Strong and Derek Jacobi . He was cast in the 2005 theatre productions of the Philip Ridley play Mercury Fur , which was performed at the Drum Theatre in Plymouth and the <unk> <unk> Factory in London . He was directed by John <unk> and starred alongside Ben <unk> , Shane <unk> , Harry Kent , Fraser <unk> , Sophie Stanton and Dominic Hall . In 2006 , <unk> starred alongside <unk> in the play <unk> written by Mark <unk> . He appeared on a 2006 episode of the television series , Doctors , followed by a role in the 2007 theatre production of How to Curse directed by <unk> <unk> . How to Curse was performed at Bush Theatre in the London Borough of <unk> and Fulham . <unk> starred in two films in 2008 , <unk> <unk> by filmmaker Paris <unk> , and <unk> Punch directed by <unk> Blackburn . In May 2008 , <unk> made a guest appearance on a two @-@ part episode arc of the television series Waking the Dead , followed by an appearance on the television series <unk> in November 2008 . He had a recurring role in ten episodes of the television series <unk> in 2010 , as " <unk> Fletcher " . <unk> starred in the 2011 film <unk> directed by Paris <unk> . = = Career = = ``` In my case I have a large set of text files. Just text files with free text inside it. Can someone point me to a document/resource that lets me understand how should the input be for masked language modelling pretraining using BERT? I plan to use the `https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py` file contents, modify the layers a bit based on my architecture decisions and train it on my own dataset using masked language modeling where random words are masked and I predict them back. Any help is appreciated. Thanks<|||||>Hi @abhisheksgumadi, this is a very interesting and rather broad question. Could you ask it on the forums over on https://discuss.huggingface.co? Thanks a lot!
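For anyone with the same question, a minimal sketch of one way to experiment without editing modeling_bert.py: wrap the pretrained encoder in your own nn.Module and add whatever modules you want on top. The extra head below is purely illustrative.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class BertWithCustomHead(nn.Module):
    def __init__(self, num_labels=2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        # Your own experimental modules go here (illustrative choices).
        self.extra = nn.Sequential(nn.Linear(hidden, 256), nn.GELU())
        self.classifier = nn.Linear(256, num_labels)

    def forward(self, input_ids, attention_mask=None):
        sequence_output = self.bert(input_ids, attention_mask=attention_mask)[0]
        cls_repr = sequence_output[:, 0]  # [CLS] token representation
        return self.classifier(self.extra(cls_repr))

model = BertWithCustomHead()
toy_ids = torch.tensor([[101, 7592, 2088, 102]])  # [CLS] hello world [SEP]
print(model(toy_ids).shape)  # torch.Size([1, 2])
```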
transformers
5,607
closed
docs(wandb): explain how to use W&B integration
Documentation on how to use W&B integration has been added to clear up confusion on how to customize logging. fix #5262
07-08-2020 16:06:59
07-08-2020 16:06:59
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5607?src=pr&el=h1) Report > Merging [#5607](https://codecov.io/gh/huggingface/transformers/pull/5607?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/40d98ebf50c4662bcd6dce6395bbed0b2142ea52&el=desc) will **increase** coverage by `1.23%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5607/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5607?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5607 +/- ## ========================================== + Coverage 76.88% 78.11% +1.23% ========================================== Files 145 145 Lines 25351 25351 ========================================== + Hits 19491 19804 +313 + Misses 5860 5547 -313 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5607?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5607?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5607?src=pr&el=footer). Last update [40d98eb...f38d33b](https://codecov.io/gh/huggingface/transformers/pull/5607?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,606
closed
OSError using FlauBERT
Hello everyone, I'm trying to run Flaubert model on my data using ktrain. # 🐛 Bug ## Information Model I am using (Bert, XLNet ...): flaubert/flaubert_base_cased Language I am using the model on (English, Chinese ...): French The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) I'm using ktrain to load my model. After preprocess my data when i want to get the classifier with function get_classifier() i get this error : --------------------------------------------------------------------------- ``` OSError Traceback (most recent call last) ~\Anaconda3\lib\site-packages\transformers\modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 461 if resolved_archive_file is None: --> 462 raise EnvironmentError 463 except EnvironmentError: OSError: During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) ~\Anaconda3\lib\site-packages\ktrain\text\preprocessor.py in _load_pretrained(self, mname, num_labels) 958 try: --> 959 model = self.model_type.from_pretrained(mname, config=self.config) 960 except: ~\Anaconda3\lib\site-packages\transformers\modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 1046 if isinstance(config, config_class): -> 1047 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) 1048 raise ValueError( ~\Anaconda3\lib\site-packages\transformers\modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 468 ) --> 469 raise EnvironmentError(msg) 470 if resolved_archive_file == archive_file: OSError: Can't load weights for 'flaubert/flaubert_base_cased'. Make sure that: - 'flaubert/flaubert_base_cased' is a correct model identifier listed on 'https://huggingface.co/models' - or 'flaubert/flaubert_base_cased' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin. 
During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) ~\Anaconda3\lib\site-packages\ktrain\text\preprocessor.py in _load_pretrained(self, mname, num_labels) 961 try: --> 962 model = self.model_type.from_pretrained(mname, config=self.config, from_pt=True) 963 except: ~\Anaconda3\lib\site-packages\transformers\modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 1046 if isinstance(config, config_class): -> 1047 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) 1048 raise ValueError( ~\Anaconda3\lib\site-packages\transformers\modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 481 # Load from a PyTorch checkpoint --> 482 return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True) 483 ~\Anaconda3\lib\site-packages\transformers\modeling_tf_pytorch_utils.py in load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path, tf_inputs, allow_missing_keys) 92 return load_pytorch_weights_in_tf2_model( ---> 93 tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys 94 ) ~\Anaconda3\lib\site-packages\transformers\modeling_tf_pytorch_utils.py in load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs, allow_missing_keys) 124 if tf_inputs is not None: --> 125 tf_model(tf_inputs, training=False) # Make sure model is built 126 ~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py in __call__(self, inputs, *args, **kwargs) 821 self._compute_dtype): --> 822 outputs = self.call(cast_inputs, *args, **kwargs) 823 self._handle_activity_regularization(inputs, outputs) ~\Anaconda3\lib\site-packages\transformers\modeling_tf_xlm.py in call(self, inputs, attention_mask, langs, token_type_ids, position_ids, lengths, cache, head_mask, inputs_embeds, output_attentions, output_hidden_states, labels, training) 803 output_hidden_states=output_hidden_states, --> 804 training=training, 805 ) ~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py in __call__(self, inputs, *args, **kwargs) 821 self._compute_dtype): --> 822 outputs = self.call(cast_inputs, *args, **kwargs) 823 self._handle_activity_regularization(inputs, outputs) ~\Anaconda3\lib\site-packages\transformers\modeling_tf_flaubert.py in call(self, inputs, attention_mask, langs, token_type_ids, position_ids, lengths, cache, head_mask, inputs_embeds, training, output_attentions, output_hidden_states) 259 if not self.pre_norm: --> 260 attn_outputs = self.attentions[i]([tensor, attn_mask, None, cache, head_mask[i]], training=training) 261 attn = attn_outputs[0] ~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py in __call__(self, inputs, *args, **kwargs) 821 self._compute_dtype): --> 822 outputs = self.call(cast_inputs, *args, **kwargs) 823 self._handle_activity_regularization(inputs, outputs) ~\Anaconda3\lib\site-packages\transformers\modeling_tf_xlm.py in call(self, inputs, training) 140 """ --> 141 input, mask, kv, cache, head_mask, output_attentions = inputs 142 # Input is (bs, qlen, dim) ValueError: not enough values to unpack (expected 6, got 5) During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) <ipython-input-38-dc13d8280fd1> in <module> ----> 1 model = t.get_classifier() ~\Anaconda3\lib\site-packages\ktrain\text\preprocessor.py in 
get_classifier(self, fpath, multilabel, metrics) 997 num_labels = len(self.get_classes()) 998 mname = fpath if fpath is not None else self.model_name --> 999 model = self._load_pretrained(mname, num_labels) 1000 if multilabel: 1001 loss_fn = keras.losses.BinaryCrossentropy(from_logits=True) ~\Anaconda3\lib\site-packages\ktrain\text\preprocessor.py in _load_pretrained(self, mname, num_labels) 962 model = self.model_type.from_pretrained(mname, config=self.config, from_pt=True) 963 except: --> 964 raise ValueError('could not load pretrained model %s using both from_pt=False and from_pt=True' % (mname)) 965 else: 966 model = self.model_type.from_pretrained(mname, num_labels=num_labels) ValueError: could not load pretrained model flaubert/flaubert_base_cased using both from_pt=False and from_pt=True ``` ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: windows 10 - Python version: 3.8 - PyTorch version (No GPU): 1.0.0 - Tensorflow version (No GPU): 2.1.0 - Using GPU in script?: No - Using distributed or parallel set-up in script?: Thank you a lot for your help
07-08-2020 14:55:44
07-08-2020 14:55:44
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,605
closed
Here maybe a bug, when we load staged checkpoint
https://github.com/huggingface/transformers/blob/40d98ebf50c4662bcd6dce6395bbed0b2142ea52/src/transformers/trainer.py#L458

I met this bug when I used the setting below:

global_steps = 2748
len(train_dataloader) = 27484
gradient_accumulation_steps = 4

In the original code, "steps_trained_in_current_epoch" will be 2748! But this variable should be 2748*4 = 10,992.

The code I suggest is below:

```
epochs_trained = (self.global_step * self.args.gradient_accumulation_steps) // len(train_dataloader)
steps_trained_in_current_epoch = (self.global_step * self.args.gradient_accumulation_steps) % len(train_dataloader)
```
07-08-2020 14:51:28
07-08-2020 14:51:28
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I'm also puzzled by this. The calculations here seems incorrect.<|||||>To me these calculations are not incorrect if we take `step` as optimization steps, however `steps_trained_in_current_epoch` is wrongly used to skip training batches without considering gradient accumulation. +1 for the proposed calculation for `steps_trained_in_current_epoch` as the number of batches to be skipped.<|||||>@sgugger might be interested in this.<|||||>There is indeed a problem, but only with `steps_trained_in_current_epoch`. The `global_step` variable represents the number of optimization steps, not the number of batches seen. The variable `num_update_steps_per_epoch` take this into account so `epochs_trained` is correct. `steps_trained_in_current_epoch` represents the number of update steps to skip but is used as the number of batches to skip, so either need to multiply it by the `gradient_accumulation_steps` (and rename it for clarity) or skip `gradient_accumulation_steps` batches before subtracting 1 to it later in the loop. This also shows that we direly miss a test to check resuming training works with gradient accumulation. I can look into this when I have a bit of time, but will be fairly busy with the preparation for v4.
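A short worked sketch of the correction discussed above, using the numbers from the report: `global_step` counts optimizer updates, so the number of raw batches to skip when resuming must be scaled by `gradient_accumulation_steps`. This is illustrative arithmetic, not the actual Trainer code.

```python
# Values from the report above.
global_step = 2748                # optimizer updates already performed
gradient_accumulation_steps = 4
num_batches_per_epoch = 27484     # len(train_dataloader)

num_update_steps_per_epoch = num_batches_per_epoch // gradient_accumulation_steps  # 6871
epochs_trained = global_step // num_update_steps_per_epoch                          # 0
batches_to_skip = (global_step % num_update_steps_per_epoch) * gradient_accumulation_steps

print(epochs_trained, batches_to_skip)  # 0 10992
```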
transformers
5,604
closed
[Benchmark] TFGPT2LMHeadModel is five times slower than GPT2LMHeadModel
Here are two scripts I ran. ```python from time import time from transformers import TFGPT2LMHeadModel, GPT2Tokenizer import tensorflow as tf tokenizer = GPT2Tokenizer.from_pretrained("gpt2") model = TFGPT2LMHeadModel.from_pretrained('gpt2') text = "What are you doing after you have finished working?" generated = tokenizer.encode(text) context = tf.constant([generated]) past = None start = time() for i in range(100): output, past = model(context, past = past) logits = output[0, -1, :] tok = tf.argmax(logits) generated.append(tok.numpy()) context = tf.expand_dims(tf.expand_dims(tok, 0), 0) sequence = tokenizer.decode(generated) print(time() - start, sequence) ``` and ```python from time import time from transformers import GPT2LMHeadModel, GPT2Tokenizer import torch tokenizer = GPT2Tokenizer.from_pretrained("gpt2") model = GPT2LMHeadModel.from_pretrained('gpt2') text = "What are you doing after you have finished working?" generated = tokenizer.encode(text) context = torch.tensor([generated]) past = None start = time() for i in range(100): output, past = model(context, past=past) token = torch.argmax(output[..., -1, :]) generated += [token.tolist()] context = token.unsqueeze(0) sequence = tokenizer.decode(generated) print(time() - start, sequence) ``` On my computer with the models running on the cpu, the PyTorch version finishes in about six seconds while the TensorFlow version takes 30 seconds. So something must be wrong with the TF implementation because it shouldn't be that much slower.
07-08-2020 14:12:18
07-08-2020 14:12:18
That's probably because you're in eager mode in your TensorFlow script. You can read about eager mode [here](https://www.tensorflow.org/guide/eager). [Here's](https://docs.google.com/spreadsheets/d/1sryqufw2D0XlUH4sq3e9Wnxu5EAQkaohzrJbd5HdQ_w/edit#gid=0) a spreadsheet showcasing several model performances, you can check it out for GPT-2.<|||||>The script runs even slower in graph execution mode.<|||||>Same thing here: ```python from time import time from transformers import (TFGPT2LMHeadModel, GPT2Tokenizer, GPT2LMHeadModel, pipeline) seed = "What are you doing after you have finished working?" model = GPT2LMHeadModel.from_pretrained('gpt2') tokenizer = GPT2Tokenizer.from_pretrained("gpt2") gen = pipeline('text-generation', model = model, tokenizer = tokenizer) start = time() out = gen(seed, max_length = 100, num_return_sequences = 1) print(time() - start, out) ``` Just changing `GPT2LMHeadModel` to `TFGPT2LMHeadModel` makes the program run 5 times slower.<|||||>Oh, I see. Thanks for opening an issue, we're looking into it now.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>same question<|||||>with @gante's new TF generate method this should be much faster now no? :-)<|||||>Hi @shiyongde 👋 Yeah, we have just released a much faster TF generation. Check our blog post [here](https://huggingface.co/blog/tf-xla-generate). Note that it is not yet compatible with `pipeline` (it is in our TODO list)
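A sketch of the faster path mentioned in the last comment — compiling TF generation with XLA instead of running the decoding loop eagerly. This assumes a recent transformers/TensorFlow release with the new TF generate; the prompt, padding length, and token budget are illustrative.

```python
import tensorflow as tf
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = TFGPT2LMHeadModel.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# Compile generate once; padded inputs keep the traced shapes fixed across calls.
xla_generate = tf.function(model.generate, jit_compile=True)

inputs = tokenizer(
    ["What are you doing after you have finished working?"],
    return_tensors="tf", padding="max_length", max_length=32,
)
output_ids = xla_generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```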
transformers
5,603
closed
Update benchmark notebook
Small update
07-08-2020 13:58:38
07-08-2020 13:58:38
transformers
5,602
closed
MarianMT: "CUDA out of memory" when translating many times with the MarianMT Model
# 🐛 Bug RuntimeError('CUDA out of memory. Tried to allocate 26.00 MiB (GPU 0; 4.00 GiB total capacity; 3.03 GiB already allocated; 4.72 MiB free; 3.06 GiB reserved in total by PyTorch)') ## Information I wrote a python notebook to translate datasets using MarianMT. Therefore I wrote a function, that gets called a couple of thousand times in this translation process. The function looks like this: ``` def translate(data): batch = tok.prepare_translation_batch(data).to('cuda') gen = model.generate(**batch).to('cuda') data: List[str] = tok.batch_decode(gen, skip_special_tokens=True) return data ``` After about 1000 function calls with a size of data of about 300 words, the error occurs. I tried freeing the GPU memory with torch.cuda.empty_cache() and calling the garbage collector with gc.collect(), but nothing changes. To my understanding I need to use 'torch.no_grad()' but thats already implemented in the generate function of the model. Model I am using (Bert, XLNet ...): MarianMT Language I am using the model on (English to German): Helsinki-NLP/opus-mt-en-de ## To reproduce Steps to reproduce the behavior: 1. Translate an english text using the translate function provided below a couple of thousand times on a cuda enabled device 2. The error occurs corresponding to your gpu after some time ``` src = 'en' # source language trg = 'de' # target language mname = f'Helsinki-NLP/opus-mt-{src}-{trg}' model = MarianMTModel.from_pretrained(mname) tok = MarianTokenizer.from_pretrained(mname) model.to('cuda') def translate(data): batch = tok.prepare_translation_batch(data).to('cuda') gen = model.generate(**batch).to('cuda') data: List[str] = tok.batch_decode(gen, skip_special_tokens=True) return data ``` ## Expected behavior No Cuda out of memory error. The Cuda memory gets cleared after each translation process is done. ## Environment info - `transformers` version: 3.0.0 - Platform: Windows-10-10.0.18362-SP0 - Python version: 3.7.6 - PyTorch version (GPU?): 1.5.1 (True) - Tensorflow version (GPU?): 2.2.0 (True) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no Thank you in advance for your help, I´m struggling with this error for a while!
07-08-2020 13:26:13
07-08-2020 13:26:13
@sshleifer The documentation page said to assign you, but I can only mention you.<|||||>How big are your batches?

```python
src = 'en'  # source language
trg = 'de'  # target language
device = 'cuda'

mname = f'Helsinki-NLP/opus-mt-{src}-{trg}'
model = MarianMTModel.from_pretrained(mname).to(device).half()  # fp16 should save lots of memory
tok = MarianTokenizer.from_pretrained(mname)

translations = []
for src_text_list in chunks(data, 8):  # copy-paste the chunks fn from run_eval.py; consider wrapping in tqdm_notebook
    batch = tok.prepare_translation_batch(src_text_list).to(device)
    gen = model.generate(**batch)
    german: List[str] = tok.batch_decode(gen, skip_special_tokens=True)
    translations.extend(german)
```
<|||||>This is an example of a batch; they are all about this size. Thanks in advance for your help!

```python
['Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.',
 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?',
 'Saint Bernadette Soubirous',
 'What is in front of the Notre Dame Main Building?',
 'a copper statue of Christ',
 'The Basilica of the Sacred heart at Notre Dame is beside to which structure?',
 'the Main Building',
 'What is the Grotto at Notre Dame?',
 'a Marian place of prayer and reflection',
 'What sits on top of the Main Building at Notre Dame?',
 'a golden statue of the Virgin Mary']
```
<|||||>Did my code work? Consider passing `max_length` to `prepare_translation_batch` if it doesn't.<|||||>@sshleifer It worked 👍 I have now run this fix over 2,000 repetitions of that batch size, several times, and no error occurred. Thank you very much for your help!
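The loop above references a `chunks` helper from `run_eval.py` without showing it. As a rough sketch (an illustrative stand-in, not necessarily the exact function from that script), a batching helper with the same name and signature could look like this:

```python
from typing import Iterator, List


def chunks(lst: List[str], n: int) -> Iterator[List[str]]:
    """Yield successive n-sized slices of lst, preserving order."""
    for i in range(0, len(lst), n):
        yield lst[i:i + n]


# Example: batches of 3 sentences
list(chunks(["a", "b", "c", "d", "e"], 3))  # [['a', 'b', 'c'], ['d', 'e']]
```

Fixed-size chunks keep the number of tokens passed to each `generate` call bounded, which is what keeps peak GPU memory under control in the fix above.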
transformers
5,601
closed
Create README.md
07-08-2020 13:05:12
07-08-2020 13:05:12
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5601?src=pr&el=h1) Report > Merging [#5601](https://codecov.io/gh/huggingface/transformers/pull/5601?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f82a2a5e8e6827343322a4a9831924c5bb9bd2b2&el=desc) will **increase** coverage by `0.18%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5601/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5601?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5601 +/- ## ========================================== + Coverage 76.69% 76.88% +0.18% ========================================== Files 145 145 Lines 25351 25351 ========================================== + Hits 19444 19490 +46 + Misses 5907 5861 -46 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5601?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5601?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5601?src=pr&el=footer). Last update [f82a2a5...a3f37fa](https://codecov.io/gh/huggingface/transformers/pull/5601?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks! image link seems broken, feel free to update in a subsequent PR.