repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 5,000 | closed | Accessing scores for the entire vocabulary in GPT2 | I want to access the prediction scores from the GPT2 model.
Following the example given in the docstring, I am using the code
```
output = model(input_ids,labels=input_ids)
logit = output[1][0][-1,:].detach().numpy()
```
Is it correct? I am expecting the logit variable to be the same size as the vocabulary, with corresponding scores. | 06-15-2020 08:28:26 | 06-15-2020 08:28:26 | You would need to use `GPT2LMHeadModel` for that, and use a softmax layer to get scores. Here's an example:
```py
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
inputs = tokenizer.encode("This is just", return_tensors="pt")
output = model(inputs)
model_output = output[0]
last_token_prediction = model_output[:, -1]
last_token_softmax = torch.softmax(last_token_prediction, dim=-1).squeeze()
n = 10
top_n_values = last_token_softmax.topk(n)
for index, value in zip(top_n_values.indices, top_n_values.values):
print("Score: ", value.tolist())
print("This is just" + tokenizer.decode(index.tolist()))
``` |
transformers | 4,999 | closed | Improve ONNX logging | 06-15-2020 07:58:34 | 06-15-2020 07:58:34 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4999?src=pr&el=h1) Report
> Merging [#4999](https://codecov.io/gh/huggingface/transformers/pull/4999?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9931f817b75ecb2c8bb08b6e9d4cbec4b0933935&el=desc) will **increase** coverage by `0.31%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4999?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4999 +/- ##
==========================================
+ Coverage 76.89% 77.20% +0.31%
==========================================
Files 128 128
Lines 21854 21854
==========================================
+ Hits 16804 16872 +68
+ Misses 5050 4982 -68
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4999?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `75.85% <0.00%> (+19.75%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4999?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4999?src=pr&el=footer). Last update [9931f81...a7a5c18](https://codecov.io/gh/huggingface/transformers/pull/4999?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 4,998 | closed | Remove deprecation warning TF2.2 | `strategy.experimental_run_v2()` is deprecated in favor of `strategy.run()`
https://github.com/tensorflow/tensorflow/blob/2b96f3662bd776e277f86997659e61046b56c315/tensorflow/python/distribute/distribute_lib.py#L953-L957
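For context, a minimal sketch of what the rename amounts to in user code (illustrative only, assuming a TF 2.2 environment; the step function and batch below are made up):
```python
import tensorflow as tf

strategy = tf.distribute.get_strategy()

@tf.function
def train_step(batch):
    return tf.reduce_sum(batch)  # stand-in for a real training step

batch = tf.constant([1.0, 2.0, 3.0])
# strategy.experimental_run_v2(train_step, args=(batch,))  # deprecated in TF 2.2
result = strategy.run(train_step, args=(batch,))           # the renamed API
```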
Fix #4992 | 06-15-2020 07:45:24 | 06-15-2020 07:45:24 | The only CI message is:
```
would reformat /home/circleci/transformers/src/transformers/trainer_tf.py
Oh no! 💥 💔 💥
1 file would be reformatted, 299 files would be left unchanged.
```
How can I see what to change to fit the code style?<|||||>Thanks for the PR, but merging this will make the trainer no longer compliant with TF <= 2.1, which is not what we want yet. But we will keep it here for when we are ready to do so ;) |
transformers | 4,997 | closed | Fix importing transformers on Windows - SIGKILL not defined | On the Windows platform there is no `signal.SIGKILL`; we need to send `CTRL_C_EVENT`, which is `Ctrl+C`.
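A minimal sketch of the idea (the `STOP_SIGNAL` name is hypothetical and not taken from this PR):
```python
import signal
import sys

# Pick a signal that actually exists on the current platform instead of
# referencing signal.SIGKILL unconditionally (it is undefined on Windows).
if sys.platform == "win32":
    STOP_SIGNAL = signal.CTRL_C_EVENT  # the Ctrl+C event on Windows
else:
    STOP_SIGNAL = signal.SIGKILL       # available on Unix-like systems
```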
This PR aims at:
1. Not importing the `signal.SIGKILL` on Windows platforms as it's undefined
2. Using the right `signal.CTRL_C_EVENT` on Windows platforms and `signal.SIGKILL` on Unix*. | 06-15-2020 07:29:23 | 06-15-2020 07:29:23 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4997?src=pr&el=h1) Report
> Merging [#4997](https://codecov.io/gh/huggingface/transformers/pull/4997?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9931f817b75ecb2c8bb08b6e9d4cbec4b0933935&el=desc) will **increase** coverage by `0.72%`.
> The diff coverage is `66.66%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4997?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4997 +/- ##
==========================================
+ Coverage 76.89% 77.61% +0.72%
==========================================
Files 128 128
Lines 21854 21856 +2
==========================================
+ Hits 16804 16964 +160
+ Misses 5050 4892 -158
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4997?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4997/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `72.96% <66.66%> (-0.14%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4997/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.71% <0.00%> (-1.56%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4997/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `75.85% <0.00%> (+19.75%)` | :arrow_up: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4997/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4997?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4997?src=pr&el=footer). Last update [9931f81...d6f30c5](https://codecov.io/gh/huggingface/transformers/pull/4997?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Great, thanks @mfuntowicz |
transformers | 4,996 | closed | ❓ How to use TFTrainer on TPU ? Unable to destroy remote tensor handles | # ❓ Questions & Help
I'm trying to train my model on TPU using `TFTrainer`. Training starts fine, but after a few training steps, I'm getting this error:
> Unable to destroy remote tensor handles. If you are running a tf.function, it usually indicates some op in the graph gets an error: 4 root error(s) found.
I don't know what I am doing wrong, any help is greatly appreciated.
---
Here is the full stacktrace :
```
2020/06/15 05:14:31 - INFO - transformers.trainer_tf - Epoch 2 Step 1500 Train Loss 3.0905
2020/06/15 05:17:33 - INFO - transformers.trainer_tf - Epoch 2 Step 2000 Train Loss 2.8351
2020/06/15 05:20:35 - INFO - transformers.trainer_tf - Epoch 2 Step 2500 Train Loss 3.1043
2020-06-15 05:22:53.919745: W tensorflow/core/distributed_runtime/eager/remote_tensor_handle_data.cc:76] Unable to destroy remote tensor handles. If you are running a tf.function, it usually indicates some op in the graph gets an error: 4 root error(s) found.
(0) Cancelled: {{function_node __inference__accumulate_next_440382}} Function was cancelled before it was started
(1) Cancelled: {{function_node __inference__accumulate_next_440382}} Function was cancelled before it was started
(2) Cancelled: {{function_node __inference__accumulate_next_440382}} Function was cancelled before it was started
(3) Out of range: {{function_node __inference__accumulate_next_440382}} End of sequence
[[{{node IteratorGetNext_6}}]]
0 successful operations.
5 derived errors ignored.
Traceback (most recent call last):
File "train.py", line 136, in <module>
main()
File "train.py", line 112, in main
trainer.train()
File "/home/me/.venv/x/lib/python3.6/site-packages/transformers/trainer_tf.py", line 274, in train
for training_loss in self._training_steps(train_ds, optimizer):
File "/home/me/.venv/x/lib/python3.6/site-packages/transformers/trainer_tf.py", line 319, in _training_steps
self._apply_gradients(optimizer)
File "/home/me/.venv/x/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
result = self._call(*args, **kwds)
File "/home/me/.venv/x/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 611, in _call
return self._stateless_fn(*args, **kwds) # pylint: disable=not-callable
File "/home/me/.venv/x/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2420, in __call__
return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
File "/home/me/.venv/x/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1665, in _filtered_call
self.captured_inputs)
File "/home/me/.venv/x/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1746, in _call_flat
ctx, args, cancellation_manager=cancellation_manager))
File "/home/me/.venv/x/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 598, in call
ctx=ctx)
File "/home/me/.venv/x/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.OutOfRangeError: 4 root error(s) found.
(0) Cancelled: {{function_node __inference__accumulate_next_440382}} Function was cancelled before it was started
(1) Cancelled: {{function_node __inference__accumulate_next_440382}} Function was cancelled before it was started
(2) Cancelled: {{function_node __inference__accumulate_next_440382}} Function was cancelled before it was started
(3) Out of range: {{function_node __inference__accumulate_next_440382}} End of sequence
[[{{node IteratorGetNext_6}}]]
0 successful operations.
5 derived errors ignored. [Op:__inference__apply_gradients_373775]
``` | 06-15-2020 05:45:32 | 06-15-2020 05:45:32 | It seems to be related to the size of the dataset: if I use `--training_steps` instead of `--num_epochs`, I don't get the error.
---
But a similar error appears at evaluation time:
```
tensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node __inference__evaluate_steps_517418}} Compilation failure: Output shapes of then and else branches do not match: (pred[1]) vs. (pred[])
[[{{node lossed_bart/model/decoder/cond}}]]
TPU compilation failed
[[tpu_compile_succeeded_assert/_17738495357405513889/_9]]
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,995 | closed | append keyword arguments to the output | The prepare_inputs_for_generation function receives keyword arguments (model_specific_kwargs).
However, these arguments are currently ignored.
I fixed this code to return the arguments, including these keyword arguments.
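A hypothetical before/after illustration of the change described here (a simplified sketch, not the actual diff):
```python
def prepare_inputs_for_generation(self, input_ids, **model_specific_kwargs):
    # Before: the extra keyword arguments were silently dropped, e.g.
    #     return {"input_ids": input_ids}
    # After: they are included in the returned inputs.
    return {"input_ids": input_ids, **model_specific_kwargs}
```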
Actually, I don't have an idea about the purpose of prepare_logits_for_generation function, which only returns the logit values. | 06-15-2020 05:34:00 | 06-15-2020 05:34:00 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,994 | closed | 🐛 TFTrainer not working on TPU (TF2.2) | # 🐛 Bug
## Information
The problem arises when using:
* [ ] the official example scripts
* [x] my own modified scripts
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: CNN/DM
* [ ] my own task or dataset
## To reproduce
Steps to reproduce the behavior:
1. Install `transformers` from `master`
2. Run TPU training using `TFTrainer`
I get the following error :
>TypeError: Failed to convert object of type <class 'transformers.optimization_tf.AdamWeightDecay'> to Tensor. Contents: <transformers.optimization_tf.AdamWeightDecay object at 0x7faddc7cfe80>. Consider casting elements to a supported type.
---
Here :
https://github.com/huggingface/transformers/blob/9931f817b75ecb2c8bb08b6e9d4cbec4b0933935/src/transformers/trainer_tf.py#L324
we pass `optimizer` as an argument.
But according to the documentation in TF:
https://github.com/tensorflow/tensorflow/blob/2b96f3662bd776e277f86997659e61046b56c315/tensorflow/python/distribute/distribute_lib.py#L890-L891
>All arguments in `args` or `kwargs` should either be nest of tensors or
`tf.distribute.DistributedValues` containing tensors or composite tensors.
## Environment info
- `transformers` version: 2.11.0
- Platform: Linux-4.9.0-9-amd64-x86_64-with-debian-9.12
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: TPU training
| 06-15-2020 03:09:14 | 06-15-2020 03:09:14 | Currently, as a work-around, I set the optimizer as an attribute and remove the argument:
After this line:
https://github.com/huggingface/transformers/blob/9931f817b75ecb2c8bb08b6e9d4cbec4b0933935/src/transformers/trainer_tf.py#L235
I add:
```python
self.optimizer = optimizer
```
And replace the `optimizer` argument:
https://github.com/huggingface/transformers/blob/9931f817b75ecb2c8bb08b6e9d4cbec4b0933935/src/transformers/trainer_tf.py#L326-L335
```python
def _step(self):
"""Applies gradients and resets accumulation."""
gradient_scale = self.gradient_accumulator.step * self.args.strategy.num_replicas_in_sync
gradients = [
gradient / tf.cast(gradient_scale, gradient.dtype) for gradient in self.gradient_accumulator.gradients
]
gradients = [(tf.clip_by_value(grad, -self.args.max_grad_norm, self.args.max_grad_norm)) for grad in gradients]
self.optimizer.apply_gradients(list(zip(gradients, self.model.trainable_variables)))
self.gradient_accumulator.reset()
```
And finally replace the call:
https://github.com/huggingface/transformers/blob/9931f817b75ecb2c8bb08b6e9d4cbec4b0933935/src/transformers/trainer_tf.py#L324
```python
self.args.strategy.experimental_run_v2(self._step)
```
---
Not closing as it's only a work-around. Any cleaner solution to put in a PR?<|||||>Hello!
Nice finding! TPU support in the TF Trainer is currently under development and does not work in several cases. If you really need to train your model on TPUs, I suggest you use the PyTorch version of the trainer. Full TPU support for the TF Trainer will, I hope, arrive this month.
But if you are ready to make PRs, you are welcome to do so :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,993 | closed | Importing transformers causes segmentation fault when setting cuda device | # 🐛 Bug
## Information
The problem arises when using:
my own modified scripts: (give details below)
## To reproduce
```
import torch
import transformers
def main(local_rank):
torch.cuda.set_device(local_rank)
device = torch.device('cuda', local_rank)
if __name__ == "__main__":
print (torch.__version__)
print (transformers.__version__)
print (torch.cuda.is_available())
main(0)
```
## Expected behavior
```
1.4.0
2.11.0
True
Segmentation fault (core dumped)
```
if commenting out `import transformers`, everything will be fine.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: linux
- Python version: 3.6.8
- PyTorch version (GPU?): 1.4.0 GPU
- Tensorflow version (GPU?): None
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 06-15-2020 03:02:35 | 06-15-2020 03:02:35 | Hey @jcyk,
when trying to reproduce this error with PyTorch 1.5.0, there is no problem.
However, when I run your code with PyTorch 1.4.0 (as you did), I get the following error:
```python
1.4.0
2.11.0
False
Traceback (most recent call last):
File "./bug_4993.py", line 16, in <module>
main(0)
File "./bug_4993.py", line 8, in main
torch.cuda.set_device(local_rank)
File "/home/patrick/anaconda3/envs/pytorch_1_4/lib/python3.8/site-packages/torch/cuda/__init__.py", line 292, in set_device
torch._C._cuda_setDevice(device)
AttributeError: module 'torch._C' has no attribute '_cuda_setDevice'
```
, which is not related to `transformers`.
Also when going to PyTorch 1.4 documentation: https://pytorch.org/docs/1.4.0/cuda.html#torch.cuda.set_device
You can see that `set_device` is not recommended and that you should use https://pytorch.org/docs/1.4.0/cuda.html#torch.cuda.device instead.
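For reference, a small sketch of the context-manager form (a hedged addition, not part of the original reply):
```python
import torch

local_rank = 0
if torch.cuda.is_available():
    # Select the device for the enclosed block instead of calling
    # torch.cuda.set_device(local_rank) globally.
    with torch.cuda.device(local_rank):
        x = torch.tensor([1, 2, 3], device="cuda")
```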
Could you try using this function instead and see what happens? Also, it's very hard to trace back
`Segmentation fault (core dumped)` errors. Can you try getting a more explicit error message?<|||||>hi @patrickvonplaten , thanks for your reply.
Please notice that if I remove the line `import transformers`, the problem will disappear. That is why I suspect there is a problem with `transformers`. Please see the following two examples.
code0
```
import torch
#import transformers
def main(local_rank):
device = torch.device('cuda', local_rank)
x = torch.tensor([1,2,3], device=device)
if __name__ == "__main__":
print (torch.__version__)
#print (transformers.__version__)
print (torch.cuda.is_available())
main(0)
```
output0
```
1.4.0
True
```
code1
```
import torch
import transformers
def main(local_rank):
device = torch.device('cuda', local_rank)
x = torch.tensor([1,2,3], device=device)
if __name__ == "__main__":
print (torch.__version__)
print (transformers.__version__)
print (torch.cuda.is_available())
main(0)
```
output1
```
1.4.0
2.11.0
True
Segmentation fault (core dumped)
```
<|||||>I am experiencing this exact same problem, and updating to pytorch 1.5 is not an option. did you have any success figuring this out?<|||||>EDIT: This problem is caused by the sentencepiece dependency. It goes away if I comment out all references to this dependency. This will break `xlnet`, `xlm_roberta`, `marian`, `t5`, `albert`, `reformer`, and `camembert`, but if you are using any of the non-sentencepiece models, this should solve your problem.<|||||>@daphnei thx for pointing out this!
The solution for me was to upgrade torch to `1.5.1+cu92`, and downgrade transformers version to `2.6.0.`
Quite weird problem!
<|||||>That seems like a better fix than my hack! Unfortunately, I'm using a managed machine which doesn't have the CUDA version to support Torch 1.5. <|||||>> @daphnei thx for pointing out this!
> The solution for me was to upgrade torch to `1.5.1+cu92`, and downgrade transformers version to `2.6.0.`
> Quite weird problem!
I have met exactly the same problem as you. Did you find the root cause? Is this issue the same as the report 'fix segmentation fault' [#2207](https://github.com/huggingface/transformers/pull/2207)?
The following were my test results with different versions:
1. Segmentation fault (core dumped): torch 1.2.0 ; transformers 2.11.0;cuda 10.0
2. Segmentation fault (core dumped): torch 1.2.0 ; transformers 2.6.0;cuda 10.0
3. Segmentation fault (core dumped): torch 1.2.0 ; transformers 2.5.1;cuda 10.0
4. Success: torch 1.1.0 ; transformers 2.5.1;cuda 10.0
BTW, is there a torch release of ‘1.5.1+cuda10.0’?
Thanks.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>1.5.1+cu10.2 is okay, but my device is a Tesla K40m, which does not support higher PyTorch versions. Here is the error: `RuntimeError: CUDA error: no kernel image is available for execution on the device`. Here is my test:
- pytorch==1.2.0~1.3.0
It works well on the Tesla K40m, but running `python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I hate you'))"` responds with Segmentation fault (core dumped)
- pytorch==1.5.0
There is an error for `import torch; a = torch.Tensor(5,3); a = a.cuda(); a`: RuntimeError: CUDA error: no kernel image is available for execution on the device
My Env is :
> NVIDIA-SMI 450.51.05 Driver Version: 450.51.05 CUDA Version: 11.0<|||||>I solved this issue!
I've outlined two solutions:
- Update your torch, such as from 1.2.0 to 1.7
- Reduce the version of the associated package, such as `sentencepiece`: from 0.1.94 to 0.1.91, and delete `dataclasses`
I tried both solutions above and they both work! Because for some reason I cannot upgrade the CUDA version to support torch 1.7, I use the second solution :)
My Env is :
`transformers 3.5.0
python 3.7
CUDA 10.1
pytorch 1.2.0`
|
transformers | 4,992 | closed | [TFTrainer] Tensorflow Warning : experimental_run_v2 is deprecated | # 🐛 Bug
When running my code with TFTrainer, I'm receiving:
>WARNING - tensorflow - From /home/me/.venv/bart/lib/python3.6/site-packages/transformers/trainer_tf.py:355: StrategyBase.experimental_run_v2 (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
renamed to `run`
Is this expected? How can I remove this warning?
## Information
The problem arises when using:
* [ ] the official example scripts
* [x] my own modified scripts
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: CNN/DM
* [ ] my own task or dataset
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Linux-4.9.0-9-amd64-x86_64-with-debian-9.12
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Using TPU training
| 06-15-2020 00:43:46 | 06-15-2020 00:43:46 | closes by #4998 |
transformers | 4,991 | closed | evaluating with trainer.py with TPU results in sudden RAM spike and crash | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Roberta with the official how to train from scratch example
Language I am using the model on (English, Chinese ...): Esperanto
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. follow the official how to train with esperanto example
2. define a second dataset for testing, both datasets are TextDatasets, the train set is around 300MB, the test set around 3MB
3. run trainer.train() with evaluate_during_training
The evaluation loop runs a few examples before crashing the colab session due to an unknown reason. I have managed to increase the number of examples from 2 to 6 by reducing the per_device_eval_batch_size. My eval dataset only contains 31 examples. When checking the colab logs after crashing, it seems that trainer.py tried to allocate 4GB of memory before crashing, which seems unrealistic to me given the size of the datasets.
Specifically (I think this is the relevant part, please correct me if I'm wrong):
2020-06-14 21:45:53.187694: E tensorflow/compiler/xla/xla_client/xla_util.cc:76] (1) Resource exhausted: Attempting to reserve 2.78G at the bottom of memory. That was not possible. There are 2.75G free, 0B reserved, and 2.75G reservable.
When switching to GPU, I encounter a similar issue, but this time with a complete stack trace because this doesn't crash the runtime:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-14-10e96b9a36a3> in <module>()
----> 1 trainer.train()
2 trainer.save_model("./drive/My Drive/models/roberta/output")
2 frames
<ipython-input-11-34c6e8e4c40d> in train(self, model_path)
507
508 if self.args.evaluate_during_training:
--> 509 self.evaluate()
510
511 if self.args.save_steps > 0 and self.global_step % self.args.save_steps == 0:
<ipython-input-11-34c6e8e4c40d> in evaluate(self, eval_dataset, prediction_loss_only)
706 eval_dataloader = self.get_eval_dataloader(eval_dataset)
707 print(2)
--> 708 output = self._prediction_loop(eval_dataloader, description="Evaluation")
709 print(3)
710 self._log(output.metrics)
<ipython-input-11-34c6e8e4c40d> in _prediction_loop(self, dataloader, description, prediction_loss_only)
775 preds = logits.detach()
776 else:
--> 777 preds = torch.cat((preds, logits.detach()), dim=0)
778 if inputs.get("labels") is not None:
779 if label_ids is None:
RuntimeError: CUDA out of memory. Tried to allocate 6.75 GiB (GPU 0; 15.90 GiB total capacity; 7.99 GiB already allocated; 6.40 GiB free; 8.81 GiB reserved in total by PyTorch)
## Expected behavior
I expected the eval loss or error to be computed.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: colab
- Python version: 3.6
- Using GPU in script?: no, TPU with xla
- Using distributed or parallel set-up in script?: in theory, but only 1 TPU core available
| 06-14-2020 21:54:31 | 06-14-2020 21:54:31 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Any update on this? I've got a similar issue |
transformers | 4,990 | closed | Save the Dataset for training GPT2 | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
I am trying to train GPT2 on a custom dataset. The training text file is over 100GB, and creating the Dataset for training is taking too long, so I was hoping there is a way to save the dataset for later use.
I am using hugging face blog post on how to train transformers from scratch as a guide (https://huggingface.co/blog/how-to-train).
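For what it's worth, a hedged sketch of one possible caching approach (not from this thread; it assumes the dataset exposes its tokenized examples as a plain list, as `TextDataset.examples` does):
```python
import os
import torch

def load_or_cache_examples(dataset, cache_path="cached_examples.pt"):
    # Save the tokenized examples once, then reload them on later runs
    # instead of re-tokenizing the huge text file from scratch.
    if os.path.exists(cache_path):
        return torch.load(cache_path)
    examples = dataset.examples  # built slowly on the first run
    torch.save(examples, cache_path)
    return examples
```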
Thank You | 06-14-2020 19:43:54 | 06-14-2020 19:43:54 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,989 | closed | attention | 06-14-2020 17:01:14 | 06-14-2020 17:01:14 | ||
transformers | 4,988 | closed | error while instantiating model | Hi,
For my experiments, I need to make some changes to the forward pass of Roberta sequence classification model. Thus, I copied **RobertaForSequenceClassification** from _modeling_roberta.py_ into a [separate file](https://github.com/kevinghst/mixmatch_from_scratch/blob/master/models_roberta.py) in my repo.
And I try to instantiate it in my _main.py_ like this:
```
model = RobertaForSequenceClassification.from_pretrained(
'roberta-base',
num_labels = NUM_LABELS[cfg.task],
)
```
However, it gives me the following stacktrace:
----------------------------------------------------------------------------------------------------------
Traceback (most recent call last):
File "/scratch/wz1232/anaconda3/envs/mixmatch/lib/python3.6/site-packages/torch/serialization.py", line 186, in _check_seekable
f.seek(f.tell())
AttributeError: 'NoneType' object has no attribute 'seek'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/scratch/wz1232/anaconda3/envs/mixmatch/lib/python3.6/site-packages/transformers/modeling_utils.py", line 516, in from_pretrained
state_dict = torch.load(resolved_archive_file, map_location="cpu")
File "/scratch/wz1232/anaconda3/envs/mixmatch/lib/python3.6/site-packages/torch/serialization.py", line 368, in load
return _load(f, map_location, pickle_module)
File "/scratch/wz1232/anaconda3/envs/mixmatch/lib/python3.6/site-packages/torch/serialization.py", line 517, in _load
_check_seekable(f)
File "/scratch/wz1232/anaconda3/envs/mixmatch/lib/python3.6/site-packages/torch/serialization.py", line 189, in _check_seekable
raise_err_msg(["seek", "tell"], e)
File "/scratch/wz1232/anaconda3/envs/mixmatch/lib/python3.6/site-packages/torch/serialization.py", line 182, in raise_err_msg
raise type(e)(msg)
AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "main.py", line 208, in <module>
num_labels = NUM_LABELS[cfg.task],
File "/scratch/wz1232/anaconda3/envs/mixmatch/lib/python3.6/site-packages/transformers/modeling_utils.py", line 519, in from_pretrained
"Unable to load weights from pytorch checkpoint file. "
OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
-----------------------------------------------------------------------------------------------------------
It doesn't occur when I import **RobertaForSequenceClassification** straight from the library, as opposed to from my own file containing the copied code. Why is this? | 06-14-2020 16:07:04 | 06-14-2020 16:07:04 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,987 | closed | Fix Inconsistent NER Grouping (Pipeline) | ### This PR solves issue #4816 by:
1. Applying entity grouping to similar entity types with different prefixes (i.e. `B` and `I`)
2. Ensuring that separate entities at the last filtered index are no longer excluded from grouping.
Running the sample script below (based on reference issue #4816) returns the expected results. Do note that the `entity_group` is based on the `entity_type` of the first entity in the group.
```
from transformers import pipeline
NER_MODEL = "mrm8488/bert-spanish-cased-finetuned-ner"
nlp_ner = pipeline("ner", model=NER_MODEL,
grouped_entities=True,
tokenizer=(NER_MODEL, {"use_fast": False}))
t = """Consuelo Araújo Noguera, ministra de cultura del presidente Andrés Pastrana (1998.2002) fue asesinada por las Farc luego de haber permanecido secuestrada por algunos meses."""
nlp_ner(t)
[{'entity_group': 'B-PER', 'score': 0.9710702640669686, 'word': 'Consuelo Araújo Noguera'},
{'entity_group': 'B-PER', 'score': 0.9997273534536362, 'word': 'Andrés Pastrana'},
{'entity_group': 'B-ORG', 'score': 0.8589080572128296, 'word': 'Farc'}]
```
I also ran another test to ensure that number 2 (separate entity at the last index) is working properly. I confirmed that it is working properly now.
```
nlp = pipeline('ner', grouped_entities=False)
nlp("Enzo works at the the UN")
[{'entity': 'I-PER', 'index': 1, 'score': 0.9968166351318359, 'word': 'En'},
{'entity': 'I-PER', 'index': 2, 'score': 0.9957635998725891, 'word': '##zo'},
{'entity': 'I-ORG', 'index': 7, 'score': 0.9986497163772583, 'word': 'UN'}]
nlp2 = pipeline('ner', grouped_entities=True)
nlp2("Enzo works at the the UN")
[{'entity_group': 'I-PER', 'score': 0.9962901175022125, 'word': 'Enzo'},
{'entity_group': 'I-ORG', 'score': 0.9986497163772583, 'word': 'UN'}]
```
You can test these out yourself in this colab [notebook](https://colab.research.google.com/drive/1D0xK7MSOQcxOCAe8hpnpFVqdoKrmejSS?usp=sharing).
cc @dav009 @mfuntowicz | 06-14-2020 14:07:40 | 06-14-2020 14:07:40 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4987?src=pr&el=h1) Report
> Merging [#4987](https://codecov.io/gh/huggingface/transformers/pull/4987?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `1.40%`.
> The diff coverage is `91.66%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4987?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4987 +/- ##
==========================================
- Coverage 77.83% 76.43% -1.41%
==========================================
Files 141 141
Lines 24634 24638 +4
==========================================
- Hits 19175 18832 -343
- Misses 5459 5806 +347
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4987?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.16% <91.66%> (+0.16%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.62% <0.00%> (-73.11%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `85.75% <0.00%> (-7.85%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.02% <0.00%> (-2.18%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.30% <0.00%> (-1.54%)` | :arrow_down: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.86% <0.00%> (-0.46%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.55% <0.00%> (-0.41%)` | :arrow_down: |
| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4987?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4987?src=pr&el=footer). Last update [58cca47...05f50d9](https://codecov.io/gh/huggingface/transformers/pull/4987?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Looks great, but this sort of code/feature also looks like a perfect candidate for more unit-testing coverage.
What do you think?<|||||>Agree, can add these as test cases in [test_pipelines](https://github.com/huggingface/transformers/blob/master/tests/test_pipelines.py).<|||||>That would be great @enzoampil!<|||||>@julien-c I've added the original issue bug as a test case (Number 1 in the original post). Do note that I only included it in the `torch` version because `mrm8488/bert-spanish-cased-finetuned-ner` seems to only work for torch. Please let me know if this is enough for this PR.
For future PRs to add new test cases coming from issues found on top of this (e.g. those from issue #5077), I was hoping to get some guidance on how we'd include them to the test coverage without making it too heavy. For context, different cases are typically based on different models, which means we'll have to run separate models to add them as test cases.<|||||>I think we should try to make the tests more unitary, meaning that for instance you would feed them fixed model outputs (no actual forward pass) and check that the actual formatted output is correct.
This might require splitting the call method in smaller more testable functions, which is totally fine IMO.<|||||>I see what you mean. Yes, that makes more sense than running different models. Will work on this.<|||||>@julien-c @LysandreJik I've performed the following adjustments to the PR:
1. **I've separated the `group_entities` function from the raw NER forward pass altogether so that it's easy to run tests that feed fixed model outputs and check that the actual formatted output is correct.**
`group_entities` now takes as an argument a list[dict] of raw NER model outputs, and converts them to the *grouped* equivalent.
2. **I've added a new `NerPipelineTests` class in `test_pipelines` which contains all the NER related tests, and includes new tests for the `group_entities` function.**
The test simply confirms that the expected formatted output (grouped) is equivalent to the actual formatted output given the raw model outputs. For the test cases, I used the two samples from the original PR post. It should be straightforward to continue adding test cases moving forward, for example:
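To illustrate the shape of such a fixed-input test (hypothetical values, not the actual test code):
```python
# Raw, ungrouped pipeline output fed directly to group_entities:
raw_outputs = [
    {"entity": "I-PER", "index": 1, "score": 0.996, "word": "En"},
    {"entity": "I-PER", "index": 2, "score": 0.995, "word": "##zo"},
    {"entity": "I-ORG", "index": 7, "score": 0.998, "word": "UN"},
]
# Expected grouped result to compare against:
expected_grouped = [
    {"entity_group": "I-PER", "score": 0.9955, "word": "Enzo"},
    {"entity_group": "I-ORG", "score": 0.998, "word": "UN"},
]
# e.g. self.assertEqual(nlp.group_entities(raw_outputs), expected_grouped)
```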
Please do let me know what you guys think! :smile:<|||||>Yes, looks good. I would add some typings to (at least) the `group_entities` and `group_sub_entities` but we can do that in a subsequent PR.<|||||>@LysandreJik @julien-c Thanks for the feedback. I've added typings for the `group_entities` and `group_sub_entities` functions :smile: |
transformers | 4,986 | closed | BertTokenizer: ValueError: Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers. | # 🐛 Bug
## Information
Tokenizer I am using is BertTokenizer and I've also tried using AlbertTokenizer, but it does not have any effect. So I'm thinking that the bug is in the base tokenizer
Language I am using the model on is English, but I don't believe that's the issue.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Version: ```transformers==2.11.0```
2. Run this code
```python
from transformers import BertModel, BertTokenizer
text = 'A quick brown fox jumps over' # Just a dummy text
BertTokenizer.encode_plus(
text.split(' '),
None,
add_special_tokens = True,
max_length = 512)
```
3. This should be the error
```
Traceback (most recent call last):
File "classification.py", line 23, in <module>
max_length = 512)
File "D:\Programmering\Python\lib\site-packages\transformers\tokenization_utils.py", line 1576, in encode_plus
first_ids = get_input_ids(text)
File "D:\Programmering\Python\lib\site-packages\transformers\tokenization_utils.py", line 1556, in get_input_ids
"Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers."
ValueError: Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.
```
And yes, I've tried just inputting a string, and I still got the same error.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I want the encode_plus function to return an encoded version of the input sequence.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Windows
- Python version: 3.7.4
- PyTorch version (GPU?): 1.5.0+cpu
- Tensorflow version (GPU?): (Not used)
- Using GPU in script?: Nope
- Using distributed or parallel set-up in script?: No
| 06-14-2020 12:33:57 | 06-14-2020 12:33:57 | The mistake is on me. I forgot to download the tokenizer😂<|||||>I am getting the same error. What exactly do you mean by download the tokenizer? Doesn't it come with the transformers package?<|||||>I think what he meant was that he used the class, and not the instance, to encode text. You should always initialize the class:
```py
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
# or
tokenizer = BertTokenizer(vocabfile)
# now you can encode
text = 'A quick brown fox jumps over' # Just a dummy text
model_inputs = tokenizer.encode_plus(text)
```<|||||>@LysandreJik, while I have you. I know this ain't the right place to ask you, but.
I’ve seen that you’re about to release the Electra modeling for question answering, and I’ve written a small script for training the electra discriminator for question answering, and I’m about to train the model.
so Would it be useful for you if I trained the model, or are you already doing that?<|||||>Hi @mariusjohan, we welcome all models here :) The [hub](https://huggingface.co/models) is a very easy way to share models. The way you're training it will surely be different to other trainings, so sharing it on the hub with details of how you trained it is always welcome!<|||||>Ok, this is still not working for me. I am running the run_squad.py script and I keep getting the error.
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/kriviv-10T/transformers/transformers/src/transformers/data/processors/squad.py", line 142, in squad_convert_example_to_features
return_token_type_ids=True,
File "/kriviv-10T/transformers/transformers/src/transformers/tokenization_utils_base.py", line 1521, in encode_plus
**kwargs,
File "/kriviv-10T/transformers/transformers/src/transformers/tokenization_utils.py", line 356, in _encode_plus
second_ids = get_input_ids(text_pair) if text_pair is not None else None
File "/kriviv-10T/transformers/transformers/src/transformers/tokenization_utils.py", line 343, in get_input_ids
f"Input {text} is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers."
ValueError: Input [] is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.<|||||>The reason I got the error, was because I forgot to initialize the tokenize module, and therefore it thinks the self argument is the input_ids and then you’re not giving it the real input_ids argument. And ofc, the system was way complex than the example I gave, so maybe try to check how the tokenization module is giving. Maybe also check your inputs and so on if you haven’t already. Sadly I can first fix it in a few hours.<|||||>@vkrishnamurthy11 Did it help?<|||||>I'm still facing the same issue:
ValueError: Input [] is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.
While trying to run run_squad.py. I'm trying to train and test it with:
https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json
https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json
<|||||>Facing the same issue as Sarang when training on Squad using run_squad.py . Is this a known bug?<|||||>Same issue here when running squad_convert_examples_to_features in my own code. <|||||>I don't know if it helps, but the reason was because I failed to use _**.from_pretrained**_ function. Maybe check for that. So maybe print out the **_self_** argument<|||||>I resolved this issue by updating all packages I was using for training to the newest version. In my experience you need to have:
1.9.0 - torch
0.10.0 - torchtext
4.11.3 - transformers
or newer...
PS: You can check the version you are currently using with:
print(torch.__version__)
print(torchtext.__version__)
print(transformers.__version__)<|||||>> I resolved this issue by updating all packages I was using for training to the newest version. In my experience you need to have: 1.9.0 - torch 0.10.0 - torchtext 4.11.3 - transformers or newer...
>
> PS: You can check the version you are currently using with: print(torch.__version__) print(torchtext.__version__) print(transformers.__version__)
```python
import torch
import torchtext
import transformers
import numpy as np
import os
import collections

os.makedirs('./data', exist_ok=True)
train_dataset, test_dataset = torchtext.datasets.AG_NEWS(root='./data')
classes = ['World', 'Sports', 'Business', 'Sci/Tech']
train_dataset = list(train_dataset)
test_dataset = list(test_dataset)

bert_model = 'bert-base-uncased'
tokenizer = transformers.BertTokenizer.from_pretrained(bert_model)

MAX_SEQ_LEN = 128
PAD_INDEX = tokenizer.convert_tokens_to_ids(tokenizer.pad_token)
UNK_INDEX = tokenizer.convert_tokens_to_ids(tokenizer.unk_token)

def pad_bert(b):
    # b is the list of tuples of length batch_size
    #   - first element of a tuple = label,
    #   - second = feature (text sequence)
    # build vectorized sequence
    v = [tokenizer.encode(x[1]) for x in b]
    # compute max length of a sequence in this minibatch
    l = max(map(len, v))
    return (  # tuple of two tensors - labels and features
        torch.LongTensor([t[0] for t in b]),
        torch.stack([torch.nn.functional.pad(torch.tensor(t), (0, l - len(t)), mode='constant', value=0) for t in v])
    )

train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=8, collate_fn=pad_bert, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=8, collate_fn=pad_bert)

model = transformers.BertForSequenceClassification.from_pretrained(bert_model, num_labels=4)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)

report_freq = 50
iterations = 500  # make this larger to train for longer time!

model.train()
i, c = 0, 0
acc_loss = 0
acc_acc = 0
for labels, texts in train_loader:
    labels = labels - 1  # get labels in the range 0-3
    texts = texts
    loss, out = model(texts, labels=labels)[:2]
    labs = out.argmax(dim=1)
    acc = torch.mean((labs == labels).type(torch.float32))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    acc_loss += loss
    acc_acc += acc
    i += 1
    c += 1
    if i % report_freq == 0:
        print(f"Loss = {acc_loss.item() / c}, Accuracy = {acc_acc.item() / c}")
        c = 0
        acc_loss = 0
        acc_acc = 0
    iterations -= 1
    if not iterations:
        break

model.eval()
iterations = 100
acc = 0
i = 0
for labels, texts in test_loader:
    labels = labels - 1
    texts = texts
    _, out = model(texts, labels=labels)[:2]
    labs = out.argmax(dim=1)
    acc += torch.mean((labs == labels).type(torch.float32))
    i += 1
    if i > iterations: break

print(f"Final accuracy: {acc.item() / i}")
```
<|||||>> Ok, this is still not working for me. I am running the run_squad.py script and I keep getting the error.
>
> Traceback (most recent call last): File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/multiprocessing/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar return list(map(*args)) File "/kriviv-10T/transformers/transformers/src/transformers/data/processors/squad.py", line 142, in squad_convert_example_to_features return_token_type_ids=True, File "/kriviv-10T/transformers/transformers/src/transformers/tokenization_utils_base.py", line 1521, in encode_plus **kwargs, File "/kriviv-10T/transformers/transformers/src/transformers/tokenization_utils.py", line 356, in _encode_plus second_ids = get_input_ids(text_pair) if text_pair is not None else None File "/kriviv-10T/transformers/transformers/src/transformers/tokenization_utils.py", line 343, in get_input_ids f"Input {text} is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers." ValueError: Input [] is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.
Check whether your input is empty. I got similar errors and found that when passing an empty string to the tokenizer, you will get this error.<|||||>Hi @mariusjohan, where is this file? `vocabfile`<|||||>IS THERE A RESOLVE??!
|
transformers | 4,985 | closed | Word Embedding input to GPT-2 | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
Is there any way in which we can give custom word embeddings as input to GPT-2 instead of tokenized words?
| 06-14-2020 11:55:58 | 06-14-2020 11:55:58 | You can pass embeddings to the model by using the `input_embeds` keyword argument:
```
result = model(input_embeds=...)
```
They should be of shape `(batch_size, sequence_length, hidden_size)`.<|||||>Hi @sgugger
I tried the following code snippet but it is giving an error
<pre>
from transformers import GPT2Tokenizer, GPT2LMHeadModel
modelGPT = GPT2LMHeadModel.from_pretrained('gpt2')
modelGPT(input_embeds=out[2][-1]) #where out[2][-1] is of shape [64, 60, 768] --> [batch_size, sequence_length, hidden_size]
</pre>
Error:
<pre>
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-81-40a402b51d0d> in <module>()
----> 3 modelGPT(input_embeds=out[2][-1])
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
TypeError: forward() got an unexpected keyword argument 'input_embeds'
</pre><|||||>Sorry I made a typo, I meant `inputs_embeds` (with an s).<|||||>Worked like a charm! Thanks |
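For completeness, a minimal sketch of the corrected call with the keyword spelled `inputs_embeds` (shapes below are illustrative):
```python
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
dummy_embeds = torch.randn(1, 5, 768)  # (batch_size, sequence_length, hidden_size)
outputs = model(inputs_embeds=dummy_embeds)
```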
transformers | 4,984 | closed | FillMaskPipeline return word-piece | # 🚀 . Feature request
Not sure exactly if this is a bug/feature request/ or just me not understanding correctly :).
I am trying to use the [FillMaskPipeline](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py#L739) but as far as I understand the pipeline returns a single word-piece for a \<mask\>. But the mask could be of a word or even a text-spans (for the case of BART).
Sample code:
```python
from transformers import pipeline
nlp =pipeline("fill-mask",model='bart-large')
nlp(f'Their expression was divine; and as they glanced at me timidly but with parted'
f' lips in great <mask>, I forgot all thoughts of their conversion in feelings '
f'that were far more earthly.')
#missing word is bewilderment, this is from librispeech
..
{'sequence': '<s> Their expression was divine; and as they glanced at me timidly but with parted lips in great bewild, I forgot all thoughts of their conversion in feelings that were far more earthly.</s>',
'score': 7.772801473038271e-05,
'token': 33304},
print(tokenizer.decode(33304))
' bewild'
```
As can be seen above, one of the outputs is "bewild" which is the first word-piece in bewilderment:
```python
[tokenizer.decode(i)+' ' for i in tokenizer.batch_encode_plus(['bewilderment'])['input_ids'][0]]
['<s> ', ' bewild ', 'er ', 'ment ', '</s> ']
```
## Your contribution
I assume we could go to the next output and see if it is a word-piece, and if it is, add it to the first token. Not sure exactly if this is correct, especially since it seems that the size of the output is exactly the number of input word-pieces. For example:
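A rough sketch of that heuristic for a BPE tokenizer like BART's, where decoded continuation pieces do not start with a leading space (a hypothetical helper, untested):
```python
def merge_continuation(first_token: str, next_token: str) -> str:
    # Assumption: in BART/RoBERTa BPE, a decoded token that starts a new word
    # begins with a space, while a continuation piece does not.
    if next_token and not next_token.startswith(" "):
        return first_token + next_token
    return first_token
```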
Thanks
| 06-14-2020 09:29:37 | 06-14-2020 09:29:37 | I do not think generally there's a way of knowing that a given token is usually a sub-token vs. a complete "word"
Maybe a heuristic could be to do some kind of greedy decoding, iteratively adding a second `<mask>` after the first filled one and checking if the output score is above a certain threshold/above the previous one.<|||||>@julien-c @orena1 So knowing for one sub-token whether it is complete or not is on at the "token" level. What about a span text?
The closest I have found it https://github.com/huggingface/transformers/issues/3972<|||||>Hi @Diego999, I actually did not find a way to test/train span text on BART, although the paper mention that they train the model using span-text.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,983 | closed | Getting very bad F1 Scores when training SQUAD v2.0 with robertadistil-base | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
https://stackoverflow.com/questions/62370365/getting-very-bad-f1-scores-when-training-squad-v2-0-with-robertadistil-base
Here is my notebook
https://github.com/manishiitg/ML_Experiments/blob/master/squad_huggingface_experiment_with_Trainer_TPU.ipynb
I am getting very bad f1 scores.
Any help on what i am doing wrong?
| 06-14-2020 08:57:15 | 06-14-2020 08:57:15 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,982 | closed | Why run_language_modelling.py does not use segment embeddings or language embeddings? | I am trying to train Bert and XLM on my own data. But I found that it seems run_language_modelling.py only tokenizes input text into ids, but it doesn't create token_type_ids or langs for Bert or XLM.
`batch_encoding = tokenizer.batch_encode_plus(lines, add_special_tokens=True, max_length=block_size)`
`self.examples = batch_encoding["input_ids"]`
As you can see from the codes, the examples only contain input_ids.
I set breakpoints in the modeling_bert.py and modeling_xlm.py, and there is no token_type_ids or langs as a part of input to Bert or XLM. For the downstream tasks, Bert and XLM always use segment embeddings or language embeddings as a part of their input. Why don't we use them in the pretraining step?
If we do not use segment embeddings or language embeddings during pretraining, but use them in fine-tuning, shouldn't that cause bias?
Sorry if misunderstood something, but I am quite confused now. | 06-14-2020 08:07:07 | 06-14-2020 08:07:07 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,981 | closed | Create README.md | 06-14-2020 03:35:35 | 06-14-2020 03:35:35 | ||
transformers | 4,980 | closed | keras | # ❓ Questions & Help
## Details
| 06-13-2020 17:23:08 | 06-13-2020 17:23:08 | |
transformers | 4,979 | closed | Add Code Coverage and Black badges to README | Add Black badge to README based on this:
https://github.com/huggingface/transformers/blob/403d3098572ac308416653648456a940860da39e/.circleci/config.yml#L101 | 06-13-2020 16:02:25 | 06-13-2020 16:02:25 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4979?src=pr&el=h1) Report
> Merging [#4979](https://codecov.io/gh/huggingface/transformers/pull/4979?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/403d3098572ac308416653648456a940860da39e&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4979?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4979 +/- ##
==========================================
- Coverage 77.20% 77.20% -0.01%
==========================================
Files 128 128
Lines 21851 21851
==========================================
- Hits 16870 16869 -1
- Misses 4981 4982 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4979?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4979/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4979?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4979?src=pr&el=footer). Last update [403d309...5be7204](https://codecov.io/gh/huggingface/transformers/pull/4979?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,978 | closed | Output hidden states | Attempts to close issue #3879 by refactoring all models to take in an extra argument `output_hidden_states` in the `forward()` method.
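In practice the change should allow requesting hidden states per call instead of only through the config; a rough usage sketch, assuming the argument lands under the name used in this PR:
```python
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

input_ids = tokenizer.encode("Hello world", return_tensors="pt")
outputs = model(input_ids, output_hidden_states=True)
# tuple: embedding output + one tensor per layer; its position in `outputs`
# depends on which optional outputs are requested
hidden_states = outputs[-1]
print(len(hidden_states), hidden_states[0].shape)
```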
@patrickvonplaten | 06-13-2020 12:37:57 | 06-13-2020 12:37:57 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4978?src=pr&el=h1) Report
> Merging [#4978](https://codecov.io/gh/huggingface/transformers/pull/4978?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/68e19f1c228c92d5d800533f558faff24b57127a&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `93.85%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4978?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4978 +/- ##
==========================================
+ Coverage 77.93% 77.94% +0.01%
==========================================
Files 137 137
Lines 23475 23511 +36
==========================================
+ Hits 18295 18326 +31
- Misses 5180 5185 +5
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4978?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `22.11% <ø> (ø)` | |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.78% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `25.82% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `74.48% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `79.85% <64.28%> (-0.19%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `93.20% <85.71%> (-0.12%)` | :arrow_down: |
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `80.43% <100.00%> (-0.05%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.23% <100.00%> (-0.02%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.21% <100.00%> (ø)` | |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.14% <100.00%> (ø)` | |
| ... and [24 more](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4978?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4978?src=pr&el=footer). Last update [68e19f1...ddaeb44](https://codecov.io/gh/huggingface/transformers/pull/4978?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@drjosephliu - this is really great work! I added two additional tests for output hidden states that have to pass for all models. Can you fix the remaining models? :-) I think after that we are good to merge!<|||||>This is great, looking forward to having this in master:)<|||||>Just pushed the fixes. Fingers crossed for a hole in one here !<|||||>Looks like one more merge conflict :(. Otherwise LGTM, epic contribution! Also patrick has left at least one unresolved comment w.r.t Pipfile<|||||>I added the refactorisation for mobilebert, so the relevant tests should be passing now. However, there are a bunch of tokenisation tests which are now failing instead and I can't figure out why as I'm pretty certain the changes I made didn't affect the tokenisers whatsoever. FWIW, those tests were already failing on master, so I hope they're unrelated to the changes I've made. Anyways, let me know if there are any additional edits that need to be made.
P.S. what a coincidence to bump into you here @sshleifer ! Glad to see you settling in here.<|||||>Haha small world!
I think there is a git issue causing it to appear that 188 files are changed in the PR.
I don't think it's horribly damaging, (the LHS looks wrong, the RHS looks correct in the diff viewer), but if you have an easy way to resolve it that would be nice.
<|||||>Well crap, it seems like it's gotten worse. I'm out of ideas here, because I've `git fetch upstream` and `git merge upstream/master`, so it's telling me everything's up to date. It seems like it's comparing against an old version of master that's a couple commits ago. Perhaps I can try deleting my local master, creating and pulling a new master, merging output_hidden_states into master and then push and make a new PR. I'm wondering if that will work? Unless you have any other ideas.<|||||>Ok, so with a bit of git ninja'ing, it's no longer showing ~200 files changed. Hoping this works now.<|||||>@drjosephliu - Amazing work! Thank's a lot for this. Really helps the library to become more flexible :-)
The PR is good to merge for me.
Pinging @LysandreJik to verify since it's a big one. Should be merged today though I hope :-) <|||||>Awesome, no problem and great working with y'all ! |
transformers | 4,977 | closed | [model card] model card for bart-large-finetuned-squadv1 | 06-13-2020 11:01:14 | 06-13-2020 11:01:14 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4977?src=pr&el=h1) Report
> Merging [#4977](https://codecov.io/gh/huggingface/transformers/pull/4977?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ca5e1cdf8e314288bd0242a531815a6c75d8178e&el=desc) will **decrease** coverage by `0.70%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4977?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4977 +/- ##
==========================================
- Coverage 77.26% 76.56% -0.71%
==========================================
Files 128 128
Lines 21851 21851
==========================================
- Hits 16884 16730 -154
- Misses 4967 5121 +154
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4977?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.56% <0.00%> (-2.58%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.08% <0.00%> (-1.41%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `72.80% <0.00%> (-0.30%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4977?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4977?src=pr&el=footer). Last update [ca5e1cd...e9e1a0d](https://codecov.io/gh/huggingface/transformers/pull/4977?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>could you also add a metadata link to the dataset, as demonstrated in https://github.com/huggingface/transformers/commit/ca5e1cdf8e314288bd0242a531815a6c75d8178e? Thanks!<|||||>> could you also add a metadata link to the dataset, as demonstrated in [ca5e1cd](https://github.com/huggingface/transformers/commit/ca5e1cdf8e314288bd0242a531815a6c75d8178e)? Thanks!
Sure<|||||>Great, thanks @patil-suraj <|||||>with link to dataset: https://huggingface.co/valhalla/bart-large-finetuned-squadv1 |
|
transformers | 4,976 | closed | Fix parameter 'output_attentions' docstring | This PR fixes the parameter 'output_attentions' docstring as follows:
1. Remove duplicate parameter docstring in class OpenAIGPTModel.
2. Fix docstring due to web page display problems. | 06-13-2020 10:30:46 | 06-13-2020 10:30:46 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4976?src=pr&el=h1) Report
> Merging [#4976](https://codecov.io/gh/huggingface/transformers/pull/4976?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ca5e1cdf8e314288bd0242a531815a6c75d8178e&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4976?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4976 +/- ##
=======================================
Coverage 77.26% 77.26%
=======================================
Files 128 128
Lines 21851 21851
=======================================
Hits 16884 16884
Misses 4967 4967
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4976?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `80.48% <ø> (ø)` | |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.26% <ø> (ø)` | |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.21% <ø> (ø)` | |
| [src/transformers/modeling\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.14% <ø> (ø)` | |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.50% <ø> (ø)` | |
| [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `78.16% <ø> (ø)` | |
| [src/transformers/modeling\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `84.00% <ø> (ø)` | |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.43% <ø> (ø)` | |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.02% <ø> (ø)` | |
| ... and [26 more](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4976?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4976?src=pr&el=footer). Last update [ca5e1cd...a921f63](https://codecov.io/gh/huggingface/transformers/pull/4976?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Great! Thanks a lot @ZhuBaohe ! |
transformers | 4,975 | closed | Create README.md | Adding readme file for the model. | 06-13-2020 07:51:02 | 06-13-2020 07:51:02 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4975?src=pr&el=h1) Report
> Merging [#4975](https://codecov.io/gh/huggingface/transformers/pull/4975?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ca5e1cdf8e314288bd0242a531815a6c75d8178e&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4975?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4975 +/- ##
==========================================
- Coverage 77.26% 77.26% -0.01%
==========================================
Files 128 128
Lines 21851 21851
==========================================
- Hits 16884 16883 -1
- Misses 4967 4968 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4975?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4975?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4975?src=pr&el=footer). Last update [ca5e1cd...03bddda](https://codecov.io/gh/huggingface/transformers/pull/4975?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,974 | closed | Patch 4 | 06-13-2020 07:44:20 | 06-13-2020 07:44:20 | Merging readme file. |
|
transformers | 4,973 | closed | How to make my own dataset to use BART summarization? | # ❓ Questions & Help
## Details
Hello, I'm trying to use the BART summarization model. I have a dataset in the form of a dataframe with two columns, 'document' and 'summary'.
Q1.
I read this Readme.md.
> this should make a directory called cnn_dm/ with files like test.source. To use your own data, copy that files format. Each article to be summarized is on its own line.
I don't understand this sentence well, so I just looked at the CNN dataset files (train.source, train.target, test.source, test.target). **How can I distinguish between the individual documents?**
Q2.
**How can I change my own dataset into the CNN files format?**
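One way to produce that layout from the dataframe (a minimal sketch, not an official converter — it writes one document/summary pair per line, so line *i* of `*.source` matches line *i* of `*.target`, which is also how the cnn_dm files separate documents):
```python
import os
from sklearn.model_selection import train_test_split

os.makedirs("my_data", exist_ok=True)
train_df, test_df = train_test_split(df, test_size=0.1)  # df has 'document'/'summary' columns

def write_split(split_df, prefix):
    with open(prefix + ".source", "w") as src, open(prefix + ".target", "w") as tgt:
        for doc, summ in zip(split_df["document"], split_df["summary"]):
            src.write(doc.replace("\n", " ").strip() + "\n")
            tgt.write(summ.replace("\n", " ").strip() + "\n")

write_split(train_df, "my_data/train")
write_split(test_df, "my_data/test")
# the summarization script also expects a val split; reuse write_split for it
```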
| 06-13-2020 06:49:05 | 06-13-2020 06:49:05 | |
transformers | 4,972 | closed | Run run_tf_glue.py has bugs | I followed the instructions at https://github.com/huggingface/transformers/tree/master/examples/text-classification, but ran into the following error: **/home/admin/tensorflow_datasets/glue/cola/1.0.0/glue-train.tfrecord-00000-of-00001; No such file or directory**
my dataset_info.json copy from https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/testing/metadata/glue/cola/1.0.0/dataset_info.json
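A guess at the cause: only the metadata was copied, so the actual `glue-train.tfrecord-*` shards were never generated. Letting `tensorflow_datasets` build the dataset itself (with internet access) should create them in the same `data_dir` the script reads from — a quick sanity check:
```python
import tensorflow_datasets as tfds

ds, info = tfds.load(
    "glue/cola",
    split="train",
    with_info=True,
    data_dir="/home/admin/tensorflow_datasets",
)
print(info.splits)  # should now list the generated shards
```
If the machine is offline, the whole prepared dataset directory (tfrecord shards included) needs to be copied over, not just dataset_info.json.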
`06/13/2020 14:21:11 - INFO - absl - Field info.citation from disk and from code do not match. Keeping the one from code.
06/13/2020 14:21:11 - INFO - absl - Field info.location from disk and from code do not match. Keeping the one from code.
06/13/2020 14:21:11 - INFO - absl - Reusing dataset glue (/home/admin/tensorflow_datasets/glue/cola/1.0.0)
06/13/2020 14:21:11 - INFO - absl - Constructing tf.data.Dataset for split train, from /home/admin/tensorflow_datasets/glue/cola/1.0.0
dddddddddddddd: <PrefetchDataset shapes: {idx: (), label: (), sentence: ()}, types: {idx: tf.int32, label: tf.int64, sentence: tf.string}>
2020-06-13 14:21:12.132176: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at iterator_ops.cc:929 : Not found: /home/admin/tensorflow_datasets/glue/cola/1.0.0/glue-train.tfrecord-00000-of-00001; No such file or directory
Traceback (most recent call last):
File "./examples/text-classification/run_tf_glue.py", line 229, in <module>
main()
File "./examples/text-classification/run_tf_glue.py", line 175, in main
if training_args.do_train
File "./examples/text-classification/run_tf_glue.py", line 55, in get_tfds
return glue_convert_examples_to_features(ds, tokenizer, max_seq_length, task_name)
File "/export/sdb/test/tools/anaconda3/envs/tf2env/lib/python3.6/site-packages/transformers/data/processors/glue.py", line 62, in glue_convert_examples_to_features
return _tf_glue_convert_examples_to_features(examples, tokenizer, max_length=max_length, task=task)
File "/export/sdb/test/tools/anaconda3/envs/tf2env/lib/python3.6/site-packages/transformers/data/processors/glue.py", line 79, in _tf_glue_convert_examples_to_features
examples = [processor.tfds_map(processor.get_example_from_tensor_dict(example)) for example in examples]
File "/export/sdb/test/tools/anaconda3/envs/tf2env/lib/python3.6/site-packages/transformers/data/processors/glue.py", line 79, in <listcomp>
examples = [processor.tfds_map(processor.get_example_from_tensor_dict(example)) for example in examples]
File "/export/sdb/test/tools/anaconda3/envs/tf2env/lib/python3.6/site-packages/tensorflow_core/python/data/ops/iterator_ops.py", line 622, in __next__
return self.next()
File "/export/sdb/test/tools/anaconda3/envs/tf2env/lib/python3.6/site-packages/tensorflow_core/python/data/ops/iterator_ops.py", line 666, in next
return self._next_internal()
File "/export/sdb/test/tools/anaconda3/envs/tf2env/lib/python3.6/site-packages/tensorflow_core/python/data/ops/iterator_ops.py", line 651, in _next_internal
output_shapes=self._flat_output_shapes)
File "/export/sdb/test/tools/anaconda3/envs/tf2env/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_dataset_ops.py", line 2673, in iterator_get_next_sync
_six.raise_from(_core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.NotFoundError: /home/admin/tensorflow_datasets/glue/cola/1.0.0/glue-train.tfrecord-00000-of-00001; No such file or directory [Op:IteratorGetNextSync]
run.sh: line 15: 15230 Segmentation fault (core dumped) CUDA_VISIBLE_DEVICES=2 python ./examples/text-classification/run_tf_glue.py --model_name_or_path bert-base-uncased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 12 --per_device_eval_batch_size=8 --per_device_train_batch_size=8 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /home/admin/test/transformers/tmp/$TASK_NAME/
` | 06-13-2020 06:44:51 | 06-13-2020 06:44:51 | |
transformers | 4,971 | closed | huggingface distillbert classification using multiprocessing | I am trying to use torch multiprocessing to parallelize the predictions from two separate huggingface DistilBERT classification models. It seems to be deadlocked at the prediction step. I am using python 3.6.5, torch 1.5.0 and huggingface transformers version 2.11.0. The output from running the code is
```
Tree enc done
Begin tree prediction<------(Comment: Both begin tree
End tree predictions<------- and end tree predictions)
0.03125429153442383
Dn prediction
Dn enc done
Begin dn predictions<------(Comment: Both begin dn
End dn predictions<------- and end dn predictions)
0.029727697372436523
----------Done sequential predictions-------------
--------Start Parallel predictions--------------
Tree prediction
Tree enc done
Begin tree prediction. <------(Comment: Process is deadlocked after this)
Dn prediction
Dn enc done
Begin dn predictions. <-------(Comment: Process is deadlocked after this)
```
and the code is
```
def predict(sentences =[], tokenizer=tokenizer,models=(tree_model,dn_model,None)):
MAX_SENTENCE_LENGTH = 16
start = time.time()
input_ids = []
attention_masks = []
predictions = []
tree_model = models[0]
dn_model = models[1]
if models[0]:
print("Tree prediction")
if models[1]:
print("Dn prediction")
for sent in sentences:
encoded_dict = tokenizer.encode_plus(
sent,
add_special_tokens = True,
max_length = MAX_SENTENCE_LENGTH,
pad_to_max_length = True,
return_attention_mask = True,
return_tensors = 'pt',
)
# Add the encoded sentence to the list.
input_ids.append(encoded_dict['input_ids'])
# And its attention mask (simply differentiates padding from non-padding).
attention_masks.append(encoded_dict['attention_mask'])
if tree_model:
print("Tree enc done")
if dn_model:
print("Dn enc done")
# Convert the lists into tensors.
new_input_ids = torch.cat(input_ids, dim=0)
new_attention_masks = torch.cat(attention_masks, dim=0)
with torch.no_grad():
# Forward pass, calculate logit predictions
if tree_model:
print("Begin tree prediction")
outputs = tree_model(new_input_ids,
attention_mask=new_attention_masks)
print("End tree predictions")
else:
print("Begin dn predictions")
outputs = dn_model(new_input_ids,
attention_mask=new_attention_masks)
print("End dn predictions")
logits = outputs[0]
logits = logits.detach().cpu()
print(time.time()-start)
predictions = logits
return predictions
def get_tree_prediction(sentence, tokenizer=tokenizer,models=(tree_model,dn_model, None)):
return predict(sentences =[sentence], tokenizer=tokenizer,models=models)
def get_dn_prediction(sentence, tokenizer=tokenizer,models=(tree_model,dn_model, None)):
return predict(sentences =[sentence], tokenizer=tokenizer,models=models)
if __name__ == '__main__':
sentence = "hello world"
processes = []
get_tree_prediction(sentence, tokenizer, (tree_model,None,None))
get_dn_prediction(sentence, tokenizer, (None,dn_model,None))
print("----------Done sequential predictions-------------")
print('\n--------Start Parallel predictions--------------')
tr_p = mp.Process(target=get_tree_prediction, args=(sentence, tokenizer,
(tree_model,None,None)))
tr_p.start()
processes.append(tr_p)
dn_p = mp.Process(target=get_dn_prediction, args=(sentence, tokenizer,
(None,dn_model,None)))
dn_p.start()
processes.append(dn_p)
for p in processes:
p.join()
```
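For reference, two workarounds that are commonly suggested for this kind of deadlock (not verified against this exact script): start the workers with the 'spawn' method and share the model weights explicitly before forking, e.g.:
```python
import torch
import torch.multiprocessing as mp

if __name__ == '__main__':
    # illustrative workaround sketch, not a verified fix for this exact script
    mp.set_start_method("spawn", force=True)
    torch.set_num_threads(1)      # avoid intra-op thread contention in the children
    tree_model.share_memory()
    dn_model.share_memory()
    # ... then create and start the mp.Process workers exactly as above
```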
| 06-13-2020 04:30:15 | 06-13-2020 04:30:15 | hi, did you find a solution to this problem? |
transformers | 4,970 | closed | Handle unexpected weights in checkpoint loading (tf to pytorch) | If the checkpoint has additional weights, the current code will fail to load it instead of ignoring those weights.
|
transformers | 4,969 | closed | Request: pretrained distilgpt2-medium, distilgpt2-large models | # Plans for distilgpt2-medium and distilgpt2-large
## Motivation
While distilgpt2 is useful, I was wondering if there are any plans to create a distilgpt2-medium and distilgpt2-large. I'm also wondering how the result of distilgpt2-medium compare to gpt2, and distilgpt2-large compare to gpt2-medium, in size and performance.
Maybe it's not even worth it to have those pretrained, if distilgpt2-medium is larger than gpt2 and performs worse.
| 06-12-2020 23:34:22 | 06-12-2020 23:34:22 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I'd also be interested in this. The current distilgpt2 is great for use-cases that need cheap/fast compute, but distilled versions of the larger gpt2 models (medium, large, xl) would also be super useful. For example, I am able to fit up to gpt2-large on my GPU, but I'm unable to fit gpt2-xl, which means I can't use it. If there was a distilled version of gpt2-xl which was smaller, that might make it usable for more people.
Are there any plans to distill any larger versions of gpt2?
Thanks!<|||||>Yes we can probably work on that.
There is a bit of work + exploration to do: it is possible that we'll have to use model parallelism tricks to be able to train it in a reasonable time (I haven't checked yet).
Applying the distillation to gpt2-xl the way we did for distilgpt2 (same ratios) would still result in a model that is bigger than gpt2-medium (24L, 1600 hidden dim). Would that fit your use-case?
(sorry for the delayed answer, I don't usually check issues without being pinged/tagged).<|||||>> Applying the distillation to gpt2-xl the way we did for distilgpt2 (same ratios) would still result in a model that is bigger than gpt2-medium (24L, 1600 hidden dim). Would that fit your use-case?
Yes, if we could squish the performance of gpt2-xl into something sized between gpt2-medium and gpt2-large, that would be really useful!<|||||>> Yes we can probably work on that.
> There is a bit of work + exploration to do: it is possible that we'll have to use model parallelism tricks to be able to train it in a reasonable time (I haven't checked yet).
> Applying the distillation to gpt2-xl the way we did for distilgpt2 (same ratios) would still result in a model that is bigger than gpt2-medium (24L, 1600 hidden dim). Would that fit your use-case?
>
> (sorry for the delayed answer, I don't usually check issues without being pinged/tagged).
Even a distilgpt2-large would work for my use case<|||||>I am also interested in a distilled version of the larger models. For our use-case, this would go a long way to improving cost/performance/feasibility.
<|||||>Bumping this - any word on availability of the medium/large distilled models ?<|||||>> Bumping this - any word on availability of the medium/large distilled models ?
I am currently working on it! :)
<|||||>any news on this?<|||||>Any news on this ? 😊<|||||>I would be extremely interested in having GPT2-XL distilled to the size of GPT2-L or smaller. Consumer-grade GPUs currently top out at around 8GB VRAM, which is enough to run inference using GPT2-L but is not enough for GPT2-XL. Unless you can find a beefier GPU than that, it will only become possible to efficiently run GPT2-XL on a desktop PC when someone trains a distilled model. |
transformers | 4,968 | closed | Eli5 examples | This PR adds Explain Like I'm Five scripts and models to Transformers.
The `examples/eli5` folder contains training code for the dense retriever, code to fine-tune a BART model, the Jupyter notebook for the [blog post](https://yjernite.github.io/lfqa.html), and the code for the live demo.
The RetriBert model implements the dense passage retriever. It's basically a wrapper for two Bert models and projection matrices, but it does gradient checkpointing in a way that is very different from [a concurrent PR](https://github.com/huggingface/transformers/pull/4659) and I thought it would be easier to write its own class for now and see if we can merge later.
The Bart files are only modified to add a reference to the ELI5 fine-tuned model on the model repo. | 06-12-2020 23:16:33 | 06-12-2020 23:16:33 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4968?src=pr&el=h1) Report
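For orientation, loading the fine-tuned checkpoints would look roughly like this (the model identifiers are assumptions based on this PR — check the model hub for the exact names):
```python
from transformers import AutoTokenizer, AutoModel, BartForConditionalGeneration

qa_tokenizer = AutoTokenizer.from_pretrained("yjernite/bart_eli5")
qa_model = BartForConditionalGeneration.from_pretrained("yjernite/bart_eli5")

retriever_tokenizer = AutoTokenizer.from_pretrained("yjernite/retribert-base-uncased")
retriever = AutoModel.from_pretrained("yjernite/retribert-base-uncased")
```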
> Merging [#4968](https://codecov.io/gh/huggingface/transformers/pull/4968?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/439aa1d6e9c953069f75fc23c737221d0df2c977&el=desc) will **increase** coverage by `0.88%`.
> The diff coverage is `48.36%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4968?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4968 +/- ##
==========================================
+ Coverage 76.45% 77.34% +0.88%
==========================================
Files 130 133 +3
Lines 22024 22146 +122
==========================================
+ Hits 16839 17128 +289
+ Misses 5185 5018 -167
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4968?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.33% <ø> (ø)` | |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.12% <ø> (ø)` | |
| [src/transformers/modeling\_retribert.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZXRyaWJlcnQucHk=) | `34.24% <34.24%> (ø)` | |
| [src/transformers/configuration\_retribert.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JldHJpYmVydC5weQ==) | `34.78% <34.78%> (ø)` | |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.16% <100.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.02% <100.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.76% <100.00%> (+0.17%)` | :arrow_up: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.67% <100.00%> (+0.05%)` | :arrow_up: |
| [src/transformers/tokenization\_retribert.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmV0cmliZXJ0LnB5) | `100.00% <100.00%> (ø)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |
| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4968?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4968?src=pr&el=footer). Last update [439aa1d...f05664d](https://codecov.io/gh/huggingface/transformers/pull/4968?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Finally managed to add the doc after a bit of a rebasing hell :)
Will merge tomorrow morning if there aren't any further comments.<|||||>@yjernite Tried to use your bart model but I can't load the decoder.
There are only pytorch model and config.json uploaded to the model-hub<|||||>> There are only pytorch model and config.json uploaded to the model-hub
How did you load the model, could you add some minimum reproduction code?
Also, this might be better as an issue :)
|
transformers | 4,967 | closed | Add Linformer model | # 🌟 New model addition
## Model description
### Linformer: Self-Attention with Linear Complexity
Paper published June 9th on ArXiv: https://arxiv.org/abs/2006.04768
Large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. However, training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the Transformer uses O(n²) time and space with respect to sequence length. In this paper, we demonstrate that the self-attention mechanism can be approximated by a low-rank matrix. We further exploit this finding to propose a new self-attention mechanism, which reduces the overall self-attention complexity from O(n²) to O(n) in both time and space. The resulting linear transformer, the **Linformer**, performs on par with standard Transformer models, while being much more memory- and time-efficient.
## Open source status
* [ ] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
* [x] who are the authors: Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma | 06-12-2020 23:09:34 | 06-12-2020 23:09:34 | Here is an pytorch implementation
https://github.com/tatp22/linformer-pytorch<|||||>Just another implementation by the authors
https://github.com/facebookresearch/pytext/pull/1407
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Any Tensorflow implementation? |
transformers | 4,966 | closed | Spanbert TACRED model not found, despite model card | I am trying to load this model: https://huggingface.co/mrm8488/spanbert-large-finetuned-tacred in Transformers 2.11.0. My exact code is:
` tokenizer = AutoTokenizer.from_pretrained("mrm8488/spanbert-large-finetuned-tacred") model = AutoModel.from_pretrained("mrm8488/spanbert-large-finetuned-tacred")`.
However, I get a 'not found' error: `OSError: Model name 'mrm8488/spanbert-base-finetuned-tacred' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'mrm8488/spanbert-base-finetuned-tacred' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
`.
Guidance would be appreciated. Thanks! | 06-12-2020 21:29:41 | 06-12-2020 21:29:41 | Vocab file for that model is indeed missig, /cc @mrm8488
But in the meantime I think you can use:
```python
tokenizer = AutoTokenizer.from_pretrained("SpanBERT/spanbert-base-cased")
```<|||||>I will upload it ASAP. Thank you for letting me know!<|||||>Tokenizer files have been uploaded!!! @michaelroyzen <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,965 | closed | How to Paraphrase with GPT2? | # ❓ Questions & Help
## Details
| 06-12-2020 21:24:10 | 06-12-2020 21:24:10 | You should rather use a seq2seq model for paraphrasing like T5 or BART. But if you want to do it using GPT-2 then maybe you can use this format
`input: input_text paraphrase: paraphrase_text`
while training, set attention mask to 0 on the paraphrased text
and when generating just pass `input: input_text paraphrase: ` and sample till the `eos` token<|||||>Thank you. I'll be sure to try that! I know there are datasets online that I can use, but are there edits I should make to ensure that I get a suitable output? I've read something about cheating, and I want to avoid that. (I'm sorry if I am using the wrong terminology)<|||||>Can we use `T5` or `BART` like we would `GPT-2`?<|||||>hi @shamoons , yes you can, have a look at this https://madewithml.com/projects/1094/paraphrase-any-question-with-t5-text-to-text-transformer/<|||||>This looks like it’s only for questions. What about arbitrary sentences?<|||||>> This looks like it’s only for questions. What about arbitrary sentences?
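A minimal sketch of that setup (the prompt format is just the suggestion above, and the decoding parameters are illustrative — meaningful paraphrases also require fine-tuning GPT-2 on such pairs first):
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# training lines would look like:
#   input: <original sentence> paraphrase: <paraphrased sentence><|endoftext|>

prompt = "input: The weather is really nice today. paraphrase:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(
    input_ids,
    do_sample=True,
    top_p=0.9,
    max_length=input_ids.shape[1] + 30,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```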
You can do it with arbitrary sentences as well, but you'll need to fine-tune it yourself.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>> hi @shamoons , yes you can, have a look at this https://madewithml.com/projects/1094/paraphrase-any-question-with-t5-text-to-text-transformer/
This url is gone.
<|||||>@kingglory
This is the [url](https://github.com/ramsrigouthamg/Paraphrase-any-question-with-T5-Text-To-Text-Transfer-Transformer-) for the repo of that project.
Also there are quite a few paraphrase models on the [hub](https://huggingface.co/models?search=paraphrase) that you can try |
transformers | 4,964 | closed | GPT: Weights not being initialized | Hello covid-19 survivors,
I have been trying to use GPT for token classification; however, there is currently no such head from Hugging Face, so I copied your code from BertForTokenClassification and stitched together the code below. But it says all the weights are not initialized. Did I make a mistake? Please help me!!!
```python
class GPTClassifier(OpenAIGPTPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = 2
        self.gpt = OpenAIGPTModel(config)
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(768, 2)
        self.init_weights()

    @add_start_docstrings(
        """GPT Model with a token classification head on top (a linear layer on top of
        the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. """,
    )
    def forward(
        self,
        input_ids=None,
        attention_mask=None,
        token_type_ids=None,
        position_ids=None,
        head_mask=None,
        inputs_embeds=None,
        labels=None,
    ):
        outputs = self.gpt(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
        )
        sequence_output = outputs[0]
        sequence_output = self.dropout(sequence_output)
        logits = self.classifier(sequence_output)
        outputs = (logits,) + outputs[2:]  # add hidden states and attention if they are here
        if labels is not None:
            loss_fct = CrossEntropyLoss()
            # Only keep active parts of the loss
            if attention_mask is not None:
                active_loss = attention_mask.view(-1) == 1
                active_logits = logits.view(-1, self.num_labels)
                active_labels = torch.where(
                    active_loss, labels.view(-1), torch.tensor(loss_fct.ignore_index).type_as(labels)
                )
                loss = loss_fct(active_logits, active_labels)
            else:
                loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
            outputs = (loss,) + outputs

        return outputs  # (loss), scores, (hidden_states), (attentions)
```
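A plausible explanation (my reading of the name matching in `from_pretrained`, not a confirmed fix): `OpenAIGPTPreTrainedModel` uses `transformer` as its `base_model_prefix`, so checkpoint keys such as `tokens_embed.weight` only line up if the backbone attribute is called `self.transformer`; with `self.gpt`, every key ends up unmatched. A minimal sketch of the rename:
```python
class GPTClassifier(OpenAIGPTPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = 2
        # must match base_model_prefix ("transformer") so the pretrained
        # weights map onto transformer.* parameters
        self.transformer = OpenAIGPTModel(config)
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(config.n_embd, self.num_labels)
        self.init_weights()

    # forward() would then call self.transformer(...) instead of self.gpt(...)
```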
Weights not initialized:
`Weights of GPTClassifier not initialized from pretrained model: ['gpt.tokens_embed.weight', 'gpt.positions_embed.weight', 'gpt.h.0.attn.bias', 'gpt.h.0.attn.c_attn.weight', 'gpt.h.0.attn.c_attn.bias', 'gpt.h.0.attn.c_proj.weight', 'gpt.h.0.attn.c_proj.bias', 'gpt.h.0.ln_1.weight', 'gpt.h.0.ln_1.bias', 'gpt.h.0.mlp.c_fc.weight', 'gpt.h.0.mlp.c_fc.bias', 'gpt.h.0.mlp.c_proj.weight', 'gpt.h.0.mlp.c_proj.bias', 'gpt.h.0.ln_2.weight', 'gpt.h.0.ln_2.bias', 'gpt.h.1.attn.bias', 'gpt.h.1.attn.c_attn.weight', 'gpt.h.1.attn.c_attn.bias', 'gpt.h.1.attn.c_proj.weight', 'gpt.h.1.attn.c_proj.bias', 'gpt.h.1.ln_1.weight', 'gpt.h.1.ln_1.bias', 'gpt.h.1.mlp.c_fc.weight', 'gpt.h.1.mlp.c_fc.bias', 'gpt.h.1.mlp.c_proj.weight', 'gpt.h.1.mlp.c_proj.bias', 'gpt.h.1.ln_2.weight', 'gpt.h.1.ln_2.bias', 'gpt.h.2.attn.bias', 'gpt.h.2.attn.c_attn.weight', 'gpt.h.2.attn.c_attn.bias', 'gpt.h.2.attn.c_proj.weight', 'gpt.h.2.attn.c_proj.bias', 'gpt.h.2.ln_1.weight', 'gpt.h.2.ln_1.bias', 'gpt.h.2.mlp.c_fc.weight', 'gpt.h.2.mlp.c_fc.bias', 'gpt.h.2.mlp.c_proj.weight', 'gpt.h.2.mlp.c_proj.bias', 'gpt.h.2.ln_2.weight', 'gpt.h.2.ln_2.bias', 'gpt.h.3.attn.bias', 'gpt.h.3.attn.c_attn.weight', 'gpt.h.3.attn.c_attn.bias', 'gpt.h.3.attn.c_proj.weight', 'gpt.h.3.attn.c_proj.bias', 'gpt.h.3.ln_1.weight', 'gpt.h.3.ln_1.bias', 'gpt.h.3.mlp.c_fc.weight', 'gpt.h.3.mlp.c_fc.bias', 'gpt.h.3.mlp.c_proj.weight', 'gpt.h.3.mlp.c_proj.bias', 'gpt.h.3.ln_2.weight', 'gpt.h.3.ln_2.bias', 'gpt.h.4.attn.bias', 'gpt.h.4.attn.c_attn.weight', 'gpt.h.4.attn.c_attn.bias', 'gpt.h.4.attn.c_proj.weight', 'gpt.h.4.attn.c_proj.bias', 'gpt.h.4.ln_1.weight', 'gpt.h.4.ln_1.bias', 'gpt.h.4.mlp.c_fc.weight', 'gpt.h.4.mlp.c_fc.bias', 'gpt.h.4.mlp.c_proj.weight', 'gpt.h.4.mlp.c_proj.bias', 'gpt.h.4.ln_2.weight', 'gpt.h.4.ln_2.bias', 'gpt.h.5.attn.bias', 'gpt.h.5.attn.c_attn.weight', 'gpt.h.5.attn.c_attn.bias', 'gpt.h.5.attn.c_proj.weight', 'gpt.h.5.attn.c_proj.bias', 'gpt.h.5.ln_1.weight', 'gpt.h.5.ln_1.bias', 'gpt.h.5.mlp.c_fc.weight', 'gpt.h.5.mlp.c_fc.bias', 'gpt.h.5.mlp.c_proj.weight', 'gpt.h.5.mlp.c_proj.bias', 'gpt.h.5.ln_2.weight', 'gpt.h.5.ln_2.bias', 'gpt.h.6.attn.bias', 'gpt.h.6.attn.c_attn.weight', 'gpt.h.6.attn.c_attn.bias', 'gpt.h.6.attn.c_proj.weight', 'gpt.h.6.attn.c_proj.bias', 'gpt.h.6.ln_1.weight', 'gpt.h.6.ln_1.bias', 'gpt.h.6.mlp.c_fc.weight', 'gpt.h.6.mlp.c_fc.bias', 'gpt.h.6.mlp.c_proj.weight', 'gpt.h.6.mlp.c_proj.bias', 'gpt.h.6.ln_2.weight', 'gpt.h.6.ln_2.bias', 'gpt.h.7.attn.bias', 'gpt.h.7.attn.c_attn.weight', 'gpt.h.7.attn.c_attn.bias', 'gpt.h.7.attn.c_proj.weight', 'gpt.h.7.attn.c_proj.bias', 'gpt.h.7.ln_1.weight', 'gpt.h.7.ln_1.bias', 'gpt.h.7.mlp.c_fc.weight', 'gpt.h.7.mlp.c_fc.bias', 'gpt.h.7.mlp.c_proj.weight', 'gpt.h.7.mlp.c_proj.bias', 'gpt.h.7.ln_2.weight', 'gpt.h.7.ln_2.bias', 'gpt.h.8.attn.bias', 'gpt.h.8.attn.c_attn.weight', 'gpt.h.8.attn.c_attn.bias', 'gpt.h.8.attn.c_proj.weight', 'gpt.h.8.attn.c_proj.bias', 'gpt.h.8.ln_1.weight', 'gpt.h.8.ln_1.bias', 'gpt.h.8.mlp.c_fc.weight', 'gpt.h.8.mlp.c_fc.bias', 'gpt.h.8.mlp.c_proj.weight', 'gpt.h.8.mlp.c_proj.bias', 'gpt.h.8.ln_2.weight', 'gpt.h.8.ln_2.bias', 'gpt.h.9.attn.bias', 'gpt.h.9.attn.c_attn.weight', 'gpt.h.9.attn.c_attn.bias', 'gpt.h.9.attn.c_proj.weight', 'gpt.h.9.attn.c_proj.bias', 'gpt.h.9.ln_1.weight', 'gpt.h.9.ln_1.bias', 'gpt.h.9.mlp.c_fc.weight', 'gpt.h.9.mlp.c_fc.bias', 'gpt.h.9.mlp.c_proj.weight', 'gpt.h.9.mlp.c_proj.bias', 'gpt.h.9.ln_2.weight', 'gpt.h.9.ln_2.bias', 'gpt.h.10.attn.bias', 'gpt.h.10.attn.c_attn.weight', 'gpt.h.10.attn.c_attn.bias', 
'gpt.h.10.attn.c_proj.weight', 'gpt.h.10.attn.c_proj.bias', 'gpt.h.10.ln_1.weight', 'gpt.h.10.ln_1.bias', 'gpt.h.10.mlp.c_fc.weight', 'gpt.h.10.mlp.c_fc.bias', 'gpt.h.10.mlp.c_proj.weight', 'gpt.h.10.mlp.c_proj.bias', 'gpt.h.10.ln_2.weight', 'gpt.h.10.ln_2.bias', 'gpt.h.11.attn.bias', 'gpt.h.11.attn.c_attn.weight', 'gpt.h.11.attn.c_attn.bias', 'gpt.h.11.attn.c_proj.weight', 'gpt.h.11.attn.c_proj.bias', 'gpt.h.11.ln_1.weight', 'gpt.h.11.ln_1.bias', 'gpt.h.11.mlp.c_fc.weight', 'gpt.h.11.mlp.c_fc.bias', 'gpt.h.11.mlp.c_proj.weight', 'gpt.h.11.mlp.c_proj.bias', 'gpt.h.11.ln_2.weight', 'gpt.h.11.ln_2.bias', 'classifier.weight', 'classifier.bias']
I0612 13:58:30.055731 4472821184 modeling_utils.py:460] Weights from pretrained model not used in GPTClassifier: ['tokens_embed.weight', 'positions_embed.weight', 'h.0.attn.bias', 'h.0.attn.c_attn.weight', 'h.0.attn.c_attn.bias', 'h.0.attn.c_proj.weight', 'h.0.attn.c_proj.bias', 'h.0.ln_1.weight', 'h.0.ln_1.bias', 'h.0.mlp.c_fc.weight', 'h.0.mlp.c_fc.bias', 'h.0.mlp.c_proj.weight', 'h.0.mlp.c_proj.bias', 'h.0.ln_2.weight', 'h.0.ln_2.bias', 'h.1.attn.bias', 'h.1.attn.c_attn.weight', 'h.1.attn.c_attn.bias', 'h.1.attn.c_proj.weight', 'h.1.attn.c_proj.bias', 'h.1.ln_1.weight', 'h.1.ln_1.bias', 'h.1.mlp.c_fc.weight', 'h.1.mlp.c_fc.bias', 'h.1.mlp.c_proj.weight', 'h.1.mlp.c_proj.bias', 'h.1.ln_2.weight', 'h.1.ln_2.bias', 'h.2.attn.bias', 'h.2.attn.c_attn.weight', 'h.2.attn.c_attn.bias', 'h.2.attn.c_proj.weight', 'h.2.attn.c_proj.bias', 'h.2.ln_1.weight', 'h.2.ln_1.bias', 'h.2.mlp.c_fc.weight', 'h.2.mlp.c_fc.bias', 'h.2.mlp.c_proj.weight', 'h.2.mlp.c_proj.bias', 'h.2.ln_2.weight', 'h.2.ln_2.bias', 'h.3.attn.bias', 'h.3.attn.c_attn.weight', 'h.3.attn.c_attn.bias', 'h.3.attn.c_proj.weight', 'h.3.attn.c_proj.bias', 'h.3.ln_1.weight', 'h.3.ln_1.bias', 'h.3.mlp.c_fc.weight', 'h.3.mlp.c_fc.bias', 'h.3.mlp.c_proj.weight', 'h.3.mlp.c_proj.bias', 'h.3.ln_2.weight', 'h.3.ln_2.bias', 'h.4.attn.bias', 'h.4.attn.c_attn.weight', 'h.4.attn.c_attn.bias', 'h.4.attn.c_proj.weight', 'h.4.attn.c_proj.bias', 'h.4.ln_1.weight', 'h.4.ln_1.bias', 'h.4.mlp.c_fc.weight', 'h.4.mlp.c_fc.bias', 'h.4.mlp.c_proj.weight', 'h.4.mlp.c_proj.bias', 'h.4.ln_2.weight', 'h.4.ln_2.bias', 'h.5.attn.bias', 'h.5.attn.c_attn.weight', 'h.5.attn.c_attn.bias', 'h.5.attn.c_proj.weight', 'h.5.attn.c_proj.bias', 'h.5.ln_1.weight', 'h.5.ln_1.bias', 'h.5.mlp.c_fc.weight', 'h.5.mlp.c_fc.bias', 'h.5.mlp.c_proj.weight', 'h.5.mlp.c_proj.bias', 'h.5.ln_2.weight', 'h.5.ln_2.bias', 'h.6.attn.bias', 'h.6.attn.c_attn.weight', 'h.6.attn.c_attn.bias', 'h.6.attn.c_proj.weight', 'h.6.attn.c_proj.bias', 'h.6.ln_1.weight', 'h.6.ln_1.bias', 'h.6.mlp.c_fc.weight', 'h.6.mlp.c_fc.bias', 'h.6.mlp.c_proj.weight', 'h.6.mlp.c_proj.bias', 'h.6.ln_2.weight', 'h.6.ln_2.bias', 'h.7.attn.bias', 'h.7.attn.c_attn.weight', 'h.7.attn.c_attn.bias', 'h.7.attn.c_proj.weight', 'h.7.attn.c_proj.bias', 'h.7.ln_1.weight', 'h.7.ln_1.bias', 'h.7.mlp.c_fc.weight', 'h.7.mlp.c_fc.bias', 'h.7.mlp.c_proj.weight', 'h.7.mlp.c_proj.bias', 'h.7.ln_2.weight', 'h.7.ln_2.bias', 'h.8.attn.bias', 'h.8.attn.c_attn.weight', 'h.8.attn.c_attn.bias', 'h.8.attn.c_proj.weight', 'h.8.attn.c_proj.bias', 'h.8.ln_1.weight', 'h.8.ln_1.bias', 'h.8.mlp.c_fc.weight', 'h.8.mlp.c_fc.bias', 'h.8.mlp.c_proj.weight', 'h.8.mlp.c_proj.bias', 'h.8.ln_2.weight', 'h.8.ln_2.bias', 'h.9.attn.bias', 'h.9.attn.c_attn.weight', 'h.9.attn.c_attn.bias', 'h.9.attn.c_proj.weight', 'h.9.attn.c_proj.bias', 'h.9.ln_1.weight', 'h.9.ln_1.bias', 'h.9.mlp.c_fc.weight', 'h.9.mlp.c_fc.bias', 'h.9.mlp.c_proj.weight', 'h.9.mlp.c_proj.bias', 'h.9.ln_2.weight', 'h.9.ln_2.bias', 'h.10.attn.bias', 'h.10.attn.c_attn.weight', 'h.10.attn.c_attn.bias', 'h.10.attn.c_proj.weight', 'h.10.attn.c_proj.bias', 'h.10.ln_1.weight', 'h.10.ln_1.bias', 'h.10.mlp.c_fc.weight', 'h.10.mlp.c_fc.bias', 'h.10.mlp.c_proj.weight', 'h.10.mlp.c_proj.bias', 'h.10.ln_2.weight', 'h.10.ln_2.bias', 'h.11.attn.bias', 'h.11.attn.c_attn.weight', 'h.11.attn.c_attn.bias', 'h.11.attn.c_proj.weight', 'h.11.attn.c_proj.bias', 'h.11.ln_1.weight', 'h.11.ln_1.bias', 'h.11.mlp.c_fc.weight', 'h.11.mlp.c_fc.bias', 'h.11.mlp.c_proj.weight', 'h.11.mlp.c_proj.bias', 'h.11.ln_2.weight', 
'h.11.ln_2.bias']`
| 06-12-2020 21:06:00 | 06-12-2020 21:06:00 | Never mind, resolved now!!
Thanks!! <|||||>How did you resolve? |
transformers | 4,963 | closed | Cannot load optimizer and lr_scheduler states with TPU training | # 🐛 Bug
When restarting training and loading optimizer.pt and scheduler.pt, the training crashes because the existing code does not know how to load them with TPU.
## Information
The stacktrace -
```
Exception in device=TPU:5: don't know how to restore data location of torch.FloatStorage (tagged with xla:0)
Traceback (most recent call last):
File "/home/saurabh/venv/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/home/saurabh/<retracted>", line 334, in _mp_fn
main()
File "/home/saurabh/<retracted>", line 303, in main
trainer.train(model_path=model_path)
File "/home/saurabh/venv/lib/python3.6/site-packages/transformers/trainer.py", line 386, in train
torch.load(os.path.join(model_path, "optimizer.pt"), map_location=self.args.device)
File "/home/saurabh/venv/lib/python3.6/site-packages/torch/serialization.py", line 584, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/saurabh/venv/lib/python3.6/site-packages/torch/serialization.py", line 764, in _legacy_load
result = unpickler.load()
File "/home/saurabh/venv/lib/python3.6/site-packages/torch/serialization.py", line 720, in persistent_load
deserialized_objects[root_key] = restore_location(obj, location)
File "/home/saurabh/venv/lib/python3.6/site-packages/torch/serialization.py", line 802, in restore_location
return default_restore_location(storage, str(map_location))
File "/home/saurabh/venv/lib/python3.6/site-packages/torch/serialization.py", line 179, in default_restore_location
+ location + ")")
RuntimeError: don't know how to restore data location of torch.FloatStorage (tagged with xla:0)
```
This happens when loading a partially trained model.
A reference implementation is this
https://github.com/pytorch-tpu/fairseq/blob/tpu/fairseq/trainer.py#L195
With a discussion here https://github.com/pytorch/xla/issues/1343
Model I am using (Bert, XLNet ...): any model
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Train any model on TPU, wait for a checkpoint to happen
2. Move the tokenizer files to the checkpoint dir (another bug: the trainer expects the tokenizer configs to be present in the same directory as the checkpoint dir, which only happens at the very end of training, not at one of the earlier checkpoints)
3. Restart training again from the checkpoint on TPU
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Trainer loads the optimizer and scheduler to TPU and starts training.
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0 (master)
- Platform: Linux-5.3.0-1026-gcp-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0a0+6bdfd6a (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: yes, 8 way with xla_spawn.py
| 06-12-2020 19:00:51 | 06-12-2020 19:00:51 | Thanks @misrasaurabh1, I'll look into it.
We'll have to add such a test into the TPU CI once we have it (sooner rather than later).<|||||>Any updates on getting rid of this error?
Makes it hard to use TPUs because preemptible machines cannot be used in Google Cloud if there is no way to resume from checkpoints.
Thanks!<|||||>I encountered the same issue which I found to be due to the fact that the script cannot map the optimizer to the
proper tpu device, here's the line in question:
https://github.com/huggingface/transformers/blob/d088d744adb4e5aa45262a34acab3ae9e81de169/src/transformers/trainer.py#L403
My solution was to replace
```
optimizer.load_state_dict(
    torch.load(os.path.join(model_path, "optimizer.pt"), map_location=self.args.device)
)
```
by:
```
if is_torch_tpu_available():
    # load state_dict on CPU and then transfer object to xla device
    optimizer.load_state_dict(torch.load(os.path.join(model_path, "optimizer.pt")))
    xm.send_cpu_data_to_device(optimizer, xm.xla_device())
else:
    optimizer.load_state_dict(
        torch.load(os.path.join(model_path, "optimizer.pt"), map_location=self.args.device)
    )
```
that seemed to have done the trick with torch-xla-nightly. hope this helps
<|||||>> I encountered the same issue which I found to be due to the fact that the script cannot map the optimizer to the
> proper tpu device, here's the line in question:
>
> https://github.com/huggingface/transformers/blob/d088d744adb4e5aa45262a34acab3ae9e81de169/src/transformers/trainer.py#L403
>
> My solution was to replace
>
> ```
> optimizer.load_state_dict(
>     torch.load(os.path.join(model_path, "optimizer.pt"), map_location=self.args.device)
> )
> ```
>
> by:
>
> ```
> if is_torch_tpu_available():
>     # load state_dict on CPU and then transfer object to xla device
>     optimizer.load_state_dict(torch.load(os.path.join(model_path, "optimizer.pt")))
>     xm.send_cpu_data_to_device(optimizer, xm.xla_device())
> else:
>     optimizer.load_state_dict(
>         torch.load(os.path.join(model_path, "optimizer.pt"), map_location=self.args.device)
>     )
> ```
>
> that seemed to have done the trick with torch-xla-nightly. hope this helps
I tried:
```python
print(device)
# BUG: can't simply map to the XLA device at the moment
if TPU_ACCELERATOR:
    # load state_dict on CPU and then transfer object to XLA device
    net.load_state_dict(torch.load(model_load_file, map_location="cpu"))
    xm.send_cpu_data_to_device(net, device)
else:
    net.load_state_dict(torch.load(model_load_file, map_location=device))
```
and it failed with:
```
xla:4
<ipython-input-4-bbc8442c330c> in <module>
96 # load state_dict on CPU and then transfer object to XLA device
97 net.load_state_dict(torch.load(model_load_file, map_location="cpu"))
---> 98 xm.send_cpu_data_to_device(net, device)
99 else:
100 net.load_state_dict(torch.load(model_load_file, map_location=device))
/opt/conda/lib/python3.7/site-packages/torch_xla/core/xla_model.py in send_cpu_data_to_device(data, device)
629 return type(v) == torch.Tensor and v.device.type == 'cpu'
630
--> 631 return ToXlaTensorArena(convert_fn, select_fn).transform(data)
632
633
/opt/conda/lib/python3.7/site-packages/torch_xla/core/xla_model.py in transform(self, inputs)
312 self._collect_tensors(inputs)
313 self._convert()
--> 314 return self._replace_tensors(inputs)
315
316
/opt/conda/lib/python3.7/site-packages/torch_xla/core/xla_model.py in _replace_tensors(self, inputs)
306
307 return xu.for_each_instance_rewrite(inputs, lambda x: self._select_fn(x),
--> 308 convert_fn)
309
310 def transform(self, inputs):
/opt/conda/lib/python3.7/site-packages/torch_xla/utils/utils.py in for_each_instance_rewrite(value, select_fn, fn)
197 def for_each_instance_rewrite(value, select_fn, fn):
198 rwmap = dict()
--> 199 return _for_each_instance_rewrite(value, select_fn, fn, rwmap)
200
201
/opt/conda/lib/python3.7/site-packages/torch_xla/utils/utils.py in _for_each_instance_rewrite(value, select_fn, fn, rwmap)
188 rwmap[id(value)] = result
189 for k in result.__dict__.keys():
--> 190 v = _for_each_instance_rewrite(result.__dict__[k], select_fn, fn, rwmap)
191 result.__dict__[k] = v
192 else:
/opt/conda/lib/python3.7/site-packages/torch_xla/utils/utils.py in _for_each_instance_rewrite(value, select_fn, fn, rwmap)
189 for k in result.__dict__.keys():
190 v = _for_each_instance_rewrite(result.__dict__[k], select_fn, fn, rwmap)
--> 191 result.__dict__[k] = v
192 else:
193 rwmap[id(value)] = result
TypeError: 'mappingproxy' object does not support item assignment
```<|||||>How about something simpler like
```python
# load state_dict on CPU and then transfer object to XLA device
net.load_state_dict(torch.load(model_load_file, map_location="cpu"))
net.to(device)
```
?
I think it does the job just fine.<|||||>> I encountered the same issue which I found to be due to the fact that the script cannot map the optimizer to the
> proper tpu device, here's the line in question:
> https://github.com/huggingface/transformers/blob/d088d744adb4e5aa45262a34acab3ae9e81de169/src/transformers/trainer.py#L403
>
> My solution was to replace
>
> ```
> optimizer.load_state_dict(
>     torch.load(os.path.join(model_path, "optimizer.pt"), map_location=self.args.device)
> )
> ```
>
> by:
>
> ```
> if is_torch_tpu_available():
>     # load state_dict on CPU and then transfer object to xla device
>     optimizer.load_state_dict(torch.load(os.path.join(model_path, "optimizer.pt")))
>     xm.send_cpu_data_to_device(optimizer, xm.xla_device())
> else:
>     optimizer.load_state_dict(
>         torch.load(os.path.join(model_path, "optimizer.pt"), map_location=self.args.device)
>     )
> ```
>
> that seemed to have done the trick with torch-xla-nightly. hope this helps
This works; however, the progress bar starts from 0 and then takes a long time to get to the step where the checkpoint is! How can I tackle that? I am training on a Cloud TPU (v3-8) and using the xla_spawn script to distribute training among cores<|||||>@LysandreJik Any updates on this bug? It prevents resuming training from a checkpoint on TPUs<|||||>I am also having the same problem when loading a model from a TPU and resuming training. Any solutions? |
transformers | 4,962 | closed | How to implement differential learning rates and still ensure "weight_decay" = 0 for the parameters it should? | I notice that most models are trained with the following parameter groups:
```
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
    {
        "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
        "weight_decay": 1e-5,
    },
    {
        "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
]
optimizer = AdamW(optimizer_grouped_parameters, lr=5e-3, eps=1e-9)
```
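For concreteness, the kind of extension I have in mind keeps the no-decay split above and adds a per-group `lr` (a rough sketch — the `"encoder"` name filter and the learning-rate values are placeholders, not a tested recipe):
```
def param_groups(name_filter, lr, weight_decay=1e-5):
    decay = [p for n, p in model.named_parameters()
             if name_filter(n) and not any(nd in n for nd in no_decay)]
    no_dec = [p for n, p in model.named_parameters()
              if name_filter(n) and any(nd in n for nd in no_decay)]
    return [
        {"params": decay, "weight_decay": weight_decay, "lr": lr},
        {"params": no_dec, "weight_decay": 0.0, "lr": lr},
    ]

optimizer_grouped_parameters = (
    param_groups(lambda n: "encoder" in n, lr=1e-5)         # smaller lr for the encoder stack
    + param_groups(lambda n: "encoder" not in n, lr=5e-5)   # larger lr for everything else
)
optimizer = AdamW(optimizer_grouped_parameters, lr=5e-5, eps=1e-9)  # lr here is only a fallback default
```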
**Questions:**
1. Why do we set `weight_decay` = 0.0 for the `no_decay` named parameters?
2. Assuming we need to set `weight_decay` = 0.0 as such, how would we extend the above to allow for differential learning rates (which may be especially helpful for models that include both an encoder and decoder stack)?
3. Have there been any tests to demonstrate the relative effectiveness of differential learning rates for the various transformer models? | 06-12-2020 18:57:37 | 06-12-2020 18:57:37 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,961 | closed | Extending Encoder Decoder to GPT-2 | Adding GPT2 initialization for EncoderDecoder model as pointed out in the issue below.
> Currently, only Bert works as a decoder. We might add GPT2 in a couple of weeks. Note that no model has `cross-attention` layers if it is not already an encoder-decoder model (like Bart or T5) and in this case it does not make sense to use the encoder-decoder wrapper. The model is initialized with random weights for the cross attention layers which will have to be fine-tuned. I agree, that this should be made clearer in the documentation!
_Originally posted by @patrickvonplaten in https://github.com/huggingface/transformers/issues/4517#issuecomment-638058577_ | 06-12-2020 17:40:51 | 06-12-2020 17:40:51 | It's on the roadmap :-) <|||||>Thank you! Look forward to it :)<|||||>Hi - I've actually been working on this myself the past couple days, should I submit a PR when finished? <|||||>That'd be great!<|||||>Will do - likely sometime this week. <|||||>@djw1809 Any update on the PR? :)<|||||>@patrickvonplaten Hello Patrick, I am watching with much interest EncodeDecoder from transformers :) . Any updates on supporting GPT2 with EncodeDecoder ?<|||||>Got sidetracked with other research - coming back to it in several days,
working on my end, just need to play nice with the rest of the repo.
On Tue, Jul 7, 2020 at 3:32 PM Mihai Ilie <[email protected]> wrote:
> @patrickvonplaten <https://github.com/patrickvonplaten> Hello Patrick, I
> am watching with much interest EncodeDecoder from transformers :) . Any
> updates on supporting GPT2 with EncodeDecoder ?
>
<|||||>@djw1809 - also feel free to already open a PR with unfinished code yet so that I can take a look early on and help you :-) <|||||>Working on it now. Also linking this PR: #4483<|||||>@patrickvonplaten Hello Patrick.
As I see from https://github.com/huggingface/transformers/commit/1d6e71e1167dea9e026391ec5a1a2d7ec33d22af, the current cross-attention implementation assumes the encoder has the same hidden size as GPT-2. I have an encoder with hidden size 512 and want to combine it with GPT-2 medium, whose hidden size is 1024. I have done this with Fairseq code and now want to do the same with Hugging Face. Could you update your solution to support any suitable encoder hidden size?
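For illustration, what I have in mind is essentially a learned projection that bridges the two widths before the decoder's cross-attention (a rough sketch with assumed names, not actual library code):
```python
import torch.nn as nn

# hypothetical bridge between a 512-dim encoder and the 1024-dim GPT-2 medium decoder
enc_to_dec_proj = nn.Linear(512, 1024)
encoder_hidden_states = enc_to_dec_proj(encoder_hidden_states)  # then passed to the cross-attention layers
```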
I see what you mean - this would mean to add a new config param for each model that has cross-attention...is this common practice? Would be great if you could open a new issue for that :-) <|||||>Done https://github.com/huggingface/transformers/issues/6645<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,960 | closed | unexpected keyword argument 'lm_labels' when using BertModel as Decoder with EncoderDecoderModel | The `BertModel.forward()` method does not expect `lm_labels` and `masked_lm_labels` arguments. Yet, it looks like the `EncoderDecoderModel.forward()` method calls its decoder's `forward()` method with those arguments, which throws a TypeError when a BertModel is used as a decoder.
Am I using the BertModel incorrectly? I can get rid of the error by modifying the EncoderDecoderModel to not use those arguments for the decoder.
Exact Error:
```
File "/Users/utkarsh/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Users/utkarsh/Projects/ai4code/transformers/bert2bert/models.py", line 12, in forward
dec_out, dec_cls, enc_out, enc_cls = self.bertmodel(input_ids=inputs, attention_mask=input_masks, decoder_input_ids=targets, decoder_attention_mask=target_masks)
File "/Users/utkarsh/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Users/utkarsh/anaconda3/envs/py37/lib/python3.7/site-packages/transformers/modeling_encoder_decoder.py", line 283, in forward
**kwargs_decoder,
File "/Users/utkarsh/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'lm_labels'
```
Relevant part of the code:
```
encoder = BertModel(enc_config)
dec_config = BertConfig(...,is_decoder=True)
decoder = BertModel(dec_config)
model = EncoderDecoderModel(encoder=encoder, decoder=decoder)
```
...
`dec_out, dec_cls, enc_out, enc_cls = model(input_ids=inputs, attention_mask=input_masks, decoder_input_ids=targets, decoder_attention_mask=target_masks)`
| 06-12-2020 16:29:50 | 06-12-2020 16:29:50 | I'm facing the same problem. Since #4874 it seems like it should be just `labels` instead of `lm_labels`. According to the documentation it should do masked language modeling-loss, but from my debugging it seems like it actually does next word prediction-loss.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,959 | closed | Add AlbertForMultipleChoice | Cleaning up the rebase and opening a fresh PR. | 06-12-2020 13:52:13 | 06-12-2020 13:52:13 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4959?src=pr&el=h1) Report
> Merging [#4959](https://codecov.io/gh/huggingface/transformers/pull/4959?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02e5f79662d72cccdca81a47e3001a5f6d36e5b1&el=desc) will **increase** coverage by `0.76%`.
> The diff coverage is `86.43%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4959?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4959 +/- ##
==========================================
+ Coverage 76.46% 77.23% +0.76%
==========================================
Files 128 128
Lines 21502 21818 +316
==========================================
+ Hits 16442 16851 +409
+ Misses 5060 4967 -93
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4959?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | |
| [src/transformers/modeling\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/modeling\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `22.11% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jYW1lbWJlcnQucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `25.65% <0.00%> (ø)` | |
| [src/transformers/modeling\_tf\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG1fcm9iZXJ0YS5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/modeling\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG1fcm9iZXJ0YS5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `19.04% <0.00%> (ø)` | |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.48% <36.00%> (-0.14%)` | :arrow_down: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.58% <55.55%> (-3.02%)` | :arrow_down: |
| ... and [47 more](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4959?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4959?src=pr&el=footer). Last update [02e5f79...ef1e404](https://codecov.io/gh/huggingface/transformers/pull/4959?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Very clean! |
transformers | 4,958 | closed | Issue with an inline code comment | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [x ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
https://github.com/huggingface/transformers/blob/d4c2cb402d6674211726fd5f4803d1090664e438/examples/question-answering/run_squad.py#L322
This specific comment is wrong.
I wasted a lot of time because of this comment.
`i` and `feature_index` are not the same, no?
:)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| 06-12-2020 11:25:12 | 06-12-2020 11:25:12 | |
transformers | 4,957 | closed | Memory leakage with bert-large-uncased-whole-word-masking-finetuned-squad | # 🐛 Bug
## Information
I am using 'bert-large-uncased-whole-word-masking-finetuned-squad' and observed a memory leak during inference. Below is the code snippet to reproduce it:
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
model = AutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
device = 'cuda'
model.to(device)
question = 'what is your question'
answer_context = 'This is the answer'

def tokenize_qa(tokenizer, question, answer_context):
    input_ids = tokenizer.encode(question, answer_context, max_length=512)
    tokens = tokenizer.convert_ids_to_tokens(input_ids)
    sep_index = input_ids.index(tokenizer.sep_token_id)
    num_seg_a = sep_index + 1
    num_seg_b = len(input_ids) - num_seg_a
    segment_ids = [0] * num_seg_a + [1] * num_seg_b
    assert len(segment_ids) == len(input_ids)
    return input_ids, segment_ids, tokens

###################### Inference Code ##########################
input_ids, segment_ids, tokens = tokenize_qa(tokenizer, question, answer_context)
start_scores, end_scores = model(torch.tensor([input_ids], device=device), token_type_ids=torch.tensor([segment_ids], device=device))
```
When I run the inference code multiple times, it accumulates a large amount of GPU memory and finally runs OOM. Please check and let me know if anything is wrong here.
Thank you.
Language I am using the model : English,
- `transformers` version: '2.11.0'
- Platform: Ubuntu 18.04
- Python version:3.6.9
- PyTorch version (GPU): 1.4.0
| 06-12-2020 09:31:15 | 06-12-2020 09:31:15 | Hi, @ashispapu, you might wanna wrap the inference code with
`with torch.no_grad():`<|||||>Thanks for the response. Should not it be grad disabled by default like other models in transformer during inference.<|||||>I don't think so, how would the model know that you are doing training or inference. You can use `pipeline` for question answering, that takes care of such things |
transformers | 4,956 | closed | RobertaForMaskedLM Failing for example code given on Hugging Face's documentation page | # 🐛 Bug
```python
from transformers import RobertaTokenizer, RobertaForMaskedLM
import torch

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaForMaskedLM.from_pretrained('roberta-base')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, labels=input_ids)
loss, prediction_scores = outputs[:2]
```
I am using this example code given on Hugging Face's documentation page, but it also gives an error:
TypeError: forward() got an unexpected keyword argument 'labels'
The doc clearly states that forward accepts labels, so what's wrong here? | 06-12-2020 09:05:33 | 06-12-2020 09:05:33 | Which version of the library are you using? The documentation corresponds to the master branch and the change in the argument names was pretty recent, so I don't think it's in the latest release yet. You should either [install from source](https://github.com/huggingface/transformers#from-source) or change the code to use the (soon-to-be deprecated) argument `masked_lm_labels` (see the documentation for the latest release [here](https://huggingface.co/transformers/v2.5.0/model_doc/bert.html#bertformaskedlm)).<|||||>Thank you @sgugger for the latest documentation. I reinstalled transformers again yesterday so I guess I am using the latest version
I was scratching my head due to this |
transformers | 4,955 | closed | How to use fine-tuned BERT to fill <mask> | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
I have fine-tuned a BERT model on a classification task using transformers.BertForSequenceClassification. Now I want to use this model to fill a mask when I give it an input like: 'my dog \<mask\> beautiful'.
Can this be implemented?
I would really appreciate it if someone could teach me how to implement this.
 | 06-12-2020 08:59:25 | 06-12-2020 08:59:25 | Hi @Odimmsun , `BertForSequenceClassification` is used for a classification task and not for mask-filling. You can use the pre-trained BERT model as it is for mask-filling. There is a pipeline for this task, which you can find here https://huggingface.co/transformers/usage.html#masked-language-modeling
Basically what you'll need to do is this
```python3
from transformers import pipeline
nlp = pipeline("fill-mask")
print(nlp(f"HuggingFace is creating a {nlp.tokenizer.mask_token} that the community uses to solve NLP tasks."))
```
you can also pass your own model to the pipeline using the `model` parameter.<|||||>> Hi @Odimmsun , `BertForSequenceClassification` is used for classification task and not for mak-filling. You can use the pre-trained BERT model as it is for make-filling. There is pipeline for this task which you can find here https://huggingface.co/transformers/usage.html#masked-language-modeling
>
> Basically what you'll need to do is this
>
> ```python
> from transformers import pipeline
>
> nlp = pipeline("fill-mask")
> print(nlp(f"HuggingFace is creating a {nlp.tokenizer.mask_token} that the community uses to solve NLP tasks."))
> ```
>
> you can also pass your own model to the pipeline using the `model` parameter.
Hi @patil-suraj ,thank you for your reply. Now, if i use "nlp = pipeline("fill-mask")", the model i used is not fine-tuned, but i want to use a fine-tuned model to fill mask. How should i do to implement this?
<|||||>The pre-trained BERT model does mask-filling out of the box, but if you want to use your own fine-tuned model then just pass the model(path or url) to the `model` parameter .
```python3
nlp = pipeline("fill-mask", model="your_model_path")
```<|||||>\`---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-12-d85876ca5c10> in <module>()
1 my_model.to('cpu')
2 nlp = pipeline(task='fill-mask', model=my_model, tokenizer=tokenizer)
----> 3 nlp('我是你<mask>爸爸')
/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
804 values, predictions = topk.values.numpy(), topk.indices.numpy()
805 else:
--> 806 masked_index = (input_ids == self.tokenizer.mask_token_id).nonzero().item()
807 logits = outputs[i, masked_index, :]
808 probs = logits.softmax(dim=0)
ValueError: only one element tensors can be converted to Python `scalars`
\`
hi, my model is fine-tuned by BertForSequenceClassification. Then i use it to fill mask, the error raised as above. Now, i am confused.<|||||>`BertForSequenceClassification` is meant for classification, it won't work for mask-filling task. If you want to fine-tune BERT for masked lm task then you should use BertForMaskedLM<|||||>Check this example for how to fine-tune bert for masked LM
https://github.com/huggingface/transformers/tree/master/examples/language-modeling#robertabertdistilbert-and-masked-language-modeling<|||||>Thank @patil-suraj, do you know how to use BART with in-filling scheme (where spans of text are replaced with a single mask token)? I have not seen this pipline
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>> `---------------------------------------------------------------------------
> ValueError Traceback (most recent call last)
> in ()
> 1 my_model.to('cpu')
> 2 nlp = pipeline(task='fill-mask', model=my_model, tokenizer=tokenizer)
> ----> 3 nlp('我是你爸爸')
>
> /usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in **call**(self, *args, **kwargs)
> 804 values, predictions = topk.values.numpy(), topk.indices.numpy()
> 805 else:
> --> 806 masked_index = (input_ids == self.tokenizer.mask_token_id).nonzero().item()
> 807 logits = outputs[i, masked_index, :]
> 808 probs = logits.softmax(dim=0)
>
> ValueError: only one element tensors can be converted to Python `scalars`
> `
> hi, my model is fine-tuned by BertForSequenceClassification. Then i use it to fill mask, the error raised as above. Now, i am confused.
Nice one, bro! |
transformers | 4,954 | closed | ElectraForMultipleChoice | This PR adds `ElectraForMultipleChoice`, one of the missing models in this [project](https://github.com/huggingface/transformers/projects/17).
Since pooled outputs are needed for multiple choice, this also adds an `ElectraPooler` class.
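For reference, a pooler of this kind typically follows BERT-style pooling over the first token; a rough sketch of the shape of the class (illustrative only, not the exact diff):
```python
import torch.nn as nn

class ElectraPooler(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.activation = nn.Tanh()

    def forward(self, hidden_states):
        # pool by taking the hidden state corresponding to the first token
        first_token_state = hidden_states[:, 0]
        return self.activation(self.dense(first_token_state))
```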
@sgugger , @LysandreJik | 06-12-2020 07:48:21 | 06-12-2020 07:48:21 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4954?src=pr&el=h1) Report
> Merging [#4954](https://codecov.io/gh/huggingface/transformers/pull/4954?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/efeb75b8054cc299698cf8bc09f395ada2660745&el=desc) will **increase** coverage by `0.08%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4954?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4954 +/- ##
==========================================
+ Coverage 77.24% 77.33% +0.08%
==========================================
Files 133 133
Lines 22134 22166 +32
==========================================
+ Hits 17097 17141 +44
+ Misses 5037 5025 -12
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4954?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4954/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.16% <ø> (ø)` | |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4954/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.76% <ø> (ø)` | |
| [src/transformers/configuration\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4954/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VsZWN0cmEucHk=) | `100.00% <100.00%> (ø)` | |
| [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4954/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `80.12% <100.00%> (+1.95%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4954/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4954/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4954/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.57% <0.00%> (+1.55%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4954?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4954?src=pr&el=footer). Last update [efeb75b...9cb71f9](https://codecov.io/gh/huggingface/transformers/pull/4954?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>hi @LysandreJik could you tell me why these tests are failing now ? Thanks.<|||||>Looks like @LysandreJik deleted a function by mistake during the merge (the `create_and_check_electra_for_multiple_choice`), could you add it back?<|||||>My bad, sorry about that. Thanks for the fix! |
transformers | 4,953 | closed | Added feature to move added tokens in vocabulary for Transformer-XL | As discussed in #3554 the tokens in the tokenizer have to be shifted if adding a new token into aka resizing an embedding layer other than the last one. Of course this applies only for an `AdaptiveEmbedding` with more than one layer.
This implementation adds a function to move an added token in the tokenizer to a specific position.
This is closely related to the PR #4759 | 06-12-2020 07:28:32 | 06-12-2020 07:28:32 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4953?src=pr&el=h1) Report
> Merging [#4953](https://codecov.io/gh/huggingface/transformers/pull/4953?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5620033115e571013325d017bcca92991b0a4ace&el=desc) will **increase** coverage by `0.72%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4953?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4953 +/- ##
==========================================
+ Coverage 76.49% 77.21% +0.72%
==========================================
Files 128 128
Lines 21745 21756 +11
==========================================
+ Hits 16633 16799 +166
+ Misses 5112 4957 -155
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4953?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.21% <100.00%> (+1.53%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.49% <0.00%> (-0.12%)` | :arrow_down: |
| [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `73.09% <0.00%> (+0.29%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (+0.77%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.48% <0.00%> (+1.40%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.14% <0.00%> (+2.57%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4953?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4953?src=pr&el=footer). Last update [5620033...265ea74](https://codecov.io/gh/huggingface/transformers/pull/4953?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Left some nitpicks, but overall this looks great :-) <|||||>LGTM |
transformers | 4,952 | closed | Data used for training MarianMT models | # ❓ Questions & Help
Are there specifications available on the amount and type of training data used for the released MarianMT [models](https://huggingface.co/transformers/model_doc/marian.html)?
## Details
<!-- Description of your issue -->
I want to use these models for back translation in order to augment my current training data (the [VQA dataset](http://visualqa.org/) which is in English). I understand that for this, I could use any number of models in conjunction with source and target languages as `en`.
However, I am concerned about the domain mismatch of the VQA dataset vs training data of the MT model. For example, using `en-ROMANCE` and `ROMANCE-en` models, I find that back-translation of the input sentence:
`['>>lmo<< What do you think are the children playing with?']` is
`['What do you think of the youth of the jubilee?']`
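(For reference, the round trip above was produced with something along these lines — a sketch; the exact tokenizer call may differ between library versions, and the checkpoint names are the en-ROMANCE pair referred to above:)
```python
from transformers import MarianMTModel, MarianTokenizer

fwd = "Helsinki-NLP/opus-mt-en-ROMANCE"
bwd = "Helsinki-NLP/opus-mt-ROMANCE-en"
fwd_tok, fwd_model = MarianTokenizer.from_pretrained(fwd), MarianMTModel.from_pretrained(fwd)
bwd_tok, bwd_model = MarianTokenizer.from_pretrained(bwd), MarianMTModel.from_pretrained(bwd)

src = [">>lmo<< What do you think are the children playing with?"]
# en -> ROMANCE
mid_ids = fwd_model.generate(**fwd_tok(src, return_tensors="pt", padding=True))
romance = [fwd_tok.decode(t, skip_special_tokens=True) for t in mid_ids]
# ROMANCE -> en
back_ids = bwd_model.generate(**bwd_tok(romance, return_tensors="pt", padding=True))
print([bwd_tok.decode(t, skip_special_tokens=True) for t in back_ids])
```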
Also, do you have any suggestions/guidelines on which/how many models to use particularly for this use case? I did not find this question suitable for SO hence it posted here directly
Thank you for releasing the models!
| 06-12-2020 04:20:52 | 06-12-2020 04:20:52 | Hi, @yashkant, you might be able to find these details here https://github.com/Helsinki-NLP/Opus-MT |
transformers | 4,951 | closed | [examples] SummarizationModule improvements | This PR makes the SummarizationTrainer much more usable, and when improvements are not unique to summarization, they are implemented in `lightning_base.py` instead.
- **Checkpointing** Before this PR, the code saves 5GB of PL checkpoints per epoch, now SummarizationTrainer saves the best checkpoint based on ROUGE 2 score, and also saves it in huggingface `save_pretrained` format using the `on_save_checkpoint`. This will help resolve lots of confusion in various issues about how to load the pl checkpoints.
The current summarization code can only accept bs=1 and takes 24h to run 1 epoch on CNN DM. With the following changes, you can train much faster, if you wish. The docs suggested that larger batch sizes were possible with default params, which is fixed.
### Changes to Allow Faster Summarization Training
*these are all optional and turned off by default*
1) freezing: before this PR, it was basically only possible to finetune with batchsize 2-4 on a 16GB system. With `--freeze_embeds` and `--freeze_encoder`, you can get batch size MUCH higher, towards 32. I've seen strong results with these options.
2) On CNNDM and XSUM the datasets are 200K examples, and epochs are very long. For this reason it is preferable to run validation (and get a rouge score) more frequently, but with previous params each `validation_step` took 1hr. By passing `--n_val=1000 --val_check_interval=0.25`, you can run validation 4x per epoch and it only takes 3 minutes. It also allows the config's beam search parameters to be used, rather than hardcoding faster but lower scoring ones.
3) `{train|val|test}_max_target_length`: I have found it preferable to truncate train summaries to 56 for XSUM and CNNDM respectively, but doing this for val/test artificially inflates rouge scores. So these clargs are separated.
Changes to `lightning_base`
- Number of trainable parameters and total parameters are logged by default.
- All possible `pl.Trainer` clargs are passed through `add_generic_args` (Inspired by @nateraw)
### WandbLogger
- `--logger wandb` will instantiate a default wandb logger.
- `--logger wandb_shared` will post results to [here](https://app.wandb.ai/sshleifer/hf_summarization/table?workspace=user-), so that the community can compare hyperparameter settings empirically.
- the default logger is still tensorboard logger because it doesn't require making an account.
### Distillation
- `SummarizationDistiller` and `T5SummarizationDistiller` are checked in. This code was sent to me by a researcher who wishes to remain anonymous. DM to discuss. | 06-12-2020 03:53:04 | 06-12-2020 03:53:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4951?src=pr&el=h1) Report
> Merging [#4951](https://codecov.io/gh/huggingface/transformers/pull/4951?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c852036b4abca2c20e1adf92eda48472a7d84ef0&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4951?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4951 +/- ##
=======================================
Coverage 77.41% 77.42%
=======================================
Files 130 130
Lines 22023 22023
=======================================
+ Hits 17050 17051 +1
+ Misses 4973 4972 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4951?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4951/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.26% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4951/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (ø)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4951/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4951?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4951?src=pr&el=footer). Last update [c852036...b7e1d5e](https://codecov.io/gh/huggingface/transformers/pull/4951?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Merging now. Happy to address post-merge comments! |
transformers | 4,950 | closed | GPTDoubleHeadsModel Unexpected node type: onnx:: Sub | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT2DoubleHeadsModel
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I've been following the ipython notebook provided [here](https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb)
1. Take an off-the-shelf pretrained `gpt` model and export to onnx format using the following script:
```
import torch
from transformers import (GPT2Config, GPT2Model, GPT2Tokenizer)
# use_cache is True by default in GPT2Model. Here we wrap a class to disable past state output.
class GPT2DoubleHeadsModelNoPastState(GPT2DoubleHeadsModel):
    def __init__(self, config):
        super().__init__(config)

    def forward(self, input_ids, token_type_ids):
        return super().forward(input_ids, past=None, attention_mask=None, token_type_ids=token_type_ids, use_cache=False)
model_name="gpt2"
config = GPT2Config.from_pretrained(model_name)
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2ModelNoPastState.from_pretrained(model_name)
example_inputs = tokenizer.encode_plus("This is a sample input", return_tensors="pt")
del example_inputs["attention_mask"]
example_outputs = model(**example_inputs)
input_names = ['input_ids', 'token_type_ids']
output_names=['output_1', 'output_2']
dynamic_axes={'input_ids': {0: 'batch_size', 1: 'num_choices', 2: 'seq_len'},
              'token_type_ids': {0: 'batch_size', 1: 'num_choices', 2: 'seq_len'},
              'output_1': {0: 'batch_size', 1: 'num_choices': 2: 'seq_len', 3: 'vocab_size'},
              'output_2': {0: 'batch_size', 1: 'num_choices'}
              }
output_path='gpt2.onnx'
torch.onnx.export(model=model,
                  args=(example_inputs[input_names[0]].unsqueeze(0), example_inputs[input_names[1]].unsqueeze(0)),
                  f=output_path,
                  input_names=input_names,
                  output_names=output_names,
                  example_outputs=example_outputs,
                  dynamic_axes=dynamic_axes,
                  do_constant_folding=True,
                  opset_version=11,
                  use_external_data_format=False)
```
This script is based off of #4805
2.
After invoking the above, I get the error:
```
.../torch/onnx/symbolic_helper.py", line 87...
RuntimeError: Unexpected node type: onnx::Sub
```
## Expected behavior
I would expect this to work successfully, and unfortunately I'm not exactly sure how to interpret this error. There's not a lot of documentation online.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: Commit 0e1869cc286d607f1598506be7bd1312b76ca82c
- Onnxruntime: 1.3.0
- Python version: 3.6.10
- PyTorch version (GPU?): 1.5.0+cu101
- Using GPU in script?: Yes
Thanks for your help! @mfuntowicz @tianleiwu
| 06-12-2020 01:31:41 | 06-12-2020 01:31:41 | Hi @mihail911, thanks for reporting the issue and the script to reproduce.
I can confirm the issue, as it seems to happen on PyTorch side, I suspect it's a bug on their side. @tianleiwu should we forward the issue on PyTorch issue tracker?
Slightly updated the script to avoid errors:
```python
import torch
from transformers import (GPT2Config, GPT2Model, GPT2Tokenizer, GPT2DoubleHeadsModel)
# use_cache is True by default in GPT2Model. Here we wrap a class to disable past state output.
class GPT2DoubleHeadsModelNoPastState(GPT2DoubleHeadsModel):
    def __init__(self, config):
        super().__init__(config)

    def forward(self, input_ids, token_type_ids):
        return super().forward(input_ids, past=None, attention_mask=None, token_type_ids=token_type_ids, use_cache=False)
model_name="gpt2"
config = GPT2Config.from_pretrained(model_name)
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2DoubleHeadsModelNoPastState.from_pretrained(model_name)
example_inputs = tokenizer.encode_plus("This is a sample input", return_tensors="pt")
del example_inputs["attention_mask"]
example_outputs = model(**example_inputs)
input_names = ['input_ids', 'token_type_ids']
output_names=['output_1', 'output_2']
dynamic_axes={'input_ids': {0: 'batch_size', 1: 'num_choices', 2: 'seq_len'},
              'token_type_ids': {0: 'batch_size', 1: 'num_choices', 2: 'seq_len'},
              'output_1': {0: 'batch_size', 1: 'num_choices', 2: 'seq_len', 3: 'vocab_size'},
              'output_2': {0: 'batch_size', 1: 'num_choices'}
              }
output_path='gpt2.onnx'
torch.onnx.export(model=model,
                  args=(example_inputs[input_names[0]].unsqueeze(0), example_inputs[input_names[1]].unsqueeze(0)),
                  f=output_path,
                  input_names=input_names,
                  output_names=output_names,
                  example_outputs=example_outputs,
                  dynamic_axes=dynamic_axes,
                  do_constant_folding=True,
                  opset_version=11,
                  use_external_data_format=False)
```<|||||>@mfuntowicz, I've forwarded the issue to the developer of pytorch onnx exporter.
I did narrow down the issue to [one line](https://github.com/huggingface/transformers/blob/ca5e1cdf8e314288bd0242a531815a6c75d8178e/src/transformers/modeling_utils.py#L2056). A walk-around is to add int() to cast data type:
Before:
```
cls_index = torch.full_like(hidden_states[..., :1, :], hidden_states.shape[-2] - 1, dtype=torch.long,)
```
After:
```
cls_index = torch.full_like(hidden_states[..., :1, :], int(hidden_states.shape[-2]) - 1, dtype=torch.long,)
```
@mihail911, could you try this (need install transformers from source) to see whether you can export the model?
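Once the export succeeds, a quick sanity check of the resulting graph with ONNX Runtime looks roughly like this (a sketch — input names follow the script above, the shapes and vocab size are illustrative):
```python
import numpy as np
import onnxruntime

sess = onnxruntime.InferenceSession("gpt2.onnx")
dummy = np.random.randint(0, 50257, size=(1, 1, 6), dtype=np.int64)  # (batch, num_choices, seq_len)
outputs = sess.run(None, {"input_ids": dummy, "token_type_ids": np.zeros_like(dummy)})
print([o.shape for o in outputs])
```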
<|||||>Yes I am able to export the model. Thanks @tianleiwu @mfuntowicz! |
transformers | 4,949 | closed | [mbart] Fix fp16 testing logic | the expected logits must be in the same dtype as the resulting logits. | 06-12-2020 00:51:39 | 06-12-2020 00:51:39 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4949?src=pr&el=h1) Report
> Merging [#4949](https://codecov.io/gh/huggingface/transformers/pull/4949?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/473808da0d476792070f0e7dfebcf1121a12a34f&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4949?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4949 +/- ##
=======================================
Coverage 77.14% 77.14%
=======================================
Files 128 128
Lines 21745 21745
=======================================
Hits 16775 16775
Misses 4970 4970
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4949?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4949/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4949/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (+0.40%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4949?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4949?src=pr&el=footer). Last update [473808d...a9ebdd5](https://codecov.io/gh/huggingface/transformers/pull/4949?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,948 | closed | [wip] Send slack message if self-scheduled runner fails | 06-12-2020 00:46:51 | 06-12-2020 00:46:51 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4948?src=pr&el=h1) Report
> Merging [#4948](https://codecov.io/gh/huggingface/transformers/pull/4948?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/473808da0d476792070f0e7dfebcf1121a12a34f&el=desc) will **increase** coverage by `0.06%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4948?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4948 +/- ##
==========================================
+ Coverage 77.14% 77.21% +0.06%
==========================================
Files 128 128
Lines 21745 21745
==========================================
+ Hits 16775 16790 +15
+ Misses 4970 4955 -15
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4948?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4948?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4948?src=pr&el=footer). Last update [473808d...7329595](https://codecov.io/gh/huggingface/transformers/pull/4948?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@julien-c do you know an easy way to test whether this works without merging? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
|
transformers | 4,947 | closed | Trainer.evaluate does not support seq2seq models | # 🐛 Bug
## Information
Hi! I can't thank you enough for Transformers. I know that the Trainer is still under development, but would like to report this just to know the current status.
Currently `Trainer._prediction_loop` assumes that different batches of data have the same shape.
Specifically, [this line](https://github.com/huggingface/transformers/blob/473808da0d476792070f0e7dfebcf1121a12a34f/src/transformers/trainer.py#L786)
```python
preds = torch.cat((preds, logits.detach()), dim=0)
```
This makes it impossible to use Trainer.evaluate for models with variable-length outputs (e.g. seq2seq models). One of the possible solutions is to pad all batches to the same length, but that is pretty inefficient.
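(For illustration, the padding workaround would look roughly like this — a sketch, not the actual Trainer code:)
```python
import torch
import torch.nn.functional as F

def cat_padded(preds, logits, pad_value=-100):
    # pad the shorter tensor along the sequence dimension (dim=1) so the shapes match
    diff = preds.shape[1] - logits.shape[1]
    if diff > 0:
        logits = F.pad(logits, (0, 0, 0, diff), value=pad_value)
    elif diff < 0:
        preds = F.pad(preds, (0, 0, 0, -diff), value=pad_value)
    return torch.cat((preds, logits.detach()), dim=0)
```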
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. create seq2seq model
2. pad batches in such a way that each batch is padded to the maximum length within batch
3. create Trainer for the model, call .evaluate()
```
Traceback (most recent call last):
File "/home/vlialin/miniconda3/lib/python3.7/site-packages/transformers/trainer.py", line 509, in train
self.evaluate()
File "/home/vlialin/miniconda3/lib/python3.7/site-packages/transformers/trainer.py", line 696, in evaluate
output = self._prediction_loop(eval_dataloader, description="Evaluation")
File "/home/vlialin/miniconda3/lib/python3.7/site-packages/transformers/trainer.py", line 767, in _prediction_loop
preds = torch.cat((preds, logits.detach()), dim=0)
RuntimeError: Sizes of tensors must match except in dimension 0. Got 29 and 22 in dimension 1
```
## Expected behavior
Trainer is able to evaluate Seq2seq
## Environment info
- `transformers` version: 2.11
- Platform: Linux
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.0
- Tensorflow version (GPU?): 2.2.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 06-12-2020 00:26:18 | 06-12-2020 00:26:18 | Hi @Guitaricet , if you only want to evaluate for loss (AFAIK this is the case for seq2seq models) then you can set `prediction_loss_only` to `True`<|||||>Hi! Thank you, but I need the metrics too. Workaround was to inherit from `Trainer` and override `_prediction_loop`. <|||||>That sounds like a reasonable solution, but we should document this somewhere. Pinging @sgugger on this:)<|||||>Yes, documentation about trainer would be awesome! Would love to contribute<|||||>Still no updates on this issue?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,946 | closed | feat(TFTrainer): improve logging | This PR supersedes PR #4756
I initially wanted to refactor logging between `Trainer` and `TFTrainer`, but there seem to be too many differences between them for it to make sense (distributed training, logging methods, `wandb.watch`…).
Logging is now handled directly within each Trainer and all comments from previous PR have been applied here.
Notes:
* `TFTrainer`: @jplu There should be the same modifications we discussed. I just moved the check of `global_step` within logging for better readability.
* `TFTrainer._log` does not have a tqdm iterator argument (unlike in `Trainer`) since you're not using it at the moment but it could be useful in the future
* `Trainer`: main update is to handle the case where we do only evaluations (no training) | 06-11-2020 23:06:19 | 06-11-2020 23:06:19 | Integration test error does not seem to be related to this PR.
@jplu On another note, I got strange results while using `run_tf_glue` as learning rate goes quickly to 0.
Command:
`run_tf_glue.py --model_name_or_path bert-base-cased --task_name MRPC --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/MRPC/ --overwrite_output_dir --logging_dir log --evaluate_during_training --eval_steps 50 --logging_steps 10`
Graph

W&B Run: https://app.wandb.ai/borisd13/huggingface/runs/21rxop7c<|||||>Good you have restarted a new PR!
Hmm, for the loss that drops quickly to 0, I think it might come from your side because I get a normal evolution even after multiple runs. Here is an output:
```
06/12/2020 10:05:38 - INFO - transformers.trainer_tf - ***** Running training *****
06/12/2020 10:05:38 - INFO - transformers.trainer_tf - Num examples = 3668
06/12/2020 10:05:38 - INFO - transformers.trainer_tf - Num Epochs = 3
06/12/2020 10:05:38 - INFO - transformers.trainer_tf - Total optimization steps = 29
WARNING:tensorflow:From /home/jplu/transformers/src/transformers/trainer_tf.py:355: StrategyBase.experimental_run_v2 (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
renamed to `run`
06/12/2020 10:05:38 - WARNING - tensorflow - From /home/jplu/transformers/src/transformers/trainer_tf.py:355: StrategyBase.experimental_run_v2 (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
renamed to `run`
WARNING:tensorflow:From /opt/anaconda3/envs/jplu-transformers/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py:1817: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
06/12/2020 10:05:47 - WARNING - tensorflow - From /opt/anaconda3/envs/jplu-transformers/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py:1817: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
06/12/2020 10:06:17 - INFO - tensorflow - Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
06/12/2020 10:06:17 - INFO - tensorflow - Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
06/12/2020 10:06:53 - INFO - tensorflow - Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
06/12/2020 10:06:53 - INFO - tensorflow - Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:batch_all_reduce: 201 all-reduces with algorithm = nccl, num_packs = 1
06/12/2020 10:08:32 - INFO - tensorflow - batch_all_reduce: 201 all-reduces with algorithm = nccl, num_packs = 1
INFO:tensorflow:batch_all_reduce: 201 all-reduces with algorithm = nccl, num_packs = 1
06/12/2020 10:08:48 - INFO - tensorflow - batch_all_reduce: 201 all-reduces with algorithm = nccl, num_packs = 1
06/12/2020 10:09:46 - INFO - transformers.trainer_tf - Epoch 1 Step 10 Train Loss 0.6610
06/12/2020 10:10:02 - INFO - transformers.trainer_tf - Epoch 1 Step 20 Train Loss 0.5626
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
06/12/2020 10:10:46 - INFO - tensorflow - Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
06/12/2020 10:10:46 - INFO - tensorflow - Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
06/12/2020 10:11:15 - INFO - transformers.trainer_tf - Epoch 2 Step 30 Train Loss 0.5585
06/12/2020 10:11:31 - INFO - transformers.trainer_tf - Epoch 2 Step 40 Train Loss 0.5492
06/12/2020 10:11:47 - INFO - transformers.trainer_tf - ***** Running Evaluation *****
06/12/2020 10:11:47 - INFO - transformers.trainer_tf - Batch size = 32
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
06/12/2020 10:11:53 - INFO - tensorflow - Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
06/12/2020 10:11:53 - INFO - tensorflow - Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
06/12/2020 10:12:04 - INFO - tensorflow - Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
06/12/2020 10:12:04 - INFO - tensorflow - Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
06/12/2020 10:12:08 - INFO - transformers.trainer_tf - Epoch 2 Step 50 Validation Metrics {'eval_eval_loss': 0.5300102, 'eval_eval_acc': 0.7230392156862745, 'eval_eval_f1': 0.8274809160305343, 'eval_eval_acc_and_f1': 0.7752600658584043, 'learning_rate': 0.0}
06/12/2020 10:12:08 - INFO - transformers.trainer_tf - Epoch 2 Step 50 Train Loss 0.6273
06/12/2020 10:13:21 - INFO - transformers.trainer_tf - Epoch 3 Step 60 Train Loss 0.6168
06/12/2020 10:13:37 - INFO - transformers.trainer_tf - Epoch 3 Step 70 Train Loss 0.5616
06/12/2020 10:13:53 - INFO - transformers.trainer_tf - Epoch 3 Step 80 Train Loss 0.5551
06/12/2020 10:14:05 - INFO - transformers.trainer_tf - Saving model in /tmp/MRPC/
```
I used the exact same command line than yours.<|||||>I seem to be having a different value of `self.train_steps`.
```
Num examples = 3668
Num Epochs = 3
Total optimization steps = 115
```
Here is my full log output: https://app.wandb.ai/borisd13/huggingface/runs/1u5spvau/logs
Actually I just checked and I'm getting the exact same problem on the `master` branch, so I guess the issue I'm having is independent of this PR.
Let me know if you need any modification on this PR.<|||||>Sorry I don't see what is the issue in your logs, all the values seems ok. I get 29 steps because I run over 4 gpus.<|||||>Here is a gist with full output: https://gist.github.com/borisdayma/62fd9338aae4f373a9c0709a8961f5bc
It's probably independent. I could file a separate issue.<|||||>I still don't see what is the issue sorry, everything looks normal.<|||||>@jplu actually I noticed you have the same issue.
```
06/12/2020 10:12:08 - INFO - transformers.trainer_tf - Epoch 2 Step 50 Validation Metrics {'eval_eval_loss': 0.5300102, 'eval_eval_acc': 0.7230392156862745, 'eval_eval_f1': 0.8274809160305343, 'eval_eval_acc_and_f1': 0.7752600658584043, 'learning_rate': 0.0}
```
In current master, learning_rate is displayed only at eval. It should not be 0 at this specific step.
If you confirm let me know if you want me to file an issue.
In any case, since it's independent from this PR, let me know if you approve it.
It's always difficult to stay in sync with master so I'd like to make all necessary changes as soon as possible since it's a difficult PR.<|||||>This is normal, it is the decay of the LR so at some point it gets 0. It is ok. I will take some time to review your PR this weekend.<|||||>Oh I see what you mean, yes it gets to 0 at the end of the first epoch, and it shouldn't, it is fixed from my side, PR will be here so no worries we can focus on your logging improvement. Thanks!!<|||||>@jplu I addressed your comments. Let me know if I understood correctly:
* `global_step` moved to `__init__`
* `epoch` added to logs directly within training loop
* In addition, I also added `epoch` in eval loop when called from training loop. If we log training loss every 20 steps and we log evaluation metrics every 50 steps, we need to make sure we add `epoch`
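For reference, here is a rough sketch of the idea (illustrative names only, not the actual `TFTrainer` code): the fractional epoch is derived from the global step and attached to every logged dict.
```python
def training_logs(global_step, steps_per_epoch, training_loss, learning_rate):
    """Illustrative helper: attach the fractional epoch to every logged dict."""
    return {
        "loss": float(training_loss),
        "learning_rate": float(learning_rate),
        "epoch": global_step / steps_per_epoch,
    }

# e.g. training_logs(50, 29, 0.62, 2e-5)["epoch"] is about 1.72, i.e. mid-way through epoch 2
```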
I'm going to add a comment for a further simplification of the code if you'd like.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4946?src=pr&el=h1) Report
> Merging [#4946](https://codecov.io/gh/huggingface/transformers/pull/4946?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/473808da0d476792070f0e7dfebcf1121a12a34f&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `18.18%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4946?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4946 +/- ##
==========================================
- Coverage 77.14% 77.14% -0.01%
==========================================
Files 128 128
Lines 21745 21773 +28
==========================================
+ Hits 16775 16796 +21
- Misses 4970 4977 +7
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4946?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `18.98% <11.76%> (-0.07%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <100.00%> (+0.14%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (ø)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4946?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4946?src=pr&el=footer). Last update [473808d...28f004b](https://codecov.io/gh/huggingface/transformers/pull/4946?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Ok, we almost see the end of the tunnel. Just few comments yet :)<|||||>The way it is now, `epoch` and `global_step` will just be 0 in eval mode only. |
transformers | 4,945 | closed | Unable to evaluate on fine-tuned bart for summarization | Hello,
I used the [finetune_bart.sh ](https://github.com/huggingface/transformers/blob/master/examples/summarization/finetune_bart.sh) and was able to finetune the model on my task. I checked the output directory and checkpoints were stored and there were no errors.
Following the training, I ran evaluate_cnn.py using this command.
```
python evaluate_cnn.py <path_to_test.source> test_generations.txt <model-name> --score_path rouge_scores.txt
```
specified [here](https://github.com/huggingface/transformers/tree/master/examples/summarization). I made sure the above command points to test.source for the finetuning task.
However, the evaluation is unable to load the checkpoint and throws the following error
```
OSError: Can't load config for '/..PATH_TO../wiki_bart'. Make sure that:
- '/..PATH_TO../wiki_bart' is a correct model identifier listed on 'https://huggingface.co/models'
- or '/..PATH_TO../wiki_bart' is the correct path to a directory containing a config.json file
```
Seems like it is looking for the config.json which is not present along with the checkpoint files.
I saw the finetune.py has a do_predict argument like do_train. Is that to be used instead of evaluate_cnn.py?
Can you please help? | 06-11-2020 22:23:20 | 06-11-2020 22:23:20 | Got around this by running the following code to generate the config.json and then running evaluate_cnn.py as above.
```
from lightning_base import BaseTransformer
from finetune import SummarizationTrainer
import torch
from argparse import Namespace
args = Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', data_dir='PATH_TO_DATA', do_predict=False, do_train=False, eval_batch_size=4, learning_rate=3e-05, max_source_length=400, max_target_length=400, model_name_or_path='facebook/bart-large', n_gpu=8, num_train_epochs=3, output_dir='PATH_TO_OUTPUT', tokenizer_name='', train_batch_size=4, warmup_steps=0, weight_decay=0.0)
model = SummarizationTrainer(args)
model = model.load_from_checkpoint('PATH_TO_CHECKPOINT')
torch.save(model.state_dict(), args.output_dir + '/pytorch_model.bin')
model.config.to_json_file(args.output_dir + '/config.json')
``` |
transformers | 4,944 | closed | Refactor proposition for multiple choice models | Opening the discussion about refactoring the dupe code in all task-specific models. This is a proposal of design that still leaves the specific classes and their docstrings, does not change the name of their attributes for backward compatibility but delegates the actual forward method to a task-specific method in `PreTrainedModel`.
This is the initial development to get feedback and suggestions for improvements :-)
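For concreteness, a rough sketch of the shape of this design (hypothetical names; the real helper would live in `PreTrainedModel` and each model class keeps its own docstring and attribute names):
```python
import torch.nn as nn

def multiple_choice_forward(backbone, dropout, classifier, input_ids, labels=None, **backbone_kwargs):
    """Hypothetical shared helper: flatten choices, run the backbone once,
    score each choice, and optionally compute the loss."""
    num_choices = input_ids.shape[1]
    flat_input_ids = input_ids.view(-1, input_ids.size(-1))
    pooled_output = backbone(flat_input_ids, **backbone_kwargs)[1]
    reshaped_logits = classifier(dropout(pooled_output)).view(-1, num_choices)
    outputs = (reshaped_logits,)
    if labels is not None:
        outputs = (nn.CrossEntropyLoss()(reshaped_logits, labels),) + outputs
    return outputs
```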
An alternative is to have directly a model with multiple choice built for any architecture that supports the task. Explored that in [this gist](https://gist.github.com/sgugger/edc345943c92b155e0d73ef7a1897c21) if you want to see what it could look like. The main problem with this approach is that the model-specific arguments get hidden in kwargs, not sure if this is a blocker or not. | 06-11-2020 22:14:33 | 06-11-2020 22:14:33 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4944?src=pr&el=h1) Report
> Merging [#4944](https://codecov.io/gh/huggingface/transformers/pull/4944?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/473808da0d476792070f0e7dfebcf1121a12a34f&el=desc) will **decrease** coverage by `0.03%`.
> The diff coverage is `91.66%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4944?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4944 +/- ##
==========================================
- Coverage 77.14% 77.11% -0.04%
==========================================
Files 128 128
Lines 21745 21713 -32
==========================================
- Hits 16775 16743 -32
Misses 4970 4970
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4944?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4944/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.48% <90.00%> (-0.02%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4944/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.86% <100.00%> (-0.33%)` | :arrow_down: |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4944/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `92.75% <100.00%> (-0.28%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4944/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.35% <100.00%> (-0.43%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4944/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.26% <100.00%> (-0.10%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4944/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (ø)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4944?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4944?src=pr&el=footer). Last update [473808d...95d5bad](https://codecov.io/gh/huggingface/transformers/pull/4944?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,943 | closed | NER: fix construction of input examples for RoBERTa | Hi,
this PR avoids adding an extra `</s>` symbol, so that the final sequence ends with `</s> </s>`.
The documentation clearly states, that ther's only one `</s>` expected at the end of a single sequence:
https://huggingface.co/transformers/model_doc/roberta.html#transformers.RobertaTokenizer.build_inputs_with_special_tokens
The `fairseq` reference implementation shows also only one `</s>` at the end:
https://github.com/pytorch/fairseq/tree/master/examples/roberta#apply-byte-pair-encoding-bpe-to-input-text
Technically, the `sep_token_extra` argument is set to `False` now - I didn't remove this parameter, so future models/tokenizers can use it (when they really need it).
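For illustration, a simplified sketch of the resulting construction (not the exact feature-conversion code):
```python
tokens = ["Hu", "##gging", "Face", "Inc", "."]  # word pieces of one sentence
cls_token, sep_token = "<s>", "</s>"
sep_token_extra = False  # kept as a parameter for tokenizers that really need it

tokens = tokens + [sep_token]
if sep_token_extra:
    tokens += [sep_token]  # old behaviour: sequence ended with "</s> </s>"
tokens = [cls_token] + tokens
# -> ['<s>', 'Hu', '##gging', 'Face', 'Inc', '.', '</s>']
```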
This fixes #4755. | 06-11-2020 20:36:34 | 06-11-2020 20:36:34 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4943?src=pr&el=h1) Report
> Merging [#4943](https://codecov.io/gh/huggingface/transformers/pull/4943?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/473808da0d476792070f0e7dfebcf1121a12a34f&el=desc) will **increase** coverage by `0.06%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4943?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4943 +/- ##
==========================================
+ Coverage 77.14% 77.20% +0.06%
==========================================
Files 128 128
Lines 21745 21745
==========================================
+ Hits 16775 16789 +14
+ Misses 4970 4956 -14
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4943?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4943/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (ø)` | |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4943/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4943?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4943?src=pr&el=footer). Last update [473808d...e9eb626](https://codecov.io/gh/huggingface/transformers/pull/4943?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>LGTM, thanks! |
transformers | 4,942 | closed | Dataloader in Trainer num_workers > 0 | # ❓ Questions & Help
I notice that when doing inference/training with a pre-trained language model in a Trainer, only one worker is used. For training it's not a problem as the batch size is usually small, however, for inference, I can largely increase it.
However, it seems that https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L263 only supports num_workers=0. I tried to modify it manually and set num_workers=10, but it had no effect.
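For reference, this is a minimal sketch of the kind of override I mean (hypothetical subclass, sampler/collator wiring omitted for brevity):
```python
from torch.utils.data import DataLoader
from transformers import Trainer

class MultiWorkerTrainer(Trainer):
    def get_eval_dataloader(self, eval_dataset=None):
        dataset = eval_dataset if eval_dataset is not None else self.eval_dataset
        return DataLoader(
            dataset,
            batch_size=self.args.eval_batch_size,
            num_workers=4,  # anything > 0 enables parallel data loading
        )
```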
Am I doing something wrong in my reasoning?
Thanks for your help! | 06-11-2020 20:14:18 | 06-11-2020 20:14:18 | I also have the same issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,941 | closed | [Benchmark] fix indentation error | Because of a wrong indentation, the memory usage was calculated incorrectly | 06-11-2020 19:18:57 | 06-11-2020 19:18:57 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4941?src=pr&el=h1) Report
> Merging [#4941](https://codecov.io/gh/huggingface/transformers/pull/4941?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/699541c4b34479451c91b3c6c204d904f62bed83&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `0.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4941?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4941 +/- ##
==========================================
- Coverage 77.17% 77.17% -0.01%
==========================================
Files 128 128
Lines 21723 21722 -1
==========================================
- Hits 16764 16763 -1
Misses 4959 4959
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4941?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/4941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `69.42% <0.00%> (+0.56%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4941?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4941?src=pr&el=footer). Last update [699541c...7f4d433](https://codecov.io/gh/huggingface/transformers/pull/4941?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,940 | closed | Delay decay schedule until the end of warmup | The decay schedule should start at the end of the warmup steps. Without this simple edit, learning rate will drop at the end of the warmup steps, like the figure shown below:
<img width="378" alt="image" src="https://user-images.githubusercontent.com/15783079/84426617-c5820500-ac38-11ea-9579-2c0d5f374e33.png">
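In code terms, the intended behaviour is roughly the following (a sketch only, not the exact diff):
```python
def learning_rate(step, initial_lr, warmup_steps, decay_schedule_fn):
    """Sketch: ramp up linearly during warmup, then evaluate the decay schedule
    on (step - warmup_steps) so it starts from its initial value instead of
    having already decayed for `warmup_steps` steps."""
    if step < warmup_steps:
        return initial_lr * step / warmup_steps
    return decay_schedule_fn(step - warmup_steps)
```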
| 06-11-2020 18:47:04 | 06-11-2020 18:47:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4940?src=pr&el=h1) Report
> Merging [#4940](https://codecov.io/gh/huggingface/transformers/pull/4940?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6293eb04dfef704e87a6e0b358848ffc41587b4f&el=desc) will **increase** coverage by `0.40%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4940?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4940 +/- ##
==========================================
+ Coverage 77.20% 77.60% +0.40%
==========================================
Files 128 128
Lines 21746 21746
==========================================
+ Hits 16789 16877 +88
+ Misses 4957 4869 -88
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4940?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.27% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+23.24%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4940?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4940?src=pr&el=footer). Last update [6293eb0...96ae67a](https://codecov.io/gh/huggingface/transformers/pull/4940?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@jplu what do you think of this?<|||||>Fix #5098 |
transformers | 4,939 | closed | [cleanup] Hoist ModelTester objects to top level | Fixes #4902
@sshleifer
All tests seem to pass. Please review!
| 06-11-2020 17:38:47 | 06-11-2020 17:38:47 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4939?src=pr&el=h1) Report
> Merging [#4939](https://codecov.io/gh/huggingface/transformers/pull/4939?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f9f8a5312e92541ff9a5f483fc4907ec87da876e&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4939?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4939 +/- ##
=======================================
Coverage 77.39% 77.40%
=======================================
Files 130 130
Lines 22018 22018
=======================================
+ Hits 17041 17042 +1
+ Misses 4977 4976 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4939?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.73% <0.00%> (+0.11%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4939?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4939?src=pr&el=footer). Last update [f9f8a53...16a4a1e](https://codecov.io/gh/huggingface/transformers/pull/4939?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I think we can investigate code reuse between `ModelTester`s in a separate PR, given the enormity of this one :)<|||||>I like the changes! I only checked the files superficially. I trust @aretius and @sshleifer that no tests functionality was changed of got lost :-)<|||||>Thanks, @sshleifer :)
Would be glad to pick up code reuse in a separate PR
Yes @patrickvonplaten no test functionality was lost!<|||||>Merge conflict resolved @LysandreJik |
transformers | 4,938 | closed | T5ForConditionalGeneration fp16 issues | This is a continuation of and very related to https://github.com/huggingface/transformers/issues/4586
but the issue here is `NaN` loss during finetuning, rather than `nan` in model outputs.
### Instructions to reproduce
From `examples/summarization`
Setup cnn tiny:
```bash
wget https://s3.amazonaws.com/datasets.huggingface.co/summarization/cnn_tiny.tgz
tar -xzvf cnn_tiny.tgz
rm cnn_tiny.tgz
export OUTPUT_DIR_NAME=bart_utest_output
export CURRENT_DIR=${PWD}
export OUTPUT_DIR=${CURRENT_DIR}/${OUTPUT_DIR_NAME}
# Make output directory if it doesn't exist
mkdir -p $OUTPUT_DIR
# Add parent directory to python path to access lightning_base.py and utils.py
export PYTHONPATH="../":"${PYTHONPATH}"
```
### Call finetune.py with 'O1'
```bash
python finetune.py \
--data_dir=cnn_tiny/ \
--model_name_or_path=t5-large \
--learning_rate=3e-5 \
--train_batch_size=1 \
--eval_batch_size=2 \
--output_dir=$OUTPUT_DIR \
--num_train_epochs=10 \
--n_gpu=1 \
--fp16 \
--fp16_opt_level=O1 \
--do_train $@
```
Uses 16GB, but the loss is always NaN.
### Call finetune.py without fp16
```bash
mkdir t5_large_fp32
python finetune.py \
--data_dir=cnn_tiny/ \
--model_name_or_path=t5-large \
--learning_rate=3e-5 \
--train_batch_size=1 \
--eval_batch_size=1 \
--output_dir=t5_large_fp32 \
--num_train_epochs=10 \
--n_gpu=1 \
--do_train $@
```
Result: OOM on 16GB card, uses 19GB on RTX (24GB Card)
If you try `t5-base`, it seems to use 8GB of GPU RAM in fp16, with no issues.
| 06-11-2020 15:31:06 | 06-11-2020 15:31:06 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,937 | closed | What is the different options for pooler_type in Bert config ? | # ❓ Questions & Help
## Details
I want to change the pooling type applied on top of the output hidden states of BERT.
I searched the documentation and found nothing. Can anyone help me? I just want to know the different pooling options (max, average, etc.). Here's a piece of code to see the option I am talking about.
```python
import transformers
encoder = transformers.TFBertModel.from_pretrained("bert-base-uncased")
encoder.config
```
| 06-11-2020 14:26:20 | 06-11-2020 14:26:20 | Hi! The pooler is actually a linear layer, that is used on top of the BERT encoder (the hidden layers). It's not doing a pooling operation like average or max pooling.
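If you want average or max pooling, you can compute it yourself from the hidden states rather than relying on the pooler. A minimal TF sketch (illustrative only, matching the snippet above):
```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = TFBertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer.encode_plus("This is an example", return_tensors="tf")
sequence_output = encoder(inputs)[0]                   # [batch, seq_len, hidden]
mean_pooled = tf.reduce_mean(sequence_output, axis=1)  # average pooling
max_pooled = tf.reduce_max(sequence_output, axis=1)    # max pooling
```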
<|||||>OK, I read that somewhere. But my question is: on what component? If you use a pooler on the last layer of the BERT encoder you will get an output of size [batch, step, dense_unit], but I am sure the pooler output is squeezed to the first token. Why is there information about pooler_type in the config if we cannot change the pooler_type? |
transformers | 4,936 | closed | [Model card] model card for electra-base QA model | 06-11-2020 11:48:22 | 06-11-2020 11:48:22 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4936?src=pr&el=h1) Report
> Merging [#4936](https://codecov.io/gh/huggingface/transformers/pull/4936?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/699541c4b34479451c91b3c6c204d904f62bed83&el=desc) will **increase** coverage by `0.40%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4936?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4936 +/- ##
==========================================
+ Coverage 76.77% 77.17% +0.40%
==========================================
Files 128 128
Lines 21723 21723
==========================================
+ Hits 16677 16764 +87
+ Misses 5046 4959 -87
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4936?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4936/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4936/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (+0.77%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4936/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `75.85% <0.00%> (+19.75%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4936?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4936?src=pr&el=footer). Last update [699541c...4301a6e](https://codecov.io/gh/huggingface/transformers/pull/4936?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 4,935 | closed | name 'ElectraForSequenceClassification' is not defined | # ❓ Questions & Help
| 06-11-2020 10:53:12 | 06-11-2020 10:53:12 | Hi @Gemini77 , what is your transformers version ?
`ElectraForSequenceClassification` is available in transformers >= 2.10.0. |
transformers | 4,934 | closed | Using LongformerForQuestionAnswering on large documents (40K+ characters) | # ❓ Questions & Help
## Details
So I decided to try the new LongformerForQuestionAnswering model on a larger input. From the [paper](https://arxiv.org/pdf/2004.05150.pdf) & the [source code](https://huggingface.co/transformers/_modules/transformers/modeling_longformer.html#LongformerForQuestionAnswering) I've understood that large document processing can be achieved by batching the input sequence in a certain format. For QA task the format goes like this: **question<\/sep><\/sep>context_block** where **<\/sep>** represents the separator token. This pattern allows the model to place global attention on the question. Feel free to correct me if I'm wrong.
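Concretely, here is roughly how the batches are built (a simplified sketch of the format described above; the actual code is in the notebook):
```python
from transformers import LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")

def build_batch(question, document, chunk_chars=4000):
    # one "question </s></s> context_chunk" row per chunk of the long document
    chunks = [document[i:i + chunk_chars] for i in range(0, len(document), chunk_chars)]
    return tokenizer.batch_encode_plus(
        [(question, chunk) for chunk in chunks],
        max_length=4096,
        pad_to_max_length=True,
        return_tensors="pt",
    )
```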
To test my assumption I've made a [notebook](https://colab.research.google.com/drive/1bEglkGcTXM_ZvdqbUT2_csp_TsRFPJwx?usp=sharing), but after running it I receive a `CUDA run out of memory error`. To get more insight, I've decided to process each batch record separately (`answer = get_answer(text, question, False)` ) but none of the records gave a reasonable answer.
So my questions are as follows:
1. Is my assumption for processing longer documents correct?
2. If so, what are the possible solutions for the memory error? I was thinking about a sliding window approach on the batch records 🤔
3. What could be the reason for such poor results when processing a single batch record?
4. This is more a PyTorch question, but shouldn't this code snippet empty the allocated GPU storage? Am I missing something here? As far as I know, CUDA storage is cleared when detaching the variable back to the CPU. Snippet:
```python
if torch.cuda.is_available():
    input_ids = input_ids.to("cpu")
    attention_mask = attention_mask.to("cpu")
```
Thank you 🤗
| 06-11-2020 10:51:28 | 06-11-2020 10:51:28 | _What model of GPU are you using ?_
I didn't open your notebook, that's what I was looking for :
```
CUDA out of memory. Tried to allocate 4.92 GiB (GPU 0; 11.17 GiB total capacity; 4.64 GiB already allocated; 3.69 GiB free; 7.12 GiB reserved in total by PyTorch)
Detached cpu cpu
```<|||||>> What model of GPU are you using ?
@tuanardouin I was running the example on Colab only. So most of the time it was K80. But even on P100 it ran out of memory. I've expected that the model was performance expensive, but not this much 😱 <|||||>This notebook might help :-) https://github.com/patrickvonplaten/notebooks/blob/master/How_to_evaluate_Longformer_on_TriviaQA_using_NLP.ipynb |
transformers | 4,933 | closed | [AutoModel] Split AutoModelWithLMHead into clm, mlm, encoder-decoder | This is a follow-up PR of #4874.
In #4874, `BertForMaskedLM` was split into a causal-LM `BertLMHeadModel` and a masked-LM `BertForMaskedLM`. In order for the encoder-decoder framework to work correctly, `BertLMHeadModel` needs to be loaded when instantiating an `EncoderDecoder.from_encoder_decoder_pretrained()`; therefore a new `AutoModelForCausalLM` has to be created.
This PR deprecates `AutoModelWithLMHead` and introduces:
- `AutoModelForCausalLM` for Autoregressive models
- `AutoModelForMaskedLM` for Autoencoding models
- `AutoModelForSeq2SeqCausalLM` for Sequence-to-sequence models with causal LM for the decoder
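For illustration, usage would look roughly like this (a sketch; class names as proposed above):
```python
from transformers import AutoModelForCausalLM, AutoModelForMaskedLM

# autoregressive (decoder-style) model, e.g. for generation or as the decoder
# of an encoder-decoder model
causal_lm = AutoModelForCausalLM.from_pretrained("gpt2")

# autoencoding (masked-LM) model
masked_lm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
```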
@julien-c @LysandreJik @sgugger @thomwolf @sshleifer -> `AutoModelWithLMHead` still works as before so no breaking changes, but it's deprecated. All AutoModels are exposed in the `__init__` (don't really see a reason why they shouldn't). What do you guys think about the naming?
**IMPORTANT:**
#4874 and this PR might introduce some breaking changes for the encoder-decoder framework:
Instead of using `AutoModelWithLMHead` one now has to use `AutoModelForCausalLM` for the decoder model, and instead of `BertForMaskedLM` one should use `BertLMHeadModel` from now on.
There are **no breaking changes** for the encoder-decoder user-facing functions `.from_pretrained()` and `.from_encoder_decoder_pretrained()` | 06-11-2020 09:34:45 | 06-11-2020 09:34:45 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4933?src=pr&el=h1) Report
> Merging [#4933](https://codecov.io/gh/huggingface/transformers/pull/4933?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e80d6c689bd62f805a5c8d77ec0cc3b09f240d14&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `60.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4933?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4933 +/- ##
==========================================
+ Coverage 77.10% 77.11% +0.01%
==========================================
Files 128 128
Lines 21723 21769 +46
==========================================
+ Hits 16749 16788 +39
- Misses 4974 4981 +7
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4933?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.58% <55.55%> (-7.82%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.21% <100.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <100.00%> (ø)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.48% <0.00%> (+0.23%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4933?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4933?src=pr&el=footer). Last update [e80d6c6...7117cb1](https://codecov.io/gh/huggingface/transformers/pull/4933?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This all looks good to me. I'd just simplify `AutoModelForSeq2SeqCausalLM` to `AutoModelForSeq2SeqLM` and the related names since I don't think there exists an encoder/decoder setup for LM where the decoder doesn't have a causal mask, so causal doesn't really add any needed info to the name.<|||||>I like this but if we merge this, I think we should make a note to actually remove the `AutoModelWithLMHead` at the next (or next-next?) major release otherwise users will still use it and it will be confusing<|||||>Merging to unblock the encoder decoder framework to work correctly. Pinging @thomwolf for notification.<|||||>Yes, I'm happy with this!<|||||>what is the different when i pass a ones.tril() mask to the bertencoder |
transformers | 4,932 | closed | Training ELECTRA model on TPU with the help of Trainer or TFTrainer classes | # ❓ Questions & Help
Hi there,
I am trying to train Electra model from scratch using HF's Trainer interface. My primary source is this colab:
https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb#scrollTo=ri2BIQKqjfHm
There are several questions about Electra training specifics:
1. Which model to use: ElectraForPreTraining or ElectraForMaskedLM? From my perspective, both seem like appropriate choice, however <model>ForMaskedLM is been used in the Colab notebook above.
2. If I opt for PyTorch model, I am stuck with LineByLineTextDataset. The way LineByLineTextDataset is implemented makes me use small datasets (the whole dataset is loaded into RAM and is preprocessed, so I cannot make use of really huge amount of text (hundreds of millions of sentences, for example). I tried to inherit my own Dataset type from torch's IterableDataset class, but it doesn't support sampling, which is a further step of making the whole pipeline functional.
Does transformers framework offer another variant of Dataset class? I would much prefer having something similar to TFRecords dataset from TensorFlow 2.
3. If I still overcome the second issue, I see some crucial differences between trainer classes written for torch and TF. The most important one is Data Collator. TF Trainer doesn't offer such a parameter in class constructor, whereas the torch's Trainer does. If I'm not mistaken, data collator is responsible for masking input tokens, so running Electra model on TF Trainer is not correct, since I haven't found any implicit masking inside TF model.
That's why I'm asking how to train Electra model with the help of TF Trainer class? Since choosing Torch's Trainer seems impossible due to the Datasets being loaded into RAM, the only variant that seems logical to me, is to train Electra model on TPU with TF Records dataset and thus making use of TF Trainer, which doesn't have a vital data preparation step of masking out 15% of tokens.
If I'm mistaken, please give me a hint. Thanks :)
| 06-11-2020 08:11:20 | 06-11-2020 08:11:20 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,931 | closed | BertTokenizer from own vocab meet problem | # ❓ Questions & Help
I set up my own vocab.txt with items like 3-213, 3A23, and so on. When I use BertTokenizer.from_pretrained() it can't tokenize '3-213'. Even when I add 'A' to vocab.txt, it can't encode('A'). Does anyone know how to fix this issue?
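A quick way to see what is going on (a sketch; the vocab path is a placeholder):
```python
from transformers import BertTokenizer

# BertTokenizer lowercases the input by default (do_lower_case=True), so
# uppercase-only vocab entries such as "A" or "3A23" never match after
# normalization. Also note that the basic tokenizer splits on punctuation,
# which affects items like "3-213".
tokenizer = BertTokenizer.from_pretrained("path/to/my_vocab_dir", do_lower_case=False)
print(tokenizer.tokenize("3-213 3A23 A"))
```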
| 06-11-2020 07:09:50 | 06-11-2020 07:09:50 | ok, I fix it by using lower case |
transformers | 4,930 | closed | Update setup.py for sentencepiece. | Now the `sentencepiece` library has upgraded to `0.1.92` version which is incompatible with ` transformers==2.8.0`.
sentencepiece==0.1.92 gives segmentation fault (core dumped). | 06-11-2020 06:59:08 | 06-11-2020 06:59:08 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4930?src=pr&el=h1) Report
> Merging [#4930](https://codecov.io/gh/huggingface/transformers/pull/4930?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/699541c4b34479451c91b3c6c204d904f62bed83&el=desc) will **increase** coverage by `0.39%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4930?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4930 +/- ##
==========================================
+ Coverage 76.77% 77.16% +0.39%
==========================================
Files 128 128
Lines 21723 21723
==========================================
+ Hits 16677 16763 +86
+ Misses 5046 4960 -86
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4930?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (+0.77%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `75.85% <0.00%> (+19.75%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4930?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4930?src=pr&el=footer). Last update [699541c...ad6c947](https://codecov.io/gh/huggingface/transformers/pull/4930?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,929 | closed | [ElectraForQuestionAnswering] fix qa example in doc | This PR fixes the QA example in the doc for `ElectraForQuestionAnswering`. In the last example `token_type_ids` were not used, this PR fixes that
@sgugger | 06-11-2020 06:47:04 | 06-11-2020 06:47:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4929?src=pr&el=h1) Report
> Merging [#4929](https://codecov.io/gh/huggingface/transformers/pull/4929?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e80d6c689bd62f805a5c8d77ec0cc3b09f240d14&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4929?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4929 +/- ##
=======================================
Coverage 77.10% 77.11%
=======================================
Files 128 128
Lines 21723 21723
=======================================
+ Hits 16749 16751 +2
+ Misses 4974 4972 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4929?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `78.16% <ø> (ø)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |
| [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `78.81% <0.00%> (+0.19%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.48% <0.00%> (+0.23%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4929?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4929?src=pr&el=footer). Last update [e80d6c6...4a895d8](https://codecov.io/gh/huggingface/transformers/pull/4929?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,928 | closed | sentencepiece dependency must be a specific version. | # 🐛 Bug
## Information
The `sentencepiece` library has now been upgraded to version `0.1.92`, which is incompatible with `transformers==2.8.0`.
## Error
`segmentation fault (core dumped)`
## Location
https://github.com/huggingface/transformers/blob/v2.8.0/setup.py#L113
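The fix would be to pin the dependency there; a sketch of the kind of change (the exact version bound is an assumption, not the merged fix):
```python
# setup.py (sketch)
install_requires = [
    # ...
    "sentencepiece != 0.1.92",  # 0.1.92 triggers the segmentation fault reported above
    # ...
]
```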
## Configuration
- Platform:
- Torch version: torch==1.3.1
- Transformers version: transformers==2.8.0
- Python version: 3.7.3
- Using GPU in the script?: No.
| 06-11-2020 06:42:50 | 06-11-2020 06:42:50 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,927 | closed | Incorrect loss values calculated for TPU training. | # 🐛 Bug
Currently, using Trainer on TPU calculates incorrect training and eval_during_training loss values. This also leads to the loss values logged to Wandb being incorrect.
## Information
The problem seems to be that, with a PyTorch/XLA multiprocessing setup, each process trains and evaluates on a disjoint (I believe) subset of the training and validation sets respectively. This leads to as many different train_loss and eval_during_training loss values as there are processes, and these values all differ. None of them is the correct loss, because the correct loss should be computed over the entire dataset and not over smaller subsets of it.
The solution would be to aggregate these per-process values with XLA operations into a single train_loss and a single eval_loss value.
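For illustration, an aggregation along these lines (a sketch only; `per_process_loss` is a placeholder, not the actual Trainer code) would produce one value across all cores:
```python
import torch_xla.core.xla_model as xm

# average the per-process loss across all TPU cores before logging it
reduced_loss = xm.mesh_reduce("train_loss", per_process_loss, lambda values: sum(values) / len(values))
```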
The different eval_loss values are evident in this console log:
```
06/11/2020 05:33:59 - INFO - transformers.trainer - ***** Running Evaluation ***** 06/11/2020 05:33:59 - INFO - transformers.trainer - Num examples = 5180
06/11/2020 05:33:59 - INFO - transformers.trainer - Batch size = 8
Evaluation: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81/81 [00:41<00:00, 1.96it/s]
{"eval_loss": 2.407614219335862, "epoch": 0.06633645851760127, "step": 1500}
06/11/2020 05:34:40 - INFO - transformers.trainer - Saving model checkpoint to /home/saurabh/data/<retracted>/checkpoint-1500█████| 81/81 [00:41<00:00, 2.06it/s$
Evaluation: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81/81 [00:42<00:00, 1.89it/s$
{"eval_loss": 1.757087172181518, "epoch": 0.06633645851760127, "step": 1500}
06/11/2020 05:34:41 - INFO - transformers.trainer - Saving model checkpoint to /home/saurabh/data/<retracted>/checkpoint-1500
Evaluation: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81/81 [00:43<00:00, 1.87it/s$
{"eval_loss": 2.2870501747101915, "epoch": 0.06633645851760127, "step": 1500}
06/11/2020 05:34:42 - INFO - transformers.trainer - Saving model checkpoint to /home/saurabh/data/<retracted>/checkpoint-1500
Evaluation: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81/81 [00:43<00:00, 1.87it/s$
{"eval_loss": 2.3224751780062545, "epoch": 0.06633645851760127, "step": 1500}
06/11/2020 05:34:42 - INFO - transformers.trainer - Saving model checkpoint to /home/saurabh/data/<retracted>/checkpoint-1500
Evaluation: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81/81 [00:43<00:00, 1.84it/s]
{"eval_loss": 2.339173612035351, "epoch": 0.06633645851760127, "step": 1500}
06/11/2020 05:34:42 - INFO - transformers.trainer - Saving model checkpoint to /home/saurabh/data/<retracted>/checkpoint-1500
Evaluation: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81/81 [00:43<00:00, 1.85it/s]
{"eval_loss": 2.3176549371377924, "epoch": 0.06633645851760127, "step": 1500}
06/11/2020 05:34:42 - INFO - transformers.trainer - Saving model checkpoint to /home/saurabh/data/<retracted>/checkpoint-1500
Evaluation: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81/81 [00:43<00:00, 1.84it/s]
{"eval_loss": 2.449997420664187, "epoch": 0.06633645851760127, "step": 1500}
06/11/2020 05:34:42 - INFO - transformers.trainer - Saving model checkpoint to /home/saurabh/data/<retracted>/checkpoint-1500
Evaluation: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81/81 [00:44<00:00, 1.84it/s]
{"eval_loss": 2.18177890336072, "epoch": 0.06633645851760127, "step": 1500}
```
Model I am using (Bert, XLNet ...): Every model with PyTorch Trainer
Language I am using the model on (English, Chinese ...): Doesn't matter
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Run any PyTorch/TPU training, for example a language modelling task
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
1. Setup a PyTorch/XLA training environment
```
export TRAIN_FILE=/path/to/dataset/wiki.train.raw
export TEST_FILE=/path/to/dataset/wiki.test.raw
export WANDB_WATCH=false # Fixes bug https://github.com/huggingface/transformers/issues/4814
python xla_spawn.py --num_cores 8 language_modeling/run_language_modeling.py \
--output_dir=output \
--model_type=roberta \
--model_name_or_path=roberta-base \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE \
    --mlm \
    --evaluate_during_training \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4
```
## Expected behavior
A single train_loss and eval_loss value per logging_step in console output and also with Wandb.
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0 (master)
- Platform: Linux-5.3.0-1026-gcp-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0a0+6bdfd6a (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: yes, 8 way TPU/XLA multiprocessing | 06-11-2020 06:31:48 | 06-11-2020 06:31:48 | @LysandreJik Any comments on this evaluation behavior with Trainer?<|||||>Could it be solved by adding a `if self.is_world_master` prior to calling `self._log` (or directly within that function)?<|||||>I think you are right. I was concerned that the evaluator is calculating all these eval loss values separately but the trainer is aggregating the eval values properly so the eval loss logged into wandb should be correct.
```
def _prediction_loop(
......
elif is_tpu_available():
# tpu-comment: Get all predictions and labels from all worker shards of eval dataset
if preds is not None:
preds = xm.mesh_reduce("eval_preds", preds, torch.cat)
if label_ids is not None:
label_ids = xm.mesh_reduce("eval_label_ids", label_ids, torch.cat)
```
And yes we could still remove these eval loss values from log to remove the confusion it creates.<|||||>Actually we added [this line](https://github.com/huggingface/transformers/blob/f9f8a5312e92541ff9a5f483fc4907ec87da876e/src/transformers/trainer.py#L574) recently so I don't think you should have any issue.<|||||>Yes, the wandb graphs look good. My concern was that these console logs look incorrect.<|||||>Ok, maybe we should wrap the entire logging (wandb + tensorboard + console) with "is_world_master" instead of doing it only for wandb.
@julien-c what do you think? If that's the way to go I can submit a quick PR.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,926 | closed | Fixing TPU training by disabling wandb.watch gradients logging | Fixes issue https://github.com/huggingface/transformers/issues/4814
PyTorch TPU trainer.py had a bug where the training would freeze up during the logging step. On investigation, the culprit was found to be a wandb.watch call which was trying to log gradients. This operation is suspected to be unsupported by Wandb for TPUs. Waiting for a confirmation of TPU gradient logging support by the Wandb team. | 06-11-2020 05:54:20 | 06-11-2020 05:54:20 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4926?src=pr&el=h1) Report
> Merging [#4926](https://codecov.io/gh/huggingface/transformers/pull/4926?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e80d6c689bd62f805a5c8d77ec0cc3b09f240d14&el=desc) will **increase** coverage by `0.07%`.
> The diff coverage is `0.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4926?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4926 +/- ##
==========================================
+ Coverage 77.10% 77.17% +0.07%
==========================================
Files 128 128
Lines 21723 21723
==========================================
+ Hits 16749 16765 +16
+ Misses 4974 4958 -16
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4926?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.48% <0.00%> (+0.23%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (ø)` | |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4926?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4926?src=pr&el=footer). Last update [e80d6c6...d857f37](https://codecov.io/gh/huggingface/transformers/pull/4926?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,925 | closed | Use dataloader_drop_last in TF dataset | Follows PR #4757, fixes issue #4891. I didn't realize then that `TFTrainingArguments` inherited `TrainingArguments` and all of the same options would be available there as well.
Adding this to get to feature parity across PyTorch and TF trainers. | 06-11-2020 02:28:25 | 06-11-2020 02:28:25 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4925?src=pr&el=h1) Report
> Merging [#4925](https://codecov.io/gh/huggingface/transformers/pull/4925?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e80d6c689bd62f805a5c8d77ec0cc3b09f240d14&el=desc) will **increase** coverage by `0.07%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4925?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4925 +/- ##
==========================================
+ Coverage 77.10% 77.17% +0.07%
==========================================
Files 128 128
Lines 21723 21723
==========================================
+ Hits 16749 16765 +16
+ Misses 4974 4958 -16
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4925?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `19.04% <ø> (ø)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.48% <0.00%> (+0.23%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4925?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4925?src=pr&el=footer). Last update [e80d6c6...a9099e0](https://codecov.io/gh/huggingface/transformers/pull/4925?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>LGTM from a quick glance but I'll let @jplu chime in<|||||>Nice! LGTM!! |
transformers | 4,924 | closed | Fix deprecation warnings due to invalid escape sequences. | Fixes #3754 | 06-11-2020 02:12:19 | 06-11-2020 02:12:19 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4924?src=pr&el=h1) Report
> Merging [#4924](https://codecov.io/gh/huggingface/transformers/pull/4924?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e80d6c689bd62f805a5c8d77ec0cc3b09f240d14&el=desc) will **increase** coverage by `0.07%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4924?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4924 +/- ##
==========================================
+ Coverage 77.10% 77.17% +0.07%
==========================================
Files 128 128
Lines 21723 21723
==========================================
+ Hits 16749 16765 +16
+ Misses 4974 4958 -16
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4924?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4924/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.67% <100.00%> (ø)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4924/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (ø)` | |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4924/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4924/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.48% <0.00%> (+0.23%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4924/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4924?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4924?src=pr&el=footer). Last update [e80d6c6...51acaab](https://codecov.io/gh/huggingface/transformers/pull/4924?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,923 | closed | Documentation doesn't include instructions for applying BertModel to documents using GPU acceleration | # 🐛 Bug
## Information
I am using `BertModel` to encode my documents using the representation I've just fine-tuned and it is not using the GPU to do the encoding. This is bad because encoding is very slow. I have it running on many cores but would prefer GPU acceleration. It is not clear from the documentation how to use the GPU to encode documents using a trained `BertModel`.
Model I am using (Bert, XLNet ...): Bert.
Language I am using the model on (English, Chinese ...): English.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Use the language model example to fine-tune a bert model
2. Load your BertModel and apply it to text documents:
```python
import numpy as np
import torch
from transformers import BertModel, BertTokenizer
tokenizer = BertTokenizer.from_pretrained('my_model')
bert_model = BertModel.from_pretrained('my_model')
docs = ['I like cats but also dogs', 'I like cars full of cats', 'I like apples but not oranges', 'I like tutus on ballerinas.']
docs = np.tile(docs, 5000)
encoded_docs = []
for doc in docs[0:10]:
tensor = torch.tensor(tokenizer.encode(doc, add_special_tokens=True)).unsqueeze(0)
encoded_tensor = bert_model(tensor)
encoded_ary = encoded_tensor[0].cpu().detach().numpy()
encoded_docs.append(encoded_ary)
encoded_docs
```
3. While this is going, run:
```bash
gpustat -i 1
```
4. See that the GPUs aren't being used.
```bash
[0] Tesla T4 | 31'C, 0 % | 0 / 15109 MB |
[1] Tesla T4 | 31'C, 0 % | 0 / 15109 MB |
[2] Tesla T4 | 32'C, 0 % | 0 / 15109 MB |
[3] Tesla T4 | 30'C, 0 % | 0 / 15109 MB |
```
5. Run this on many cores and see how incredibly slow it still is without GPUs. Get frustrated.
## Expected behavior
I want transformers/PyTorch to use the GPU to encode the text with the Bert model so it is much faster than doing it on CPU.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Ubuntu 18.04 Amazon Deep Learning AMI
- Python version: Python 3.6.10 :: Anaconda, Inc.
- PyTorch version (GPU?): torch==1.4.0
- Tensorflow version (GPU?):
- Using GPU in script?: Trying
- Using distributed or parallel set-up in script?: In reality I am running this in Dask in an apply, but the example should do the same thing.
| 06-11-2020 00:44:50 | 06-11-2020 00:44:50 | This is a pretty basic question that's not really about the library:)
Check out the PyTorch doc, in particular, the [60 minute blitz](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html). TL;DR: You'll need to move your model and inputs to the device you want with `.to(device)` calls.<|||||>@julien-c Ok, thanks I appreciate it :)<|||||>You're welcome:)
Closing this for now<|||||>@julien-c I am stuck in that I can't figure out how to get the string data to the GPU. Tensor.from_numpy does not support strings. So how do you achieve this? I get errors about the data not being on the same device as the GPU. It works on CPU but I've searched and searched and can't figure out how to get strings onto the GPU. Is there another method that uses encoded data that I can send to the GPU?
If I get it working I'll create an example of GPU encoding :D<|||||>It seems that the `encode()` method needs a counterpart that accepts tokenized/BERT-encoded data from the first layer. Then the model can be applied to that data. This makes GPU support easy because you can create a Tensor out of data encoded by a BertTokenizer and send it to a GPU device. No such luck with strings.<|||||>Actually I think I'm confused. You don't use encode() to apply the model, do you? You use SentenceTransformer(data).
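For anyone landing on this thread later, here is a minimal sketch of the `.to(device)` pattern discussed above — move the model and the tokenized input tensors (not the raw strings) to the GPU; the checkpoint name is just a placeholder:
```python
import torch
from transformers import BertModel, BertTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").to(device)
model.eval()

docs = ["I like cats but also dogs", "I like cars full of cats"]
encoded_docs = []
with torch.no_grad():
    for doc in docs:
        # tokenize to integer ids first, then move the id tensor (not the string) to the GPU
        input_ids = torch.tensor(tokenizer.encode(doc, add_special_tokens=True)).unsqueeze(0).to(device)
        outputs = model(input_ids)
        encoded_docs.append(outputs[0].cpu().numpy())
```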
transformers | 4,922 | closed | Unexpected behavior encoding token_type_ids in GPT models | # 🐛 Bug
## Information
When `token_type_ids` are passed into `GPT2Model` and its subclasses, they're encoded with the same `nn.Embedding` lookup table that is used for the word vocabulary ([line](https://github.com/huggingface/transformers/blob/3ae2e86baffc1fea8b8b93695fb5a10941fd63dc/src/transformers/modeling_gpt2.py#L477)). Because the token type ids output by `encode_plus()` are `{0,1}`, this creates unexpected behavior: the same embedding vectors represent both word-tokens 0 and 1 of the vocabulary and the very distinct token type ids (the small check below makes this concrete).
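A minimal illustrative check of the overlap (illustration only, not a fix):
```python
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

input_ids = torch.tensor([tokenizer.encode("hello there")])
token_type_ids = torch.zeros_like(input_ids)  # the kind of {0, 1} ids encode_plus produces

with torch.no_grad():
    plain = model(input_ids)[0]
    typed = model(input_ids, token_type_ids=token_type_ids)[0]

# passing type id 0 adds the *word* embedding of vocabulary id 0 to every position,
# so the hidden states change even though the segment information is meaningless here
print(torch.allclose(plain, typed))  # False
```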
## Expected Behavior
Either:
1. instead of returning `token_type_ids` consisting of indices 0 and 1, extend the vocabulary by 2 and use those two ids as the `token_type_ids` in `encode_plus()`, or
2. addition of a separate `nn.Embedding` matrix for token_type_ids [here](https://github.com/huggingface/transformers/blob/3ae2e86baffc1fea8b8b93695fb5a10941fd63dc/src/transformers/modeling_gpt2.py#L352) (such as [in BERTModel](https://github.com/huggingface/transformers/blob/3ae2e86baffc1fea8b8b93695fb5a10941fd63dc/src/transformers/modeling_bert.py#L153))
3. discourage or throw a warning when `token_type_ids` are passed to GPTModel instances
| 06-10-2020 21:01:59 | 06-10-2020 21:01:59 | |
transformers | 4,921 | closed | Make multiple choice models work with input_embeds | Currently, all the multiple choice models will fail if we pass them `inputs_embeds` instead of `input_ids`. This PR fixes that on the pytorch side and adapts the corresponding test for common models. | 06-10-2020 21:00:17 | 06-10-2020 21:00:17 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4921?src=pr&el=h1) Report
> Merging [#4921](https://codecov.io/gh/huggingface/transformers/pull/4921?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/466aa57a45bfb9fc47d4b75d22c02c34b4b4b0fc&el=desc) will **increase** coverage by `0.07%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4921?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4921 +/- ##
==========================================
+ Coverage 77.06% 77.13% +0.07%
==========================================
Files 128 128
Lines 21649 21653 +4
==========================================
+ Hits 16683 16702 +19
+ Misses 4966 4951 -15
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4921?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.31% <100.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.02% <100.00%> (+0.01%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.78% <100.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <100.00%> (+2.33%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4921?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4921?src=pr&el=footer). Last update [466aa57...edd1ede](https://codecov.io/gh/huggingface/transformers/pull/4921?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,920 | closed | Support multiple choice in tf common model tests | This is the same as #4886, but for tensorflow (first time ever of me coding in tf!) | 06-10-2020 20:20:03 | 06-10-2020 20:20:03 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4920?src=pr&el=h1) Report
> Merging [#4920](https://codecov.io/gh/huggingface/transformers/pull/4920?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5d63ca6c38cc0f583cdec4c3efcfce13c0a41fdc&el=desc) will **decrease** coverage by `0.04%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4920?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4920 +/- ##
==========================================
- Coverage 77.10% 77.05% -0.05%
==========================================
Files 128 128
Lines 21617 21618 +1
==========================================
- Hits 16667 16657 -10
- Misses 4950 4961 +11
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4920?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4920/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `95.16% <100.00%> (+0.89%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4920/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.31% <0.00%> (-2.31%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4920/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.49% <0.00%> (-0.12%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4920?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4920?src=pr&el=footer). Last update [5d63ca6...05e5aa7](https://codecov.io/gh/huggingface/transformers/pull/4920?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Very clean. LGTM! |
transformers | 4,919 | closed | File is not found due to extension | Hi,
In configurations where the internet is not available, the system searches the cache directory. However, url_to_filename(url, etag) returns the filename with an extension on Windows, so this line is not able to find the file location. One remedy could be using filename.split('.')[0] instead of filename.
https://github.com/huggingface/transformers/blob/466aa57a45bfb9fc47d4b75d22c02c34b4b4b0fc/src/transformers/file_utils.py#L404 | 06-10-2020 20:18:21 | 06-10-2020 20:18:21 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,918 | closed | Pegasus for summarization ! | # 🌟 New model addition
## Model description
https://ai.googleblog.com/2020/06/pegasus-state-of-art-model-for.html?m=1
https://arxiv.org/abs/1912.08777
Abstract
Recent work pre-training Transformers with self-supervised objectives on large text corpora has shown great success when fine-tuned on downstream NLP tasks including text summarization. However, pre-training objectives tailored for abstractive text summarization have not been explored. Furthermore there is a lack of systematic evaluation across diverse domains. In this work, we propose pre-training large Transformer-based encoder-decoder models on massive text corpora with a new self-supervised objective. In PEGASUS, important sentences are removed/masked from an input document and are generated together as one output sequence from the remaining sentences, similar to an extractive summary. We evaluated our best PEGASUS model on 12 downstream summarization tasks spanning news, science, stories, instructions, emails, patents, and legislative bills. Experiments demonstrate it achieves state-of-the-art performance on all 12 downstream datasets measured by ROUGE scores. Our model also shows surprising performance on low-resource summarization, surpassing previous state-of-the-art results on 6 datasets with only 1000 examples. Finally we validated our results using human evaluation and show that our model summaries achieve human performance on multiple datasets.
## Open source status
* [x] the model implementation is available: https://github.com/google-research/pegasus
* [x] the model weights are available: https://github.com/google-research/pegasus
* [x] who are the authors: Jingqing Zhang @JingqingZ, Yao Zhao @yaozhaogoogle, Mohammad Saleh and Peter J. Liu
| 06-10-2020 20:12:36 | 06-10-2020 20:12:36 | Thanks! The model checkpoints are available actually. [Check here](https://github.com/google-research/pegasus#install-library-and-dependencies) :)<|||||>Hope to provide a pytorch version code <|||||>I might try the Huggingface's weight transfer code from tensorflow to pytorch in July if nobody's working on this post <|||||>Work has started on this, but we are still a few weeks out. <|||||>Just wanted to know when this model will be available<|||||>We're a little behind schedule. I'd say 60% by August 1, 90% by Sept 1.<|||||>this is awesome.<|||||>Very cool! Can it also be evaluated with Bert-Score?<|||||>Can't wait for this... <|||||>Converted torch checkpoints are now available on master if you build from source.
[Here](https://huggingface.co/models?search=pegasus) is a list of available checkpoints.
PR: #6340
Usage:
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
src_text = [
""" PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."""
]
model_name = 'google/pegasus-xsum'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)
batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device)
translated = model.generate(**batch)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
assert tgt_text[0] == "California's largest electricity provider has turned off power to tens of thousands of customers."
```
Please make a **new issue** if you encounter a bug with the torch checkpoints and assign @sshleifer .
For conceptual/how to questions, ask on discuss.huggingface.co, (you can also tag @sshleifer. )
Still TODO:
- Tensorflow 2.0 implementation.
- ROUGE score is slightly worse than the original paper because we don't implement length penalty the same way. If anyone wants to try it, see #6420 .
- fp16 doesn't work for generation or finetuning
- I have not tried finetuning yet, no guarantees on that working well or replicating the paper.<|||||>I assume these checkpoints are based on Mixed & Stochastic models, as opposed to models trained exclusively on either C4 or HugeNews?<|||||>Yes!<|||||>@sshleifer I am trying this code on Colab but running into below error. Can you let me know what is the issue?
`ImportError: cannot import name 'PegasusForConditionalGeneration'`<|||||>I'm having the same issue as @chetanambi
<|||||>I think you need to install from source, it's not part of the latest release. (will be in the next release).<|||||>@sshleifer :
for the following model:
model_name = 'google/pegasus-cnn_dailymail';
I encountered this error when running:
`translated = model.generate(**batch)`
'---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-42-635894de22cc> in <module>
1 batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device)
----> 2 translated = model.generate(**batch)
3 tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
13 def decorate_context(*args, **kwargs):
14 with self:
---> 15 return func(*args, **kwargs)
16 return decorate_context
17
~/projects/transformers/src/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, attention_mask, decoder_start_token_id, use_cache, **model_specific_kwargs)
394 encoder = self.get_encoder()
395
--> 396 encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask)
397
398 # Expand input ids if num_beams > 1 or num_return_sequences > 1
~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
~/projects/transformers/src/transformers/modeling_bart.py in forward(self, input_ids, attention_mask, output_attentions, output_hidden_states, return_dict)
328
329 inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
--> 330 embed_pos = self.embed_positions(input_ids)
331 x = inputs_embeds + embed_pos
332 x = self.layernorm_embedding(x)
~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
13 def decorate_context(*args, **kwargs):
14 with self:
---> 15 return func(*args, **kwargs)
16 return decorate_context
17
~/projects/transformers/src/transformers/modeling_bart.py in forward(self, input_ids, use_cache)
1337 # starts at 0, ends at 1-seq_len
1338 positions = torch.arange(seq_len, dtype=torch.long, device=self.weight.device)
-> 1339 return super().forward(positions)
~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/nn/modules/sparse.py in forward(self, input)
122
123 def forward(self, input: Tensor) -> Tensor:
--> 124 return F.embedding(
125 input, self.weight, self.padding_idx, self.max_norm,
126 self.norm_type, self.scale_grad_by_freq, self.sparse)
~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1812 # remove once script supports set_grad_enabled
1813 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1814 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1815
1816
IndexError: index out of range in self'<|||||>@yxyzzz can you make a new issue and follow the bug-report template. I can't reproduce based on what you've provided. Thanks!<|||||>> I think you need to install from source, it's not part of the latest release. (will be in the next release).
Could you please let me know how to do this. Thanks!!<|||||>@chetanambi The instructions are provided [here](https://github.com/huggingface/transformers#from-source)<|||||>@sshleifer
I installed transformers from the source using the current `master` branch.
```
I experience the following issue.
>>> from transformers import PegasusForConditionalGeneration, PegasusTokenizer
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/env5/lib/python3.6/site-packages/transformers/__init__.py", line 21, in <module>
from .configuration_albert import ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, AlbertConfig
File "/home/ubuntu/env5/lib/python3.6/site-packages/transformers/configuration_albert.py", line 18, in <module>
from .configuration_utils import PretrainedConfig
File "/home/ubuntu/env5/lib/python3.6/site-packages/transformers/configuration_utils.py", line 24, in <module>
from .file_utils import CONFIG_NAME, cached_path, hf_bucket_url, is_remote_url
File "/home/ubuntu/env5/lib/python3.6/site-packages/transformers/file_utils.py", line 32, in <module>
from .utils import logging
ModuleNotFoundError: No module named 'transformers.utils'
```
**question** Is this a problem with the current `master`? How many commits do I need to roll back to successfully run PEGASUS before the September release?
Thank you in advance for the info!
<|||||>master fixed by #6754 .<|||||>> master fixed by #6754 .
@sshleifer
**(1)** I confirm that `master` is working now. So I was able to successfully run PEGASUS.
**(2)** Is there any way to control a length of a resulting summary made by PEGASUS? I would like to generate longer summaries.<|||||>> **(2)** Is there any way to control a length of a resulting summary made by PEGASUS? I would like to generate longer summaries.
@andrei-volkau
You can (1) fine-tune PEGASUS on a customised dataset which has longer summaries (2) tune the hyper-parameter `beam_alpha` which can lead to slightly longer/shorter summaries.
<|||||>`beam_alpha` is called "length penalty" in this repo.
Be that `length_penalty` is named confusingly: (#4915)
- Increasing `length_penalty` will result in longer generations.
- Decreasing `length_penalty` will result in shorter generations.
- the formula differs slightly from the pegasus paper (#6420)<|||||>Is there a short finetuning example somewhere?<|||||>Nothing short. Finetuning with `examples/seq2seq/finetune.py` https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_pegasus_xsum.sh is almost ready (will be ready after #6654). To use that you should read the README.MD which covers how to format your data.<|||||>> @chetanambi The instructions are provided [here](https://github.com/huggingface/transformers#from-source)
I was able to run the models successfully. During the summarization I would like to run with different beam size. How can I do this?
Thanks!!<|||||>Interesting, when I ran the example in the documentation (copied below).
I got the output: `California's largest electricity provider has turned off power to hundreds of thousands of customers.`
Whereas the assertion output was: `California's largest electricity provider has turned off power to tens of thousands of customers.`
Could someone shine a light on why this might be the case and which one is the 'correct' output? I'm certain I didn't change anything.
```
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
import torch
src_text = [
""" PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."""
]
model_name = 'google/pegasus-xsum'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)
batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device)
translated = model.generate(**batch)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
assert tgt_text[0] == "California's largest electricity provider has turned off power to tens of thousands of customers."
```
<|||||>The docs are wrong, the code is right:
#6526 (merged since documentation was written) affected output (in a good way).
**Update**: I fixed the docs.<|||||>@sshleifer I am trying to implement this in a machine that is not connected to internet. So, I will have to download the model (ex: reddit-tifu) and pass the location to from_pretrained. Could you please suggest what all the files I need to download. Apperciate your help.
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-reddit_tifu")
model = AutoModelWithLMHead.from_pretrained("google/pegasus-reddit_tifu")
```<|||||>You can figure that out on your machine with internet by calling
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-reddit_tifu")
model = AutoModelWithLMHead.from_pretrained("google/pegasus-reddit_tifu")
model.save_pretrained('local_pegasus')
tokenizer.save_pretrained('local_pegasus')
```
Should contain `['config.json', 'pytorch_model.bin', 'tokenizer_config.json', 'special_tokens_map.json'
'spiece.model']`
<|||||>Thanks @sshleifer . I was able to figure it out by looking at the implementation for `from_pretrained` method. I have implemented it successfully now. Thanks !<|||||>Thanks @sshleifer for all of your efforts on this. Your & HF's work is such a big win for the NLP community, I can't thank you enough.
Out of curiosity, any sense for when TF2.0 support may go live?<|||||>Thanks. I don't have a great guess, but it will be more than a few weeks. Feel free to tinker with #5411.
Our new tensorflow maven @jplu is trying to make some big API improvements, so I am waiting for those to settle before adding (Bart, Pegasus, Marian, mBART) TF support all in one go. |
transformers | 4,917 | closed | enable invocation of run_ner.py and utils_ner.py in cython | Due to extant issue https://github.com/cython/cython/issues/2903, `run_ner.py` and `utils_ner.py` (among others, I imagine) cannot be invoked inside Cython. By manually adding in annotations, these changes work around the missing features in Cython 3.7 re: PEP557.
https://github.com/cython/cython/pull/3400 ought to eventually fix the underlying issue.
I'm offering code here that works around this behavior in case it would be helpful to others. | 06-10-2020 19:53:39 | 06-10-2020 19:53:39 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4917?src=pr&el=h1) Report
> Merging [#4917](https://codecov.io/gh/huggingface/transformers/pull/4917?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ef2dcdccaa9a115aca44d81f31c6dc4d32bebb3f&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4917?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4917 +/- ##
=======================================
Coverage 77.12% 77.13%
=======================================
Files 128 128
Lines 21650 21650
=======================================
+ Hits 16698 16700 +2
+ Misses 4952 4950 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4917?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4917/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.09% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4917/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4917/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.57% <0.00%> (+0.31%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4917?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4917?src=pr&el=footer). Last update [ef2dcdc...74e1e66](https://codecov.io/gh/huggingface/transformers/pull/4917?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,916 | closed | Don't init TPU device twice | closes #4893
The TPU device was initialized twice when using the `xla_spawn.py` script. Removing this initialization solves the issue.
@patrickvonplaten, is this necessary for the benchmarking script? | 06-10-2020 19:43:03 | 06-10-2020 19:43:03 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4916?src=pr&el=h1) Report
> Merging [#4916](https://codecov.io/gh/huggingface/transformers/pull/4916?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ef2dcdccaa9a115aca44d81f31c6dc4d32bebb3f&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4916?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4916 +/- ##
=======================================
Coverage 77.12% 77.13%
=======================================
Files 128 128
Lines 21650 21649 -1
=======================================
+ Hits 16698 16699 +1
+ Misses 4952 4950 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4916?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4916/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.19% <100.00%> (+0.69%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4916?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4916?src=pr&el=footer). Last update [ef2dcdc...acb4cb4](https://codecov.io/gh/huggingface/transformers/pull/4916?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Oh, my bad! Not sure why I left it there<|||||>No worries! |
transformers | 4,915 | closed | [generate] Increasing length_penalty makes generations longer | In `generate`, we document
```python
length_penalty: Exponential penalty to the length. Default to 1.
```
Given the name and the docstring, you might expect that if you increase the `length_penalty` your model will, on average, produce shorter generations.
You would be wrong! (at least for `bart-large-xsum`)
When we decide the score of a hypothesis [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L1714), we calculate
```python
score = sum_logprobs / len(hyp) ** self.length_penalty
```
The issue is that the numerator, `sum_logprobs`, is negative (the result of `F.log_softmax`), and the denominator, `len(hyp) ** self.length_penalty`, is positive. If we increase `length_penalty` we increase the denominator (and the derivative of the denominator w.r.t length) and therefore make the score less negative, so greater.
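A quick numeric illustration with made-up scores for a 10-token and a 20-token hypothesis:
```python
# length_penalty = 1.0
short = -10.0 / (10 ** 1.0)   # -1.000
long_ = -14.0 / (20 ** 1.0)   # -0.700  -> the longer hypothesis already scores higher
# length_penalty = 2.0
short = -10.0 / (10 ** 2.0)   # -0.100
long_ = -14.0 / (20 ** 2.0)   # -0.035  -> its advantage grows as the exponent increases
```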
Fairseq has the same [logic](https://github.com/pytorch/fairseq/blob/eb509f0c584ebae01834e773fb83584102a4f4da/fairseq/sequence_generator.py#L524).
I can think of two groups of solutions:
1) keep the name and change the code so that length is actually penalized:
```python
denominator = len(hyp) ** self.length_penalty
if numerator < 0: denominator *= -1
```
2) Change the name/docstring to something like `len_adjustment` and explain that increasing it is likely to make generations shorter.
@yjernite @patrickvonplaten @LysandreJik @thomwolf, have you guys seen this/do you think it's worth fixing or redocumenting?
### Empirical Evidence
```python
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-xsum')
tok = BartTokenizer.from_pretrained("facebook/bart-large")
PGE_ARTICLE = """ PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."""
batch = tok.batch_encode_plus([PGE_ARTICLE], max_length=1024, pad_to_max_length=True, return_tensors="pt",)
ids_lp1 = model.generate(**batch, length_penalty=1.)
ids_lp2 = model.generate(**batch, length_penalty=2.)
text_a, text_b = [tok.batch_decode(x, skip_special_tokens=True,)[0] for x in [ids_lp1, ids_lp2]]
```
text a:
> "California's largest power company, PG&E, has shut off power to tens of thousands of customers across the state."
text_b:
>"California's largest power company, PG&E, has shut off power to tens of thousands of **homes and businesses in the north-east of** the state."
I found similar results for `bart-large-cnn`. | 06-10-2020 19:20:35 | 06-10-2020 19:20:35 | Yes, I remember being confused about the name earlier as well...I would be in favor of keeping the code and renaming the variable, but I can't think of a good variable name (not a huge fan of `len_adjustment`, but can't really think of a better name - maybe `length_reward`?) <|||||>Sorry just catching up :)
I'd go for changing the name, but `length_reward` feels a little too connoted (makes me think of RL)
How about `length_normalization`?<|||||>I'm good with that.
I propose:
- rename the parameter from `length_penalty`-> `length_normalization_alpha`
- if the user **OR** the config passes length_penalty, raise a `DeprecationWarning`.
- Slowly update configs
This would eventually be a (very minor) breaking change @LysandreJik @thomwolf @julien-c .
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,914 | closed | Simple way to convert a Python tokenizer to a fast tokenizer | # 🚀 Feature request
Tokenizers are provided with each model; some have a fast (Rust-based) version of their tokenizer, while others, like CamemBERT, have only the slow version.
## Motivation
Fast tokenizers improve inference times drastically (for real-time inference, for instance).
Plus, there is no reason it should not be possible.
## Your contribution
If you provide me with basic guidelines on how to manually make a conversion, I can submit a PR to offer such a feature.
| 06-10-2020 18:43:49 | 06-10-2020 18:43:49 | requires unigram algo implemented on tokenizers
https://github.com/huggingface/tokenizers/pull/292<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>PR merged, closing the issue<|||||>Hi @pommedeterresautee, could you please refer me to how I can convert an existing Python tokenizer to a Fast tokenizer?
Sorry if I missed something, and thanks so much for your help!<|||||>Maybe @SaulLu or @Narsil can comment and link to an example!<|||||>Hi @varun-tandon ,
The code to change from slow to fast is included here: https://github.com/huggingface/transformers/blob/main/src/transformers/convert_slow_tokenizer.py
As you can see there are many variables, depending on the actual model and what you want to achieve.
Usually it involves understanding how a model actually does the tokenization (and all the bits like CLS, SEP, etc.) and using the components of `tokenizers` to assemble them so that the output matches what the Python code does:
https://huggingface.co/docs/tokenizers/components
Sometimes we're missing a brick and we simply add it (although it becomes rarer with time)
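For reference, a minimal sketch of driving that converter module directly (hedged: the helper name and its return type are what the file linked above exposes at the time of writing, so double-check against your installed version):
```python
from transformers import CamembertTokenizer
from transformers.convert_slow_tokenizer import convert_slow_tokenizer

slow = CamembertTokenizer.from_pretrained("camembert-base")
fast_backend = convert_slow_tokenizer(slow)   # returns a `tokenizers.Tokenizer`
print(fast_backend.encode("Bonjour le monde").tokens)
```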
|
transformers | 4,913 | closed | ElectraForQuestionAnswering | This PR adds `ElectraForQuestionAnswering`. One of the missing models in this [project](https://github.com/huggingface/transformers/projects/17)
@LysandreJik , @sgugger | 06-10-2020 17:57:52 | 06-10-2020 17:57:52 | Great, thanks for your help!
Could you also add the new model to [`all_model_classes`](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_electra.py#L41) which would test the model a little bit more?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4913?src=pr&el=h1) Report
> Merging [#4913](https://codecov.io/gh/huggingface/transformers/pull/4913?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5d63ca6c38cc0f583cdec4c3efcfce13c0a41fdc&el=desc) will **increase** coverage by `0.03%`.
> The diff coverage is `93.93%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4913?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4913 +/- ##
==========================================
+ Coverage 77.10% 77.13% +0.03%
==========================================
Files 128 128
Lines 21617 21650 +33
==========================================
+ Hits 16667 16699 +32
- Misses 4950 4951 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4913?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4913/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4913/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.40% <ø> (ø)` | |
| [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4913/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `78.16% <93.93%> (+2.07%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4913/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.49% <0.00%> (-0.12%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4913/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4913/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.49% <0.00%> (+0.40%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4913?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4913?src=pr&el=footer). Last update [5d63ca6...d8a5995](https://codecov.io/gh/huggingface/transformers/pull/4913?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>> Great, thanks for your help!
> Could you also add the new model to [`all_model_classes`](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_electra.py#L41) which would test the model a little bit more?
Sure<|||||>Oh and before I forget, if you don't mind, could you add the new model in the docs as well in [this file](https://github.com/huggingface/transformers/blob/master/docs/source/model_doc/electra.rst) (between `ElectraForTokenClassification` and `TFElectraModel` ideally).<|||||>@sgugger
some tests are failing, but I'm not sure if they are related to this model
The test for this qa model is passed.<|||||>The failing tests are the test common for all models (that are applied to the one you're adding because of the change I made you do). One of the failure I see is linked to the `input_ids` not defaulting to `None` (for when you pass input embeddings instead). There is another one linked to the attentions, I pointed out the problems in comments.<|||||>> The failing tests are the test common for all models (that are applied to the one you're adding because of the change I made you do). One of the failure I see is linked to the `input_ids` not defaulting to `None` (for when you pass input embeddings instead). There is another one linked to the attentions, I pointed out the problems in comments.
Thanks @sgugger !
The tests are happy now :)<|||||>@sgugger this examples failure is related to `TestBartExamples.test_bart_summarization_dataset `<|||||>Cool! All green 🤗 |
transformers | 4,912 | closed | Benchmarks | # Benchmarks
This PR adds the functionality to measure the following functionalities for TF and PT:
**Tensorflow:**
- Inference: CPU, GPU, GPU + XLA, GPU + eager mode, CPU + eager mode, TPU
**PyTorch:**
- Inference: CPU, CPU + torchscript, GPU, GPU + torchscript, GPU + mixed precision, Torch/XLA TPU
- Training: CPU, GPU, GPU + mixed precision, Torch/XLA TPU
## How is memory measured?
**CPU**
We are always interested in the peak memory usage of the process. For CPU, the library `psutil` in combination with multiprocessing is leveraged
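For illustration, a rough sketch of that idea (simplified; function and variable names are illustrative, not the exact implementation):
```python
import time
from multiprocessing import Process

import psutil

def peak_cpu_memory(func):
    # run `func` in a child process and poll its resident set size from the parent
    child = Process(target=func)
    child.start()
    handle = psutil.Process(child.pid)
    peak = 0
    while child.is_alive():
        try:
            peak = max(peak, handle.memory_info().rss)
        except psutil.NoSuchProcess:
            break
        time.sleep(0.01)
    child.join()
    return peak  # in bytes
```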
**GPU**
It is difficult to have exact memory measurement on GPU. Tensorflow allocates the full GPU memory by default. This is disabled with `tf.config.experimental.set_memory_growth(gpu, True)`, but Tensorflow still allocates more memory than it needs for efficiency as far as I know.
=> Memory is therefore always measured to give the same maximal result as shown by `nvidia-smi`. This means that also memory for loading PyTorch / Tensorflow is taken into account which is for example not done when measuring via `torch.cuda.max_allocated_memory`.
Tensorflow also does not release GPU memory before the process is finished. Therefore, all measurement functions are wrapped into their own spawned process via Python's multiprocessing tools.
Also note that because TF does not release memory during the same process, memory and inference is measured using a multiprocess approach in TF. Also TF does not provide an official memory monitoring function, so that the same result that `nvidia-smi` would show for TF is used.
**TPU**
Memory measurement is currently not supported
## How is speed measured?
For all functionality that requires compilation (TPU, XLA, Torchscript), 5 warmup calls of the function are done beforehand.
Afterwards, the reported runtime is the minimum over `self.args.repeat` measurements, where each measurement averages the time of 10 function calls.
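Roughly equivalent logic as a sketch (not the exact implementation):
```python
import timeit

def measure_speed(func, repeat=3, number=10):
    for _ in range(5):                 # warmup for compiled backends (XLA, TorchScript, TPU)
        func()
    runtimes = timeit.repeat(func, repeat=repeat, number=number)
    return min(runtimes) / number      # best average time per call
```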
## Example Colabs:
The colabs give quick examples for each functionality with little explanation for the moment:
Pytorch TPU: https://colab.research.google.com/drive/1GJFOdcBe1pW_FKWpA0jK_AOsIQ5epcvE?usp=sharing
Tensorflow TPU:
https://colab.research.google.com/drive/1t8DW1NxA4b1BsWSZ1ehFG9oT69l0h7os?usp=sharing
GPU: https://colab.research.google.com/drive/15XTPT_GPp42Zj7_f1W9X_T3NNXE9_1Te?usp=sharing
CPU: https://colab.research.google.com/drive/1OG2rZgo18KvliS-ratybld9pHD06-v5S?usp=sharing
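For reference, a minimal local usage sketch, assuming the `PyTorchBenchmark` / `PyTorchBenchmarkArguments` classes exported by the library (argument values are arbitrary):
```python
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

args = PyTorchBenchmarkArguments(
    models=["bert-base-uncased"],
    batch_sizes=[8],
    sequence_lengths=[128],
)
results = PyTorchBenchmark(args).run()
print(results)
```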
## Future PR:
- [ ] Make nicer examples and explanations
- [ ] Update docs and think about automatic measuring on website
- [ ] Training in TF. Because the LM head models currently do not accept a `labels` parameter as an input, adding measurement for training is left for a future PR
- [ ] GPU fp16 in TF. We currently have a bug in the lib that does not allow running TF models in fp16 on GPU: https://github.com/huggingface/transformers/issues/3320
- [ ] PyTorch's amp package has memory leaks, so we simply do `model.half()` to measure fp16 in PyTorch. See the issue here: https://github.com/NVIDIA/apex/issues/439 . Wait until amp is supported in upstream torch 1.6
- [ ] Currently memory is not measured on TPU. Wait for more functionality for TPU
- [ ] Allow multi-GPU measurements
| 06-10-2020 17:08:06 | 06-10-2020 17:08:06 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4912?src=pr&el=h1) Report
> Merging [#4912](https://codecov.io/gh/huggingface/transformers/pull/4912?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/355954ffca798bb81d9db8886e30ce10f11e8a40&el=desc) will **decrease** coverage by `0.94%`.
> The diff coverage is `78.29%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4912?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4912 +/- ##
==========================================
- Coverage 77.28% 76.34% -0.95%
==========================================
Files 133 134 +1
Lines 22134 22369 +235
==========================================
- Hits 17107 17078 -29
- Misses 5027 5291 +264
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4912?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.77% <68.51%> (-3.20%)` | :arrow_down: |
| [src/transformers/benchmark/benchmark\_args\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `89.13% <71.42%> (-7.75%)` | :arrow_down: |
| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `79.13% <76.00%> (+9.70%)` | :arrow_up: |
| [src/transformers/benchmark/benchmark\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `82.69% <82.69%> (ø)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.80% <86.66%> (+1.40%)` | :arrow_up: |
| [src/transformers/benchmark/benchmark\_args\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdGYucHk=) | `87.50% <87.50%> (ø)` | |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.18% <100.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/benchmark/benchmark\_args.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3MucHk=) | `86.04% <100.00%> (+0.68%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.57% <100.00%> (+0.18%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `50.10% <0.00%> (-43.61%)` | :arrow_down: |
| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4912?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4912?src=pr&el=footer). Last update [355954f...8b71041](https://codecov.io/gh/huggingface/transformers/pull/4912?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>> This is great. I really like that you can import the benchmark if you want to use them during runtime, rather than the only option being to run a script.
>
> Some remarks after playing with it:
>
> * Maybe you should raise an error when no `model_names` are specified. Right now it crashes with `UnboundLocalError: local variable 'inference_summary' referenced before assignment` (pytorch version at least)
> * There seems to be an error in the way the runtimes are computed. PyTorch on GPU is slower than TensorFlow on CPU (10x slower), while PyTorch on CPU is 150x slower than TensorFlow on CPU.
>
> Here are the results from my runs so far. The following is on CPU with TensorFlow (2ms per inference with `bert-base-cased`, batch size 8 and seq len 512 on a CPU??). I didn't test the memory usage, so it's not in the results:
>
> ```
> ==================== INFERENCE - SPEED - RESULT ====================
> --------------------------------------------------------------------------------
> Model Name Batch Size Seq Length Time in s
> --------------------------------------------------------------------------------
> bert-base-cased 8 8 0.001
> bert-base-cased 8 32 0.001
> bert-base-cased 8 128 0.001
> bert-base-cased 8 512 0.002
> --------------------------------------------------------------------------------
>
> ==================== ENVIRONMENT INFORMATION ====================
> - transformers_version: 2.11.0
> - framework: Tensorflow
> - eager_mode: False
> - use_xla: False
> - framework_version: 2.2.0
> - python_version: 3.6.10
> - system: Linux
> - cpu:
> - architecture: 64bit
> - date: 2020-06-18
> - time: 11:57:18.595804
> - fp16: False
> - use_multiprocessing: True
> - cpu_ram_mb: 64333
> - use_gpu: False
> - use_tpu: False
> ```
>
> Here's the test with PyTorch on GPU:
>
> ```
> ==================== INFERENCE - SPEED - RESULT ====================
> --------------------------------------------------------------------------------
> Model Name Batch Size Seq Length Time in s
> --------------------------------------------------------------------------------
> bert-base-cased 8 8 0.007
> bert-base-cased 8 32 0.007
> bert-base-cased 8 128 0.019
> bert-base-cased 8 512 0.074
> --------------------------------------------------------------------------------
>
> ==================== ENVIRONMENT INFORMATION ====================
> - transformers_version: 2.11.0
> - framework: PyTorch
> - use_torchscript: False
> - framework_version: 1.5.0
> - python_version: 3.6.10
> - system: Linux
> - cpu:
> - architecture: 64bit
> - date: 2020-06-18
> - time: 11:56:31.041360
> - fp16: False
> - use_multiprocessing: True
> - cpu_ram_mb: 64333
> - use_gpu: True
> - num_gpus: 1
> - gpu: N/A
> - gpu_ram_mb: N/A
> - gpu_power_watts: N/A
> - gpu_performance_state: N/A
> - use_tpu: False
> ```
>
> I'm not sure that PyTorch on GPU is ~37x slower than TensorFlow on CPU. I tried to debug, but it's not easy to debug tf functions, unfortunately
Thanks a lot for checking everything! Found the error :-) One just has to return a tensor out of the `tf.function` context so that it is actually computed. I guess TF's graph compilation optimizes the function so that variables that are not used outside of the `@tf.function` scope are never computed.
Will update the notebooks and should then get more reasonable results :-) <|||||>And will definitely add a better error message<|||||>The speed tests seem much more reasonable now, if you check the notebooks :-) @LysandreJik
There seems to be a problem with GPU memory in TF now :-/ Will check tomorrow again<|||||>## GPU locally gives reasonable results of TF vs. PT.
All tests were run in this environment:
```
- transformers_version: 2.11.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-19
- time: 13:49:57.455208
- use_multiprocessing: True
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
```
for TF 2.2 and Pytorch 1.4.0
### PyTorch
`python run_benchmark.py --models gpt2 bert-base-cased --no_env_print --no_memory` gives:
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
gpt2 8 8 0.006
gpt2 8 32 0.007
gpt2 8 128 0.026
gpt2 8 512 0.104
bert-base-cased 8 8 0.006
bert-base-cased 8 32 0.006
bert-base-cased 8 128 0.021
bert-base-cased 8 512 0.094
--------------------------------------------------------------------------------
<|||||>### PyTorch FP16
`python run_benchmark.py --models gpt2 bert-base-cased --no_env_print --no_memory --fp16`
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
gpt2 8 8 0.006
gpt2 8 32 0.007
gpt2 8 128 0.009
gpt2 8 512 0.043
bert-base-cased 8 8 0.006
bert-base-cased 8 32 0.006
bert-base-cased 8 128 0.006
bert-base-cased 8 512 0.03 <|||||>### TF no eager modus
```python run_benchmark_tf.py --models gpt2 bert-base-cased --no_env_print --no_memory```
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
gpt2 8 8 0.005
gpt2 8 32 0.007
gpt2 8 128 0.029
gpt2 8 512 0.125
bert-base-cased 8 8 0.005
bert-base-cased 8 32 0.006
bert-base-cased 8 128 0.024
bert-base-cased 8 512 0.114
--------------------------------------------------------------------------------<|||||>### TF XLA
```python run_benchmark_tf.py --models gpt2 bert-base-cased --no_env_print --no_memory --use_xla```
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
gpt2 8 8 0.002
gpt2 8 32 0.006
gpt2 8 128 0.021
gpt2 8 512 0.095
bert-base-cased 8 8 0.003
bert-base-cased 8 32 0.005
bert-base-cased 8 128 0.019
bert-base-cased 8 512 0.087
--------------------------------------------------------------------------------
<|||||>## Memory measurements
They also seem reasonable for forward pass:.
### TF no eager mode (keeping in mind that nvidia-smi is not accurate here and TF always allocates more than it needs):
```python run_benchmark_tf.py --models gpt2 bert-base-cased --no_env_print --no_speed```
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
gpt2 64 8 1704
gpt2 64 32 1704
gpt2 64 128 2728
gpt2 64 512 8872
bert-base-cased 64 8 1192
bert-base-cased 64 32 1192
bert-base-cased 64 128 1704
bert-base-cased 64 512 4776
--------------------------------------------------------------------------------
### PyTorch
```python run_benchmark.py --models gpt2 bert-base-cased --no_env_print --no_speed```
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
gpt2 64 8 1150
gpt2 64 32 1384
gpt2 64 128 2290
gpt2 64 512 5890
bert-base-cased 64 8 1016
bert-base-cased 64 32 1104
bert-base-cased 64 128 1448
bert-base-cased 64 512 3224
--------------------------------------------------------------------------------
### PyTorch FP16
```python run_benchmark.py --models gpt2 bert-base-cased --no_env_print --no_speed --fp16```
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
gpt2 64 8 1170
gpt2 64 32 1164
gpt2 64 128 1596
gpt2 64 512 3420
bert-base-cased 64 8 1066
bert-base-cased 64 32 1060
bert-base-cased 64 128 1108
bert-base-cased 64 512 2118
--------------------------------------------------------------------------------
|
transformers | 4,911 | closed | enable pickling for TF Bert models | this implements `__getstate__` for BERT models to enable pickling (without this PR the pickle attempt fails due to `weakref` errors). It also adds a unit test. | 06-10-2020 16:39:41 | 06-10-2020 16:39:41 | Hi! Why would you prefer using pickle rather than `save_pretrained`/`from_pretrained` or `torch.save`/`torch.load`?<|||||>Hi @LysandreJik, pickle is mainly useful for parallel processing frameworks like `joblib` or `dask`. The use case is to parallelize some (embarrassingly parallel) computation on multiple CPUs/GPUs. Usually, they use pickled objects to send to workers.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4911?src=pr&el=h1) Report
> Merging [#4911](https://codecov.io/gh/huggingface/transformers/pull/4911?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ac99217e92c43066af7ec96554054d75532565d7&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4911?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4911 +/- ##
=======================================
Coverage 76.99% 77.00%
=======================================
Files 128 128
Lines 21602 21607 +5
=======================================
+ Hits 16633 16639 +6
+ Misses 4969 4968 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4911?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.51% <100.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.49% <0.00%> (-0.12%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.49% <0.00%> (+0.40%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4911?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4911?src=pr&el=footer). Last update [ac99217...dd7120b](https://codecov.io/gh/huggingface/transformers/pull/4911?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This is cool! Could we also add it to the PyTorch models?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
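As discussed above, the point of supporting `pickle` is letting parallel frameworks ship model copies to worker processes. A minimal sketch of what the change enables (the checkpoint name is just an example):
```python
import pickle

from transformers import TFBertModel

model = TFBertModel.from_pretrained("bert-base-cased")

# Without __getstate__ this round trip fails with a weakref-related TypeError;
# with it, the model can be handed to joblib / dask workers like any picklable object.
restored = pickle.loads(pickle.dumps(model))
```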
|
transformers | 4,910 | closed | Add more models to common tests | Follow-up to #4886, adds all existing pt models to common tests (with the exception of the longformer task-specific ones, because of some problem with output attention).
Most of them required some fixes in the model files which are also added.
For longformer, a few of the needed fixes are present but there was still a standing failing test. @patrickvonplaten will look into it. | 06-10-2020 16:26:04 | 06-10-2020 16:26:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4910?src=pr&el=h1) Report
> Merging [#4910](https://codecov.io/gh/huggingface/transformers/pull/4910?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ac99217e92c43066af7ec96554054d75532565d7&el=desc) will **increase** coverage by `0.09%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4910?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4910 +/- ##
==========================================
+ Coverage 76.99% 77.08% +0.09%
==========================================
Files 128 128
Lines 21602 21604 +2
==========================================
+ Hits 16633 16654 +21
+ Misses 4969 4950 -19
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4910?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4910/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.50% <100.00%> (ø)` | |
| [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4910/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `76.09% <100.00%> (ø)` | |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4910/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.01% <100.00%> (+0.01%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4910/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.76% <100.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4910/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.49% <0.00%> (-0.12%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4910/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4910/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.49% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4910/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.61% <0.00%> (+2.96%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4910?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4910?src=pr&el=footer). Last update [ac99217...9e50a38](https://codecov.io/gh/huggingface/transformers/pull/4910?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>LGTM! |
transformers | 4,909 | closed | [All models] fix docs after adding output attentions to all forward functions | Added the same docs to all models for `output_attentions`, following PR: https://github.com/huggingface/transformers/pull/4538 .
This PR only touches the docs.
Pinging @LysandreJik @Bharat123rox for notification. | 06-10-2020 15:15:03 | 06-10-2020 15:15:03 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4909?src=pr&el=h1) Report
> Merging [#4909](https://codecov.io/gh/huggingface/transformers/pull/4909?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ac99217e92c43066af7ec96554054d75532565d7&el=desc) will **increase** coverage by `0.40%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4909?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4909 +/- ##
==========================================
+ Coverage 76.99% 77.40% +0.40%
==========================================
Files 128 128
Lines 21602 21602
==========================================
+ Hits 16633 16720 +87
+ Misses 4969 4882 -87
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4909?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.37% <ø> (ø)` | |
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `76.22% <ø> (ø)` | |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.40% <ø> (ø)` | |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.28% <ø> (ø)` | |
| [src/transformers/modeling\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.63% <ø> (ø)` | |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.50% <ø> (ø)` | |
| [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `76.09% <ø> (ø)` | |
| [src/transformers/modeling\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `84.00% <ø> (ø)` | |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.43% <ø> (ø)` | |
| ... and [27 more](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4909?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4909?src=pr&el=footer). Last update [ac99217...58379b0](https://codecov.io/gh/huggingface/transformers/pull/4909?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This is great! Thanks @patrickvonplaten |
transformers | 4,908 | closed | BartForQuestionAnswering | This PR adds `BartForQuestionAnswering`.
Decided to add this model as `BART` is intended for both NLU and NLG tasks and also achieves performance comparable to `RoBERTa` on SQuAD.
Also fine-tuned the model [here](https://colab.research.google.com/drive/1I5cK1M_0dLaf5xoewh6swcm5nAInfwHy?usp=sharing). The metrics are slightly worse than given in the paper. Got following metrics on SQuADv1
`{'exact_match': 86.80227057710502, 'f1': 92.73424907872341}`
@sshleifer , @patrickvonplaten
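For reference, using the new head looks roughly like the sketch below (the base checkpoint here is not fine-tuned on SQuAD, so meaningful answers would require a fine-tuned checkpoint such as the one trained in the notebook above):
```python
import torch
from transformers import BartForQuestionAnswering, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForQuestionAnswering.from_pretrained("facebook/bart-large")

question = "What tasks is BART intended for?"
context = "BART is a denoising seq2seq model intended for both NLU and NLG tasks."
inputs = tokenizer.encode_plus(question, context, return_tensors="pt")

# first two outputs are the start / end logits when no labels are passed
start_logits, end_logits = model(
    input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
)[:2]
start = torch.argmax(start_logits)
end = torch.argmax(end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```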
| 06-10-2020 15:02:38 | 06-10-2020 15:02:38 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4908?src=pr&el=h1) Report
> Merging [#4908](https://codecov.io/gh/huggingface/transformers/pull/4908?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ac99217e92c43066af7ec96554054d75532565d7&el=desc) will **increase** coverage by `0.03%`.
> The diff coverage is `94.11%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4908?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4908 +/- ##
==========================================
+ Coverage 76.99% 77.02% +0.03%
==========================================
Files 128 128
Lines 21602 21635 +33
==========================================
+ Hits 16633 16665 +32
- Misses 4969 4970 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4908?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.26% <93.93%> (-0.15%)` | :arrow_down: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.40% <100.00%> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.49% <0.00%> (-0.12%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.49% <0.00%> (+0.40%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4908?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4908?src=pr&el=footer). Last update [ac99217...63eb191](https://codecov.io/gh/huggingface/transformers/pull/4908?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks for the contribution @patil-suraj !<|||||>> Hi! Very cool @patil-suraj.
>
> Could you also add `BartForQuestionAnswering` to the `all_model_classes` in `test_modeling_bart.py`?
Hi, @LysandreJik
After adding `BartForQuestionAnswering` in `all_model_classes` I also had to add `output_attention` parameter to `forward`.
Now for some reason `test_attention_outputs` is failing, I am not sure why, could you help me fix it ?
Thanks !<|||||>Awesome work @patil-suraj - I can help you with this test :-) <|||||>I see what the problem is...it's actually not related to your PR at all. Can we you for now just remove `BartForQuestionAnswering` from the all_models tuples in the tests. @LysandreJik @sshleifer I will open a new PR after this one to fix it :-) <|||||>> I see what the problem is...it's actually not related to your PR at all. Can we you for now just remove `BartForQuestionAnswering` from the all_models tuples in the tests. @LysandreJik @sshleifer I will open a new PR after this one to fix it :-)
Thank you @patrickvonplaten . I've removed it from `all_models` tuple for now |
transformers | 4,907 | closed | ModuleNotFoundError: No module named 'xml.sax'; 'xml' is not a package | I'm running this example:
```python
from transformers import pipeline

nlp = pipeline("sentiment-analysis")
print(nlp("I hate you"))
print(nlp("I love you"))
```
I get this error:
`
Traceback (most recent call last):
File "ttt.py", line 1, in <module>
from transformers import pipeline as ppp
File "/usr/local/lib/python3.8/site-packages/transformers/__init__.py", line 99, in <module>
from .pipelines import (
File "/usr/local/lib/python3.8/site-packages/transformers/pipelines.py", line 36, in <module>
from .tokenization_auto import AutoTokenizer
File "/usr/local/lib/python3.8/site-packages/transformers/tokenization_auto.py", line 52, in <module>
from .tokenization_flaubert import FlaubertTokenizer
File "/usr/local/lib/python3.8/site-packages/transformers/tokenization_flaubert.py", line 23, in <module>
from .tokenization_xlm import XLMTokenizer
File "/usr/local/lib/python3.8/site-packages/transformers/tokenization_xlm.py", line 26, in <module>
import sacremoses as sm
File "/usr/local/lib/python3.8/site-packages/sacremoses/__init__.py", line 2, in <module>
from sacremoses.tokenize import *
File "/usr/local/lib/python3.8/site-packages/sacremoses/tokenize.py", line 10, in <module>
from sacremoses.util import is_cjk
File "/usr/local/lib/python3.8/site-packages/sacremoses/util.py", line 9, in <module>
from xml.sax.saxutils import escape, unescape
ModuleNotFoundError: No module named 'xml.sax'; 'xml' is not a package
` | 06-10-2020 14:56:07 | 06-10-2020 14:56:07 | Hello! Is `sacremoses` installed in your environment? Do you mind pasting the result of `pip list` in your environment?<|||||>Here you go:
`Package Version
---------------------- -----------
absl-py 0.9.0
astunparse 1.6.3
beautifulsoup4 4.9.1
bs4 0.0.1
cachetools 4.1.0
certifi 2020.4.5.1
chardet 3.0.4
click 7.1.2
filelock 3.0.12
future 0.18.2
gast 0.3.3
google-auth 1.16.1
google-auth-oauthlib 0.4.1
google-pasta 0.2.0
grpcio 1.29.0
h5py 2.10.0
idna 2.9
joblib 0.15.1
Keras-Preprocessing 1.1.2
Markdown 3.2.2
numpy 1.18.5
oauthlib 3.1.0
opt-einsum 3.2.1
packaging 20.4
Pillow 7.1.2
pip 20.0.2
protobuf 3.12.2
pyasn1 0.4.8
pyasn1-modules 0.2.8
pyparsing 2.4.7
regex 2020.6.8
requests 2.23.0
requests-oauthlib 1.3.0
rsa 4.0
sacremoses 0.0.43
scipy 1.4.1
sentencepiece 0.1.91
setuptools 46.0.0
six 1.15.0
soupsieve 2.0.1
tensorboard 2.2.2
tensorboard-plugin-wit 1.6.0.post3
tensorflow 2.2.0
tensorflow-estimator 2.2.0
termcolor 1.1.0
tokenizers 0.7.0
torch 1.5.0
torchvision 0.6.0
tqdm 4.46.1
transformers 2.11.0
urllib3 1.25.9
Werkzeug 1.0.1
wheel 0.34.2
wrapt 1.12.1 `<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,906 | closed | TypeError: export() got an unexpected keyword argument 'use_external_data_format' | Hi,
I tried to run `convert_graph_to_onnx.py` using
`convert(framework="pt", model="bert-base-uncased", output="onnx/bert-base-uncased.onnx", opset=11)`
But I get an error in the following call:
```
export(
nlp.model,
model_args,
f=output,
input_names=ordered_input_names,
output_names=output_names,
dynamic_axes=dynamic_axes,
do_constant_folding=True,
use_external_data_format=use_external_format,
enable_onnx_checker=True,
opset_version=opset,
)
```
The error is:
```
TypeError: export() got an unexpected keyword argument 'use_external_data_format'
TypeError: export() got an unexpected keyword argument 'enable_onnx_checker'
```
When I delete these two lines, it no longer reports errors. Is it OK to remove these two lines?
Thanks,
ZLK | 06-10-2020 14:30:08 | 06-10-2020 14:30:08 | I just did this too and it seems to work :S <|||||>I looked into this and I believe the issue is that use_external_data_format is a recent [addition to PyTorch from the onnx team](https://github.com/pytorch/pytorch/commit/96989a2a114de9b77e7dd9495d62c4a8a549b40d). If you upgrade to torch>=1.5.0 it should work. Also I added a PR #5687 to make this issue more straightforward. |
transformers | 4,905 | closed | [How to] Carefully designing the head of a Transformer model? | # ❓ Questions & Help
While using any pre-trained transformer model, what are the main things we should normally consider when designing the head? For example:
```python
distill = transformer.distilbert.... <----- slicing the first position
x = Dense(n, activation = ' ')(distill) <-------- simple classifier head
```
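For context, a concrete (hypothetical) version of such a head in Keras could look like the sketch below; the exact layers stacked on top are up to you:
```python
import tensorflow as tf
from transformers import TFDistilBertModel

distilbert = TFDistilBertModel.from_pretrained("distilbert-base-uncased")

input_ids = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name="attention_mask")

sequence_output = distilbert(input_ids, attention_mask=attention_mask)[0]
cls_embedding = sequence_output[:, 0, :]                              # slice the first position
x = tf.keras.layers.Dropout(0.1)(cls_embedding)
probabilities = tf.keras.layers.Dense(2, activation="softmax")(x)     # simple classifier head

model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=probabilities)
```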
Is it really necessary to design an additional head? (I'm using `tensorflow` backend.) | 06-10-2020 13:42:20 | 06-10-2020 13:42:20 | |
transformers | 4,904 | closed | [ctrl] fix pruning of MultiHeadAttention | @sshleifer
Implemented the pruning logic. Fixes - #4798
After enabling `test_pruning` all the previously failing tests are passing. | 06-10-2020 12:53:56 | 06-10-2020 12:53:56 | Hi @aretius, some (if not all) the tests failing are unrelated to your PR, and should have been solved by the recently merged #4903. Do you mind rebasing on `master` and force pushing so that we may see if all the tests pass?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4904?src=pr&el=h1) Report
> Merging [#4904](https://codecov.io/gh/huggingface/transformers/pull/4904?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ac99217e92c43066af7ec96554054d75532565d7&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `93.33%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4904?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4904 +/- ##
==========================================
+ Coverage 76.99% 77.01% +0.01%
==========================================
Files 128 128
Lines 21602 21615 +13
==========================================
+ Hits 16633 16647 +14
+ Misses 4969 4968 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4904?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4904/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.14% <93.33%> (+0.50%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4904?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4904?src=pr&el=footer). Last update [ac99217...bf94b7e](https://codecov.io/gh/huggingface/transformers/pull/4904?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@sshleifer do you want to take a look? |
transformers | 4,903 | closed | Fix the CI | The CI was broken by the merge of #4886 since #4538 was merged between the moment #4886 was tested and the moment it was merged.
This PR fixes the tests. | 06-10-2020 12:39:42 | 06-10-2020 12:39:42 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4903?src=pr&el=h1) Report
> Merging [#4903](https://codecov.io/gh/huggingface/transformers/pull/4903?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0a375f5abdefcde1424639f712cf40247135cd64&el=desc) will **increase** coverage by `36.43%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4903?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4903 +/- ##
===========================================
+ Coverage 40.56% 76.99% +36.43%
===========================================
Files 128 128
Lines 21602 21602
===========================================
+ Hits 8762 16633 +7871
+ Misses 12840 4969 -7871
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4903?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (+0.63%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.69% <0.00%> (+0.93%)` | :arrow_up: |
| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.37% <0.00%> (+1.44%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+1.70%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.09% <0.00%> (+5.22%)` | :arrow_up: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `92.79% <0.00%> (+6.30%)` | :arrow_up: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.41% <0.00%> (+11.82%)` | :arrow_up: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.28% <0.00%> (+14.33%)` | :arrow_up: |
| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.44% <0.00%> (+17.34%)` | :arrow_up: |
| ... and [44 more](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4903?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4903?src=pr&el=footer). Last update [0a375f5...0c9840c](https://codecov.io/gh/huggingface/transformers/pull/4903?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,902 | closed | [cleanup] Hoist ModelTester objects to toplevel | https://github.com/huggingface/transformers/pull/4046#issuecomment-628236744
many `ModelTester` objects are defined within classes. If we move them to the top level of the module, we can share code where possible and also have less complexity.
The task here is to move `ModelTester` objects to the top level.
Bonus: if the kwargs are never used, replace
```python
def __init__(self, num_layers=2):
    self.num_layers = num_layers
```
with
```python
def __init__(self):
    self.num_layers = 2
```
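Put together, the end state could look roughly like this (hypothetical `XxxModelTester`, not actual code from the repo):
```python
import unittest


class XxxModelTester:
    """Defined at module level so other test files can reuse it."""

    def __init__(self, parent):
        self.parent = parent
        # hardcoded instead of never-overridden keyword arguments
        self.batch_size = 13
        self.seq_length = 7
        self.num_layers = 2

    def create_and_check_model(self):
        self.parent.assertEqual(self.num_layers, 2)


class XxxModelTest(unittest.TestCase):
    def setUp(self):
        self.model_tester = XxxModelTester(self)

    def test_model(self):
        self.model_tester.create_and_check_model()
```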
| 06-10-2020 11:50:10 | 06-10-2020 11:50:10 | @sshleifer It would be beneficial to provide more context, apologies for the beginner question!
Would be happy to pick it up :)<|||||>Yeah sure. The high level goal is to reduce the amount of boilerplate code in the unittests.
For example, if you look at [T5ModelTester](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_t5.py#L43), there are a few code quality issues:
1) The class is defined inside `T5ModelTest` class and indented. It should be defined outside.
2) The class inherits from `object`. It should not.
3) The class has 18 lines of keyword arguments that are never used. They should be hardcoded. For example, instead of line 47 (`batch_size=13`) and line 68 (`self.batch_size=batch_size`), we could simply set `self.batch_size = 13` in one line.
These 3 problems occur in nearly all of the following files:
```bash
git grep "ModelTester(object)"
```
Results:
```bash
tests/test_modeling_albert.py: class AlbertModelTester(object):
tests/test_modeling_ctrl.py: class CTRLModelTester(object):
tests/test_modeling_distilbert.py: class DistilBertModelTester(object):
tests/test_modeling_electra.py: class ElectraModelTester(object):
tests/test_modeling_flaubert.py: class FlaubertModelTester(object):
tests/test_modeling_gpt2.py: class GPT2ModelTester(object):
tests/test_modeling_longformer.py:class LongformerModelTester(object):
tests/test_modeling_openai.py: class OpenAIGPTModelTester(object):
tests/test_modeling_roberta.py: class RobertaModelTester(object):
tests/test_modeling_t5.py: class T5ModelTester(object):
tests/test_modeling_tf_albert.py: class TFAlbertModelTester(object):
tests/test_modeling_tf_bert.py: class TFBertModelTester(object):
tests/test_modeling_tf_ctrl.py: class TFCTRLModelTester(object):
tests/test_modeling_tf_distilbert.py: class TFDistilBertModelTester(object):
tests/test_modeling_tf_electra.py: class TFElectraModelTester(object):
tests/test_modeling_tf_gpt2.py: class TFGPT2ModelTester(object):
tests/test_modeling_tf_openai_gpt.py: class TFOpenAIGPTModelTester(object):
tests/test_modeling_tf_roberta.py: class TFRobertaModelTester(object):
tests/test_modeling_tf_t5.py: class TFT5ModelTester(object):
tests/test_modeling_tf_transfo_xl.py: class TFTransfoXLModelTester(object):
tests/test_modeling_tf_xlm.py: class TFXLMModelTester(object):
tests/test_modeling_tf_xlnet.py: class TFXLNetModelTester(object):
tests/test_modeling_transfo_xl.py: class TransfoXLModelTester(object):
tests/test_modeling_xlm.py: class XLMModelTester(object):
tests/test_modeling_xlnet.py: class XLNetModelTester(object):
```
Once this is done, we can update the instructions in
```bash
templates/adding_a_new_model/tests/test_modeling_tf_xxx.py
templates/adding_a_new_model/tests/test_modeling_xxx.py
```<|||||>Indeed, code quality could be improved here! |
transformers | 4,901 | closed | Add MobileBert | Grabbed the code from https://github.com/lonePatient/MobileBert_PyTorch and added the question answering downstream task.
Should address #4185. Also got the backbone weights' representation in the Pytorch/transformers format (i.e. the `pytorch_model.bin`, `config.json` and `vocab.txt` files) via converting the original TF [uncased_L-24_H-128_B-512_A-4_F-4_OPT](https://storage.googleapis.com/cloud-tpu-checkpoints/mobilebert/uncased_L-24_H-128_B-512_A-4_F-4_OPT.tar.gz) checkpoint - need guidance on how to upload these if necessary.
The converted backbone weights, as loaded in transformers via the PyTorch checkpoint loading method, roughly reproduce the original paper's results for SST-2 - the paper claims 92.8% accuracy, while I got 91.7% using the hyperparameters in https://github.com/lonePatient/MobileBert_PyTorch. Not sure if there is another fast way to confirm that these weights indeed correspond to the original pretrained MobileBert backbone.
Otherwise it's ok to review this now, I believe.<|||||>Ah yes, this happens when you have conflicting versions of black/isort. It's painful because isort should be installed from a specific commit.
Don't worry about it though, we'll fix that later on!<|||||>Credit goes to @lonePatient, I am merely integrating this to transformers because we at [nncf_pytorch](https://github.com/openvinotoolkit/nncf_pytorch) leverage this excellent repo for compression experiments with NLP models and would like to try out MobileBERT as well.
Will address the remarks and update the PR.
<|||||>> * Upload the checkpoints to S3. Seeing as there's a single checkpoint released by google, I guess it would be under the name `google/mobilebert-uncased` @julien-c ?
Yes! We can ping the authors and check that they're ok.<|||||>@LysandreJik @julien-c so may I upload the model to S3 already or should we wait for @saberkun's approval for this?<|||||>I think the conversion script lacks
```py
name = name.replace("bert", "mobilebert")
```
in order to work
I'd like to update this as well as the code quality, do you mind if I push directly on your fork?<|||||>> I think the conversion script lacks
>
> ```python
> name = name.replace("bert", "mobilebert")
> ```
>
> in order to work
>
> I'd like to update this as well as the code quality, do you mind if I push directly on your fork?
Sure.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4901?src=pr&el=h1) Report
> Merging [#4901](https://codecov.io/gh/huggingface/transformers/pull/4901?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f45e873910e60d89511ae0193711e71c5c710468&el=desc) will **increase** coverage by `0.72%`.
> The diff coverage is `91.26%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4901?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4901 +/- ##
==========================================
+ Coverage 77.19% 77.91% +0.72%
==========================================
Files 133 137 +4
Lines 22233 23470 +1237
==========================================
+ Hits 17163 18287 +1124
- Misses 5070 5183 +113
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4901?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `88.74% <88.74%> (ø)` | |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `93.32% <93.32%> (ø)` | |
| [src/transformers/configuration\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX21vYmlsZWJlcnQucHk=) | `97.05% <97.05%> (ø)` | |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.19% <100.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.18% <100.00%> (+0.15%)` | :arrow_up: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.93% <100.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `72.50% <100.00%> (+0.23%)` | :arrow_up: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <100.00%> (+0.10%)` | :arrow_up: |
| [src/transformers/tokenization\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbW9iaWxlYmVydC5weQ==) | `100.00% <100.00%> (ø)` | |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |
| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4901?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4901?src=pr&el=footer). Last update [f45e873...e73fde7](https://codecov.io/gh/huggingface/transformers/pull/4901?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Will add the documentation in the following commit and ping you for review then @patrickvonplaten @sgugger <|||||>I just pushed the TensorFlow implementation, and added several models: `MobileBertFor{MaskedLM, NextSentencePrediction, MultipleChoice, TokenClassification}` alongside their tests and documentation.
Will solve the two remaining tests on Monday, put the TensorFlow checkpoints on S3 and we'll be good to merge!<|||||>Thanks for your reviews @patrickvonplaten @sgugger! |