repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 3,892 | closed | ❓ Summarization example : Why no shuffling ? | # ❓ Questions & Help
Usually, when loading data with a DataLoader, the training set is shuffled. But in the case of the summarization example, the training set is not shuffled:
https://github.com/huggingface/transformers/blob/1dc9b3c7847269961458c059ad8ad443b26bf60d/examples/summarization/bart/finetune.py#L105-L108
---
**Why is the data not shuffled?** | 04-22-2020 02:28:51 | 04-22-2020 02:28:51 | No good reason.
Do you feel comfortable sending a PR that shuffles for train loader only?
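A minimal sketch of the change being discussed, shuffling only the training split (the dataset objects below are stand-ins for the example's real classes):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

train_dataset = TensorDataset(torch.arange(100))  # stand-in for the real training set
val_dataset = TensorDataset(torch.arange(20))     # stand-in for the real validation set

# Shuffle only the train loader; keep evaluation order deterministic.
train_loader = DataLoader(train_dataset, batch_size=4, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=4, shuffle=False)
```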
|
transformers | 3,891 | closed | Allow one to return encoder attentions in seq2seq generation | # 🚀 Feature request
Please could we have the ability to return attention weights from the decoded generated tokens to the encoded source?
## Motivation
To attribute the decoded text. E.g., in the summarization task we want to see where in the source the decoder was paying attention.
## Your contribution
May be able to look into a PR but stretched for time at the minute.
FairSeq has implemented this capability I believe. | 04-21-2020 22:41:28 | 04-21-2020 22:41:28 | Hi @aced125, I agree that this functionality should be provided :-)
I think in the PR, one has to include a `output_attention` argument to the `generate()` function and then make sure that the output idx are correct!
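A rough sketch of a workaround in the meantime, re-running a forward pass on the generated tokens with attentions enabled (the checkpoint name is arbitrary, and the exact position of the attention tensors in the output tuple varies by model and library version, so inspect `outputs` before indexing into it):
```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained(
    "facebook/bart-large-cnn", output_attentions=True
)

input_ids = tokenizer.encode("Some long source document ...", return_tensors="pt")
summary_ids = model.generate(input_ids, num_beams=4, max_length=60, early_stopping=True)

with torch.no_grad():
    outputs = model(input_ids, decoder_input_ids=summary_ids)
# Among `outputs` are per-layer attention tensors of shape
# (batch, num_heads, summary_len, source_len) linking summary tokens to source tokens.
```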
Before starting this PR, this issue should probably be resolved first: https://github.com/huggingface/transformers/issues/3880<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>want to take a look at this soon<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>#6735 is a first step to allow for this feature<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,890 | closed | Create model card | Model: TinyBERT-spanish-uncased-finetuned-ner | 04-21-2020 21:27:04 | 04-21-2020 21:27:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3890?src=pr&el=h1) Report
> Merging [#3890](https://codecov.io/gh/huggingface/transformers/pull/3890?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eb5601b0a5a88824a2598956f96e06e7f2422bce&el=desc) will **decrease** coverage by `0.03%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3890?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3890 +/- ##
==========================================
- Coverage 78.57% 78.53% -0.04%
==========================================
Files 106 106
Lines 17962 17962
==========================================
- Hits 14113 14106 -7
- Misses 3849 3856 +7
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3890?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.28% <0.00%> (-1.32%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3890?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3890?src=pr&el=footer). Last update [eb5601b...fe606bb](https://codecov.io/gh/huggingface/transformers/pull/3890?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,889 | closed | Update comparison table | 04-21-2020 21:24:46 | 04-21-2020 21:24:46 | ||
transformers | 3,888 | closed | encode_for_summarization function did actually add CLS and SEP to separate sentences | https://github.com/huggingface/transformers/blob/d32585a304107cb9f42ccb0e1278405aa3eb6c9c/examples/summarization/bertabs/utils_summarization.py#L130
Hi,
Could you please take a look at this part of the code? I don't think it actually separates the sentences: the function never adds CLS and SEP to them. Thank you in advance for your help! | 04-21-2020 20:20:59 | 04-21-2020 20:20:59 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
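For reference, an illustrative sketch of the per-sentence encoding the issue is asking about, where every sentence gets wrapped in [CLS] ... [SEP] (the tokenizer choice and sentences are assumptions, not the example's actual data):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
story_lines = ["the first sentence .", "the second sentence ."]

token_ids = []
for sentence in story_lines:
    # add_special_tokens=True prepends [CLS] and appends [SEP] for BERT tokenizers
    token_ids.extend(tokenizer.encode(sentence, add_special_tokens=True))
```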
|
transformers | 3,887 | closed | pytorch lightning examples doesn't work in multi gpu's with backend=dp | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: run_pl.sh (run_pl_glue.py)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: Glue
## To reproduce
Steps to reproduce the behavior:
1. run_pl.sh script with multi-gpu's (ex:8 gpu's)
## Expected behavior
Glue training should happen
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux
- Python version: 3.7
- PyTorch version (GPU?): 1.4
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: DataParallel
| 04-21-2020 19:29:43 | 04-21-2020 19:29:43 | I get the below error:
```
Validation sanity check: 0%| | 0/5 [00:00<?, ?it/s]Traceback (most recent call last):
File "run_pl_glue.py", line 186, in <module>
trainer = generic_train(model, args)
File "/home/jupyter/transformers/examples/transformer_base.py", line 307, in generic_train
trainer.fit(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 701, in fit
self.dp_train(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 540, in dp_train
self.run_pretrain_routine(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 843, in run_pretrain_routine
False)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 262, in _evaluate
output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 430, in evaluation_forward
output = model(*args)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/overrides/data_parallel.py", line 66, in forward
return self.gather(outputs, self.output_device)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 165, in gather
return gather(outputs, output_device, dim=self.dim)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather
res = gather_map(outputs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in gather_map
for k in out))
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in <genexpr>
for k in out))
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in gather_map
return Gather.apply(target_device, dim, *outputs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/_functions.py", line 54, in forward
assert all(map(lambda i: i.is_cuda, inputs))
AssertionError
```
@nateraw @williamFalcon<|||||>update to the latest lightning version?
0.7.4rc1<|||||>@williamFalcon doesn't work with lightning version 0.7.4rc1, 0.7.4rc2 and even 0.7.3, 0.7.1
<|||||>ok, can you share a colab here? happy to take a look<|||||>@williamFalcon Thanks. I'm running the code as per the given instructions in https://github.com/huggingface/transformers/tree/master/examples/glue
I didn't make any changes, I just ran the same official example script in multi gpu's - https://github.com/huggingface/transformers/blob/master/examples/glue/run_pl.sh
It works in CPU and single GPU, but doesn't work in multi-gpu's <|||||>It is a bit unclear what is going on in there: the bash script installs lightning but the python code doesn't seem to use it?<|||||>I am also facing the error but on a different custom learning model. My code is working properly on a single GPU, however, if I increase the number of GPUs to 2, it gives me the above error. I checked both PL 0.7.3 and 0.7.4rc3
**Update: Interestingly when I changed ``distributed_backend`` to ``ddp`` then it worked perfectly without any error** I think there is an issue with the **``dp``** distributed_backend<|||||>
run_pl.sh runs fine.
I ran without ANY changes to the file. Did you guys change anything in the file?<|||||>@williamFalcon Didn't change anything, hope you ran it in multi-gpu's. The code seems to run fine in ddp, but not in dp, as mentioned by @mmiakashs .
When I debugged, I found that when using dp (DataParallel) with 8 GPUs, it generates 8 different losses, and since the training_step can't gather 8 losses, it showed an error like this:
``` TypeError: zip argument #1 must support iteration ```
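A hedged sketch of how the per-GPU losses could be reduced under dp, assuming a Lightning version that provides the `*_step_end` hooks (illustrative only, not code from this thread; the rest of the LightningModule is omitted):
```python
import pytorch_lightning as pl


class GlueTransformer(pl.LightningModule):
    # ... training_step / validation_step and the rest of the module omitted ...

    def training_step_end(self, outputs):
        # Under dp, `outputs` contains one loss per GPU replica; reduce to a scalar.
        return {"loss": outputs["loss"].mean()}

    def validation_step_end(self, outputs):
        return {"val_loss": outputs["val_loss"].mean()}
```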
<|||||>Ummm, yeah not sure... It looks ok to me.


Try running dp on 2 GPUs? This test is on 2 GPUs<|||||>It looks like hf sets ddp as the backend which is great because dp has a bunch of issues (this is a PyTorch problem, not lightning). Both PyTorch and lightning discourage dp use.
Just ran this with the default ddp and it works well (although the run_pl.sh script has a bunch of usability issues, ie: i need the data in a different part of the cluster but that script doesn't do that, so I had to run from that directory in the cluster. Ideally --data_dir solves this issue but it doesn't).<|||||>I can confirm that the issue occurs only when using multi-gpu's with dp as backend. Using ddp solves the issues.
I found one more issue. If I use fast tokenizers with ddp as backend, I get the below error:
```
INFO:lightning:GPU available: True, used: True
INFO:lightning:CUDA_VISIBLE_DEVICES: [0,1]
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/utilities/warnings.py:18: RuntimeWarning: You have defined a `val_dataloader()` and have defined a `validation_step()`, you may also want to define `validation_epoch_end()` for accumulating stats.
warnings.warn(*args, **kwargs)
Traceback (most recent call last):
File "run_pl_glue.py", line 187, in <module>
trainer = generic_train(model, args)
File "/home/jupyter/transformers/examples/transformer_base.py", line 310, in generic_train
trainer.fit(model)
File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 734, in fit
mp.spawn(self.ddp_train, nprocs=self.num_processes, args=(model,))
File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 162, in spawn
process.start()
File "/opt/conda/lib/python3.7/multiprocessing/process.py", line 112, in start
self._popen = self._Popen(self)
File "/opt/conda/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/opt/conda/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/opt/conda/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
self._launch(process_obj)
File "/opt/conda/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/opt/conda/lib/python3.7/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle Tokenizer objects
```<|||||>>
> I found one more issue. If I use fast tokenizers with ddp as backend, I get the below error:
>
@leslyarun I am also facing a similar issue with ddp backend (not exactly the same): [github issue](https://github.com/PyTorchLightning/pytorch-lightning/issues/1578)
My guess is that maybe there is an issue with the callback and the saving objects with pickle. At this moment I will try to manually save checkpoint without using the callbacks.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@mmiakashs did that end up working?<|||||>> @mmiakashs did that end up working?
currently, I am using ddp_spwan mode and it is working fine. <|||||>@sshleifer can confirm A) the Lightning examples don't work at all with `dp` B) does run, but needs significant editing with `ddp`
For examples I've looked at it's not as simple as turning `ddp` on and all great. It seems whomever wrote the Lightning examples never tried multi-GPU. Happy to elaborate or share (though mine are not in great shape at the moment).
And `ddp_spawn` definitely does not work for me. Gives several spawn-based errors -- says my model is not compliant. <|||||>A) don't know but that sounds very likely. @williamFalcon told me "Dont use dp".
B) `examples/seq2seq/finetune.py` works in multigpu with two caveats:
(a) versions need to be transformers=master, pl=0.8.1.
(b) you cannot pass `--do_predict`. (`pl.Trainer.test` is broken for multi-gpu)
For the other two pl examples: ner, and glue, I haven't tested multi-gpu, but they should be at least close to working because they inherit from the same `BaseTransformer`. Which one of those were you trying to run/ are you interesting in running?
<|||||>Thanks @sshleifer. We're fine using `ddp` for everything -- only need one version to work, not multiple ways to do the same thing. Also according to the docs, `ddp` is the only one that works with FP16 anyway (have not tested yet, will do soon).
https://pytorch-lightning.readthedocs.io/en/latest/multi_gpu.html
I'm working off of `transformers` from GitHub... so should be a recent version. If that's not what you are saying couple you please be more specific?
We also don't necessarily "need" Lightning... but would be great if it worked (in single set of settings) for multi-GPU. As it is great having reasonable out of the box options for LR schedule, model synchronization, gradient accumulation, and all those other things I've grown tired of implementing for every project. <|||||>@moscow25 dp is NOT recommended by PyTorch
https://pytorch.org/docs/master/generated/torch.nn.DataParallel.html

2. The current base transformers has a few issues which I've submitted a PR for.
3. Please let me know what example you are using / what code i can look at to reproduce the issues.<|||||>> @sshleifer can confirm A) the Lightning examples don't work at all with `dp` B) does run, but needs significant editing with `ddp`
>
> For examples I've looked at it's not as simple as turning `ddp` on and all great. It seems whomever wrote the Lightning examples never tried multi-GPU. Happy to elaborate or share (though mine are not in great shape at the moment).
>
> And `ddp_spawn` definitely does not work for me. Gives several spawn-based errors -- says my model is not compliant.
ddp doesn't work for me and ddp_spawn gives a lot of errors. On using ddp, no error is shown but it doesn't start anything on the GPU - just the notebook cell being busy indefinitely. I am using the DistilBertTokenizer and DistilBertModel - has anyone been able to run pytorch lightning on multipe gpus with Distilbert?<|||||>I suspect that your issue is ddp+jupyter rather than distillbert. Try running your command from the terminal.<|||||>> I suspect that your issue is ddp+jupyter rather than distillbert. Try running your command from the terminal.
Why does running the code in Jupyter notebook create a problem? I was able to run the BertModels like SequenceClassification in the Jupyter notebook on multiple gpus without any problem - but running into this multiple gpu problem using pytorch lightning. It is nice to be able to use Pytorch lightning given all the built in options. It makes it easier to build the models interactively on the Jupyter notebook<|||||>> > I suspect that your issue is ddp+jupyter rather than distillbert. Try running your command from the terminal.
>
> Why does running the code in Jupyter notebook create a problem? I was able to run the BertModels like SequenceClassification in the Jupyter notebook on multiple gpus without any problem - but running into this multiple gpu problem using pytorch lightning. It is nice to be able to use Pytorch lightning given all the built in options. It makes it easier to build the models interactively on the Jupyter notebook
Looks like ddp doesn't work in Jupyter notebooks, and transformers don't work with the dp backend of PyTorch Lightning in Jupyter notebooks either. So it looks like the only option for using PyTorch Lightning, multiple GPUs and transformers together is to run the code as a Python script.
https://pytorch-lightning.readthedocs.io/en/latest/multi_gpu.html
Jupyter Notebooks
Unfortunately any ddp_ is not supported in jupyter notebooks. Please use dp for multiple GPUs. This is a known Jupyter issue. If you feel like taking a stab at adding this support, feel free to submit a PR!<|||||>i believe @nateraw is almost done updating the examples with the latest version of PL.
can you share the model that does work with multiple gpus in a jupyter notebook?<|||||>I read somewhere on the pytorch lightning documents about being careful to checkpoint models when running on DDP mode - can't find that documentation now but is there something I need to be careful about checkpointing while running DDP on a single machine with 8 GPUs? It was something about the model getting split among multiple machines - not sure if that is valid if DDP used on a single machine. <|||||>nothing you have to worry about... we save the checkpoint correctly automatically <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,886 | closed | How to find a correct place of original word from the list of predicted words from GPT-2 model? | Hi,
I would like to calculate at which place the correct word lies among the top 5 words predicted by the GPT-2 model.
For this purpose, I am using following code snippet:
```
subseq = "The car moves very" #sample sequence
orignal_word="fast"
sequence = tokenizer.encode(subseq, return_tensors="pt")
next_word_id = tokenizer.encode(orignal_word, return_tensors="pt")
next_word = tokenizer.decode(next_word_id[0])
next_word_logits = model(sequence)[0][0, -1].detach()
probabilities, word_ids = next_word_logits.topk(5) #Getting top 5 next word options
rank=1.0
for word_id in word_ids:
    word = tokenizer.decode([word_id])
    if word == next_word:
        break
    rank = rank + 1.0
print("Rank of Correct option is "+ str(rank))
```
I am not sure whether this is done correctly, as the GPT-2 model uses a BPE tokenizer. Am I doing it the right way? Kindly share your thoughts, and correct me if I am doing something wrong.
| 04-21-2020 18:03:18 | 04-21-2020 18:03:18 | It won't be that easy since some words will be split into multiple tokens so you have to make two forward passes.
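A rough sketch of what those extra forward passes could look like, reusing `subseq`, `orignal_word`, `model` and `tokenizer` from the snippet above (illustrative only): accumulate the word's log-probability one sub-token at a time, extending the context before each forward pass.
```python
import torch

word_ids = tokenizer.encode(" " + orignal_word)        # may map to several BPE tokens
context = tokenizer.encode(subseq, return_tensors="pt")

log_prob = 0.0
for word_id in word_ids:
    with torch.no_grad():
        next_word_logits = model(context)[0][0, -1]
    log_prob += next_word_logits.log_softmax(dim=-1)[word_id].item()
    context = torch.cat([context, torch.tensor([[word_id]])], dim=-1)

# log_prob is the model's total score for the full (multi-token) word after `subseq`
```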
If you limit your `original_word` to just one token words (you can check that simply with `len(tokenizer.encode(original_word))==1`. Then your idea here should work.
If not it's gonna be trickier. Also this issue might be helpful:
https://github.com/huggingface/transformers/issues/2311<|||||>Thanks @patrickvonplaten for your response.
Yes, the code works for `len(tokenizer.encode(original_word))==1`, but not for an `original_word` that consists of more than one token.
I look at the shared issue, but I am confused, which selected word id, should I pass to the model again, as `next_word_logits.topk(5)` gives me 5 token ids?
Can you please share any code snippet, which will work for the second part?<|||||>Hi @patrickvonplaten,
can u plz let me know about any update?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,885 | closed | Pretrain From Scratch using Google TPU | @julien-c @patrickvonplaten
I want to pretrain a model from scratch using the Google Cloud TPU offered on Kaggle. I can train the model without a TPU, but I want to train it on a TPU. Any help will be much appreciated.
Also, what options do I have if there is no straightforward approach? | 04-21-2020 17:55:23 | 04-21-2020 17:55:23 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,884 | closed | Problem trying to run AlbertForMaskedLM on Colab TPU: TypeError: can't pickle torch._C.ScriptFunction objects when calling xm.send_cpu_data_to_device(model, dev) | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): AlbertForMaskedLM
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [X ] the official example scripts: (give details below)
* [X ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Related issues: https://github.com/pytorch/xla/issues/1909
The following has already been talked about here (https://github.com/huggingface/transformers/pull/3743) but I couldn't find a solution? Apologies if I'm posting about something that's already been dealt with, pretty new to all of this.
I am running the following code on colab on a TPU session taken from the example here: https://huggingface.co/transformers/model_doc/albert.html#albertformaskedlm
```
import os
import torch
import torch_xla
import torch_xla.core.xla_model as xm
assert os.environ['COLAB_TPU_ADDR']
dev = xm.xla_device()
from transformers import AlbertTokenizer, AlbertForMaskedLM
import torch
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForMaskedLM.from_pretrained('albert-base-v2')
model = xm.send_cpu_data_to_device(model, dev)
model = model.to(dev)
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
data = input_ids.to(dev)
outputs = model(data, masked_lm_labels=data)
loss, prediction_scores = outputs[:2]
```
I haven't done anything to the example code except move ```input_ids``` and ```model``` onto the TPU device using ```.to(dev)``` and ```xm.send_cpu_data_to_device(model, dev)```. It seems everything is moved to the TPU no problem as when I input ```data``` I get the following output: ```tensor([[ 2, 10975, 15, 51, 1952, 25, 10901, 3]], device='xla:1')```
However when I run this code I get the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-b7b68efc9620> in <module>()
11 tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
12 model = AlbertForMaskedLM.from_pretrained('albert-base-v2')
---> 13 model = xm.send_cpu_data_to_device(model, dev)
14 model = model.to(dev)
15 input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
18 frames
/usr/lib/python3.6/copy.py in copy(x)
94 reductor = getattr(x, "__reduce_ex__", None)
95 if reductor:
---> 96 rv = reductor(4)
97 else:
98 reductor = getattr(x, "__reduce__", None)
TypeError: can't pickle torch._C.ScriptFunction objects
```
Anyone know what's going on?
## Expected behavior
I expected the AlbertForMaskedLM model to work on colab TPU without any errors.
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0a0+ab660ae (False)
- Tensorflow version (GPU?): 2.2.0-rc3 (False)
- Using GPU in script?: no, attempting to use TPU
- Using distributed or parallel set-up in script?: no
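For reference, a minimal sketch that avoids the pickling step entirely: `xm.send_cpu_data_to_device` is meant for input tensors rather than `nn.Module` objects, so moving the model with `.to(dev)` alone should be enough (standard torch_xla usage, not code from this thread):
```python
import torch
import torch_xla.core.xla_model as xm
from transformers import AlbertForMaskedLM, AlbertTokenizer

dev = xm.xla_device()
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForMaskedLM.from_pretrained("albert-base-v2").to(dev)  # no send_cpu_data_to_device

input_ids = torch.tensor(
    tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)
).unsqueeze(0).to(dev)

outputs = model(input_ids, masked_lm_labels=input_ids)
loss, prediction_scores = outputs[:2]
```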
| 04-21-2020 17:11:13 | 04-21-2020 17:11:13 | Seems fixed now--delete transformers installed via pip and install by cloning this repo. |
transformers | 3,883 | closed | No longer able to fine-tune GPT2 using provided examples | A few months ago, I was able to run GPT2 on a Google Colab notebook. This was using the following script, which is based on the provided docs:
```
!git clone https://github.com/huggingface/transformers
%cd transformers
!pip install .
!pip install -r ./examples/requirements.txt
!python /content/transformers/examples/run_lm_finetuning.py \
--output_dir=output \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--do_train \
--train_data_file=/content/train.txt \
--do_eval \
--eval_data_file=/content/test.txt \
--per_gpu_train_batch_size=2
!python /content/transformers/examples/run_generation.py \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--length 500
```
Coming back to it after a little while, it no longer works. I realise that `run_lm_finetuning.py` has been replaced by `run_language_modeling.py`. However, running this file instead either produces a `command not found error`, or it asks me to provide details that I've already provided: `the following arguments are required: --train_data_file, --output_dir, --model_type`.
I appreciate that you guys perform a great service to the community by making these models available, and I thank you for doing so. I also understand that it's my responsibility to keep up with changes. All the same, any help in getting this functionality back on track would be appreciated!
| 04-21-2020 16:28:19 | 04-21-2020 16:28:19 | Never mind; this was an issue with the colab. It's sorted now. |
transformers | 3,882 | closed | Create model card for RoBERTa large fine-tuned on wsc | 04-21-2020 16:06:56 | 04-21-2020 16:06:56 | Is there any problem with this card?
|
transformers | 3,881 | closed | Fix Torch.hub + Integration test | - Torch.hub doesn't use pip-installed versions of modules, but uses a [custom importer instead](https://github.com/pytorch/pytorch/blob/master/torch/hub.py#L70-L83) (it imports `hubconf.py`) which means that:
- all imports from hubconf.py refer to src.transformers instead of transformers
- all imports inside the lib's code **must** be relative, i.e. shouldn't assume that the transformers module is installed (it's not)
- Added a GitHub action workflow to ensure that `hub.list` and `hub.help` always work. | 04-21-2020 14:57:26 | 04-21-2020 14:57:26 | (Hmm, GitHub was failing earlier today, and now it seems to have posted my comment multiple times. Sorry about that.)<|||||>See cbbb3c43c55d2d93a156fc80bd12f31ecbac8520 |
transformers | 3,880 | closed | Replace `config.output_attentions` parameter with function argument `output_attentions` | # 🚀 Feature request
Currently the user has to decide whether the model should output the attentions when she/he creates the config of a model: config.output_attentions = True/False. It would be nice if the user can decide this when calling the models `forward()` / `call()` with a flag `output_attentions`. This should be done for all TF and PT models that can output attentions.
A very similar recent change was done for the variable `config.output_past` -> see PR:#3734
## Motivation
The user has more flexibility when the hidden states should be output or not.
## Your contribution
If someone feels like contributing to the library, this would be a great first PR. I'm very happy to guide the contributor through the PR!
| 04-21-2020 14:00:23 | 04-21-2020 14:00:23 | Hi, I would like to work on this issue.<|||||>That's great :-) Do you want to open a PR and do the changes analogous to PR: #3734 ? <|||||>Is this still be worked on? If not, I'd be happy to make a first contribution here<|||||>First PR first serve ;-) Still an open issue<|||||>Any tips on how I should proceed? I was thinking of following the changes made for `config.output_past` (01c37dc), but for `config.output_attentions` instead.<|||||>Oh sorry @drjosephliu didn't notice the comment as I was working on it earlier today, my apologies 😞 <|||||>Hey @patrickvonplaten, i noticed this issue has been closed. Any updates on what changes were made and any updates to the PR i still need to make?<|||||>Hey @drjosephliu, I will take a closer look at your PR tomorrow :-) |
transformers | 3,879 | closed | Replace `config.output_hidden_states` parameter with function argument `output_hidden_states` | # 🚀 Feature request
Currently the user has to decide whether the model should output the hidden states when she/he creates the config of a model: `config.output_hidden_states = True/False`. It would be nice if the user can decide this when calling the models `forward()` / `call()` with a flag `output_hidden_states`. This should be done for all TF and PT models that can output hidden states.
A very similar recent change was done for the variable `config.output_past` -> see PR:https://github.com/huggingface/transformers/pull/3734
## Motivation
The user has more flexibility when the hidden states should be output or not.
## Your contribution
If someone feels like contributing to the library, this would be a great first PR. I'm very happy to guide the contributor through the PR! | 04-21-2020 13:57:38 | 04-21-2020 13:57:38 | Hi, @patrickvonplaten I want to take up this issue. Can I move forward with it? <|||||>I think this could have side effects for libraries that use `config.output_hidden_states`, so I'm cc'ing @Timoeller and @brandenchan, because this parameter is used in [FARM](https://github.com/deepset-ai/FARM).<|||||>> Hi, @patrickvonplaten I want to take up this issue. Can I move forward with it?
That would be great, feel free to open a PR and do a first model. The PR should be very similar to what was done in PR #3734<|||||>Hi, @patrickvonplaten as I am new here I might take some time to get acquainted with the codebase and come up with a PR. Is it okay?<|||||>Sorry @gaurav-singh1998, I saw this issue was still open so I made a PR for it.<|||||>Okay, no issues @drjosephliu I'll find some other good first issues to solve. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,878 | closed | When will ELECTRA pretraining from scratch be available? | # ❓ Questions & Help
## Details
Is pretraining from scratch for ELECTRA available? I couldn't find it.
Thanks!
| 04-21-2020 12:57:23 | 04-21-2020 12:57:23 | Working on it as we speak :).
I'd say it will be out in a few weeks at most.<|||||>Is Albert pretraining from scratch available? @LysandreJik
<|||||>@LysandreJik do you think the update will be available by the end of the month ? Maybe it has been postponed due to the recent addition of the Trainer and the refactor of the language_modeling script ?<|||||>It has been postponed a bit due to the recent addition of the Trainer and the TPU work on it, but I definitely aim to have it out earlier than by the end of the month :)<|||||>Was looking for Albert pre-training from scratch but I think there is support for Bert, Roberta and distillbert only as of now.
@LysandreJik can you guide how can I do Albert pretraining from scratch?<|||||>@LysandreJik Is the code for pretraining Electra from scratch available now?
<|||||>> @LysandreJik Is the code for pretraining Electra from scratch available now?
Not yet. There's a PR about it;
https://github.com/huggingface/transformers/pull/4656<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Are there any updates to this, or plans to release the ELECTRA pre-training from scratch feature soon?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hello.
Are there any updates?<|||||>Curious as well.<|||||>The development of the ELECTRA pretraining from scratch is in a stale state with no plans to work on it further, see https://github.com/huggingface/transformers/pull/4656#issuecomment-711082850
See https://discuss.huggingface.co/t/electra-training-reimplementation-and-discussion/1004 by @richarddwang for a PyTorch implementation of the ELECTRA pretraining. |
transformers | 3,877 | closed | ImportError: cannot import name 'MODEL_CLASSES' from 'run_glue' | # 🐛 Bug
## Information
I tried to run the latest versions of the examples and got the error message below (I installed from source following the procedure in the main README):
Traceback (most recent call last):
File "run_bertology.py", line 33, in <module>
from run_glue import ALL_MODELS, MODEL_CLASSES, load_and_cache_examples, set_seed
ImportError: cannot import name 'MODEL_CLASSES' from 'run_glue' (/home/stud-yantao/Transformer/transformers/examples/run_glue.py)
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
run_bertology.py
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
MNLI
## To reproduce
Steps to reproduce the behavior:
export TASK_NAME=mnli
python ./run_bertology.py --data_dir $GLUE_DIR/$TASK_NAME
--model_name bert-base-uncased
--task_name $TASK_NAME
--output_dir ./tmp/$TASK_NAME/
--try_masking
Traceback (most recent call last):
File "run_bertology.py", line 33, in <module>
from run_glue import ALL_MODELS, MODEL_CLASSES, load_and_cache_examples, set_seed
ImportError: cannot import name 'MODEL_CLASSES' from 'run_glue' (/home/stud-yantao/Transformer/transformers/examples/run_glue.py)
## Expected behavior
## Environment info
- `transformers` version: 2.8.0
- Platform:
- Python version:3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:yes
- Using distributed or parallel set-up in script?:
| 04-21-2020 12:16:28 | 04-21-2020 12:16:28 | Should be fixed on master, please share if it does/your results |
transformers | 3,876 | closed | How to reduce random summary generation of BART Summarization models? | Currently BART model trained on CNN dataset is generating summaries which consist of new nouns which are not present in the input text.
How can I control the randomness of these summaries? Is there any parameter, like temperature in the GPT-2 model, that can control the degree to which the model goes off-topic?
| 04-21-2020 09:57:49 | 04-21-2020 09:57:49 | BART is used as a model for abstractive summarization so it can use different words than those used in the original text. But it should not go *off-topic*. You could use an extractive summarization model instead which does not generate new nouns. Also you might be interested in the methods from [*Controlling the Amount of Verbatim Copying in Abstractive Summarization*](https://arxiv.org/pdf/1911.10390.pdf) to control the degree of change in the summaries.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
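For reference, a sketch of the `generate()` arguments that control how conservative the decoding is (the checkpoint name and values below are illustrative, not tuned recommendations):
```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
input_ids = tokenizer.encode("Some long article text ...", return_tensors="pt")

# Beam search without sampling is more conservative than sampled decoding;
# temperature / top_k / top_p only take effect when do_sample=True.
summary_ids = model.generate(
    input_ids,
    num_beams=4,
    do_sample=False,
    no_repeat_ngram_size=3,
    min_length=56,
    max_length=142,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```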
|
transformers | 3,875 | closed | T5 Translation Error | # 🐛 Bug
## Information
Model I am using (T5-base):
Language I am using the model on (T5-base):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
```
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('t5-base')
model = T5ForConditionalGeneration.from_pretrained('t5-base')
data="translate English to German: Hello, my dog is cute"
input_ids = tokenizer.encode(data, return_tensors="pt") # Batch size 1
outputs = model.generate(input_ids, decoder_start_token_id = tokenizer.eos_token_id)
print(outputs)
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. pip install transformers
2. run the code mentioned above, which always produces tensor([[1, 1]])
## Expected behavior
## Environment info
- `transformers` version: 2.8.0
- Platform: linux
- Python version: python3.6
- PyTorch version (GPU?): torch 1.2.0 , with GPU
- Tensorflow version (GPU?): p40
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 04-21-2020 09:48:37 | 04-21-2020 09:48:37 | Why do you use `decoder_start_token_id = tokenizer.eos_token_id` ? Is that stated in the examples somewhere?
If you do:
```
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('t5-base')
model = T5ForConditionalGeneration.from_pretrained('t5-base')
data="translate English to German: Hello, my dog is cute"
input_ids = tokenizer.encode(data, return_tensors="pt") # Batch size 1
outputs = model.generate(input_ids, decoder_start_token_id = tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0]))
```
The translation is correct. Also I recommend using the translation pipeline. This way T5 uses better generation paramaters.<|||||>@patrickvonplaten Thanks a lot ! I used previous configuration file from https://s3.amazonaws.com/models.huggingface.co/bert/t5-base-config.json ; After using the new configuration file, translation error is gone !
My code is :
```
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('t5-base')
model = T5ForConditionalGeneration.from_pretrained('t5-base')
data = "translate English to German: Hello, my dog is cute"
input_ids = tokenizer.encode(data, return_tensors="pt")  # Batch size 1
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
transformers | 3,874 | closed | create readme for spentaur/yelp model | sorry, not sure if this is the right way to do this | 04-21-2020 01:47:53 | 04-21-2020 01:47:53 | It is! (though ideally you would add an example of use + details about training)
Will merge this unless you add more in the next 24 hours.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3874?src=pr&el=h1) Report
> Merging [#3874](https://codecov.io/gh/huggingface/transformers/pull/3874?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b1ff0b2ae7d368b7db3a8a8472a29cc195d278d8&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3874?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3874 +/- ##
=======================================
Coverage 78.57% 78.57%
=======================================
Files 106 106
Lines 17962 17962
=======================================
Hits 14114 14114
Misses 3848 3848
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3874?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3874?src=pr&el=footer). Last update [b1ff0b2...8a40fb1](https://codecov.io/gh/huggingface/transformers/pull/3874?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>that makes a lot of sense. i'll update that. thank you<|||||>[model page](https://huggingface.co/spentaur/yelp) |
transformers | 3,873 | closed | Call to torch.pow() passing integer as exponent isn't per PyTorch docs | # 🐛 Bug
## Information
I am running the GPT2 pytest and it fails to load in [PyTorch Glow](https://github.com/pytorch/glow) because the model calls _torch.pow()_ with an integer for the _exponent_ parameter.
Per the PyTorch documentation (https://pytorch.org/docs/master/torch.html?highlight=torch%20pow#torch.pow):
> exponent can be either a single float number or a Tensor with the same number of elements as input.
and
>exponent (float or tensor) – the exponent value
The test was run with the following modifications to enable Glow:
```
diff --git a/src/transformers/modeling_gpt2.py b/src/transformers/modeling_gpt2.py
index 12013996..d6f39007 100644
--- a/src/transformers/modeling_gpt2.py
+++ b/src/transformers/modeling_gpt2.py
@@ -265,7 +265,7 @@ class GPT2PreTrainedModel(PreTrainedModel):
if isinstance(module, (nn.Linear, nn.Embedding, Conv1D)):
# Slightly different from the TF version which uses truncated_normal for initialization
# cf https://github.com/pytorch/pytorch/pull/5617
- module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
+ module.weight.data.normal_(mean=0.0, std=0.02) #self.config.initializer_range)
if isinstance(module, (nn.Linear, Conv1D)) and module.bias is not None:
module.bias.data.zero_()
elif isinstance(module, nn.LayerNorm):
diff --git a/tests/test_modeling_common.py b/tests/test_modeling_common.py
index 1d11ef8c..9df209f7 100644
--- a/tests/test_modeling_common.py
+++ b/tests/test_modeling_common.py
@@ -27,6 +27,7 @@ from .utils import require_torch, slow, torch_device
if is_torch_available():
import torch
+ import torch_glow
import numpy as np
from transformers import (
@@ -209,6 +210,8 @@ class ModelTesterMixin:
inputs = inputs_dict["input_ids"] # Let's keep only input_ids
try:
+ torch_glow.enableFusionPass()
+ torch_glow.setGlowBackend('Interpreter')
traced_gpt2 = torch.jit.trace(model, inputs)
except RuntimeError:
self.fail("Couldn't trace module.")
```
## To reproduce
Steps to reproduce the behavior:
1. python -m pytest -v -k 'test_torchscript and not test_torchscript_' ./tests/test_modeling_gpt2.py
## Expected behavior
Expect exponent to be passed as a float per the documentation so that model loaders adhering to the docs will be able to load the model.
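For illustration, the kind of one-character change the report implies, assuming the offending call is the cubic term in the gelu activation (a sketch, not a patch from the repository):
```python
import torch

x = torch.randn(2, 3)
y_int = 0.044715 * torch.pow(x, 3)      # integer exponent: works eagerly, but off-spec for strict loaders
y_float = 0.044715 * torch.pow(x, 3.0)  # float exponent matches the documented signature
assert torch.allclose(y_int, y_float)
```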
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-centos-7.5.1804-Core
- Python version: 3.6.8
- PyTorch version (GPU?): 1.5.0a0+8eaafbd (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| 04-20-2020 23:35:28 | 04-20-2020 23:35:28 | I think you are correct. Any way you'd open a PR? Otherwise we'll get on it in the next few days/weeks |
transformers | 3,872 | closed | torchscript tests fail with RuntimeError: normal_ expects std > 0.0, but found std=0 | # 🐛 Bug
## Information
I am running the gpt2 torchscript test from master and a call to _normal_()_ fails because the _std_ parameter is zero. The error is not limited to the GPT2 model.
## To reproduce
Steps to reproduce the behavior:
1. python -m pytest -v -k 'test_torchscript and not test_torchscript_' ./tests/test_modeling_gpt2.py
The test fails with the following errors:
```
$ python -m pytest -v -k 'test_torchscript and not test_torchscript_' ./tests/test_m
odeling_gpt2.py
========================================================================================================= test session starts ==========================================================================================================
platform linux -- Python 3.6.8, pytest-5.2.0, py-1.8.1, pluggy-0.13.1 -- /local/mneilly/sw-platform-cawg/build/install-staging/sw-platform-sysroot/bin/python
cachedir: .pytest_cache
metadata: {'Python': '3.6.8', 'Platform': 'Linux-3.10.0-957.el7.x86_64-x86_64-with-centos-7.5.1804-Core', 'Packages': {'pytest': '5.2.0', 'py': '1.8.1', 'pluggy': '0.13.1'}, 'Plugins': {'forked': '1.1.3', 'html': '2.0.0', 'metadata'
: '1.8.0', 'xdist': '1.30.0'}}
rootdir: /local/mneilly/sw-platform-cawg/build/cawg-regression/pytorch-models/transformers/transformers/src/transformers
plugins: forked-1.1.3, html-2.0.0, metadata-1.8.0, xdist-1.30.0
collected 29 items / 28 deselected / 1 selected
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript FAILED [100%]
=============================================================================================================== FAILURES ===============================================================================================================
____________________________________________________________________________________________________ GPT2ModelTest.test_torchscript ____________________________________________________________________________________________________
self = <tests.test_modeling_gpt2.GPT2ModelTest testMethod=test_torchscript>
def test_torchscript(self):
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
> self._create_and_check_torchscript(config, inputs_dict)
tests/test_modeling_common.py:186:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_modeling_common.py:207: in _create_and_check_torchscript
model = model_class(config=configs_no_init)
src/transformers/modeling_gpt2.py:353: in __init__
self.init_weights()
src/transformers/modeling_utils.py:392: in init_weights
self.apply(self._init_weights)
../../../../../../install-staging/sw-platform-sysroot/lib/python3.6/site-packages/torch/nn/modules/module.py:289: in apply
module.apply(fn)
../../../../../../install-staging/sw-platform-sysroot/lib/python3.6/site-packages/torch/nn/modules/module.py:290: in apply
fn(self)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GPT2Model(
(wte): Embedding(99, 32)
(wpe): Embedding(512, 32)
(drop): Dropout(p=0.1, inplace=False)
(h): Modul...pout): Dropout(p=0.1, inplace=False)
)
)
)
(ln_f): LayerNorm((32,), eps=1e-05, elementwise_affine=True)
)
module = Embedding(99, 32)
def _init_weights(self, module):
""" Initialize the weights.
"""
if isinstance(module, (nn.Linear, nn.Embedding, Conv1D)):
# Slightly different from the TF version which uses truncated_normal for initialization
# cf https://github.com/pytorch/pytorch/pull/5617
> module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
E RuntimeError: normal_ expects std > 0.0, but found std=0
src/transformers/modeling_gpt2.py:268: RuntimeError
=================================================================================================== 1 failed, 28 deselected in 1.97s ===================================================================================================
```
The test passes with the following modification:
```
diff --git a/src/transformers/modeling_gpt2.py b/src/transformers/modeling_gpt2.py
index 12013996..d6f39007 100644
--- a/src/transformers/modeling_gpt2.py
+++ b/src/transformers/modeling_gpt2.py
@@ -265,7 +265,7 @@ class GPT2PreTrainedModel(PreTrainedModel):
if isinstance(module, (nn.Linear, nn.Embedding, Conv1D)):
# Slightly different from the TF version which uses truncated_normal for initialization
# cf https://github.com/pytorch/pytorch/pull/5617
- module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
+ module.weight.data.normal_(mean=0.0, std=0.02) #self.config.initializer_range)
if isinstance(module, (nn.Linear, Conv1D)) and module.bias is not None:
module.bias.data.zero_()
elif isinstance(module, nn.LayerNorm):
```
Producing the following output:
```
$ python -m pytest -v -k 'test_torchscript and not test_torchscript_' ./tests/test_modeling_gpt2.py
========================================================================================================= test session starts ==========================================================================================================
platform linux -- Python 3.6.8, pytest-5.2.0, py-1.8.1, pluggy-0.13.1 -- /local/mneilly/sw-platform-cawg/build/install-staging/sw-platform-sysroot/bin/python
cachedir: .pytest_cache
metadata: {'Python': '3.6.8', 'Platform': 'Linux-3.10.0-957.el7.x86_64-x86_64-with-centos-7.5.1804-Core', 'Packages': {'pytest': '5.2.0', 'py': '1.8.1', 'pluggy': '0.13.1'}, 'Plugins': {'forked': '1.1.3', 'html': '2.0.0', 'metadata': '1.8.0', 'xdist': '1.30.0'}}
rootdir: /local/mneilly/sw-platform-cawg/build/cawg-regression/pytorch-models/transformers/transformers/src/transformers
plugins: forked-1.1.3, html-2.0.0, metadata-1.8.0, xdist-1.30.0
collected 29 items / 28 deselected / 1 selected
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript PASSED [100%]
=========================================================================================================== warnings summary ===========================================================================================================
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript
/local/mneilly/sw-platform-cawg/build/cawg-regression/pytorch-models/transformers/transformers/src/transformers/src/transformers/modeling_gpt2.py:146: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
w = w / math.sqrt(v.size(-1))
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript
/local/mneilly/sw-platform-cawg/build/cawg-regression/pytorch-models/transformers/transformers/src/transformers/src/transformers/modeling_gpt2.py:148: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
mask = self.bias[:, :, ns - nd : ns, :ns]
-- Docs: https://docs.pytest.org/en/latest/warnings.html
```
## Expected behavior
Expected test to pass
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-centos-7.5.1804-Core
- Python version: 3.6.8
- PyTorch version (GPU?): 1.5.0a0+8eaafbd (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| 04-20-2020 23:17:29 | 04-20-2020 23:17:29 | |
transformers | 3,871 | closed | Tokenizer could accept a string tensor | Currently, the [`batch_encode_plus`](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.batch_encode_plus) method from tokenizer, can return a Tensorflow tensor.
It can be done assigning `tf` to the optional parameter `return_tensors`.
It would be great if this method also accepted a Tensorflow string tensor in parameter ` batch_text_or_text_pairs`.
For instance, if someone has the following `sample_string_tensor`:
```python
import tensorflow as tf
batch_size = 4
sample_string_tensor = tf.convert_to_tensor(
["sãmple utf-8 stríng - " + str(i) for i in range(n_strings)]
)
# <tf.Tensor: shape=(4,), dtype=string, numpy=
# array([b's\xc3\xa3mple utf-8 str\xc3\xadng - 0',
# b's\xc3\xa3mple utf-8 str\xc3\xadng - 1',
# b's\xc3\xa3mple utf-8 str\xc3\xadng - 2',
# b's\xc3\xa3mple utf-8 str\xc3\xadng - 3'], dtype=object)>
```
the tokenization would be as simple as:
```python
tokenized_sample = tokenizer.batch_encode_plus(
sample_string_tensor,
max_length=max_length,
pad_to_max_length=True,
return_tensors="tf"
)
``` | 04-20-2020 22:42:07 | 04-20-2020 22:42:07 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
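Until such support exists, a small sketch of the usual workaround is to pull the strings out of the tensor first (assumes eager TensorFlow so `.numpy()` is available; the tokenizer choice is arbitrary):
```python
import tensorflow as tf
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
sample_string_tensor = tf.convert_to_tensor(
    ["sãmple utf-8 stríng - " + str(i) for i in range(4)]
)

texts = [t.decode("utf-8") for t in sample_string_tensor.numpy()]
tokenized_sample = tokenizer.batch_encode_plus(
    texts, max_length=24, pad_to_max_length=True, return_tensors="tf"
)
```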
|
transformers | 3,870 | closed | bert summarizer module import error | # 🐛
Running bert summarizer ([run_summarization.py](https://github.com/huggingface/transformers/blob/master/examples/summarization/bertabs/run_summarization.py)) gives the following error
```
Traceback (most recent call last):
File "run_summarization.py", line 15, in <module>
from .utils_summarization import (
ModuleNotFoundError: No module named '__main__.utils_summarization'; '__main__' is not a package
```
## Information
I've managed to fix this issue personally by changing "from .utils_summarization import" to "from utils_summarization import", though I don't know if this is due to a convention change in python module imports.
The problem arises when using:
* [x] the official example scripts: (give details below)
Running the following command yields the error
python3 run_summarization.py --documents_dir ".../bertabs/dataset/input" --summaries_output_dir ".../bertabs/dataset/output" --no_cuda false --batch_size 4 --min_length 50 --max_length 200 --beam_size 5 --alpha 0.95 --block_trigram true
## To reproduce
Steps to reproduce the behavior:
1. Followed the steps here https://github.com/huggingface/transformers/blob/5b396457e5035a8b16ddee14b205c098598fe6bb/examples/summarization/bertabs/README.md
Though I skipped the 'Reproduce the authors' ROUGE score' section, that should not have any effect on usage for different inputs.
2. Created custom data paths for testing a single input article.
3. Run the command given above.
```
Traceback (most recent call last):
File "run_summarization.py", line 15, in <module>
from .utils_summarization import (
ModuleNotFoundError: No module named '__main__.utils_summarization'; '__main__' is not a package
```
## Expected behavior
It should import the module utils_summarization
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform: Linux-4.15.0-74-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 04-20-2020 22:37:27 | 04-20-2020 22:37:27 | HELLO eeegnu, I'm also facing the same issue.

**Environment Info :**
Platform: Windows 10 64bit
Python version: 3.6.10
PyTorch version (GPU?): 1.4.0
Tensorflow version (GPU?): not installed (NA)
<|||||>Hi!
Change: `from .utils_summarization import (
CNNDMDataset,
build_mask,
compute_token_type_ids,
encode_for_summarization,
truncate_or_pad,
)`
To: `from utils_summarization import (
CNNDMDataset,
build_mask,
compute_token_type_ids,
encode_for_summarization,
truncate_or_pad,
)` <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,869 | closed | ImportError: cannot import name 'HfArgumentParser' from 'transformers' | Hi,
I installed tokenizers-0.5.2 transformers-2.8.0.
When I try to run run_bertology.py in the example dir calling it with
```bash
export TASK_NAME=mnli

python ./run_bertology.py --data_dir $GLUE_DIR/$TASK_NAME \
    --model_name bert-base-uncased \
    --task_name $TASK_NAME \
    --output_dir ./tmp/$TASK_NAME/ \
    --try_masking
```
But it fails with
```
Traceback (most recent call last):
  File "run_bertology.py", line 33, in <module>
    from run_glue import ALL_MODELS, MODEL_CLASSES, load_and_cache_examples, set_seed
  File "/Users/thomas/PycharmProjects/transformers/examples/run_glue.py", line 34, in <module>
    from transformers import (
ImportError: cannot import name 'HfArgumentParser' from 'transformers' (/Users/thomas/opt/anaconda3/lib/python3.7/site-packages/transformers/__init__.py)
```
Please help, thanks. | 04-20-2020 19:30:22 | 04-20-2020 19:30:22 | Hi,
As mentioned [here](https://github.com/huggingface/transformers/tree/master/examples#examples) and in the main README you need to install from source in order to run the latest versions of the examples |
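Concretely, installing from source for the examples amounts to roughly the following steps (per the examples README):
```bash
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
pip install -r ./examples/requirements.txt
```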
transformers | 3,868 | closed | unable to load model 'bert', tensor 'input_ids': the model expects 1 dimensions but the model configuration specified 2 dimensions | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
[https://github.com/NVIDIA/triton-inference-server/issues/1338](url)
How can I support batch size for BERT ONNX in the Triton Inference Server?
I use BERT from https://github.com/huggingface/transformers and followed https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/bert#export-a-bert-model-from-pytorch to get `model.onnx` with the batch size as the first dimension (checked in Netron).
I set `dynamic_batching { preferred_batch_size: [ 4, 8, 32 ] max_queue_delay_microseconds: 100 }` in `config.pbtxt`.
- Wrong: `max_batch_size : 1024 input [ { name: "input_ids" data_type: TYPE_INT64 dims: [-1, 128] } ]`
- Wrong: `max_batch_size : 1024 input [ { name: "input_ids" data_type: TYPE_INT64 dims: [1, 128] reshape: { shape: [-1,128 ] } } ]`
- OK: `max_batch_size : 1024 input [ { name: "input_ids" data_type: TYPE_INT64 dims: [128] } ]`
Everything is on the latest versions.
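For reference, a minimal sketch of the export step described above, marking the batch dimension as dynamic (the output names, the sequence length of 128, and the opset version are illustrative assumptions, not values taken from this thread):
```python
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

# Dummy input of shape (batch, sequence); sequence length 128 matches the configs above.
dummy_input_ids = torch.ones(1, 128, dtype=torch.long)

torch.onnx.export(
    model,
    (dummy_input_ids,),
    "model.onnx",
    input_names=["input_ids"],
    output_names=["last_hidden_state", "pooler_output"],
    # Mark dimension 0 as a dynamic batch dimension.
    dynamic_axes={
        "input_ids": {0: "batch"},
        "last_hidden_state": {0: "batch"},
        "pooler_output": {0: "batch"},
    },
    opset_version=11,
)
```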
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 04-20-2020 17:34:00 | 04-20-2020 17:34:00 | I'm assuming you read https://blog.einstein.ai/benchmarking-tensorrt-inference-server/ ?<|||||>>
>
> I'm assuming you read https://blog.einstein.ai/benchmarking-tensorrt-inference-server/ ?
OK: `torch.jit.trace(bert)`
Failed: `torch.jit.script`
Failed: `torch.jit.trace(bert + after)`
Failed: export to ONNX:
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 686, in forward
extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility
UnboundLocalError: local variable 'extended_attention_mask' referenced before assignment
(pip transformers==2.2.0; fixed in the latest source)
Solved by removing all non-default optional arguments, to minimize uncertainty.
|
transformers | 3,867 | closed | Tokenization issue with RoBERTa and DistilRoBERTa. | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
RoBERTa (roberta-base), DistilRoBERTa (distilroberta-base)
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
I am trying to encode the embeddings for the sentences, and I found a tokenization issue with a certain (type of) sentence that ends with ").". I noticed that the tokenizer cannot split ')' from '.', which further causes issues with the sentence length.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
**Dataset: SemEval 2016 Task 5, SB1 EN-REST**
## To reproduce
Steps to reproduce the behavior:
See in the following codes:
```python
import torch
import numpy as np
from transformers import AutoModel, AutoTokenizer
text = '(Besides that there should be more restaurants like it around the city).'
for model_name in ['roberta-base', 'distilroberta-base']:
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
token_dict = tokenizer.encode_plus(text, None, return_tensors='pt')
print('model_name: {}'.format(model_name))
print("Token (str): {}".format(
tokenizer.convert_ids_to_tokens(token_dict['input_ids'][0])))
print("Token (int): {}".format(token_dict['input_ids']))
print("Type: {}".format(
token_dict['token_type_ids']))
print('Output Embeddings: {}\n'.format(
model(token_dict['input_ids'])[0].shape))
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Expected output:
```
model_name: roberta-base
Token (str): ['<s>', 'Ġ(', 'Besides', 'Ġthat', 'Ġthere', 'Ġshould', 'Ġbe', 'Ġmore', 'Ġrestaurants', 'Ġlike', 'Ġit', 'Ġaround', 'Ġthe', 'Ġcity', ')', 'Ġ.', '</s>']
Token (int): tensor([[ 0, 36, 41107, 14, 89, 197, 28, 55, 4329, 101,
24, 198, 5, 343, 43, 479, 2]])
Type: tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
Output Embeddings: torch.Size([1, 17, 768])
model_name: distilroberta-base
Token (str): ['<s>', 'Ġ(', 'Besides', 'Ġthat', 'Ġthere', 'Ġshould', 'Ġbe', 'Ġmore', 'Ġrestaurants', 'Ġlike', 'Ġit', 'Ġaround', 'Ġthe', 'Ġcity', ')', 'Ġ.', '</s>']
Token (int): tensor([[ 0, 36, 41107, 14, 89, 197, 28, 55, 4329, 101,
24, 198, 5, 343, 43, 479, 2]])
Type: tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
Output Embeddings: torch.Size([1, 17, 768])
```
<!-- A clear and concise description of what you would expect to happen. -->
Basically, the expected behavior is to tokenize ')' and '.' separately. ~~Furthermore, I am also curious about what these 'Ġ' characters are in the RoBERTa encoding? I checked the vocabulary and I found both the normal words and the words starting with this 'Ġ' character so I am a bit confused.~~
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
| 04-20-2020 16:49:12 | 04-20-2020 16:49:12 | >Furthermore, I am also curious about what these 'Ġ' characters are in the RoBERTa encoding?
It's a feature of byte-level BPE (an encoded space character)
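A quick way to see this (a minimal sketch; the printed tokens assume the pretrained `roberta-base` vocabulary):
```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

# 'Ġ' marks a token that was preceded by a space in the original text.
print(tokenizer.tokenize("Hello world"))               # ['Hello', 'Ġworld']
print(tokenizer.convert_tokens_to_string(["Ġworld"]))  # ' world'
```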
[Ref-bart-fairseq](https://github.com/pytorch/fairseq/issues/1716), [Ref-openai-gpt](https://github.com/openai/gpt-2/issues/80)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,866 | closed | [examples] fix summarization do_predict | - by copying NER
- add type hints | 04-20-2020 14:35:39 | 04-20-2020 14:35:39 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3866?src=pr&el=h1) Report
> Merging [#3866](https://codecov.io/gh/huggingface/transformers/pull/3866?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a21d4fa410dc3b4c62f93aa0e6bbe4b75a101ee9&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3866?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3866 +/- ##
=======================================
Coverage 78.61% 78.62%
=======================================
Files 106 106
Lines 17953 17953
=======================================
+ Hits 14114 14115 +1
+ Misses 3839 3838 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3866?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3866?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3866?src=pr&el=footer). Last update [a21d4fa...c84984c](https://codecov.io/gh/huggingface/transformers/pull/3866?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,865 | closed | Summarisation tuning | Hi everybody,
I’ve tried using BART summarisation code, and I had a question about finetune.py
Can SummarisationTrainer checkpoint be loaded as a BartForConditionalGeneration model from the evaluation script? | 04-20-2020 13:06:42 | 04-20-2020 13:06:42 | Great, thanks @sshleifer <|||||>(Duplicate of https://github.com/huggingface/transformers/issues/3853)
<|||||>sorry, mentioned wrong issue |
transformers | 3,864 | closed | Add language and license information to model cards | Should fix issues #3397 and #3357 | 04-20-2020 10:35:24 | 04-20-2020 10:35:24 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3864?src=pr&el=h1) Report
> Merging [#3864](https://codecov.io/gh/huggingface/transformers/pull/3864?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a21d4fa410dc3b4c62f93aa0e6bbe4b75a101ee9&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3864?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3864 +/- ##
=======================================
Coverage 78.61% 78.62%
=======================================
Files 106 106
Lines 17953 17953
=======================================
+ Hits 14114 14115 +1
+ Misses 3839 3838 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3864?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.76% <0.00%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3864?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3864?src=pr&el=footer). Last update [a21d4fa...7bbf47b](https://codecov.io/gh/huggingface/transformers/pull/3864?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Hi @julien-c !
I hope you are well. My pull request is ready for review.
I have tried my best to add license and language information to all model cards. I have added a few model cards as well.
Note that my changes may have some downstream consequences:
- the addition of a "license" key, the value being an atomic list)
- the normalization of the "languages" key (I added the "s"), the value being a list of [ISO 639-1 codes](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes). For multilingual BERT, I had to simplify some rare languages by merging it with its ISO "macro code" (example: "South Azerbaijani" -> "az", "Bavarian" -> "de")
On the website, you may want to read those values and render them back in human-readable form.
Cheers,
Alex
<|||||>Hi Alex,
as mentioned in the previous issue, I'd rather use identifiers listed in https://help.github.com/en/github/creating-cloning-and-archiving-repositories/licensing-a-repository
I can probably search and replace though.
Also, you pushed whitespace changes which make reviewing the actual changes slightly tedious.<|||||>[EDIT] Sure, I have replaced licenses with these identifiers!
For whitespaces, I have autoformatting activated on sublime, that's why. Sorry for the inconvenience. <|||||>Good morning @julien-c,
I hope all is well. What do you think of this PR?
Cheers,
Alex<|||||>I'm doing a partial merge (retaining your authorship information, @alexcombessie) of the licenses, as the languages will require some backend changes.
(I'll do a search and replace at a later point)
Thank you for your contribution |
transformers | 3,863 | closed | Cannot convert RoBERTa to tflite model | # 🐛 Bug
## Information
Model I am using:
RoBERTa (roberta-base)
Language I am using the model on:
English
The problem arises when using:
Conversion based on https://github.com/huggingface/tflite-android-transformers/blob/master/models_generation/distilbert.py
The tasks I am working on is:
It is irrelevant on this step.
## To reproduce
1. Build python conversion script.
2. Run it.
**Conversion script**
```python
import tensorflow as tf
from transformers import TFRobertaForSequenceClassification
model = TFRobertaForSequenceClassification.from_pretrained('roberta-base')
input_spec = tf.TensorSpec([1, 384], tf.int32)
model._set_inputs(input_spec, training=False)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# For conversion with hybrid quantization:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.experimental_new_converter = True
tflite_model = converter.convert()
open("test.tflite", "wb").write(tflite_model)
```
Error: **tf.Cumsum op is neither a custom op nor a flex op and needs a custom implementation**
## Expected behavior
No errors.
## Environment info
- Transformers version: 2.8.0
- Platform: Windows 10
- Python version: 3.7.0
- Tensorflow version: 2.1.0
| 04-20-2020 06:31:44 | 04-20-2020 06:31:44 | I believe this is because `tf.Cumsum` is not a supported operation and not an issue relating to this repo. Here is a link to the tensorflow documentation on supported ops. [https://www.tensorflow.org/lite/guide/ops_compatibility]
In the past, I've been able to get around unsupported ops by reimplementing the operator with supported ops or replacing the unsupported portion with another op. ie. `relu` in place of `gelu`.<|||||>Hey @will-rice, thank you for giving me an idea how to handle this issue. I managed to overcome this problem, by using custom _cumsum_ function implemented in pure python by @ibab in here https://github.com/tensorflow/tensorflow/issues/813.
I just changed it to sum over rows instead of columns, the way it is done in the RoBERTa model.
Here is a cumsum function:
```python
def cumsum(xs):
values = tf.unstack(xs, axis=1)
out = []
prev = tf.zeros_like(values[0])
for val in values:
s = prev + val
out.append(s)
prev = s
result = tf.stack(out, axis=1)
return result
```
and it is used in the _modeling_tf_roberta.py_ file in line 69:
```python
# Original code / non tflite compatible way
incremental_indicies = tf.math.cumsum(mask, axis=1) * mask
# My custom code / tflite compatible way
incremental_indicies = cumsum(mask) * mask
```
Hope it will help anyone as well!<|||||>Also cc'ing @Pierrci <|||||>@julien-c any updates on this feature? Was browsing through the later releases but could not find any reference.
Thanks!<|||||>@dshahrokhian As mentioned by @will-rice, the issue is due to the lack of support for the `tf.Cumsum` operator by TFLite and thus not related to `transformers`. If you encounter the same problem you can implement the workaround posted by @kubux1 earlier, or implement a similar one if you're having this issue with a different operator.<|||||>@Pierrci thanks! It also seems to have been solved in the latest release of `tf-nightly`: https://github.com/tensorflow/tensorflow/issues/42382#issuecomment-675000451
transformers | 3,862 | closed | New model added | The first model added to the repo | 04-20-2020 02:09:56 | 04-20-2020 02:09:56 | Thanks! [model page](https://huggingface.co/Hate-speech-CNERG/dehatebert-mono-english)<|||||>A few things to add to the model card if you can (happy to help!)
- which language(s) is it trained on?
- How can one use it, i.e. is this a sequence classifier? |
transformers | 3,861 | closed | How to do parameter sharing between two BERT models | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 04-20-2020 01:40:10 | 04-20-2020 01:40:10 | You could do it very simply by passing the reference around:
```py
import torch
from transformers import BertModel
model = BertModel.from_pretrained("bert-base-cased")
model2 = BertModel.from_pretrained("bert-base-cased")
model2.embeddings = model.embeddings
print(model2.embeddings.word_embeddings.weight)
model.embeddings.word_embeddings.weight = torch.nn.Parameter(torch.zeros_like(model.embeddings.word_embeddings.weight))
print(model2.embeddings.word_embeddings.weight)
```
which outputs the result (note that I'm updating the `model.embeddings` and printing the `model2.embeddings`):
```py
Parameter containing:
tensor([[-0.0005, -0.0416, 0.0131, ..., -0.0039, -0.0335, 0.0150],
[ 0.0169, -0.0311, 0.0042, ..., -0.0147, -0.0356, -0.0036],
[-0.0006, -0.0267, 0.0080, ..., -0.0100, -0.0331, -0.0165],
...,
[-0.0064, 0.0166, -0.0204, ..., -0.0418, -0.0492, 0.0042],
[-0.0048, -0.0027, -0.0290, ..., -0.0512, 0.0045, -0.0118],
[ 0.0313, -0.0297, -0.0230, ..., -0.0145, -0.0525, 0.0284]],
requires_grad=True)
Parameter containing:
tensor([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]], requires_grad=True)
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,860 | closed | Update README.md | Improved results from new hardware | 04-19-2020 17:50:49 | 04-19-2020 17:50:49 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3860?src=pr&el=h1) Report
> Merging [#3860](https://codecov.io/gh/huggingface/transformers/pull/3860?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a21d4fa410dc3b4c62f93aa0e6bbe4b75a101ee9&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3860?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3860 +/- ##
=======================================
Coverage 78.61% 78.61%
=======================================
Files 106 106
Lines 17953 17953
=======================================
Hits 14114 14114
Misses 3839 3839
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3860?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3860?src=pr&el=footer). Last update [a21d4fa...9e4fe33](https://codecov.io/gh/huggingface/transformers/pull/3860?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,859 | closed | ValueError: Unable to set proper padding strategy as the tokenizer does not have a padding token. | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
Using pad_token, but it is not set yet.
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
tokens = tokenizer.batch_encode_plus(
    ["This is a sample", "This is another longer sample text"],
    pad_to_max_length=True, max_length=10, return_attention_mask=True,  # First sentence will have some PADDED tokens to match second sequence length
)

for i in range(2):
    print("Tokens (int) : {}".format(tokens['input_ids'][i]))
    print("Tokens (str) : {}".format([tokenizer.convert_ids_to_tokens(s) for s in tokens['input_ids'][i]]))
    print("Tokens (attn_mask): {}".format(tokens['attention_mask'][i]))
    print()
```
| 04-19-2020 16:38:51 | 04-19-2020 16:38:51 | 
<|||||>tokenizer.pad_token = 0<|||||>You have to set the pad_token_id yourself as it's stated in the error message ;-). I would recommend using the `eos_token_id` as the `pad_token_id` for GPT2:
```python
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
tokenizer.pad_token = tokenizer.eos_token
```
as it's written in the error message ;-)<|||||>I hit the same issue after using [add_special_tokens](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.add_special_tokens) with `{"pad_token": "PAD"}` dictionary.
I understand based on the error and the documentation it should not raise the error, right? @patrickvonplaten should the issue be reopened?<|||||>For completeness:
- patrickvonplaten tip `tokenizer.pad_token = tokenizer.eos_token` solved it
- the whole error message for me was
```
ValueError: Unable to set proper padding strategy as the tokenizer does not have a padding token. In this case please set the `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)`
or add a new pad token via the function add_special_tokens if you want to use a padding strategy
```<|||||>You didn't set pad_token. You can set it like this:
```python
tokenizer.pad_token = "[PAD]"
``` |
transformers | 3,858 | closed | Write with transformers demo hardware | Hi,
Your package is great and your demo is cool too. I was wondering what kind of hardware it takes to generate text with the large language models. I've had a difficult time running GPT-2 with 1.5B params on a Tesla T4 GPU.
Any pointers will be greatly appreciated.
Thanks in advance,
Barak | 04-19-2020 12:45:56 | 04-19-2020 12:45:56 | Hi! We run on K80 and V100 GPUs. Running the 1.5B params model was very costly and hard to maintain, so we're not running it anymore, however. It should run fine on a single V100/Titan RTX however.
Did you have issues with the T4 GPU because of memory?<|||||>Thanks for your response. This is really beneficial. I think that it is due to memory.
Are you serving your models with a flask server, TF-serving, or a different serving framework?
Were you serving it using your PyTorch or Tensorflow implementation?
Thanks again<|||||>We're serving our models in PyTorch, using a mix of gunicorn/falcon to handle requests. You can see the detailers [here](https://medium.com/huggingface/scaling-a-massive-state-of-the-art-deep-learning-model-in-production-8277c5652d5f)!<|||||>Really clear blog post.
Thanks |
transformers | 3,857 | closed | [Pipelines] Encode to max length of input not max length of tokenizer for batch input | I don't see a reason why we have to pad to `tokenizer.max_length` when encoding. Tokenizers automatically encode until the longest `input_ids` which is much more efficient IMO. | 04-19-2020 12:37:20 | 04-19-2020 12:37:20 | Update: rm'ed inaccurate comment |
transformers | 3,856 | closed | Bug in optimization_tf create_optimizer | # 🐛 Bug
## Information
When I am using optimization_tf's create_optimizer:
Problems with the learning rate schedule.
## To reproduce
```
from transformers.optimization_tf import create_optimizer
import matplotlib.pyplot as plt
%matplotlib inline
opt = create_optimizer(init_lr=5e-5, num_train_steps=100, num_warmup_steps=50)
lr = opt.learning_rate
results = [lr(i).numpy() for i in range(101)]
print(results[49:51])
plt.plot(results)
plt.show()
```
Output: `[4.9e-05, 2.5e-05, 2.45e-05]`
## Expected output
The max LR should be 5e-05.
Expected output: `[4.9e-05, 5e-05, 4.9e-05]`
```
class WarmUp(tf.keras.optimizers.schedules.LearningRateSchedule):
def __call__(self, step):
with tf.name_scope(self.name or "WarmUp") as name:
# Implements polynomial warmup. i.e., if global_step < warmup_steps, the
# learning rate will be `global_step/num_warmup_steps * init_lr`.
global_step_float = tf.cast(step, tf.float32)
warmup_steps_float = tf.cast(self.warmup_steps, tf.float32)
warmup_percent_done = global_step_float / warmup_steps_float
warmup_learning_rate = self.initial_learning_rate * tf.math.pow(warmup_percent_done, self.power)
return tf.cond(
global_step_float < warmup_steps_float,
lambda: warmup_learning_rate,
lambda: self.decay_schedule_fn(step),
name=name,
)
```
Change: `lambda: self.decay_schedule_fn(step)` → `lambda: self.decay_schedule_fn(step - warmup_steps_float)`
```
def create_optimizer(init_lr, num_train_steps, num_warmup_steps):
"""Creates an optimizer with learning rate schedule."""
# Implements linear decay of the learning rate.
learning_rate_fn = tf.keras.optimizers.schedules.PolynomialDecay(
initial_learning_rate=init_lr, decay_steps=num_train_steps, end_learning_rate=0.0
)
if num_warmup_steps :
learning_rate_fn = WarmUp(
initial_learning_rate=init_lr, decay_schedule_fn=learning_rate_fn, warmup_steps=num_warmup_steps
)
```
Change: `decay_steps` in `PolynomialDecay` should be `num_train_steps - num_warmup_steps`.
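Putting both proposed changes together, a sketch of the fixed schedule could look like the following (`WarmUpFixed` and `create_optimizer_fixed` are illustrative names, and a plain Keras `Adam` stands in for the library's `AdamWeightDecay`):
```python
import tensorflow as tf


class WarmUpFixed(tf.keras.optimizers.schedules.LearningRateSchedule):
    def __init__(self, initial_learning_rate, decay_schedule_fn, warmup_steps, power=1.0, name=None):
        super().__init__()
        self.initial_learning_rate = initial_learning_rate
        self.decay_schedule_fn = decay_schedule_fn
        self.warmup_steps = warmup_steps
        self.power = power
        self.name = name

    def __call__(self, step):
        with tf.name_scope(self.name or "WarmUp") as name:
            global_step_float = tf.cast(step, tf.float32)
            warmup_steps_float = tf.cast(self.warmup_steps, tf.float32)
            warmup_percent_done = global_step_float / warmup_steps_float
            warmup_learning_rate = self.initial_learning_rate * tf.math.pow(warmup_percent_done, self.power)
            return tf.cond(
                global_step_float < warmup_steps_float,
                lambda: warmup_learning_rate,
                # Proposed change: start the decay schedule at 0 once warmup ends.
                lambda: self.decay_schedule_fn(global_step_float - warmup_steps_float),
                name=name,
            )


def create_optimizer_fixed(init_lr, num_train_steps, num_warmup_steps):
    # Proposed change: decay only over the steps that remain after warmup.
    learning_rate_fn = tf.keras.optimizers.schedules.PolynomialDecay(
        initial_learning_rate=init_lr,
        decay_steps=num_train_steps - num_warmup_steps,
        end_learning_rate=0.0,
    )
    if num_warmup_steps:
        learning_rate_fn = WarmUpFixed(
            initial_learning_rate=init_lr,
            decay_schedule_fn=learning_rate_fn,
            warmup_steps=num_warmup_steps,
        )
    return tf.keras.optimizers.Adam(learning_rate=learning_rate_fn)
```
With these changes the schedule peaks at `init_lr` exactly at `num_warmup_steps`, matching the expected `[4.9e-05, 5e-05, 4.9e-05]` output around step 50 in the reproduction above.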
| 04-19-2020 11:04:11 | 04-19-2020 11:04:11 | I have raised https://github.com/huggingface/transformers/pull/4940, waiting for approval.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,855 | closed | Fix Documentation issue in BertForMaskedLM forward | Fix #3066 by interchanging positions of `ltr_lm_loss` and `masked_lm_loss` | 04-19-2020 10:36:18 | 04-19-2020 10:36:18 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3855?src=pr&el=h1) Report
> Merging [#3855](https://codecov.io/gh/huggingface/transformers/pull/3855?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a21d4fa410dc3b4c62f93aa0e6bbe4b75a101ee9&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3855?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3855 +/- ##
=======================================
Coverage 78.61% 78.62%
=======================================
Files 106 106
Lines 17953 17953
=======================================
+ Hits 14114 14115 +1
+ Misses 3839 3838 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3855?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.40% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.76% <0.00%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3855?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3855?src=pr&el=footer). Last update [a21d4fa...06fb495](https://codecov.io/gh/huggingface/transformers/pull/3855?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Good catch @Bharat123rox - thanks for the PR :-) |
transformers | 3,854 | closed | Added electra-bahasa README | 04-19-2020 08:57:07 | 04-19-2020 08:57:07 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3854?src=pr&el=h1) Report
> Merging [#3854](https://codecov.io/gh/huggingface/transformers/pull/3854?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a21d4fa410dc3b4c62f93aa0e6bbe4b75a101ee9&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3854?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3854 +/- ##
=======================================
Coverage 78.61% 78.62%
=======================================
Files 106 106
Lines 17953 17953
=======================================
+ Hits 14114 14115 +1
+ Misses 3839 3838 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3854?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.76% <0.00%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3854?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3854?src=pr&el=footer). Last update [a21d4fa...b5f2dc5](https://codecov.io/gh/huggingface/transformers/pull/3854?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Cherry-picked in 7f23af16840113fe137f42415a9daa7ce7f7f15f
Thank you, that looks great! cc @LysandreJik and @clarkkev |
|
transformers | 3,853 | closed | How to use fine-tuned BART for prediction? | # ❓ Questions & Help
## Details
I fine-tuned the BART model on a custom summarization dataset using the **transformers/examples/summarization/bart/finetune.py** and **transformers/examples/summarization/bart/run_train.sh** files in the repository for training (which generated three _checkpointepoch=*.ckpt_ files) and prediction (which generated a _.txt_ file with the test loss scores).
I have two questions on using this model for prediction:
- How can I modify _finetune.py_ to generate predictions for the test set, in addition to the loss scores? I see some test functions in _finetune.py_, but I'm not sure how to use these for generating a _.txt_ file with the predictions.
- How can I load the generated _.ckpt_ files into BartForConditionalGeneration()? A _config.json_ file was not generated along with the checkpoint files; there doesn't seem to be a TFBartForConditionalGeneration; and the _convert_tf_checkpoint_to_pytorch.py_ script in the repo doesn't seem to support BART yet.
Thank you for your time! | 04-18-2020 22:10:00 | 04-18-2020 22:10:00 | Facing a similar type of issue for T5. @sshleifer <|||||>The last ckpt file should be loaded into a `pl.LightningModule` if the --do_predict flag is specified.
There is a bug on master that messes up the loading, but it's fixed in #3866
To use that code immediately, you can run:
```
git fetch
git checkout examples-summ-do-predict
```
then your same `finetune.py` command
with `--do_predict` (and not --do_train) and the proper `--output_dir`.
Would love to know if that works!
cc: @ethanjperez.<|||||>Change is on master, let me know if this solves the problem!<|||||>Config.json is still not generated while training.<|||||>```python
def log_hyperparams(model: pl.LightningModule):
model.config.save_pretrained(model.hparams.output_dir)
    with open(os.path.join(model.hparams.output_dir, "hparam.json"), "w") as f:
json.dump(model.hparams, f)
```
You can call this somewhere in your code, if that's helpful.<|||||>@sshleifer, thank you - I can run ./run_train.sh with the --predict() option successfully.
Regarding my original question, could you please specify how to load the checkpoint into the LighteningModule?
After inspecting [transformer_base.py](https://github.com/huggingface/transformers/blob/master/examples/transformer_base.py), I think hparams is equivalent to the arguments provided in run_train.sh, so a separate hparams.json file does not need to be generated. Please correct me if I'm wrong.
I am receiving the following error with my current code:
`pytorch_lightning.utilities.exceptions.MisconfigurationException: Checkpoint contains hyperparameters but LightningModule's __init__ is missing the argument 'hparams'. Are you loading the correct checkpoint?`
I've been using the following code, based on the discussion in https://github.com/PyTorchLightning/pytorch-lightning/issues/525 and https://pytorch-lightning.readthedocs.io/en/latest/weights_loading.html:
```
# load model
import pytorch_lightning as pl
from argparse import Namespace
# usually these come from command line args
args = Namespace(data_dir='CE_data/',
model_type='bart',
model_name_or_path='bart-large',
learning_rate='3e-5',
train_batch_size=4,
eval_batch_size=4,
output_dir='transformers/examples/summarization/bart/bart_sum',
do_predict='do_predict')
pretrained_model = pl.LightningModule.load_from_checkpoint('bart_sum/checkpointepoch=2.ckpt', hparams=args)
pretrained_model.eval()
# or for prediction
out = model(inputs['input_ids'])
print(out)
``'
Thank you for your time.<|||||>Seems close to correct.
https://github.com/huggingface/transformers/blob/7d40901ce3ad9e1c79fd9bb117f5b84bff42c33f/examples/summarization/bart/finetune.py#L164-L175
is how we do it @riacheruvu<|||||>@sshleifer
1. Originally config.json is not created which is a requirement for prediction using fine-tuned model.
*As shown in the screenshot, I add this code in transformer_base.py in end, config and hparam files are created.
* Then try to predict with --do_predict, then it gives, ""We assumed '/content/t5' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.""
What are the requirements to use fine-tuned model?
<img width="696" alt="Screenshot 2020-04-21 at 5 50 10 PM" src="https://user-images.githubusercontent.com/30004110/79886728-c1d0bf80-83f9-11ea-90e5-400afc575da1.png">
----------------------------------------------------------------
2. To predict for a single instance using the fine-tuned model, do I need to specify the test.target file also. I want to predict unknown instance without calculating the loss value.
<|||||>@sshleifer, thank you. I've got to the point where I can load the model and generate "outputs" using the forward() function, but I can't decode the outputs - using tokenizer.decoder() results in an error. Should I be using model.generate() instead of model.forward()? If so, it seems SummarizationTrainer does not support model.generate?
Revised code:
```
tokenizer = BartTokenizer.from_pretrained('bart-large-cnn')
ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
inputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt')['input_ids']
checkpoints = list(sorted(glob.glob(os.path.join(args.output_dir, "checkpointepoch=*.ckpt"), recursive=True)))
model = model.load_from_checkpoint(checkpoints[-1])
model.eval()
model.freeze()
outputs = model(inputs)
print(outputs) #Successfully prints two 3D tensors in a tuple
#print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in outputs]) #Results in ValueError: only one element tensors can be converted to Python scalars
print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in outputs[0][0]])
```
The error I'm encountering
```
Traceback (most recent call last):
File "finetune.py", line 194, in <module>
main(args)
File "finetune.py", line 184, in main
print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in outputs[1][0]])
File "finetune.py", line 184, in <listcomp>
print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in outputs[1][0]])
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 2141, in decode
sub_texts.append(self.convert_tokens_to_string(current_sub_text))
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_gpt2.py", line 235, in convert_tokens_to_string
text = "".join(tokens)
TypeError: sequence item 0: expected str instance, NoneType found
```<|||||>I found a solution. The model.generate() function is necessary to extract the predictions. I defined a separate function in the SummarizationTrainer() class to use self.model.generate(), and was able to use tokenizer.decode() on the outputs.
I was encountering issues when using self.tokenizer, so I assume using 'bart-large-cnn' tokenizer for similar custom summarization datasets is okay.
@prabalbansal, I'm not sure if the same method will apply to T5, but it could work for predicting for a single instance, per one of your questions.
My code is below:
```
def text_predictions(self, input_ids):
generated_ids = self.model.generate(
input_ids=input_ids,
num_beams=1,
max_length=80,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True,
)
preds = [
self.tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True)
for g in generated_ids
]
return preds
...
# Optionally, predict on dev set and write to output_dir
if args.do_predict:
# See https://github.com/huggingface/transformers/issues/3159
# pl use this format to create a checkpoint:
# https://github.com/PyTorchLightning/pytorch-lightning/blob/master\
# /pytorch_lightning/callbacks/model_checkpoint.py#L169
tokenizer = BartTokenizer.from_pretrained('bart-large-cnn')
ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
inputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt')['input_ids']
checkpoints = list(sorted(glob.glob(os.path.join(args.output_dir, "checkpointepoch=*.ckpt"), recursive=True)))
model = model.load_from_checkpoint(checkpoints[-1])
model.eval()
model.freeze()
outputs = model.text_predictions(inputs)
print(outputs)
```
Thank you for the help, @sshleifer !<|||||>@riacheruvu Thank You. It works for T5 also.<|||||>I followed the steps given in this thread and am still facing an issue. I get an error saying the below when I try to use my fine-tuned model for prediction.
OSError: Can't load '/home/bart/bart_1/checkpointepoch=3.ckpt'. Make sure that:
- '/home/bart/bart_1/checkpointepoch=3.ckpt' is a correct model identifier listed on 'https://huggingface.co/models'
- or '/home/bart/bart_1/checkpointepoch=3.ckpt' is the correct path to a directory containing a 'config.json' file
<|||||>@sangeethabal15, with my model, files were only generated up till the 2nd epoch. Just to confirm, do you have a checkpointepoch=3.ckpt file?
Are you using the load_from_checkpoint() function?
<|||||>@riacheruvu yes I do have checkpoint=3.ckpt file. I gave my own number of epochs instead of the default 3.
Yes I am using the load_from_checkpoint() function<|||||>Ok. Could you share your code here, @sangeethabal15? It might be easier to help debug. <|||||>@riacheruvu This is my modified code -
# Optionally, predict on dev set and write to output_dir
if args.do_predict:
# See https://github.com/huggingface/transformers/issues/3159
# pl use this format to create a checkpoint:
# https://github.com/PyTorchLightning/pytorch-lightning/blob/master\
# /pytorch_lightning/callbacks/model_checkpoint.py#L169
examples = [" " + x.rstrip() for x in open("/home/bart/input/test.source").readlines()]
fout = Path("output.txt").open("w")
checkpoints = list(sorted(glob.glob(os.path.join(args.output_dir, "checkpointepoch=*.ckpt"), recursive=True)))
model = model.load_from_checkpoint(checkpoints[-1])
tokenizer = BartTokenizer.from_pretrained("bart-large")
max_length = 80
min_length = 5
for batch in tqdm(list(chunks(examples, 8))):
dct = tokenizer.batch_encode_plus(batch, max_length=1024, return_tensors="pt", pad_to_max_length=True)
summaries = model.generate(
input_ids=dct["input_ids"].to(device),
attention_mask=dct["attention_mask"],
num_beams=4,
length_penalty=2.0,
max_length=max_length + 2, # +2 from original because we start at step=1 and stop before max_length
min_length=min_length + 1, # +1 from original because we start at step=1
no_repeat_ngram_size=3,
early_stopping=True,
decoder_start_token_id=model.config.eos_token_id,
)
dec = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summaries]
for hypothesis in dec:
fout.write(hypothesis + "\n")
fout.flush()<|||||>Thank you, @sangeethabal15. From the error message you posted earlier, it seems load_from_checkpoint() is expecting a config.json file in the specified directory.
I have a few more debug questions:
- Do you have the latest version of the code?
- Does load_from_checkpoint() work with the checkpoint file for the 2nd epoch?
- If that fails, does your code run successfully if you use the default number of epochs?<|||||>@riacheruvu
- I do have the latest version of the code though I have not trained the model on the latest version of it.
- load_from_checkpoint doesn't work with the 2nd either and expects a config.json file
- and yes the code runs successfully on the default number of epochs as well.<|||||> import json
def log_hyperparams(model: pl.LightningModule):
model.config.save_pretrained(model.hparams.output_dir)
with open(os.path.join(model.hparams.output_dir, "hparam.json"),'w') as f:
json.dump(model.hparams.__dict__, f)
if args.do_train:
trainer.fit(model)
log_hyperparams(model)
@sangeethabal15 Could you add this at the end of transformer_base.py. This works for me.
<|||||>@prabalbansal this is for when I am training my model. Since I have already fine-tuned my model, is there any workaround for test time when I am trying to predict my outputs?<|||||>@riacheruvu I am currently working on a Text Summarization problem. I have collected a small dataset of my own. Implementing BART is very easy. I can generate a great summary. But I want to know how to how to use BART model for training my own custom dataset. Can you please kindly help me with this?
I have browsed through internet. But I cannot any find any helpful resources as it is relatively new compared to other Transfer learning models.<|||||>@murugeshmanthiramoorthi you can just use run_train.sh in the bart folder where you give in your parameters to run the fiinetune.py file<|||||>@sangeethabal15 Thank you so much for your reply mam. I am completely new to transfer learning mam. I can't get what you are upto. Can you kindly explain more elaborately or share a resource so that I can follow up?
Thanks in advance mam.
<|||||>@sangeethabal15 I somehow managed to load the dataset. I run the run_train.sh file. But it is showing me error "python3: can't open file 'finetune.py': [Errno 2] No such file or directory". I even tried changing the data set from my custom dataset to default CNN/daily news dataset. Still, I am getting the same error. Can anyone help me out?<|||||>@riacheruvu @prabalbansal did y'all finetune Bart on your own dataset?<|||||>@sangeethabal15, I fine-tuned BART on my own custom dataset. It's strange that your code runs successfully on the default number of epochs, but load_from_checkpoint() does not work with the 2nd epoch .ckpt file with the original configuration. Where did you modify the default number of epochs?
@murugeshmanthiramoorthi,
Per the instructions given in https://github.com/huggingface/transformers/tree/master/examples/summarization/bart:
The steps I followed are cloning the transformers repo, navigating to the examples/summarization/bart directory, copying over a folder containing the data files (train.target, train.source, val.target, val.source, test.target, and test.source files), and then modifying run_train.sh to use this folder for the data_dir and filling in the other parameters.
For your .source and .target files, you need to structure them similar to the CNN/DM dataset: The .source files should have an article on each line, and the .target files should have a target summary on each line (corresponding to the article in the .source file).<|||||>@riacheruvu I noticed that I get this warning for both training and while testing
INFO:transformers.modeling_utils:Weights from pretrained model not used in BartForConditionalGeneration: ['encoder.version', 'decoder.version']
Seems like my model hasn't been trained properly. Any idea how to go about this?
Also, I have the number of epochs in my run_train.sh. It is defined in the add_specific_args in the transformer_base.py<|||||>that warning doesn't matter.<|||||>@sangeethabal15, I agree that the warning does not matter as I saw that warning as well. It seems the issue might be when training the model with a different number of epochs compared to the default. @sshleifer, has the HuggingFace team tested the code with a different number of epochs before?<|||||>@riacheruvu Thank you so much for your help. But when I proceeded with those steps, I get the error
Traceback (most recent call last):
File "finetune.py", line 10, in <module>
from transformer_base import BaseTransformer, add_generic_args, generic_train, get_linear_schedule_with_warmup
ModuleNotFoundError: No module named 'transformer_base'
Do you have any idea solving this issue.<|||||>@murugeshmanthiramoorthi Follow the below steps and you should be able to run your code.
Important To run the latest versions of the examples, you have to install from source and install some specific requirements for the examples. Execute the following steps in a new virtual environment:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
pip install -r ./examples/requirements.txt
```
You can find the above in the readme section of https://github.com/huggingface/transformers/tree/cbbb3c43c55d2d93a156fc80bd12f31ecbac8520/examples<|||||>@murugeshmanthiramoorthi, I agree with @sangeethabal15, I followed the same steps as well.
After installing the dependencies, the code should run without errors about transformer_base - I believe the following line in run_train.sh ensures that:
`# Add parent directory to python path to access transformer_base.py
export PYTHONPATH="../../":"${PYTHONPATH}"
`
<|||||>@sshleifer @riacheruvu I keep running into an error every time I change the beam size, define min_length, skip_ngram, length_penalty during decoding time. Here is a snippet of the error
```
Traceback (most recent call last):
File "finetune1.py", line 189, in <module>
main(args)
File "finetune1.py", line 176, in main
outputs = model.text_predictions(inputs)
File "finetune1.py", line 80, in text_predictions
length_penalty=1.0,
File "/home/sangeethabal/.local/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad
return func(*args, **kwargs)
File "/home/sangeethabal/.local/lib/python3.7/site-packages/transformers/modeling_utils.py", line 995, in generate
attention_mask=attention_mask,
File "/home/sangeethabal/.local/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1338, in _generate_beam_search
past = self._reorder_cache(past, beam_idx)
File "/home/sangeethabal/.local/lib/python3.7/site-packages/transformers/modeling_bart.py", line 933, in _reorder_cache
((enc_out, enc_mask), decoder_cached_states) = past
ValueError: too many values to unpack (expected 2)
```
The function where I have defined all of this
def test(self, input_ids):
generated_ids = self.model.generate(
input_ids=input_ids,
num_beams=6,
max_length=60,
min_length=4,
no_repeat_ngram_size=3,
length_penalty=1.0,
)
preds = [
self.tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True)
for g in generated_ids
]
return preds
Any idea how to go about this?<|||||>@sangeethabal15, I have two ideas: Try explicitly setting use_cache=True in the generate() function to see if it resolves the error. If that does not work, could you try specifying the attention_mask parameter? I'm looking at modeling_utils.py and modeling_bart.py, and I think these are the two parameters that are linked to this issue.
Edit: It also seems evaluate_cnn.py demonstrates a similar configuration for the generate() function, although the parameters are slightly different. If the two ideas above don't work, you could try using specifying those parameters to confirm it's not an issue with the values of the parameters that were chosen.<|||||>Thank you so much @sangeethabal15 @riacheruvu I got it. Thanks a ton for your help.<|||||>@sshleifer when I use the exact same parameters as in the evaluate_cnn.py code` I still get the exact same error as below. There seems to be an issue with the values chosen for these parameters specified in evaluate_cnn.py
@riacheruvu I have tried the parameters you specified, same issue.
> @sshleifer @riacheruvu I keep running into an error every time I change the beam size, define min_length, skip_ngram, length_penalty during decoding time. Here is a snippet of the error
>
> ```
> Traceback (most recent call last):
> File "finetune1.py", line 189, in <module>
> main(args)
> File "finetune1.py", line 176, in main
> outp
> @sshleifer @riacheruvu I keep running into an error every time I change the beam size, define min_length, skip_ngram, length_penalty during decoding time. Here is a snippet of the error
>
> ```
> Traceback (most recent call last):
> File "finetune1.py", line 189, in <module>
> main(args)
> File "finetune1.py", line 176, in main
> outputs = model.text_predictions(inputs)
> File "finetune1.py", line 80, in text_predictions
> length_penalty=1.0,
> File "/home/sangeethabal/.local/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad
> return func(*args, **kwargs)
> File "/home/sangeethabal/.local/lib/python3.7/site-packages/transformers/modeling_utils.py", line 995, in generate
> attention_mask=attention_mask,
> File "/home/sangeethabal/.local/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1338, in _generate_beam_search
> past = self._reorder_cache(past, beam_idx)
> File "/home/sangeethabal/.local/lib/python3.7/site-packages/transformers/modeling_bart.py", line 933, in _reorder_cache
> ((enc_out, enc_mask), decoder_cached_states) = past
> ValueError: too many values to unpack (expected 2)
> ```
>
> The function where I have defined all of this
>
> ```
> def test(self, input_ids):
> generated_ids = self.model.generate(
> input_ids=input_ids,
> num_beams=6,
> max_length=60,
> min_length=4,
> no_repeat_ngram_size=3,
> length_penalty=1.0,
> )
> preds = [
> self.tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True)
> for g in generated_ids
> ]
> return preds
> ```
>
> Any idea how to go about this?<|||||>Try passing `use_cache=True`.
Note that the call [here](https://github.com/huggingface/transformers/blob/b0167632ce815cbdd256a0c8bfff57639748ea75/examples/summarization/bart/finetune.py#L70)
works. Only differences appear to be `attention_mask` and `use_cache`.<|||||>@sshleifer use_cache by default is set to true in the modeling_utils.py. But when I specify the parameter in my function and run the code it throws the following error
> Traceback (most recent call last):
> File "finetune1.py", line 191, in <module>
> main(args)
> File "finetune1.py", line 178, in main
> outputs = model.text_predictions(inputs)
> File "finetune1.py", line 82, in text_predictions
> use_cache=True,
> File "/home/sangeethabal/.local/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad
> return func(*args, **kwargs)
> TypeError: generate() got an unexpected keyword argument 'use_cache'<|||||>This isn't enough information for me to diagnose. My guess with the limited info I have is that you didn't run `pip install -e .` from `transformers/`.
What does `pip freeze | grep transformers` say?<|||||>@sshleifer I did run pip install -e .
Here is the output of pip freeze | grep transformers
```
WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.
Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue.
To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.
transformers==2.8.0
```
<|||||>Ok, output should look like `-e git+git@...`
try
```bash
git pull
pip install -e .
```
You should probably also upgrade pip, though that shouldn't matter much.<|||||>@riacheruvu hello , do you get <extra_id_0> in your generation output ? <|||||>@ArijRB, hi - I don’t remember seeing that in the output of the model.<|||||>@ArijRB I'm also getting `<extra_id_x>` generations. Were you able to solve that problem? I'm using a T5 model finetuned on my own dataset.<|||||>@riacheruvu How did you load the model in the line 'model.load_from_checkpoint(checkpoints[-1])' of the following code you posted?
> tokenizer = BartTokenizer.from_pretrained('bart-large-cnn')
> ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
> inputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt')['input_ids']
> checkpoints = list(sorted(glob.glob(os.path.join(args.output_dir, "checkpointepoch=*.ckpt"), recursive=True)))
> model = model.load_from_checkpoint(checkpoints[-1])
> model.eval()
> model.freeze()
> outputs = model(inputs)
> print(outputs) #Successfully prints two 3D tensors in a tuple
> #print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in outputs]) #Results in ValueError: only one element tensors can be converted to Python scalars
> print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in outputs[0][0]])
Is 'model' an instance of pl.LightningModule? I still have the error message that you got in the previous post:
```pytorch_lightning.utilities.exceptions.MisconfigurationException: Checkpoint contains hyperparameters but LightningModule's __init__ is missing the argument 'hparams'. Are you loading the correct checkpoint?``` <|||||>@claudiatin, model should be defined as an instance of the Summarization trainer class. You will need to have the following code (which is already under main() in fine tune.py):
```
model = SummarizationTrainer(args)
```
I am wondering if there is an easier way to go about generating the predictions though. I’ve tried calling the Summarization trainer from another python file so I can separate my prediction and training code files, but ran into some issues, so I needed to stick with using another version of finetune.py running with a clone of the repo. If anyone finds an easier way of accomplishing this or if the HuggingFace team can build this functionality in, that would be great.<|||||>@riacheruvu Thank you so much for your answer. I did the same you did, and then I save the .bin file and config.json so I can use 'BartForConditionalGeneration.from_pretrained'. I don't know if it is the best way actually.
```
# model checkpoints and save the model
model = SummarizationTrainer(args)
model = model.load_from_checkpoint('bart_sum/checkpointepoch=2.ckpt')
torch.save(model.state_dict(), args.output_dir + '/pytorch_model.bin')
model.config.to_json_file(args.output_dir + '/config.json')
# load the fine-tuned model and predict
model = BartForConditionalGeneration.from_pretrained('bart_sum')
summarizer = pipeline('summarization', model=model, tokenizer=tokenizer)
summarizer(ARTICLE_TO_SUMMARIZE, max_length=80, min_length=40)
```<|||||>@claudiatin, thank you!
Edit: Please ignore my previous response to your newest reply. I just went through the code again, and I was wrong about the inputs to the from_pretrained() function. I apologize for that.
I’ll try using the code block you provided!<|||||>I tried applying the code provided for T5 (I haven't tried it with BART, but I think it'll work successfully per @claudiatin's response) - I am including the results here for documentation and if anyone knows the solution:
```
from transformers import T5Model, pipeline
model = T5Model.from_pretrained('tfive_sum')
summarizer = pipeline("summarization", model=model, tokenizer="t5-base", framework="tf")
summarizer(ARTICLE_TO_SUMMARIZE, min_length=5, max_length=20)
```
I run into the error:
```
AttributeError: You tried to generate sequences with a model that does not have a LM Head.Please use another model class (e.g. `OpenAIGPTLMHeadModel`, `XLNetLMHeadModel`, `GPT2LMHeadModel`, `CTRLLMHeadModel`, `T5WithLMHeadModel`, `TransfoXLLMHeadModel`, `XLMWithLMHeadModel`, `BartForConditionalGeneration` )
```
I've tried importing T5WithLMHeadModel using from transformers import T5WithLMHeadModel and encounter an `ImportError: cannot import name 'T5WithLMHeadModel'`. I have the most up to date version of the transformers library installed, so I'm not sure if there's something wrong with my setup.<|||||>@riacheruvu, don't worry about the previous answer. For the sake of completeness 'bart_sum' is just the default name of the folder where the checkpoints are saved (the line `export OUTPUT_DIR_NAME=bart_sum` in the run_train.sh). The complete code in my notebook is the following:
```
%cd examples/summarization/bart
!bash run_train.sh # run_train.sh script has been changed in order to use a custom dataset
%cd ../..
from lightning_base import BaseTransformer
%cd summarization/bart
from finetune import SummarizationTrainer
import torch
from argparse import Namespace
args = Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', data_dir='../../../../dataset', do_predict=False, do_train=True, eval_batch_size=2, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=3e-05, max_grad_norm=1.0, max_source_length=1024, max_target_length=56, model_name_or_path='bart-large', n_gpu=1, n_tpu_cores=0, num_train_epochs=3, output_dir='bart_sum', seed=42, tokenizer_name='', train_batch_size=2, warmup_steps=0, weight_decay=0.0)
model = SummarizationTrainer(args)
model = model.load_from_checkpoint('bart_sum/checkpointepoch=2.ckpt')
torch.save(model.state_dict(), args.output_dir + '/pytorch_model.bin')
model.config.to_json_file(args.output_dir + '/config.json') # NOW in the bart_sum folder I have checkpoints, pytorch_model.bin and config.json
```
In another notebook
```
import torch
from transformers import BartTokenizer, BartForConditionalGeneration
from transformers import pipeline
tokenizer = BartTokenizer.from_pretrained('bart-large-cnn')
# load the fine-tuned model
model = BartForConditionalGeneration.from_pretrained('transformers/examples/summarization/bart/bart_sum')
```
The code works but the performances are not good. I think this is because of my dataset:)<|||||>Thank you, @claudiatin, and thank you for sharing your code! <|||||>@claudiatin thanks for providing your code. I was able to load a finetuned version of facebook/bart-large-cnn into a pipeline using a far hackier way originally as well as your method.
Problem I'm running into which it sounds like maybe you were as well, is that the predictions from the pipeline after finetuning come out as pure gibberish, so something is being lost in translation. Example below:
> 'redistributionestonestoneston Hag Hag resultant resultant '
> 'resultantestoneston redistribution redistribution Hag Hag pressuring '
> 'pressuring redistribution redistribution alternate alternate alternate '
> 'pressuring pressuring Hag Hagestoneston Champions Champions Champions '
> 'redistribution redistribution sil sil sil redistribution redistributionbelt '
> 'redistribution redistributioniopiopiop redistribution redistribution carved '
> 'carved carved Hag Hag sil sil pressuring pressuring carved carved '
> 'compartment compartment compartment redistribution redistribution Voyager '
> 'Voyager Voyager redistribution redistribution pressuring pressuring '
I used the finetune.py script on the cnn tiny dataset from the tiny version of the bash script in the examples folder. I even attempted to do this finetuning with nearly 0 (1e-10) learning rate, so that I knew I wasn't significantly changing the model. This still led to gibberish predictions.
I tried a version where I loaded the pretrained model into the pipeline, saved it using pipeline.model.save_pretrained("path/to/dir") and in a new session, reloaded it using the second portion of the code provided by @claudiatin plus `bart_loaded = pipeline(task='summarization', model=model, device = 0, tokenizer=tokenizer)`
This worked correctly on predictions, however I did notice a significant change in inference time on the same article I tested (~3 seconds vs ~20 seconds). The only difference I could see vs using the config.json and pytorch_model.bin that came out of save_pretrained() vs the finetune.py checkpoint is that the save_pretrained() config.json contains the added key:value `"architectures": ["BartForConditionalGeneration"]`. I made this change to the config generated from my finetuned model, but it did not correct the gibberish generation problem.
@sshleifer , any ideas?<|||||>@gmlander, yes I have the same gibberish issue. It's not clear to me how to solve it. It would be nice to know that<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>> I found a solution. The model.generate() function is necessary to extract the predictions. I defined a separate function in the SummarizationTrainer() class to use self.model.generate(), and was able to use tokenizer.decoder() on the outputs.
>
> I was encountering issues when using self.tokenizer, so I assume using 'bart-large-cnn' tokenizer for similar custom summarization datasets is okay.
>
> @prabalbansal, I'm not sure if the same method will apply to T5, but it could work for predicting for a single instance, per one of your questions.
>
> My code is below:
>
> ```
> def text_predictions(self, input_ids):
> generated_ids = self.model.generate(
> input_ids=input_ids,
> num_beams=1,
> max_length=80,
> repetition_penalty=2.5,
> length_penalty=1.0,
> early_stopping=True,
> )
> preds = [
> self.tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True)
> for g in generated_ids
> ]
> return preds
> ...
> # Optionally, predict on dev set and write to output_dir
> if args.do_predict:
> # See https://github.com/huggingface/transformers/issues/3159
> # pl use this format to create a checkpoint:
> # https://github.com/PyTorchLightning/pytorch-lightning/blob/master\
> # /pytorch_lightning/callbacks/model_checkpoint.py#L169
> tokenizer = BartTokenizer.from_pretrained('bart-large-cnn')
> ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
> inputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt')['input_ids']
> checkpoints = list(sorted(glob.glob(os.path.join(args.output_dir, "checkpointepoch=*.ckpt"), recursive=True)))
> model = model.load_from_checkpoint(checkpoints[-1])
> model.eval()
> model.freeze()
> outputs = model.text_predictions(inputs)
> print(outputs)
> ```
>
> Thank you for the help, @sshleifer !
Hi @riacheruvu , I am facing a similar issue while tokenizing a piece of text in the QAGS repo. Line number 133 in https://github.com/W4ngatang/qags/blob/master/qg_utils.py gives me the same error which is due to ```tokenizer.decode()``` encountering a NoneType object. Would request if you can help. Please see the error log below:
```
Traceback (most recent call last):
File "qg_utils.py", line 169, in <module>
sys.exit(main(sys.argv[1:]))
File "qg_utils.py", line 166, in main
extract_gen_from_fseq_log(args.data_file, args.out_dir)
File "qg_utils.py", line 142, in extract_gen_from_fseq_log
gen = tokenizer.decode(tok_ids)
File "/home/test/miniconda3/envs/qags/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 3113, in decode
*kwargs,
File "/home/test/miniconda3/envs/qags/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 753, in _decode
sub_texts.append(self.convert_tokens_to_string(current_sub_text))
File "/home/test/miniconda3/envs/qags/lib/python3.6/site-packages/transformers/models/gpt2/tokenization_gpt2.py", line 264, in convert_tokens_to_string
text = "".join(tokens)
TypeError: sequence item 0: expected str instance, NoneType found
```<|||||>Hi @mriganktiwari, in my case, I needed to use `model.generate()` as input to `tokenizer.decode()` to solve this issue. I had an older version of HuggingFace at the time, so this might not be true today.
You could consider first using `model.generate()` with `tok_ids`, followed by `tokenizer.decode()`. I could be wrong, and I'm not sure what the input data_file consists of, but I would try this to see if it helps. |
transformers | 3,852 | closed | TFT5: get_input_embeddings() and get_output_embeddings() | In class TFT5Model
1. The get_input_embeddings() and get_output_embeddings() methods do not have any documentation provided in them
2. Furthermore, the get_output_embeddings provides the same output as the get_input embeddings. This needs to be resolved. Or flagged with a NotImplementedError | 04-18-2020 20:41:48 | 04-18-2020 20:41:48 | From what I'm seeing, the `TFT5Model` **does** have [documentation](https://huggingface.co/transformers/model_doc/t5.html#transformers.TFT5Model.get_input_embeddings) for `get_input_embeddings` and `get_output_embeddings`.
I believe the output embeddings and input embeddings should actually be the same. The embeddings are shared between input and output. Wdyt @patrickvonplaten? <|||||>Agree that the documentation is not the greatest, could definitely be improved :-).
The idea is that both `get_input_embeddings()` and `get_output_embeddings` return the **same** (this should be made clearer in the docs) embeddings matrix of dimension Vocab_size x Hidden_size.
Now, to make the embeddings matrix work for both input and output, we need to be able to get a Vocab_size -> Hidden_size mapping (for the input embeddings) and Hidden_size -> Vocab_size mapping (for the output embeddings). In TF, we use a trick here, by wrapping the embedding in this layer: https://github.com/huggingface/transformers/blob/c53cc018de70436196858ca91c1a34f1b8947028/src/transformers/modeling_tf_utils.py#L1521
And then by calling the embedding with different modes ("linear" and "embedding"), we get the correct mapping. See https://github.com/huggingface/transformers/blob/c53cc018de70436196858ca91c1a34f1b8947028/src/transformers/modeling_tf_t5.py#L1074 for example.
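As a rough illustration of the two modes (the `t5-small` checkpoint and token ids here are just for the example):

```python
import tensorflow as tf
from transformers import TFT5Model

model = TFT5Model.from_pretrained("t5-small")
shared = model.get_input_embeddings()  # same shared matrix as get_output_embeddings()

token_ids = tf.constant([[37, 423, 215]])
hidden = shared(token_ids, mode="embedding")  # ids -> vectors, shape (1, 3, d_model)
scores = shared(hidden, mode="linear")        # vectors -> vocab scores, shape (1, 3, vocab_size)
```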
So IMO, the code is fine; I agree with you @parthe that the documentation should be cleaner and explain the logic I described here a bit.
If you feel like it @parthe, it would be amazing if you could open a PR to straighten up the documentation here (the docstring). <|||||>Hi - I think I have related questions since I couldn't find answers in the documentation for get_input_embeddings()....
I've been using the approach [here](https://mccormickml.com/2019/05/14/BERT-word-embeddings-tutorial/) to access the hidden states to obtain embeddings (I've also updated it to be based on `transformers` in my own notebook instead of `pytorch-pretrained-bert`) . I was wondering how the output of get_input_embeddings maps to the output of the hidden states there? I've not been able to figure that out. Also, what would be the advantage of using one over the other?
Thanks!<|||||>So I would recommend that you take the `hidden_states` by setting `config.output_hidden_states=True` (Note: this API will be changed soon, see PR: #4538).
Then you can map the `hidden_states` to `lm_logits` (non normalized scores for each word in the vocab) using:
```python
embed_tokens = self.get_output_embeddings()
lm_logits = embed_tokens(<your_hidden_states>, mode="linear")
```
Let me know if this isn't clear :-) <|||||>Hi @patrickvonplaten, referring to the quote below (from this [comment](https://github.com/huggingface/transformers/issues/3852#issuecomment-618852195)):
> The idea is that both `get_input_embeddings()` and `get_output_embeddings` return the **same** (this should be made clearer in the docs) embeddings matrix of dimension Vocab_size x Hidden_size.
>
> Now, to make the embeddings matrix work for both input and output, we need to be able to get a Vocab_size -> Hidden_size mapping (for the input embeddings) and Hidden_size -> Vocab_size mapping (for the output embeddings). In TF, we use a trick here, by wrapping the embedding in this layer:
>
> https://github.com/huggingface/transformers/blob/c53cc018de70436196858ca91c1a34f1b8947028/src/transformers/modeling_tf_utils.py#L1521
>
> And then by calling the embedding with different modes ("linear" and "embedding"), we get the correct mapping. See
>
> https://github.com/huggingface/transformers/blob/c53cc018de70436196858ca91c1a34f1b8947028/src/transformers/modeling_tf_t5.py#L1074
>
> for example.
Does this only apply to TFT5Model or is the same across all models which has `get_input_embeddings` and `get_output_embeddings` method?<|||||>It should be the same across models that share input and output embeddings :-) |
transformers | 3,851 | closed | How to properly apply a tokenizer map function to a TensorFlow batched dataset? | Considering the following `batched_dataset`:
```python3
samples = ([{"query": "this is a query 1", "doc": "this is one relevant document regarding query 1"},
{"query": "this is a query 2", "doc": "this is one relevant document regarding query 2"},
{"query": "this is a query 3", "doc": "this is one relevant document regarding query 3"},
{"query": "this is a query 4", "doc": "this is one relevant document regarding query 4"},
])
dataset = tf.data.Dataset.from_generator(
lambda: samples, {"query": tf.string, "doc": tf.string})
batched_dataset = dataset.batch(2)
#{
#'doc': <tf.Tensor: shape=(2,), dtype=string, numpy=array(
# [b'this is one relevant document regarding query 1',
# b'this is one relevant document regarding query 2'], dtype=object)>,
#
#'query': <tf.Tensor: shape=(2,), dtype=string, numpy=array(
# [b'this is a query 1',
# b'this is a query 2'], dtype=object)>
#}
```
and a map function to tokenize this `batched_dataset`:
```python3
def tokenize(sample):
tokenized_query = tokenizer.batch_encode_plus(sample["query"].numpy().astype('str'), ...)
tokenized_doc = tokenizer.batch_encode_plus(sample["doc"].numpy().astype('str'), ...)
return (tokenized_query, tokenized_doc)
```
I could tokenize the entire batched_dataset using a for-loop:
```python3
for batch in batched_dataset:
tokenize(batch)
# (
# {'input_ids': <tf.Tensor: shape=(2, 8), dtype=int32, numpy=
# array([[ 101, 2023, 2003, 1037, 23032, 1015, 102, 0],
# [ 101, 2023, 2003, 1037, 23032, 1016, 102, 0]],
# dtype=int32)>,
# 'attention_mask': <tf.Tensor: shape=(2, 8), dtype=int32, numpy=
# array([[1, 1, 1, 1, 1, 1, 1, 0],
# [1, 1, 1, 1, 1, 1, 1, 0]], dtype=int32)>},
# {'input_ids': <tf.Tensor: shape=(2, 8), #dtype=int32, numpy=
# array([[ 101, 2023, 2003, 2028, 7882, 6254, 4953, 102],
# [ 101, 2023, 2003, 2028, 7882, 6254, 4953, 102]], dtype=int32)>,
# 'attention_mask': <tf.Tensor: shape=(2, 8), dtype=int32, numpy=
# array([[1, 1, 1, 1, 1, 1, 1, 1],
# [1, 1, 1, 1, 1, 1, 1, 1]], dtype=int32)>})
# ...
```
However, when using [`tf.data.Dataset.map`][1] the following error arises:
```python3
tokenized_dataset = batched_dataset.map(tokenize)
AttributeError: 'Tensor' object has no attribute 'numpy'
```
Then, how properly apply a tokenizer map function to a batched dataset?
**Note**: I published a working example on [`Google Colab`][2].
[1]: https://www.tensorflow.org/api_docs/python/tf/data/Dataset?version=nightly#map
[2]: https://colab.research.google.com/drive/1TUbWwEgbgPHwY1QjgRLIqLpjin310pdh | 04-18-2020 19:02:37 | 04-18-2020 19:02:37 | This seems like more of a TF-related question rather than a Transformers-related question. The issue seems to stem from your code trying to get the value of a tensor which is not eager, using numpy. I believe the `tf.data.Dataset.map` method must trace inputs, resulting in the Tensors not being eager.
Couldn't you build the `tf.data.Dataset` with already tokenized inputs instead?<|||||>The ideal would be to follow the pipeline (read from the file >> generate batches >> tokenize >> train >> evaluate). It is the most efficient approach as pointed in the [TensorFlow tutorial](https://www.tensorflow.org/tutorials/customization/performance).
When dealing with text, TensorFlow produces string tensors whose contents are stored as byte strings:
```python
<tf.Tensor: shape=(2,), dtype=string, numpy=array(
[b'Thê first utf-8 string of the batçh.',
b'Thê secônd utf-8 string of the batçh.'], dtype=object)>
```
However, I didn't find an efficient way to decode this kind of tensor into a list of strings. It's even worse if the byte string contains a non-ASCII character.
What I really need is one of these two options:
1. a tokenizer which is able to accept aforementioned byte string tensor as input to tokenize; or
2. a vectorized approach to transforming a byte string tensor into a string list.
Thank you very much for all your help.<|||||>@Ceceu I am running into this exact issue as well, and am wondering if you had found a good solution?<|||||>@oja,
The best solution I could find was adapting an example from the Tensorflow tutorial: [Load Text](https://www.tensorflow.org/tutorials/load_data/text#encode_text_lines_as_numbers) which uses `tf.py_function`.
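For reference, a rough sketch of how `tf.py_function` could be wired into the example above (the helper names, `max_length`, and dtypes are my own assumptions, not from the tutorial):

```python
import tensorflow as tf

def tokenize_eager(query, doc):
    # Runs eagerly inside tf.py_function, so .numpy() is available here
    q = tokenizer.batch_encode_plus(
        [x.decode("utf-8") for x in query.numpy()],
        max_length=16, pad_to_max_length=True, return_tensors="tf",
    )
    d = tokenizer.batch_encode_plus(
        [x.decode("utf-8") for x in doc.numpy()],
        max_length=16, pad_to_max_length=True, return_tensors="tf",
    )
    return q["input_ids"], q["attention_mask"], d["input_ids"], d["attention_mask"]

def tokenize_map(sample):
    q_ids, q_mask, d_ids, d_mask = tf.py_function(
        tokenize_eager,
        inp=[sample["query"], sample["doc"]],
        Tout=[tf.int32, tf.int32, tf.int32, tf.int32],
    )
    return ({"input_ids": q_ids, "attention_mask": q_mask},
            {"input_ids": d_ids, "attention_mask": d_mask})

tokenized_dataset = batched_dataset.map(tokenize_map)
```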
Let me know if I can help more.<|||||>@Ceceu got it, thank you!<|||||>Tokenizers can now output `numpy` arrays with `return_tensors='np'` so I think this should work now.<|||||>Thanks @thomwolf, I will check it out and if it works on TPU then it solves https://github.com/huggingface/transformers/issues/5066<|||||>> Thanks @thomwolf, I will check it out and if it works on TPU then it solves #5066
Did you check if it works on TPU?<|||||>It does not work on TPU<|||||>@oja, @Santosh-Gupta, @celsofranssa I too am facing this problem. Did you guys find any solution?<|||||>cc @Rocketknight1 <|||||>Bump, I'm still having this issue (on a CPU). |
transformers | 3,850 | closed | 'pad_to_max_length' in Pipeline should be set to True by default | # 🐛 Bug
pad_to_max_length is set to False by default in the Pipeline class' _parse_and_tokenize() function
## Information
Model I am using (Bert):
Language I am using the model on (English):
The problem arises when using: my own modified scripts:
```
import numpy as np
from transformers import AutoTokenizer, pipeline, TFDistilBertModel
model = TFDistilBertModel.from_pretrained('distilbert-base-uncased')
# model = AutoModel.from_pretrained('distilbert-base-uncased')
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased', pad_to_max_length=True)
pipe = pipeline('feature-extraction', model=model, tokenizer=tokenizer)
features = pipe(train_data['comment_text'][:100].values.tolist())
features = np.squeeze(features)
print(features.shape)
```
As there are about 100 inputs of variable length, the tokenizer should perform padding. But even after passing ```pad_to_max_length=True```, the padding operation is not performed.
I get the following error
```
~/anaconda3/envs/tf2/lib/python3.7/site-packages/transformers/pipelines.py in predict(self, X)
392 Scikit / Keras interface to transformers' pipelines. This method will forward to __call__().
393 """
--> 394 return self(X=X)
395
396 @contextmanager
~/anaconda3/envs/tf2/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
551
552 def __call__(self, *args, **kwargs):
--> 553 return super().__call__(*args, **kwargs).tolist()
554
555
~/anaconda3/envs/tf2/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *texts, **kwargs)
465
466 def __call__(self, *texts, **kwargs):
--> 467 inputs = self._parse_and_tokenize(*texts, **kwargs)
468 return self._forward(inputs)
469
~/anaconda3/envs/tf2/lib/python3.7/site-packages/transformers/pipelines.py in _parse_and_tokenize(self, pad_to_max_length, *texts, **kwargs)
456 return_tensors=self.framework,
457 max_length=self.tokenizer.max_len,
--> 458 pad_to_max_length=pad_to_max_length,
459 )
460
~/anaconda3/envs/tf2/lib/python3.7/site-packages/transformers/tokenization_utils.py in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, max_length, stride, truncation_strategy, pad_to_max_length, return_tensors, return_token_type_ids, return_attention_masks, return_overflowing_tokens, return_special_tokens_masks, return_offsets_mapping, return_input_lengths, **kwargs)
1260 raise ValueError(self.NO_PAD_TOKEN_FOR_BATCH_MSG)
1261 else:
-> 1262 raise ValueError(self.UNEVEN_SEQUENCES_FOR_BATCH_MSG)
1263 elif return_tensors == "pt" and is_torch_available():
1264 try:
ValueError: The sequences building the batch are not of the same size, no tensor can be built. Set `pad_to_max_length=True` to pad the smaller sequencesup to the larger sequence's length.
```
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
kaggle toxic tweet dataset.
## Expected behavior
Pipeline should perform padding operation.
When I set pad_to_max_length=True inside the _parse_and_tokenize() function of the Pipeline class,
I got the expected result: the pipeline performed its task perfectly (the padding operation was also done) and feature extraction was executed on all inputs (in my case 100).
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-5.3.0-46-generic-x86_64-with-debian-buster-sid
- Python version:3.8
- Tensorflow version (GPU?): 2.1
- Using GPU in script?: Yes | 04-18-2020 15:23:12 | 04-18-2020 15:23:12 | Hi @Akashdesarda, thanks for reporting this.
You should be able to pass `pad_to_max_length=True` when calling your pipeline:
`pipe(train_data['comment_text'][:100].values.tolist(), pad_to_max_length=True)`
Can you let us know if it works in your case ?
<|||||>Yes it worked, thanks for the solution. |
transformers | 3,849 | closed | Bug in run_glue | Hi
I am getting this error when running run_glue.py
ImportError: cannot import name 'TrainingArguments' from 'transformers' (/idiap/user/rkarimi/libs/anaconda3/envs/iborn/lib/python3.7/site-packages/transformers/__init__.py)
Traceback (most recent call last):
File "run_glue.py", line 34, in <module>
from transformers import (
ImportError: cannot import name 'HfArgumentParser' from 'transformers' (/idiap/user/rkarimi/libs/anaconda3/envs/iborn/lib/python3.7/site-packages/transformers/__init__.py)
To fix I searched in the repo on how you use the hf_argparser, I modified it as below:
from transformers.hf_argparser import HfArgumentParser
again, getting the error:
Traceback (most recent call last):
File "run_glue.py", line 44, in <module>
from transformers.hf_argparser import HfArgumentParser
ModuleNotFoundError: No module named 'transformers.hf_argparser'
however, this is how you call it in the tests but this does not work. Seems to me that things have been changed but the codes are not updated. thanks | 04-18-2020 12:57:58 | 04-18-2020 12:57:58 | You need to install from source as specified in the README. |
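For reference, installing from source roughly means:

```bash
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e .
```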
transformers | 3,848 | closed | Electra for question answering | # 🚀 Feature request
Electra for question answering
## Motivation
Electra is the highest rated single (non-ensemble) model on the SQuAD leaderboard
## Your contribution
I am not sure if I have the skills, but I'm willing to take a crack at it! Looking at the other QA architectures, it seems that I'll need to put a single linear layer (two outputs) on top of the Electra discriminator?
| 04-18-2020 02:32:39 | 04-18-2020 02:32:39 | You can basically copy+paste the code from BertForQuestionAnswering and just change it for ELECTRA. However, the original ELECTRA implementation to fine-tune on squad looks a bit different (it's more like in XLNet).
If you want to reproduce the official implementation it's probably best you take a look at the published code: https://github.com/google-research/electra/blob/master/finetune/qa/qa_tasks.py#L419<|||||>Any updates about this? I managed the creation of ElectraForQuestionAnswering on my own and the code works. If the progress of this thread is in stand-by I can proceed submitting my pull request<|||||>@volker42maru I implemented the same model as described in the official Electra Repository by google. I am still unable to reproduce the original paper results which is 75% EM on the squad v1 and 82% F1. The maximum I could get was 70% EM and 78% F1. @mmuffo94 Please let me know if you have successfully reproduced the results on the squad v1 dataset. It would be great help. I am using the Electra Small Model for now.<|||||>@ankur19030 you are refering to ELECTRA-Small I assume?
I actually finetuned and evaluated only on squad 2.0, but the score on squad 1.1 even with the Small model should be significantly higher. The different QA Head that is used for ELECTRA might not play such a big role in squad 1.1, because it's mostly used to get better predictions on answerability. You can try using a simple QA head first to check the performance on squad 1.1 with ELECTRA, e.g.:
```
class ElectraForQuestionAnswering(ElectraPreTrainedModel):
def __init__(self, config):
super(ElectraForQuestionAnswering, self).__init__(config)
self.num_labels = config.num_labels
self.electra = ElectraModel(config)
self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels)
self.init_weights()
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
start_positions=None,
end_positions=None,
):
outputs = self.electra(
input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds
)
sequence_output = outputs[0]
logits = self.qa_outputs(sequence_output)
start_logits, end_logits = logits.split(1, dim=-1)
start_logits = start_logits.squeeze(-1)
end_logits = end_logits.squeeze(-1)
outputs = (start_logits, end_logits,) + outputs[2:]
if start_positions is not None and end_positions is not None:
# If we are on multi-GPU, split add a dimension
if len(start_positions.size()) > 1:
start_positions = start_positions.squeeze(-1)
if len(end_positions.size()) > 1:
end_positions = end_positions.squeeze(-1)
# sometimes the start/end positions are outside our model inputs, we ignore these terms
ignored_index = start_logits.size(1)
start_positions.clamp_(0, ignored_index)
end_positions.clamp_(0, ignored_index)
loss_fct = torch.nn.CrossEntropyLoss(ignore_index=ignored_index)
start_loss = loss_fct(start_logits, start_positions)
end_loss = loss_fct(end_logits, end_positions)
total_loss = (start_loss + end_loss) / 2
outputs = (total_loss,) + outputs
return outputs # (loss), start_logits, end_logits, (hidden_states), (attentions)
```
However, if you try to use ELECTRA-Base or Large you also want to use `layerwise_lr_decay`, as used in the official implementation (https://github.com/google-research/electra/blob/master/model/optimization.py#L48). For me that made quite a big difference in score.
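For anyone curious, a very rough sketch of the layer-wise decay idea in PyTorch (this is my own illustration, not the official implementation; the decay factor and name matching are assumptions):

```python
from transformers import AdamW

def layerwise_lr_groups(model, lr=5e-5, decay=0.8, n_layers=12):
    # Deeper (top) layers keep almost the full lr, earlier layers and embeddings decay more
    groups = []
    for name, param in model.named_parameters():
        depth = n_layers + 1  # default: embeddings / anything below layer 0
        for i in range(n_layers):
            if f"encoder.layer.{i}." in name:
                depth = n_layers - i
                break
        if "qa_outputs" in name:
            depth = 0  # the newly initialized task head gets no decay
        groups.append({"params": [param], "lr": lr * (decay ** depth)})
    return groups

optimizer = AdamW(layerwise_lr_groups(model), lr=5e-5)
```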
BTW, be sure to use `google/electra-small-discriminator` weights, not the generator.<|||||>@volker42maru I initially tried with the simple QA head only, the same as you described but could not reproduce the results, not even near them even though I am using the same hyper-parameters as the ones used in the official implementation except layer wise LR decay. My EM is <70 on squad v1 which is around 5% less than the results from official repo. But I can say for sure that simply Layerwise LR decay in small model should not cause this much difference, as I have already tried removing Layerwise LR decay from official implementation and it still give the same results. So I have no idea where is the gap ?<|||||>Are you sure that's for squad v1 and not v2?
I just trained ELECTRA Small for 1 epoch using the simple QA head from above and I get the following score on squad v1 dev: `'exact': 73.46263008514664, 'f1': 82.46777637449017`
I used mostly default parameters and no `layerwise_lr_decay`:
```
--num_train_epochs 1
--per_gpu_train_batch_size 24
--learning_rate 5e-5
```<|||||>@volker42maru I will cross check again then, Thanks <|||||>It's been added https://github.com/huggingface/transformers/pull/4913 |
transformers | 3,847 | closed | Share more details on fine-tuning GPT-2 on WikiText-2 ? | Hello! Regarding https://github.com/huggingface/transformers/tree/master/examples#gpt-2gpt-and-causal-language-modeling, would you mind sharing what hyper-parameters you use to get this result ? How many epochs, what's the batch size? etc... | 04-17-2020 21:35:44 | 04-17-2020 21:35:44 | @xihui-wu To get the hyperparameters specific to the model (in this case `gpt2`), you can check the config file of `gpt2` with the code below:
```
from transformers import GPT2Config
print(GPT2Config())
```
Some higher level hyperparameters are still not included here (e.g. "epochs"). These can be set explicitly as arguments when running the CLI `run_language_modeling.py`; otherwise, the default values are used.
You can find the hyperparameters and their default values at the beginning of the `main` function in [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py). For example, the default epoch count (`num_train_epochs`) is 1.
Hope this helps!<|||||>> @xihui-wu To get the hyperparameters specific to the model (in this case `gpt2`), you can check the config file of `gpt2` with the code below:
>
> ```
> from transformers import GPT2Config
> print(GPT2Config())
> ```
>
> Some higher level hyperparameters are still not included here (e.g. "epochs"). These can be set explicitly as arguments when running the CLI `run_language_modeling.py`; otherwise, the default values are used.
>
> You can find the hyperparameters and their default values at the beginning of the `main` function in [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py). For example, the default epoch count (`num_train_epochs`) is 1.
>
> Hope this helps!
Thanks a lot @enzoampil! Do you know what hyper-parameters to get the result: "This takes about half an hour to train on a single K80 GPU and about one minute for the evaluation to run. It reaches a score of ~20 perplexity once fine-tuned on the dataset." ?<|||||>@xihui-wu This result comes from running the default training script with no explicitly specified hyperparameters; therefore, the **default** hyperparameters will apply.
>You can find the hyperparameters and their default values at the beginning of the main function in [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py). For example, the default epoch count (num_train_epochs) is 1.
For reference, this is the code snippet that fine-tunes `gpt2` with the default hyperparameters.
```
export TRAIN_FILE=/path/to/dataset/wiki.train.raw
export TEST_FILE=/path/to/dataset/wiki.test.raw
python run_language_modeling.py \
--output_dir=output \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE
```<|||||>> @xihui-wu This result comes from running the default training script with no explicitly specified hyperparameters; therefore, the **default** hyperparameters will apply.
>
> > You can find the hyperparameters and their default values at the beginning of the main function in [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py). For example, the default epoch count (num_train_epochs) is 1.
>
> For reference, this is the code snippet that fine-tunes `gpt2` with the default hyperparameters.
>
> ```
> export TRAIN_FILE=/path/to/dataset/wiki.train.raw
> export TEST_FILE=/path/to/dataset/wiki.test.raw
>
> python run_language_modeling.py \
> --output_dir=output \
> --model_type=gpt2 \
> --model_name_or_path=gpt2 \
> --do_train \
> --train_data_file=$TRAIN_FILE \
> --do_eval \
> --eval_data_file=$TEST_FILE
> ```
I got GPU memory error with k80 on this, what's the batch_size and how can I configure?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>> > @xihui-wu This result comes from running the default training script with no explicitly specified hyperparameters; therefore, the **default** hyperparameters will apply.
> > > You can find the hyperparameters and their default values at the beginning of the main function in [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py). For example, the default epoch count (num_train_epochs) is 1.
> >
> >
> > For reference, this is the code snippet that fine-tunes `gpt2` with the default hyperparameters.
> > ```
> > export TRAIN_FILE=/path/to/dataset/wiki.train.raw
> > export TEST_FILE=/path/to/dataset/wiki.test.raw
> >
> > python run_language_modeling.py \
> > --output_dir=output \
> > --model_type=gpt2 \
> > --model_name_or_path=gpt2 \
> > --do_train \
> > --train_data_file=$TRAIN_FILE \
> > --do_eval \
> > --eval_data_file=$TEST_FILE
> > ```
>
> I got GPU memory error with k80 on this, what's the batch_size and how can I configure?
You can use a per_device_train_batch_size=1, worked for me on a K80 |
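For reference, that roughly means something like the following (flag names depend on the script version; older example scripts used `--per_gpu_train_batch_size`, and gradient accumulation is optional to keep the effective batch size up):

```bash
python run_language_modeling.py \
    --output_dir=output \
    --model_type=gpt2 \
    --model_name_or_path=gpt2 \
    --do_train \
    --train_data_file=$TRAIN_FILE \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 8
```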
transformers | 3,846 | closed | Roberta (and BERT) tokenization converts "do not" to "don't" | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Roberta
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Execute the following:
```python
import transformers
tokenizer = transformers.RobertaTokenizer.from_pretrained('roberta-base')
print(tokenizer.decode(tokenizer.encode('is not')))
print(tokenizer.decode(tokenizer.encode('do not')))
```
The output is
```
<s> is not</s>
<s> don't</s>
```
## Expected behavior
The detokenization should not incorrectly introduce a contraction.
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-5.5.3-arch1-1-x86_64-with-glibc2.2.5
- Python version: 3.8.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| 04-17-2020 20:41:45 | 04-17-2020 20:41:45 | For anyone coming across this issue, you can disable all such transformations in `decode` by passing `clean_up_tokenization_spaces=False`. However, I maintain that this decoding behavior is not a sensible default.<|||||>fixed with #4024 |
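For reference, the workaround mentioned above is simply:

```python
print(tokenizer.decode(tokenizer.encode('do not'), clean_up_tokenization_spaces=False))
# e.g. '<s> do not</s>' — no contraction is introduced
```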
transformers | 3,845 | closed | list index out of range error when I execute a command with examples/run_glue.py | Hi, I am new to transformers, and I got a list index out of range error when I execute a command for examples/run_glue.py.
I want to do fine-tuning to classify Japanese words, and I modified some files following a website.
**The process was:**
**1. I used these commands below to install transformers and to use examples following github's instructions.**
```
$ pip install transformers
$ git clone https://github.com/huggingface/transformers
$ cd transformers
$ pip install .
$ pip install -r ./examples/requirements.txt
```
**2. I changed two files(transformers/data/processors/glue.py, transformers/data/metrics/__init__.py)**
I will show them at the end of this question.
**3. I made train.tsv and dev.tsv under the data/original/ directory after making this directory.**
I will show them at the end of this question.
**4. I executed a command below.**
```
$ python ./examples/run_glue.py
--data_dir=./src/transformers/data/original/
--model_type=bert
--model_name_or_path=bert-base-japanese-whole-word-masking
--task_name=original
--do_train
--do_eval
--output_dir=output/original
```
**5. list index out of range error occurred.**
```
Traceback (most recent call last):
File "./examples/run_glue.py", line 562, in <module>
main()
File "./examples/run_glue.py", line 510, in main
train_dataset = load_and_cache_examples(args, args.task_name, tokenizer, evaluate=False)
File "./examples/run_glue.py", line 358, in load_and_cache_examples
processor.get_dev_examples(args.data_dir) if evaluate else processor.get_train_examples(args.data_dir)
File "/home/haoki/Bert1/lib/python3.6/site-packages/transformers/data/processors/glue.py", line 519, in get_train_examples
return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
File "/home/haoki/Bert1/lib/python3.6/site-packages/transformers/data/processors/glue.py", line 538, in _create_examples
label = line[1]
IndexError: list index out of range
```
**My environment:**
OS: linux
IDE: pycharm
python: python 3.6
I have been stuck on this for almost a day... please help!
--------------------------------------------------------------------------
**Code for steps 2 and 3:**
**transformers/data/processors/glue.py (Transformers are installed with pip):**
```
~~~
#added this class
class OriginalProcessor(DataProcessor):
"""Processor for the original data set."""
def get_example_from_tensor_dict(self, tensor_dict):
"""See base class."""
return InputExample(
tensor_dict["idx"].numpy(),
tensor_dict["sentence"].numpy().decode("utf-8"),
None,
str(tensor_dict["label"].numpy()),
)
def get_train_examples(self, data_dir):
"""See base class."""
return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
def get_dev_examples(self, data_dir):
"""See base class."""
return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")
def get_labels(self):
"""See base class."""
return ["0", "1"]
def _create_examples(self, lines, set_type):
"""Creates examples for the training and dev sets."""
examples = []
for (i, line) in enumerate(lines):
# If TSV file has a header, it will take off it.
# if i == 0:
# continue
guid = "%s-%s" % (set_type, i)
text_a = line[0]
label = line[1]
examples.append(InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
return examples
glue_tasks_num_labels = {
"cola": 2,
"mnli": 3,
"mrpc": 2,
"sst-2": 2,
"sts-b": 1,
"qqp": 2,
"qnli": 2,
"rte": 2,
"wnli": 2,
"original": 2, # added
}
glue_processors = {
"cola": ColaProcessor,
"mnli": MnliProcessor,
"mnli-mm": MnliMismatchedProcessor,
"mrpc": MrpcProcessor,
"sst-2": Sst2Processor,
"sts-b": StsbProcessor,
"qqp": QqpProcessor,
"qnli": QnliProcessor,
"rte": RteProcessor,
"wnli": WnliProcessor,
"original": OriginalProcessor, # added
}
glue_output_modes = {
"cola": "classification",
"mnli": "classification",
"mnli-mm": "classification",
"mrpc": "classification",
"sst-2": "classification",
"sts-b": "regression",
"qqp": "classification",
"qnli": "classification",
"rte": "classification",
"wnli": "classification",
"original": "classification", # added
}
```
**transformers/data/metrics/__init__.py (Transformers are installed with pip)**
```
def glue_compute_metrics(task_name, preds, labels):
assert len(preds) == len(labels)
if task_name == "cola":
return {"mcc": matthews_corrcoef(labels, preds)}
elif task_name == "sst-2":
return {"acc": simple_accuracy(preds, labels)}
elif task_name == "mrpc":
return acc_and_f1(preds, labels)
elif task_name == "sts-b":
return pearson_and_spearman(preds, labels)
elif task_name == "qqp":
return acc_and_f1(preds, labels)
elif task_name == "mnli":
return {"acc": simple_accuracy(preds, labels)}
elif task_name == "mnli-mm":
return {"acc": simple_accuracy(preds, labels)}
elif task_name == "qnli":
return {"acc": simple_accuracy(preds, labels)}
elif task_name == "rte":
return {"acc": simple_accuracy(preds, labels)}
elif task_name == "wnli":
return {"acc": simple_accuracy(preds, labels)}
# added
elif task_name == "original":
return {"acc": simple_accuracy(preds, labels)}
else:
raise KeyError(task_name)
```
**train.tsv**
```
面白かった 0 #interesting
楽しかった 0 #fun
退屈だった 1 #boring
悲しかった 1 #sad
```
**dev.tsv**
```
満喫した 0 #satisfied
辛かった 1 #hard
```
| 04-17-2020 15:55:00 | 04-17-2020 15:55:00 | Hi, have you fixed this error? I just got the same error. Any help will be grateful!<|||||>Hi, I have already fixed it. I made a mistake on tsv files. After converting them to correct format by using LibreOffice, glue.py ran correctly.
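For anyone hitting the same `IndexError`: `_read_tsv` expects tab-separated columns, so each line needs a real tab between the text and the label. A small sketch of one way to write such files (the rows just mirror the example data above):

```python
import csv

rows = [("面白かった", "0"), ("楽しかった", "0"), ("退屈だった", "1"), ("悲しかった", "1")]
with open("train.tsv", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f, delimiter="\t")  # tab-delimited, as the glue processor expects
    writer.writerows(rows)
```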
I hope this could help you!<|||||>Thanks! I fixed this error. There were some errors in my input file. |
transformers | 3,844 | closed | [TF T5] Higher tolerance for past testing in TF T5 | Higher tolerance to be certain that tests pass | 04-17-2020 15:20:01 | 04-17-2020 15:20:01 | |
transformers | 3,843 | closed | [T5] Higher tolerance for past testing in T5 | Higher tolerance to be certain that tests pass | 04-17-2020 15:19:36 | 04-17-2020 15:19:36 | |
transformers | 3,842 | closed | Fix bug in run_*.py scripts: double wrap into DataParallel during eval | This bug is present in several scripts in `examples`:
* `examples/run_language_modeling.py`
* `examples/run_multiple_choice.py`
* `examples/run_xnli.py`
* `examples/ner/run_ner.py`
* `examples/mm-imdb/run_mmimdb.py`
* `examples/hans/test_hans.py`
The problem is exactly the same as it was in #1801 and in #1504:
During the evaluation, we are trying to wrap the `model` into `DataParallel` second time (we did it already during training). As a result we have:
> "RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1" (ids of devices may differ)
The fix is straightforward:
Before:
```python
# multi-gpu eval
if args.n_gpu > 1:
model = torch.nn.DataParallel(model)
```
After:
```python
# multi-gpu eval
if args.n_gpu > 1 and not isinstance(model, torch.nn.DataParallel):
model = torch.nn.DataParallel(model)
```
| 04-17-2020 14:54:14 | 04-17-2020 14:54:14 | Merging this, though it will be rendered obsolete (for a subset of the script initially) by #3800 |
transformers | 3,841 | closed | Reproducing squad score with TFXLMRoberta? | # ❓ Questions & Help
Hello all,
First of all, thanks for the library, which is very helpful. There have been several discussions regarding XLM-Roberta and question answering (#3732 #3694 ). On my side, I added a TFXLMRobertaForQuestionAnswering but never reproduced a decent squad score (I was always below 50% f1).
The base LM I was using was xlm-roberta-base converted to tf, or the community ones (jplu). I tried with type_vocab_size=1 or type_vocab_size=2 (in order to use segment_ids as for Bert, I did it by overwriting create_token_type_ids_from_sequences in the tokenizer). This did not really change anything. I am using AdamWeightDecay as for Bert finetuning, and I am starting to believe that there is no problem with my code, since I saw yesterday the PR #3812 which is basically the same code as mine. For info, I am not only training on squad, I am training on squad + mlqa. Using the exact same approach, I got very good scores with Bert multilingual.
Since I am a bit stuck, I was wondering if someone here (from huggingface or not) managed to properly train XLMRoberta with tensorflow and get good squad results. If so, it would be super helpful if you could share the parameters that were used (learning rate, number of epochs, etc).
** | 04-17-2020 12:42:17 | 04-17-2020 12:42:17 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,840 | closed | Decoding predictions for masked language modeling task using custom BPE | # ❓ Questions & Help
## Details
**A link to original question on Stack Overflow**:
https://stackoverflow.com/questions/61232399/decoding-predictions-for-masked-language-modeling-task-using-custom-bpe | 04-17-2020 11:39:48 | 04-17-2020 11:39:48 | Maybe @mfuntowicz ? :-) <|||||>Bump<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,839 | closed | Different output encode and encode_plus | Hi everyone,
I'm struggling with the following scenario:
I have the following input:
```
sentence = "Sheldon : If a photon is directed through a plane with two slits in it and either is observed . Sheldon : . it will not go through both . If unobserved , it will . UNKNAME : If it 's observed after it left the plane , before it hits its target . Sheldon : . it will not have gone through both slits . Agreed . Leonard : What 's your point ? Sheldon : There 's no point , I just think it 's a good idea for a T shirt . UNKNAME : Excuse me . Hang on . Leonard : One across is Aegean , eight down is Nabokov . Leonard : Twenty six across is MCM . Leonard : Fourteen down is . Leonard : Move your finger . UNKNAME : . phylum , which makes 14 across Port Au Prince . Leonard : See , Papa Doc 's capital idea , that 's Port Au Prince . Leonard : Haiti . UNKNAME : Can I help you ? Yes . UNKNAME : Um , is this the high IQ sperm bank ?"
```
I am using bert-base-uncased and I get different length of tokens depending if I use `tokenizer.encode` or `tokenizer.encode_plus`.
Below is an example:
```
test = tokenizer.encode(sentence, add_special_tokens=True, max_length=512)
print(len(test))
214
```
```
test2 = tokenizer.encode_plus(sentence, add_special_tokens=True, max_length=512)["input_ids"]
print(len(test2))
189
```
In above scenario I expect the amount of tokens to be the same length. I looked at the documentation but I cannot find an explanation for the difference. It is problematic for me because I need to use tokenizer.batch_encode_plus but for my model I expect and need the length of 189 instead of 214.
Can someone please explain why the output is different and how to make encode_plus output the same as encode?
Thanks in advance ;)
| 04-17-2020 06:54:46 | 04-17-2020 06:54:46 | I made a mistake in my code. The output is the same. |
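For reference, `encode` is essentially a shortcut for `encode_plus(...)["input_ids"]`, so with identical arguments both calls should produce the same ids:

```python
ids_a = tokenizer.encode(sentence, add_special_tokens=True, max_length=512)
ids_b = tokenizer.encode_plus(sentence, add_special_tokens=True, max_length=512)["input_ids"]
assert ids_a == ids_b  # same token ids either way
```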
transformers | 3,838 | closed | Custom tokenizer not loaded in AutoTokenizer | I am training a language model from scratch. I trained a **ByteLevelBPETokenizer**, and when I try to load this tokenizer using **AutoTokenizer**, it gives me the following error.
OSError: Model name './MyRobertaConfig/' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). We assumed './MyRobertaConfig/' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
I have not added tokenizer_config.json file in config directory, I don't think it should be an issue. Now do I need to migrate or transform my custom tokenizer to make it compatible with transformers tokenizers or what. | 04-17-2020 05:24:36 | 04-17-2020 05:24:36 | If it's a BPETokenizer, you can load it with `RobertaTokenizer.from_pretrained("vocab.json", "merges.json")`<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
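A minimal sketch of that suggestion (the directory name is just the example from the error above; it should contain the `vocab.json` and `merges.json` written when the ByteLevelBPETokenizer was saved):

```python
from transformers import RobertaTokenizer

# Point at the folder holding vocab.json and merges.json
tokenizer = RobertaTokenizer.from_pretrained("./MyRobertaConfig/")
print(tokenizer.encode("hello world"))
```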
|
transformers | 3,837 | closed | PretrainedTokenizer cleanup: Typehints, decode_batch | This is very very minor cleanup
- adds `decode_batch` which calls `.decode` on every entry in a list.
- A few cosmetic changes to tokenization_utils.py (type hints using defaultdict)
- adds type hints in files I touched. | 04-17-2020 04:33:03 | 04-17-2020 04:33:03 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3837?src=pr&el=h1) Report
> Merging [#3837](https://codecov.io/gh/huggingface/transformers/pull/3837?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f399c00610506325bc1690f0e68c6885e73395ec&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `84.21%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3837?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3837 +/- ##
==========================================
- Coverage 78.48% 78.48% -0.01%
==========================================
Files 106 106
Lines 17930 17934 +4
==========================================
+ Hits 14072 14075 +3
- Misses 3858 3859 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3837?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3837/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `87.50% <82.35%> (-0.07%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/3837/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `83.46% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3837/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.76% <0.00%> (ø)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3837?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3837?src=pr&el=footer). Last update [f399c00...41502fa](https://codecov.io/gh/huggingface/transformers/pull/3837?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,836 | closed | Update camembert-base-README.md | 04-17-2020 02:27:01 | 04-17-2020 02:27:01 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3836?src=pr&el=h1) Report
> Merging [#3836](https://codecov.io/gh/huggingface/transformers/pull/3836?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f0c96fafd16d206b22a74fe76b251414f7314703&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3836?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3836 +/- ##
==========================================
+ Coverage 78.47% 78.48% +0.01%
==========================================
Files 106 106
Lines 17930 17930
==========================================
+ Hits 14071 14073 +2
+ Misses 3859 3857 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3836?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3836/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.92% <0.00%> (+0.32%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3836?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3836?src=pr&el=footer). Last update [f0c96fa...2142ca8](https://codecov.io/gh/huggingface/transformers/pull/3836?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Markdown for the table was broken so I fixed it in 60a42ef1c04591e0709429276ccbc02608b7d47d
Thank you @benjamin-mlr !<|||||>Thank you Julien !
<|||||>Hi @julien-c ,
A quick question regarding Pipeline and the new camembert checkpoints.
The pipeline "fill-mask" is not currently working for the new camembert checkpoints
e.g : camembert_fill_mask = pipeline("fill-mask",model="camembert/camembert-base-ccnet-4gb",tokenizer="camembert-base")
Error : "Model name 'camembert/camembert-base-ccnet-4gb' was not found in model name list..."
Should we do something to make it work ?
Thanks ! <|||||>@benjamin-mlr Works successfully for me. What's your version of transformers?<|||||>Hi Julien,
I got confused by the warning and the "not that accurate prediction" on the
masked sentence I tried out. It works, I can confirm now.
Thanks,
Benjamin
<|||||>Yes I indeed noticed that that model was outputting weird predictions. |
|
transformers | 3,835 | closed | Transfo-XL cannot generate long texts. Using run_generation.py to generate texts | # ❓ Questions & Help
## Details
Trying to generate long text with the Transfo-XL text generator, but I continuously get a warning which may be the reason I am unable to generate the long text.
Here is the warning: WARNING - transformers.modeling_utils - Setting `pad_token_id` to 0 (first `eos_token_id`) to generate sequence
This is how I run the code:
cd transformers/
python examples/run_generation.py --model_type transfo-xl --model_name_or_path transfo-xl-wt103 \
--prompt "China wants to take a victory lap over its handling of the coronavirus outbreak" --repetition 2.2 \
--length 500 \
--temperature 0.8 --k 8
| 04-16-2020 20:25:51 | 04-16-2020 20:25:51 | What is the error?<|||||>So, first, it does not generate long texts. It usually generates texts with
about 10 tokens. And I think it is because of the warning:
WARNING - transformers.modeling_utils - Setting `pad_token_id` to 0 (first
`eos_token_id`) to generate sequence
Other generators in run_generation.py are able to generate the length of
texts, specified in the command.
To recreate this error/warning: I cloned the huggingface git repo,
installed transformers and then ran the following on the command line:
cd transformers/
python examples/run_generation.py --model_type transfo-xl --model_name_or_path transfo-xl-wt103 \
--prompt "China wants to take a victory lap over its handling of the coronavirus outbreak" \
--repetition 2.2 \
--length 500 \
--temperature 0.8 --k 8
What do you think I am missing?
I tried adding the argument `--pretrained_init_configuration {"pad_token":0}` to the command,
because transformers.modeling_utils defines it and is imported into the run_generation.py file. But as I
suspected, this resulted in a non-recognition error.
<|||||>I mentioned this problem in a previous issue #3769.
If you read what Colanim said, basically, text generation stops at the eos token, and you can prevent that by specifying a min_length which forces the model to not generate a eos token until the min_length is reached.
However, the generate() method (the method run_generation uses to generate text) has another argument called max_length, which specifies the maximum length generated. If you look in the code, the length argument is equivalent to the max_length argument in the generate method, meaning length only specifies the maximum length not the minimum. For other models like XLNet, this is not a problem as it doesn't generate eos tokens (only eop and eod). But, for Transformer-XL this causes it to stop short.
You could fix this problem by editing the script changing the min_length argument equal to the length argument and max_length equal to length+1 (Otherwise it uses the default max_length of 20 and will still stop short).
However, right now both Transformer-XL and XLNet have an exponential time complexity, meaning if you want to generate a lot tokens it will take a long time, e.g., generating 4000 tokens will take 850 hours on a P100.
So, if you really need to generate long text check out [this](https://github.com/rusiaaman/XLnet-gen) repository, which uses XLNet and is able to generate a max of around 4000 tokens in 3 hours with 16 GB ram. If you are generating less than 1024 tokens you should use GPT-2 instead as it is faster, more coherent, and fine-tunable using the language modeling script.
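To make the min_length/max_length point concrete, here is a minimal sketch (not the official script change; it assumes the v2.x `generate()` arguments and reuses the prompt from this thread):
```python
import torch
from transformers import TransfoXLLMHeadModel, TransfoXLTokenizer

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")

prompt = "China wants to take a victory lap over its handling of the coronavirus outbreak"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

target_length = input_ids.shape[-1] + 500
output = model.generate(
    input_ids,
    min_length=target_length,   # block the eos token until this many tokens exist
    max_length=target_length,   # upper bound, so generation stops here
    do_sample=True,
    top_k=8,
    temperature=0.8,
)
print(tokenizer.decode(output[0]))
```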
<|||||>Thank you, but how did you get the Transformer-XL to generate long coherent texts? Now it generates long texts, but the article is not coherent.
<|||||>Closing for now since #3769 seems to be resolved.
Also note that the `text-generation` pipeline as shown here: https://huggingface.co/transformers/usage.html#text-generation should be used :-) |
transformers | 3,834 | closed | i want help to create saved model(.pth) from Pytorch Dump(pytorch_model.bin) if possible! | i have, PROJECT(folder)
├── pytorch_model.bin
├── bert_config.json
└── vocab.txt
I tried saving it with
`torch.save( pytorch_model.bin , PATH)`
but it came back with the error
`-bash: syntax error near unexpected token `pytorch_model.bin,'`
What am I doing wrong?
Please help me convert the pretrained model to a saved model (.pth)!!
| 04-16-2020 19:35:16 | 04-16-2020 19:35:16 | I used `transformers-cli convert` to make `pytorch_model.bin` from checkpoints<|||||>Did you try to put it in quotes? If you have a model you should do `torch.save(model.state_dict(), PATH)`.
Please take a look at the [PyTorch documentation](https://pytorch.org/tutorials/beginner/saving_loading_models.html). We prefer using `model.save_pretrained(PATH)`, however, as it saves the configuration object alongside it which is necessary when loading the model afterwards. |
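A minimal sketch combining both suggestions above (paths are placeholders, assuming the folder layout from the question):
```python
import torch
from transformers import BertConfig, BertModel

config = BertConfig.from_json_file("PROJECT/bert_config.json")
model = BertModel.from_pretrained("PROJECT/pytorch_model.bin", config=config)

# Option 1: plain PyTorch state dict (.pth)
torch.save(model.state_dict(), "PROJECT/model.pth")

# Option 2 (preferred): weights + config, reloadable later with from_pretrained()
model.save_pretrained("PROJECT/")
```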
transformers | 3,833 | closed | Remove tqdm logging when using pipelines. | Attempt to fix #3744
Introduce `tqdm_enabled` parameter on `squad_convert_examples_to_features()` default to `True` and set to `False` in QA pipelines. | 04-16-2020 16:56:28 | 04-16-2020 16:56:28 | |
transformers | 3,832 | closed | The issue I met when do the NER task for Universal language by using XLM-R | # ❓ Questions & Help
## Details
First, here is the rough logic we use for slot tagging on an English query, taking the query "what's the weather in Beijing" with a location slot as an example:
1. We split the query on spaces and label it as "O O O O B-location" when generating the training data.
2. We use SentencePiece to tokenize the query into ['_what', "'", 's', '_the', '_weather', '_in', '_bei', 'jing'], with the corresponding tagging mask [1, 0, 0, 1, 1, 1, 1, 0] (the first token of each word is 1, the other tokens of the word are 0).
3. We run the NER model to predict, and then take a token's prediction as the word's prediction whenever the token's mask is 1.
When we apply the same logic to a CJK language, e.g. "今天北京天气怎么样?", the SentencePiece tokens are ['▁', '北京', '今天', '天气', '怎么样', '?']. Since there are no spaces in the query, the whole query is treated as one word and the corresponding tagging mask is [1, 0, 0, 0, 0, 0], so we cannot recover the location slot "北京".
The cause is that the word-splitting conventions for English and CJK are different. In French, "Je t'aime" means "I love you" ("Je" = "I", "t'" = "you", "aime" = "love"), and "'" acts as a separator.
I think we can solve it with the following approach (see the sketch after this list):
1. First detect the language of the query.
2. For languages like English we keep the logic above; for languages like CJK, after splitting on spaces, we treat the tokens produced by SentencePiece as the words:
a. For example, "北京今天天气怎么样,西雅图呢?" is split into ["北京今天天气怎么样", "西雅图呢?"].
b. For "北京今天天气怎么样", the tokenization is ['▁', '北京', '今天', '天气', '怎么样', ','] and we mark each token as a word to predict, so the corresponding tagging mask is [0, 1, 1, 1, 1, 0].
We then run the NER model and take a token's prediction as the word's prediction whenever the token's mask is 1, so we can predict '北京' as the location slot.
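As a rough illustration of the first-subtoken masking described above (a sketch only; `xlm-roberta-base` is just an example checkpoint):
```python
from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")

def first_subtoken_mask(words):
    # Mark the first SentencePiece piece of each word with 1, the rest with 0.
    tokens, mask = [], []
    for word in words:
        pieces = tokenizer.tokenize(word)
        if not pieces:
            continue
        tokens.extend(pieces)
        mask.extend([1] + [0] * (len(pieces) - 1))
    return tokens, mask

print(first_subtoken_mask("what's the weather in Beijing".split()))
```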
But for this, it needs to have specific language knowledge and language classification detector, any better ideas do you have? | 04-16-2020 16:54:39 | 04-16-2020 16:54:39 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,831 | closed | AlbertModel output is not HalfTensor when using apex fp16 | # 🐛 Bug
Despite the fact that I turned the model to fp16 with apex, the hidden representation output is not half tensor (see code snippet for details) while class heads are half tensors.
## Information
Model I am using AlbertModel.
Language I am using the model on arbitrary:
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```py
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AlbertModel, AlbertConfig
from transformers.modeling_albert import AlbertMLMHead
import apex
pad_token_id = 0
bos_token_id = 2
eos_token_id = 3
vocab_size = 20
config = {
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu_new",
"hidden_dropout_prob": 0.1,
"embedding_size": 64,
"hidden_size": 256,
"initializer_range": 0.02,
"intermediate_size": 1024,
"max_position_embeddings": 512,
"num_attention_heads": 2, # smaller than usual
"num_hidden_layers": 2, # smaller than usual
"num_hidden_groups": 1,
"net_structure_type": 0,
"gap_size": 0,
"num_memory_blocks": 0,
"inner_group_num": 1,
"down_scale_factor": 1,
"type_vocab_size": 2,
"vocab_size": vocab_size,
"pad_token_id": pad_token_id,
"bos_token_id": bos_token_id,
"eos_token_id": eos_token_id,
}
albert_config = AlbertConfig(**config)
encoder: AlbertModel = AlbertModel(albert_config)
masked_lm = AlbertMLMHead(albert_config)
optimizer = torch.optim.Adam(encoder.parameters(), lr=0.0001)
encoder = encoder.cuda()
model, optimizer = apex.amp.initialize(encoder, optimizer, opt_level="O1", )
# When giving LongTensor as input, the class heads are half tensors,
# but hidden representations are not half!
long_input = torch.randint(1, 10, (10,5)).cuda()
f1, m1= encoder(long_input)
f1.type()
"""
'torch.cuda.FloatTensor'
"""
m1.type()
"""
'torch.cuda.HalfTensor'
"""
# When giving HalfTensor as input, it crashes
half_input = long_input.half()
f2, m2= encoder(half_input)
"""
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mnt/castor/seas_home/r/rasooli/torch_env/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/mnt/castor/seas_home/r/rasooli/torch_env/lib64/python3.6/site-packages/transformers/modeling_albert.py", line 570, in forward
input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
File "/mnt/castor/seas_home/r/rasooli/torch_env/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/mnt/castor/seas_home/r/rasooli/torch_env/lib64/python3.6/site-packages/transformers/modeling_bert.py", line 173, in forward
inputs_embeds = self.word_embeddings(input_ids)
File "/mnt/castor/seas_home/r/rasooli/torch_env/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/mnt/castor/seas_home/r/rasooli/torch_env/lib64/python3.6/site-packages/torch/nn/modules/sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/mnt/castor/seas_home/r/rasooli/torch_env/lib64/python3.6/site-packages/torch/nn/functional.py", line 1484, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError:
"""
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.7.0
- Platform:
- Python version: Python 3.6.10
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?):
- Using GPU in script?: Yes, NVIDIA-SMI 418.67; GeForce RTX 208
- Using distributed or parallel set-up in script?: Optional parallel
| 04-16-2020 16:23:29 | 04-16-2020 16:23:29 | Hi @rasoolims,
This seems correct. Basically, using `opt_level="O1"` means apex will add some downcasting to `float16` around a set of whitelisted operations, such as GEMM or Conv. This way, these operations will benefit from using Tensor Cores on the latest devices, achieving higher throughput.
On the other hand, for some operations you want the whole spectrum of representable values to keep a very high accuracy in the output; this is true for many activation functions such as `Softmax` or `GeLU`.
What you observe here with your outputs (`f1, m1`) is directly related to the dynamic downcasting of apex:
- `f1`: comes from a GeLU operation in `AlbertLayer`, which is not downcast to `float16`
- `m1`: comes from a Linear layer operation, which is implemented through `gemm` and greatly benefits from using Tensor Cores.
In addition, you should not be doing any input type conversion / downcasting when using `opt_level="O1"`.<|||||>Hi @mfuntowicz
Thanks for the response.
Do you mean with the current setting, it is better to just use fp32? Or do you recommend changing the opt_level or activation function?
<|||||>It really depends what you want to do and which level of Mixed-Precision training you want to achieve.
With O1, only the operations which can expect improvements from running specialised CUDA/cuDNN fp16 kernels (on Tensor Cores) will be patched to have fp16 weights and input/output conversions.
With O2, all the weights of the model are converted to fp16 with the exception of some layers like Batch Norm, so you have quasi-complete fp16 training.
With O3, everything is run through fp16.<|||||>#thanks @mfuntowicz |
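To make the opt-level distinction concrete, here is a small sketch (assuming apex is installed; the tiny config loosely mirrors the one from the bug report). Note the inputs stay `LongTensor` ids in every case:
```python
import torch
import apex
from transformers import AlbertConfig, AlbertModel

config = AlbertConfig(vocab_size=20, embedding_size=64, hidden_size=256,
                      num_attention_heads=2, num_hidden_layers=2, intermediate_size=1024)
model = AlbertModel(config).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# O1: patch whitelisted ops only; O2: quasi-full fp16 weights; O3: everything in fp16.
model, optimizer = apex.amp.initialize(model, optimizer, opt_level="O2")

input_ids = torch.randint(1, 10, (10, 5)).cuda()  # still LongTensor, never .half()
hidden, pooled = model(input_ids)
print(hidden.dtype, pooled.dtype)  # with O2 both come back as torch.float16
```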
transformers | 3,830 | closed | Faster mask computation | # 🚀 Feature request
Currently, the masking is done using the full prediction matrix which is both memory and computation inefficient. One example is in [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_albert.py#L675). I think with Pytorch indexing, first and foremost, we don't need to do full mask matrix construction. Second, it can be much faster. Fairseq does a similar thing in [here](https://github.com/pytorch/fairseq/blob/cce6dcb1cca85955a82879ea5064fe8202e8f412/fairseq/models/roberta/model.py#L217)
## Motivation
If input is [n,m], currently the code creates a clone of [n,m] mask where non-masked inputs are -100. Then it does a full output projection: if input in [n, m] and hidden representation is [n, m, h], the final output will be a huge [n, m, v] if v is the vocabulary size. Instead we can think of mask indices of size k<< n* m, and thus the we can extract a [k, h] submatrix from [n, m, h], then can have a smaller output result [k, v].
## Your contribution
This is a sample of the code (extracted from a bigger code base of mine; some variables are assumed to be defined elsewhere, see the comments):
```py
import random

import torch
import torch.nn.functional as F

# Assumed to be defined in the surrounding training loop:
#   input_ids: LongTensor [n, m], pads: mask of padding positions,
#   mask_id / vocab_size: tokenizer constants, albert_model / albertMLMHead: the model pieces.
mask_prob = 0.15
mask = torch.empty(input_ids.size()).uniform_(0, 1) < mask_prob
mask[pads] = False  # We should not mask pads.
masked_ids = input_ids[mask]
replacements = masked_ids.clone()
for i in range(len(replacements)):
    r = random.random()
    if r < 0.8:
        replacements[i] = mask_id
    elif r < 0.9:
        # Replace with another random word.
        random_index = random.randint(0, vocab_size - 1)
        replacements[i] = random_index
    else:
        # keep the word
        pass
input_ids[mask] = replacements
text_hidden, text_cls_head = albert_model(texts, attention_mask=pads)
# Only project the k masked positions instead of computing the full [n, m, v] output.
masked_hidden_state = text_hidden[mask]
output_predictions = F.log_softmax(albertMLMHead(masked_hidden_state), dim=1)
```
| 04-16-2020 15:56:12 | 04-16-2020 15:56:12 | Hi @rasoolims ,
That sounds reasonable what you are saying! I wanted to take a look into masking optimization in a couple of weeks. If you feel like it, you could also open a PR and we take a look together :-) <|||||>Feel free to open a PR for this :-) Closing for now |
transformers | 3,829 | closed | Can't install transformers in conda environment | # 🐛 Bug
I tried to install transformers into a conda environment
```
pip install transformers
Collecting transformers
Using cached transformers-2.8.0-py3-none-any.whl (563 kB)
Collecting tokenizers==0.5.2
Downloading tokenizers-0.5.2-cp38-cp38-macosx_10_15_x86_64.whl (1.1 MB)
|████████████████████████████████| 1.1 MB 1.6 MB/s
Collecting tqdm>=4.27
Downloading tqdm-4.45.0-py2.py3-none-any.whl (60 kB)
|████████████████████████████████| 60 kB 23.6 MB/s
Collecting filelock
Downloading filelock-3.0.12-py3-none-any.whl (7.6 kB)
Collecting requests
Using cached requests-2.23.0-py2.py3-none-any.whl (58 kB)
Collecting regex!=2019.12.17
Using cached regex-2020.4.4.tar.gz (695 kB)
Collecting boto3
Using cached boto3-1.12.39-py2.py3-none-any.whl (128 kB)
Requirement already satisfied: numpy in ./Anaconda/anaconda3/envs/nlp/lib/python3.8/site-packages (from transformers) (1.18.2)
Collecting sentencepiece
Using cached sentencepiece-0.1.83.tar.gz (497 kB)
ERROR: Command errored out with exit status 1:
command: /Users/chen_bowen/Anaconda/anaconda3/envs/nlp/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/lz/65hfhw790_z85w_09kvvm37r0000gn/T/pip-install-lezphia4/sentencepiece/setup.py'"'"'; __file__='"'"'/private/var/folders/lz/65hfhw790_z85w_09kvvm37r0000gn/T/pip-install-lezphia4/sentencepiece/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/lz/65hfhw790_z85w_09kvvm37r0000gn/T/pip-install-lezphia4/sentencepiece/pip-egg-info
cwd: /private/var/folders/lz/65hfhw790_z85w_09kvvm37r0000gn/T/pip-install-lezphia4/sentencepiece/
Complete output (7 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/var/folders/lz/65hfhw790_z85w_09kvvm37r0000gn/T/pip-install-lezphia4/sentencepiece/setup.py", line 29, in <module>
with codecs.open(os.path.join('..', 'VERSION'), 'r', 'utf-8') as f:
File "/Users/chen_bowen/Anaconda/anaconda3/envs/nlp/lib/python3.8/codecs.py", line 905, in open
file = builtins.open(filename, mode, buffering)
FileNotFoundError: [Errno 2] No such file or directory: '../VERSION'
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
```
python version: Python 3.8.2
OS: Mac OSX 10.15.3
Anaconda version: conda 4.8.0 | 04-16-2020 15:41:34 | 04-16-2020 15:41:34 | Looking at the error message, you seem to be running into an error with the sentencepiece package, not transformers.
I looked at the sentencepiece GitHub repo and there is an open issue on this here:
https://github.com/google/sentencepiece/issues/452<|||||>Looks like this issue can be closed now. @chen-bowen can you confirm?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This happened to me while installing Transformers. The issue is with sentencepiece as stated above. I did the following steps:
- To install sentencepiece: `conda install -c powerai sentencepiece`
After, I did the usual _pip install transformers_.
Was able to get it set and running.
|
transformers | 3,828 | closed | Tanh torch warnings | This pull request fixes the warning generated by using torch.nn.functional.tanh (which is deprecated) by changing it to torch.tanh. | 04-16-2020 15:26:40 | 04-16-2020 15:26:40 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3828?src=pr&el=h1) Report
> Merging [#3828](https://codecov.io/gh/huggingface/transformers/pull/3828?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b1e2368b32f3af88a920dac47cfc02a869409b20&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3828?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3828 +/- ##
=======================================
Coverage 78.47% 78.47%
=======================================
Files 106 106
Lines 17924 17924
=======================================
Hits 14066 14066
Misses 3858 3858
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3828?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/3828/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `82.35% <ø> (ø)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3828?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3828?src=pr&el=footer). Last update [b1e2368...06daacd](https://codecov.io/gh/huggingface/transformers/pull/3828?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>LGTM |
transformers | 3,827 | closed | ModuleNotFoundError: No module named '__main__.utils_summarization'; '__main__' is not a package | # 🐛 Bug
## Information
I am using ./examples/summarization/bertabs/
`python run_summarization.py \
--documents_dir $data_dir\
--summaries_output_dir$output_dir \
--no_cuda true \
--batch_size 4 \
--min_length 50 \
--max_length 200 \
--beam_size 5 \
--alpha 0.95 \
--block_trigram true
`
returns
`ModuleNotFoundError: No module named '__main__.utils_summarization'; '__main__' is not a package`
My environment
- `transformers` version: 2.8.0
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.3
- PyTorch version (GPU?): 1.1.0 (False)
- Tensorflow version (GPU?): 2.0.0-beta1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
Thank you!
| 04-16-2020 14:57:22 | 04-16-2020 14:57:22 | closing the issue
changed the from `.utils_summarization import ( `
to ` from utils_summarization import ( `
in run_summarization.py
and solved the issue |
transformers | 3,826 | closed | [readability] consolidate examples/summarization/bart and examples/summarization/t5 | This involves
- [ ] consolidating README.md
- [ ] consolidating evaluate_cnn.py scripts.
- [ ] evaluate_wmt.py should also work
- [ ] consolidating unittests
- [ ] updating bash scripts
- [ ] checking that consolidated stuff works, adding appropriate test coverage.
the resulting code should probably all be in `summarization/` with only a `bertabs/` subdirectory.
| 04-16-2020 14:53:38 | 04-16-2020 14:53:38 | Hey @sshleifer !
I would like to work on the issue. It's my first issue so I appreciate any help!<|||||>Awesome, helpful command to get quick feedback on whether your change is working:
```bash
pytest --tb=short -p no:warnings examples/summarization/bart
```
Make sure you tag @sshleifer when you send a PR and I will give a careful review :) |
transformers | 3,825 | closed | [readability] Consolidate prune_heads logic to PretrainedModel. | Many models have identical implementations of `prune_heads` it would be nice to store that implementation as a method on `PretrainedModel` and reduce the redundancy. | 04-16-2020 14:49:51 | 04-16-2020 14:49:51 | Hi @sshleifer ,
I am new to opensource and would like to help out with this issue. Could you please point me to some guide to setting up the project locally.
Thanks.<|||||>I got the contributing.md file. Thanks anyways. :)<|||||>Hi @sshleifer
I am looking for a contribution. Is the issue is still open?
Thanks and Regards<|||||>@Shandilya21 It looks open. I was trying to work on it and got stuck at a point. Do let me know if you would like to discuss<|||||>Hi @noelmat Okay, can you tell me where you got stuck? I am happy to discuss this.<|||||>Is anybody working on this? I'm new to open source but I'd like to give it a shot<|||||>Go for it!<|||||>> Is anybody working on this? I'm new to open source but I'd like to give it a shot
Yeah, I am in.. I also wanna work on this issue.
Issue is to implement `prune_heads` as a method in `PretrainedModel`<|||||>@yugaljain1999 made some progress on this?<|||||>I think this is done. Happy to find new bugs if anyone is on the hunt! |
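For reference, the per-model pattern this issue proposed to hoist into the base class looks roughly like the BERT-style implementation below (a sketch, not the final API):
```python
# Sketch of the duplicated hook: heads_to_prune maps layer index -> list of heads to drop.
def _prune_heads(self, heads_to_prune):
    for layer, heads in heads_to_prune.items():
        self.encoder.layer[layer].attention.prune_heads(heads)
```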
transformers | 3,824 | closed | [examples] summarization/bart/finetune.py supports t5 | - we were passing attention_mask as an arg, not a kwarg, causing `test_step` to break.
- That case is now covered in the unittest, unittests also cover the t5 model.
- renamed run_bart_sum.py to finetune.py since it is model agnostic.
- The `bart/` and `t5` subdirectories should be consolidated in a future PR.
This took 15 mins because of underlying infrastructure: unittests for examples and tiny models on S3 :)
| 04-16-2020 14:45:18 | 04-16-2020 14:45:18 | Super! LGTM<|||||>Thanks @sshleifer for the quick fix. Just a small query: where will it save the output sentences? A file is generated in output_dir with only losses specified when --do_predict is passed as an argument.
What if I want to generate for unknown inputs using the fine-tuned model?
transformers | 3,823 | closed | lowercasing on LM with cased models | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Execute 'run_language_modeling.py' with one of the following cased models.
(example of cased models: sciBert cased, BioBert, ClinicalBert)
2. Check the tokenizer's do_lower_case option.
## Expected behavior
I'm doing language modeling with my own data on top of a pre-trained model. I'm using cased models, but the tokenizer lowercases the input data since its default lowercase option is True. When I used 'bert-base-cased', the tokenizer didn't lowercase, but it happened with the other cased models mentioned above.
- tokens with 'bert-base-cased' model
['[CLS]', 'This', 'text', 'is', 'included', 'to', 'make', 'sure', 'Uni', '##code',...
- tokens with 'scibert_scivocab_cased' model
['[CLS]', 'this', 'text', 'is', 'included', 'to', 'make', 'sure', 'unic', '##ode',...
Is it a bug? or am I missing something?
As an alternative, I'm using a workaround code by adding additional command parameter.
```python
parser.add_argument("--do_lower_case", action="store_true", help="Should be added for uncased models.")
tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path, cache_dir=args.cache_dir, do_lower_case=args.do_lower_case)
```
Thanks in advance.
| 04-16-2020 13:49:18 | 04-16-2020 13:49:18 | Unfortunately, this is because these models didn't get uploaded with the `tokenizer_config.json` specifying they shouldn't be lowercased.
cc @julien-c <|||||>I see. Thanks for the explanation. I think it would be helpful to mention that in the instruction or somewhere for newbies like me though. They may overlook it unless they actually test a model before training it.<|||||>Yes, if those models really are lowercase, we should add a `tokenizer_config.json` (and let their authors/uploaders know about it). Also cc @patrickvonplaten <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,822 | closed | getting random results when running run_glue | Hi
I am running the run_glue.py script on the RTE dataset with the BERT base model on different GPUs, and I am getting very random results that change a lot depending on the GPU. I am using Python 3.6 and transformers version 2.5.0. I tried with GPU types like
Kepler, GTX1080ti and P40, ...
Such randomness really affects the benchmarking, and I'd appreciate your help. Thanks.
I have a deadline and appreciate your prompt response.
thanks.
Best
Rabeeh | 04-16-2020 13:42:47 | 04-16-2020 13:42:47 | Hi
Any comment on this? I also tested with these versions, run_glue with BERT gets fully random results. Like this I cannot run experiments, could you please have a look?
python 3.6.9 h265db76_0
pytorch 1.2.0 py3.6_cuda10.0.130_cudnn7.6.2_0 pytorch
torchvision 0.4.0 py36_cu100 pytorch
transformers 2.5.0 <pip>
thanks
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,821 | closed | Typo fix | 04-16-2020 12:53:56 | 04-16-2020 12:53:56 | ||
transformers | 3,820 | closed | #3787 Fixing the pip install issue by installing from git | #3787 | 04-16-2020 12:10:05 | 04-16-2020 12:10:05 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,819 | closed | Tokenizers Notebook Issue | # ❓ Questions & Help
## Details
Hello everyone,
While playing around with tokenizers notebook : https://github.com/huggingface/transformers/blob/master/notebooks/01-training-tokenizers.ipynb
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
in
----> 1 tokenizer = Tokenizer(BPE()) # byte-pair encoding model
      2 # now using normalizers
      3 tokenizer.normalizer = Sequence([
      4     NFKC(),
      5     Lowercase()

TypeError: cannot create 'BPE' instances
```
Could not find a resolution for this. Thanks | 04-16-2020 09:03:34 | 04-16-2020 09:03:34 | The pip installation has a dependency problem: the dependencies are not defined properly, so it installs tokenizers 0.5.2 alongside transformers 2.8.0.
Installing from source (and ignoring the dependency warning) works fine :)
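A quick way to check whether you are hitting the version mismatch (a sketch; the import path matches the notebook discussed in this thread):
```python
import tokenizers
from tokenizers import Tokenizer
from tokenizers.models import BPE

print(tokenizers.__version__)   # 0.5.2 is the problematic pairing with transformers 2.8.0
tokenizer = Tokenizer(BPE())    # raises TypeError("cannot create 'BPE' instances") on the bad combo
```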
transformers | 3,818 | closed | What are the GPU RAM requirements of popular models? | # ❓ Questions & Help
What are the GPU RAM requirement of `gpt2`, `gpt2-medium`, `distilgpt2`, `bert-base-uncased` and/or `distilroberta-base`
* for training?
* for inference?
Additionally, how do you calculate or find this information for other models?
original StackOverflow question: https://stackoverflow.com/questions/61226569/what-are-the-gpu-ram-requirements-of-popular-huggingface-transformers-models
related: #1750 | 04-16-2020 08:32:18 | 04-16-2020 08:32:18 | Hi @r0levrai,
Good question! We are actually thinking about a good visualization for exactly that. Maybe in 2,3 weeks :-)
We already have a very useful script to test RAM requirements which you can find here:
`https://github.com/huggingface/transformers/blob/master/examples/benchmarks.py`<|||||>It should work with any model for a given `batch_size` and `sequence_length`. Let me know if you encounter problems with the script!<|||||>any updates on that visualization? 👀
Also that link 404's now.<|||||>Hey @thesofakillers - note that we don't support benchmarking utils in Transformers anymore |
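As a rough back-of-the-envelope alternative (a sketch, not an official utility): count parameters and multiply by bytes per weight; training with Adam typically needs roughly 3-4x that for gradients and optimizer states, plus activation memory that scales with batch size and sequence length.
```python
from transformers import AutoModel

for name in ["distilgpt2", "gpt2", "bert-base-uncased", "distilroberta-base"]:
    model = AutoModel.from_pretrained(name)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.0f}M params ≈ {n_params * 4 / 1024**2:.0f} MB of fp32 weights")
```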
transformers | 3,817 | closed | [Examples, T5] Change newstest2013 to newstest2014 and clean up | This PR just adds a small change to #3802 to make the code quality happy. | 04-16-2020 07:36:15 | 04-16-2020 07:36:15 | |
transformers | 3,816 | closed | Aborted (core dumped) or Kernel dies | Whenever I try to import transformers, my kernel dies off in a Jupyter notebook.
transformers version - 2.8.0
python version - 3.7.7
| 04-16-2020 06:45:39 | 04-16-2020 06:45:39 | Do you mind showing us the lines that crash? Do you have a reproducible example?<|||||>+1. @yashwatwani do you have resolved it? I have the same problem too.
If I install transformers 2.8.0, it will produce error:
```
[1] 11267 segmentation fault (core dumped) PYTHONPATH=. python apps/absa/main.py
```
If I upgrade to the latest version 2.11.0, no error happens.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,815 | closed | How to speed up getting answers? | Hi ,
I'm facing the issues using BertForQuestionAnswering ,can you please help me fixing these:
1. I'm trying to use BertForQuestionAnswering pretrained model to get answers from news,
answer_question is the function where in you pass the context and the question and get the relevant answers,but if I have 100 contexts its taking around 100 seconds to get the answer,may I please know by any chance I can get the answers in much lesser time.
2. Some times I get the answer where in start index and end index are pointing to [SEP],that means the whole context,can I avoid this.
```
import torch
from transformers import BertForQuestionAnswering, BertTokenizer

# torch_device was not defined in the original snippet; assuming the usual pattern:
torch_device = "cuda" if torch.cuda.is_available() else "cpu"

BERT_SQUAD = 'bert-large-uncased-whole-word-masking-finetuned-squad'
model = BertForQuestionAnswering.from_pretrained(BERT_SQUAD).to(torch_device)
tokenizer = BertTokenizer.from_pretrained(BERT_SQUAD)

def answer_question(question, context):
    """
    Answer questions
    """
    try:
        print("Type:", type(context))
        print(context)
        encoded_dict = tokenizer.encode_plus(
            question, context,            # Sentence pair to encode.
            add_special_tokens=True,      # Add '[CLS]' and '[SEP]'
            max_length=256,               # Pad & truncate all sentences.
            pad_to_max_length=True,
            return_attention_mask=True,   # Construct attn. masks.
            return_tensors='pt'           # Return pytorch tensors.
        )
        print(encoded_dict)
        input_ids = encoded_dict['input_ids'].to(torch_device)
        token_type_ids = encoded_dict['token_type_ids'].to(torch_device)  # segments
        start_scores, end_scores = model(input_ids, token_type_ids=token_type_ids)
        print('Start Scores:', start_scores)
        print('End Scores:', end_scores)
        all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0])
        print(all_tokens)
        answer = tokenizer.convert_tokens_to_string(all_tokens[torch.argmax(start_scores): torch.argmax(end_scores) + 1])
        answer = answer.replace('[CLS]', '')
        # answer = answer.replace('[SEP]', '')
        print("Start index:", all_tokens[torch.argmax(start_scores)])
        print("End index:", all_tokens[torch.argmax(end_scores)])
        print(answer)
    except ValueError:
        print("Error in fetching answer")
        answer = ''
    return answer
```
Thanks in advance!!
| 04-16-2020 05:45:39 | 04-16-2020 05:45:39 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,814 | closed | A bug in the padding of input examples in the NER fine-tuning example | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Roberta
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. TODO
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
https://github.com/huggingface/transformers/blob/c59b1e682d6ebaf7295c63418d4570228904e690/examples/ner/utils_ner.py#L123
This line is supposed to return 3 for Roberta models but it's just returning 2 causing the length of the input_ids to be more than the max_seq_len.
This might be the reason for that: https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_roberta.py#L288
TODO: Share the notebook.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.2.0-rc2 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
| 04-15-2020 23:13:27 | 04-15-2020 23:13:27 | This PR https://github.com/huggingface/transformers/pull/3803 might be related to the bug but my initial thought is that it will not fix it.<|||||>Hello! Why do you think it should return 3 for RoBERTa models?
It should return 2, single sequences for RoBERTa are built like:
`<s> tok_0 ... tok_n </s>`,
with only two special tokens added.
For sequence pairs however, 4 tokens are added:
`<s> tok_0 ... tok_n </s></s> tok_(n + 1) ... tok2n </s>`<|||||>> Hello! Why do you think it should return 3 for RoBERTa models?
>
> It should return 2, single sequences for RoBERTa are built like:
>
> `<s> tok_0 ... tok_n </s>`,
>
> with only two special tokens added.
>
> For sequence pairs however, 4 tokens are added:
>
> `<s> tok_0 ... tok_n </s></s> tok_(n + 1) ... tok2n </s>`
Well, this line suggested this:
https://github.com/huggingface/transformers/blob/c59b1e682d6ebaf7295c63418d4570228904e690/examples/ner/utils_ner.py#L122
Additionally, the current code produced lists of length > `max_seq_length` so for sure there is a problem there.<|||||>I had an issue with the running the NER model. In this commit https://github.com/huggingface/transformers/commit/96ab75b8dd48a9384a74ba4307a4ebfb197343cd `num_added_tokens` got changed into `num_special_tokens_to_add`. Just changing the name of the variable in the `utils_ner.py` fixed the issue for me. However, I had an issue with variable name not being found. Let me know if this fixes you problem.<|||||>> I had an issue with the running the NER model. In this commit [96ab75b](https://github.com/huggingface/transformers/commit/96ab75b8dd48a9384a74ba4307a4ebfb197343cd) `num_added_tokens` got changed into `num_special_tokens_to_add`. Just changing the name of the variable in the `utils_ner.py` fixed the issue for me. However, I had an issue with variable name not being found. Let me know if this fixes you problem.
Hi @TarasPriadka
Yes, the edit you have suggested solved the problem.
I have found that you have already reported the issue before (https://github.com/huggingface/transformers/issues/3686).
Don't you think that we should open a simple Pull Request to fix this problem?<|||||>@AMR-KELEG I think it got fixed just now with this huggingface/transformers#3800 PR<|||||>@TarasPriadka, @AMR-KELEG
I had a similar issue using `preprocess.py` on an NER dataset.
```
Traceback (most recent call last):
File "preprocess.py", line 12, in <module>
max_len -= tokenizer.num_special_tokens_to_add()
AttributeError: 'BertTokenizer' object has no attribute 'num_special_tokens_to_add'
```
I think the PyPi file hasn't been updated, so `pip install transformers` won't have the files you need. I built from source and the errors went away. If you try building from source, I think your problem might go away too. <|||||>> @TarasPriadka, @AMR-KELEG
>
> I had a similar issue using `preprocess.py` on an NER dataset.
>
> ```
> Traceback (most recent call last):
> File "preprocess.py", line 12, in <module>
> max_len -= tokenizer.num_special_tokens_to_add()
> AttributeError: 'BertTokenizer' object has no attribute 'num_special_tokens_to_add'
> ```
>
> I think the PyPi file hasn't been updated, so `pip install transformers` won't have the files you need. I built from source and the errors went away. If you try building from source, I think your problem might go away too.
Well, I was using the source version but as said before, seems like the bug was there and got fixed in later commits.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,813 | closed | T5 prediction using fine-tuned model | After fine-tuning the T5 model on my own dataset, when I use the fine-tuned model to predict for test set using the following command:
python '/content/transformers-master/examples/summarization/bart/run_bart_sum.py' --data_dir='/content/drive/My Drive/two_keywords/' --model_type=t5 --output_dir=/content/t5 --do_predict --model_name_or_path=t5-small
Error generated:
<img width="1010" alt="Screenshot 2020-04-13 at 6 18 47 PM" src="https://user-images.githubusercontent.com/30004110/79394604-31692980-7f78-11ea-8c87-3c04e542e962.png">
**A link to original question on Stack Overflow**: | 04-15-2020 22:21:46 | 04-15-2020 22:21:46 | @sshleifer @patrickvonplaten <|||||>Hi, I am curious about how you fine-tined T5. Did you used the run_bart_sum.py script by changing the model type from bart to T5? Thanks!<|||||>@sshleifer - could you take a look at this if you find some time? `T5` should more or less work out-of-the-box with the `run_bart_sum` script no? <|||||>@MichaelZhouwang yes. Please look at this. #3576 |
transformers | 3,812 | closed | Question Answering support for Albert and Roberta in TF | This PR simply adds `TFRobertaForQuestionAnswering` and `TFAlbertForQuestionAnswering` classes (I needed them to do some model conversions!) | 04-15-2020 21:12:58 | 04-15-2020 21:12:58 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3812?src=pr&el=h1) Report
> Merging [#3812](https://codecov.io/gh/huggingface/transformers/pull/3812?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/301bf8d1b43d99efe1fdb5ba15871e975b3cb6cf&el=desc) will **increase** coverage by `0.04%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3812?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3812 +/- ##
==========================================
+ Coverage 78.27% 78.31% +0.04%
==========================================
Files 106 106
Lines 17964 17996 +32
==========================================
+ Hits 14061 14094 +33
+ Misses 3903 3902 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3812?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/3812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.98% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `68.62% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/3812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `86.25% <100.00%> (+0.67%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `100.00% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.00% <0.00%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3812?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3812?src=pr&el=footer). Last update [301bf8d...44c92f3](https://codecov.io/gh/huggingface/transformers/pull/3812?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,811 | closed | Pre-trained BART performance on XSum lower than expected | Greetings,
I am trying to reproduce BART's results on xsum using 'bart-large-xsum' and modified `examples/summarization/bart/evaluate_cnn.py` (max_length=60, min_length=10, beam=6, lenpen=1) but got lower ROUGE scores than reported.
I first obtained comparable results on CNNDM using 'bart-large-cnndm' and the dataset on s3:
CNNDM | R-1 | R-2 | R-L
-- | -- | -- | --
BART (Lewis et al., 2019) | 44.16 | 21.28 | 40.9
BART (ours) | 44.32 | 21.12 | 41.13
I then obtained the raw xsum dataset from the original authors and saved them to test.source and test.target (cased) as for CNNDM. Then I ran evaluate_cnn.py with the new parameters above. Is there anything that I am missing? Thank you!
XSum | R-1 | R-2 | R-L
-- | -- | -- | --
BART (Lewis et al., 2019) | 45.14 | 22.27 | 37.25
BART (ours) | 44.7 | 21.04 | 35.64
| 04-15-2020 16:34:31 | 04-15-2020 16:34:31 | I'm having the [exact same issue](https://github.com/pytorch/fairseq/issues/1971), with the official BART code on fairseq.
The author is currently looking into it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I downloaded data from [here](https://s3.amazonaws.com/datasets.huggingface.co/summarization/xsum.tar.gz) and was able to get 45.37 / 22.30 / 37.19 using facebook/bart-large-xsum model
<|||||>> I downloaded data from [here](https://s3.amazonaws.com/datasets.huggingface.co/summarization/xsum.tar.gz) and was able to get 45.37 / 22.30 / 37.19 using facebook/bart-large-xsum model
Hi @swethmandava , this dataset seems to have different train/valid/test split from the original dataset. Can you reproduce the scores with the original dataset? |
transformers | 3,810 | closed | run_glue.py example doesn't work for distilbert models | # 🐛 Bug
## Information
Hi all,
I am succesfully able to run the run_glue.py example with BERT, XLNet and other architectures. However, when I try distilbert I got the following error:
```
Traceback (most recent call last):
File "run_glue.py", line 562, in <module>
main()
File "run_glue.py", line 510, in main
train_dataset = load_and_cache_examples(args, args.task_name, tokenizer, evaluate=False)
File "run_glue.py", line 373, in load_and_cache_examples
all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
TypeError: an integer is required (got type NoneType)
```
Model I am using (Bert, XLNet ...):
distilbert (distilbert-base-cased)
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: GLUE/SST-2
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
export GLUE_DIR=./glue/glue_data
export TASK_NAME=SST-2
CUDA_VISIBLE_DEVICES=2,3 python run_glue.py \
--model_type DistilBERT \
--model_name_or_path distilbert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--data_dir $GLUE_DIR/$TASK_NAME \
--max_seq_length 128 \
--per_gpu_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir ./output/$TASK_NAME/
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
```
- `transformers` version: 2.8.0
- Platform: Linux-5.5.15-200.fc31.x86_64-x86_64-with-fedora-31-Thirty_One
- Python version: 3.6.5
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
```
| 04-15-2020 16:05:26 | 04-15-2020 16:05:26 | I think I might have broken this on `master` when merging #3688 🤔
Hi @ereday could you please try from the `trainer` branch described in PR #3800?
Otherwise, hotfixing this in your code should be easy (just remove the `all_token_type_ids` line) |
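A hypothetical sketch of that hotfix in `load_and_cache_examples()` (variable names follow the script; DistilBERT features carry `token_type_ids=None`, so guard the tensor build):
```python
if features[0].token_type_ids is None:
    # DistilBERT has no segment embeddings; use zeros (or drop the tensor entirely).
    all_token_type_ids = torch.zeros_like(all_input_ids)
else:
    all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
```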
transformers | 3,809 | closed | Roberta Tokenizer crashes when tokenizing empty string in 2.8.0 | ```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("roberta-large")
tokenizer.tokenize("")
```
File "/scratch/wh629/nlu/env/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 1054, in encode_plus
first_ids = get_input_ids(text)
File "/scratch/wh629/nlu/env/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 1028, in get_input_ids
tokens = self.tokenize(text, add_special_tokens=add_special_tokens, **kwargs)
File "/scratch/wh629/nlu/env/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 759, in tokenize
text = self.prepare_for_tokenization(text, **kwargs)
File "/scratch/wh629/nlu/env/lib/python3.7/site-packages/transformers/tokenization_roberta.py", line 239, in prepare_for_tokenization
if add_prefix_space and not text[0].isspace():
IndexError: string index out of range | 04-15-2020 15:46:08 | 04-15-2020 15:46:08 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Fixed by #4209 |
transformers | 3,808 | closed | typo: fine-grained token-leven | Changing from "fine-grained token-leven" to "fine-grained token-level" | 04-15-2020 15:25:20 | 04-15-2020 15:25:20 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3808?src=pr&el=h1) Report
> Merging [#3808](https://codecov.io/gh/huggingface/transformers/pull/3808?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/301bf8d1b43d99efe1fdb5ba15871e975b3cb6cf&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3808?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3808 +/- ##
=======================================
Coverage 78.27% 78.27%
=======================================
Files 106 106
Lines 17964 17964
=======================================
Hits 14061 14061
Misses 3903 3903
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3808?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3808?src=pr&el=footer). Last update [301bf8d...7a12e87](https://codecov.io/gh/huggingface/transformers/pull/3808?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,807 | closed | isort ignores examples directory | Temporary solution while we wait for an isort release.
I have my local alias hacked in this way, but I figure new contributors might get confused by the circleci/local isort discrepancy. | 04-15-2020 14:01:35 | 04-15-2020 14:01:35 | |
transformers | 3,806 | closed | [cleanup] factor out get_head_mask, invert_attn_mask, get_extended_attention_mask | This is three changes applied all over:
1) `get_head_mask` from @LysandreJik is used instead of redundant snippet
2) `get_extended_attention_mask` is used instead of redundant snippet (that also makes causal mask)
3) `invert_attention_mask` is used instead of redundant snippet that doesn't make causal mask.
These changes make the forward passes more readable and allow us to update common logic in one place moving forward! I was reading code last night to try to get a sense of what all the models/tokenizers do and was frustrated with the amount of time spent scrolling through this stuff. Especially for new people, it makes getting to the meat of the `forward` pass much harder to have 100 repeated lines of input manipulation at the beginning.
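To make change 2) concrete, here is a rough before/after sketch (not the exact library code; the helper lives on `ModuleUtilsMixin` in `modeling_utils.py` and its signature may differ slightly):
```python
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-cased")
attention_mask = torch.tensor([[1, 1, 1, 0]])

# Before: every model repeated this expansion inline in its forward pass
manual = (1.0 - attention_mask[:, None, None, :].float()) * -10000.0

# After: a single shared helper (which also builds the causal mask for decoders)
shared = model.get_extended_attention_mask(attention_mask, attention_mask.shape, attention_mask.device)
print(manual.shape, shared.shape)  # both are broadcastable [batch, 1, 1, seq_len] masks
```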
Very open to suggestions.
Open to doing this for TF in a separate PR.
Also if `prune_heads` or other opportunities catch your eye, let me know.
| 04-15-2020 11:39:23 | 04-15-2020 11:39:23 | This looks nice<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3806?src=pr&el=h1) Report
> Merging [#3806](https://codecov.io/gh/huggingface/transformers/pull/3806?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/01c37dcdb529bff38aadd51001cb5812e5fe9b21&el=desc) will **increase** coverage by `0.15%`.
> The diff coverage is `90.14%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3806?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3806 +/- ##
==========================================
+ Coverage 78.26% 78.42% +0.15%
==========================================
Files 106 106
Lines 17964 17864 -100
==========================================
- Hits 14060 14009 -51
+ Misses 3904 3855 -49
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3806?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `22.11% <40.00%> (+4.51%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.94% <92.00%> (-0.03%)` | :arrow_down: |
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `75.31% <100.00%> (+0.06%)` | :arrow_up: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.40% <100.00%> (+0.24%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.81% <100.00%> (+0.78%)` | :arrow_up: |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `98.15% <100.00%> (+0.56%)` | :arrow_up: |
| [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `73.20% <100.00%> (+0.55%)` | :arrow_up: |
| [src/transformers/modeling\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `84.49% <100.00%> (+0.67%)` | :arrow_up: |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.25% <100.00%> (+0.31%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <100.00%> (+0.24%)` | :arrow_up: |
| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3806?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3806?src=pr&el=footer). Last update [01c37dc...2e7f6f4](https://codecov.io/gh/huggingface/transformers/pull/3806?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,805 | closed | Using fill-mask pipeline to get the “score” for a result it didn't suggest | Hi lovely huggingface people,
I'm trying to use your fill-mask pipeline in order to get the score for a result it didn't suggest.
For example, if my sentence is `"I ate bacon and <mask> for breakfast"` I can use `pipeline('fill-mask')` to get back predictions and their scores e.g. it might give me back `["eggs", 0.1]`. But what I would like to do is **provide my own guess and then get back the score it assigns to my own guess.** e.g. I might want to know what score it gives to the word "pancakes" in this situation.
Is this possible? If not can I register it as a feature request?
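To make the request concrete, here is a rough manual sketch of what I mean without the pipeline (it assumes the default `distilroberta-base` fill-mask model, and that my guess maps to a single sub-token; otherwise there is no single score to read off):
```python
import torch
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
model = AutoModelWithLMHead.from_pretrained("distilroberta-base")

text = "I ate bacon and <mask> for breakfast"
input_ids = tokenizer.encode(text, return_tensors="pt")
mask_position = (input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(input_ids)[0]
probs = logits[0, mask_position].softmax(dim=0)

# Score my own candidate instead of only the top suggestions
candidate_ids = tokenizer.encode(" pancakes", add_special_tokens=False)
if len(candidate_ids) == 1:
    print("score for ' pancakes':", probs[candidate_ids[0]].item())
```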
Stack overflow [question](https://stackoverflow.com/questions/61168513/using-huggingface-fill-mask-pipeline-to-get-the-score-for-a-result-it-didnt-s) | 04-15-2020 07:32:59 | 04-15-2020 07:32:59 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,804 | closed | Calculated offsets are wrong in squad.py | - When `tokenizer.padding_side == "left"` and `tokenizer.pad_token_id` appears in `span["input_ids"]`, `doc_offset` should be `last_padding_id_position + 1` instead of 0.
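A rough sketch of the change this suggests (hypothetical, not a tested patch; it assumes `span["input_ids"]` is a plain Python list so `.index()` works):
```python
if tokenizer.padding_side == "left":
    if tokenizer.pad_token_id in span["input_ids"]:
        # With left padding the context starts right after the last pad token,
        # so the offset must skip over the padding block instead of being 0.
        last_padding_id_position = (
            len(span["input_ids"])
            - 1
            - span["input_ids"][::-1].index(tokenizer.pad_token_id)
        )
        doc_offset = last_padding_id_position + 1
    else:
        doc_offset = 0
else:
    doc_offset = len(truncated_query) + sequence_added_tokens
```
For reference, the current code in `squad.py` is: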
```python
start_position = 0
end_position = 0
if is_training and not span_is_impossible:
    # For training, if our document chunk does not contain an annotation
    # we throw it out, since there is nothing to predict.
    doc_start = span["start"]
    doc_end = span["start"] + span["length"] - 1
    out_of_span = False
    if not (tok_start_position >= doc_start and tok_end_position <= doc_end):
        out_of_span = True
    if out_of_span:
        start_position = cls_index
        end_position = cls_index
        span_is_impossible = True
    else:
        if tokenizer.padding_side == "left":
            doc_offset = 0
        else:
            doc_offset = len(truncated_query) + sequence_added_tokens
        start_position = tok_start_position - doc_start + doc_offset
        end_position = tok_end_position - doc_start + doc_offset
``` | 04-15-2020 05:06:20 | 04-15-2020 05:06:20 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,803 | closed | Fix bug in max_seq_length for preprocessing in ner example | **Summary**
If you try to use a value for `max_seq_length` that is less than 128 in the NER example, the maximum sequence length is exceeded when the predictions are made. There is a warning logged for this "Maximum sequence length exceeded: No prediction for.." and predictions cannot be made for these tokens.
**Changes**
Two changes are made (a sketch of the resulting preprocessing loop follows the list):
- In `preprocess.py`, `subword_len_counter` is set to `current_subwords_len` when a blank line is inserted to split up sequences that exceed the maximum sequence length.
- `tokenizer.num_added_tokens()` is subtracted from `max_len` to account for the additional tokens inserted by the BERT tokenizer.
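A sketch of what the preprocessing loop could look like after these two changes (based on the GermEval `preprocess.py`; exact variable names and the special-tokens helper may differ between versions):
```python
import sys

from transformers import AutoTokenizer

dataset_path, model_name_or_path, max_len = sys.argv[1], sys.argv[2], int(sys.argv[3])
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
# Change 2: leave room for the special tokens the tokenizer will add.
max_len -= tokenizer.num_added_tokens()

subword_len_counter = 0
with open(dataset_path, "rt") as f_p:
    for line in f_p:
        line = line.rstrip()
        if not line:
            print(line)
            subword_len_counter = 0
            continue
        token = line.split()[0]
        current_subwords_len = len(tokenizer.tokenize(token))
        if current_subwords_len == 0:
            # Rare characters can tokenize to nothing; skip them.
            continue
        if (subword_len_counter + current_subwords_len) > max_len:
            print("")
            print(line)
            # Change 1: start the new sequence with this token's subword count.
            subword_len_counter = current_subwords_len
            continue
        subword_len_counter += current_subwords_len
        print(line)
```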
| 04-15-2020 04:40:14 | 04-15-2020 04:40:14 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3803?src=pr&el=h1) Report
> Merging [#3803](https://codecov.io/gh/huggingface/transformers/pull/3803?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/01c37dcdb529bff38aadd51001cb5812e5fe9b21&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3803?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3803 +/- ##
=======================================
Coverage 78.26% 78.27%
=======================================
Files 106 106
Lines 17964 17964
=======================================
+ Hits 14060 14061 +1
+ Misses 3904 3903 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3803?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3803/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.84% <0.00%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3803?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3803?src=pr&el=footer). Last update [01c37dc...8f77ccd](https://codecov.io/gh/huggingface/transformers/pull/3803?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>You might want to open this PR on https://github.com/stefan-it/fine-tuned-berts-seq instead, as the script is hosted there right now (cc @stefan-it)<|||||>> You might want to open this PR on https://github.com/stefan-it/fine-tuned-berts-seq instead, as the script is hosted there right now (cc @stefan-it)
I thought it would be easier just to have the script in the example, but I can close this PR and open it there if you prefer?<|||||>What do you think @stefan-it? Are you ok with us including the script here?<|||||>Hi, sorry for the late reply! I've fixed some errors in the script last week. Would be great if @r-tinn could check the latest version! If it's working then you can of course integrate it into Transformers :)<|||||>Looks good to me, the problem seems to be fixed<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,802 | closed | Fix examples/translation/t5 to use newstest2014 rather than newstest2013 | Resolves #3759, in addition to minor nits: fixed a bug with argparse arguments + more pythonic file handling + formatted with black and isort.
Please note that I have not yet run the evaluation script against the full newstest2014 test set, as it is rather compute intensive, so the disclaimer at the top of the README.md about the score gap between the pre-trained and fine-tuned models is only ostensibly accurate to the score gap on newstest2013, not newstest2014. | 04-15-2020 04:18:24 | 04-15-2020 04:18:24 | Hey @tholiao,
Thanks a lot for the PR :-) This looks good so far. Don't worry about running the script against the evaluation set - we can do this once this is merged!
Can you make sure that `run_examples_torch` passes? Don't worry too much about the `check_code_quality` test - there have been some issues with `isort` and I can manually fix that later. <|||||>Should be fine now. <|||||>Hi @tholiao, I went into your PR branch and checked the `make style` issues. It seems like you have different params set up for `black` than this lib. Also `isort` seems to have some problems with the imports here.
I added the tiny changes I suggested above and correctly formatted everything (black and isort) in this PR https://github.com/huggingface/transformers/pull/3817. The PR uses your commits, so you are an author of the commit :-) |
transformers | 3,801 | closed | Fix bug in GLUE example for models that do not require token_type_ids | If you try to run the `run_glue.py` example with e.g. roberta from a fresh install of the library, it errors out with the following error:
```
Traceback (most recent call last):
File "examples/run_glue.py", line 564, in <module>
main()
File "examples/run_glue.py", line 512, in main
train_dataset = load_and_cache_examples(args, args.task_name, tokenizer, evaluate=False)
File "examples/run_glue.py", line 373, in load_and_cache_examples
all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
TypeError: an integer is required (got type NoneType)
```
To reproduce, run e.g.
`python examples/run_glue.py --model_name_or_path roberta-base --task_name SST-2 --per_gpu_eval_batch_size=8 --per_gpu_train_batch_size=8 --data_dir ./glue_data/SST-2/ --output_dir ./blah --model_type roberta --do_train --do_eval --max_seq_length 128 --learning_rate 2e-5 --num_train_epochs 3.0`
The reason is obviously that roberta does not have segment ids so `token_type_ids` is set to null in the data loader, causing `torch.tensor` to freak out. There's probably a more elegant long-term solution for this, but it's easy to fix by just setting it to 0 instead of null for those models. | 04-15-2020 03:49:01 | 04-15-2020 03:49:01 | Hi @douwekiela, thanks for the PR. This should be fixed soon in a more stable way in the soon-to-be-merged #3800
Let us know if it works.<|||||>Should be fixed on master by #3800, please open a new issue if you encounter other problems. |
transformers | 3,800 | closed | Trainer | This is a bottom-up refactor of the example scripts (currently `run_glue.py`, `run_language_modeling.py`, `run_ner.py` and `run_multiple_choice.py`) into a Trainer class and associated utilities, as described in [trainer-proposal](https://github.com/julien-c/trainer-proposal).
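For reference, end-user code with the new class looks roughly like this (a sketch only; `model`, `train_dataset` and `eval_dataset` are assumed to be built the way the refactored example scripts build them, and argument names may still change before merge):
```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./models/mnli",
    overwrite_output_dir=True,
    do_train=True,
    do_eval=True,
    num_train_epochs=1,
    per_gpu_train_batch_size=32,
    logging_steps=100,
    evaluate_during_training=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
trainer.evaluate()
```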
# Regression test and benchmark on `run_glue.py`
📊 All training metrics are in [**this TensorBoard**](https://tensorboard.dev/experiment/7LuNpqw3Q8y147WxfyOqdA/#scalars&_smoothingWeight=0&runSelectionState=eyJkYXRhcGFyYWxsZWxfMl9ncHUiOnRydWUsImRkcF8yX2dwdSI6dHJ1ZX0%3D):
<a href="https://tensorboard.dev/experiment/7LuNpqw3Q8y147WxfyOqdA/#scalars&_smoothingWeight=0&runSelectionState=eyJkYXRhcGFyYWxsZWxfMl9ncHUiOnRydWUsImRkcF8yX2dwdSI6dHJ1ZX0%3D"><img width="1559" alt="Screenshot 2020-04-14 at 21 23 26" src="https://user-images.githubusercontent.com/326577/79289007-3655ac80-7e96-11ea-8737-3db1839725c2.png"></a>
The Trainer class supports PyTorch's backends for parallel/distributed training so we performed the following experiment:
- Experiment: MNLI
- Train set: 100_000 first samples
- Dev set: Full (~9_800 samples)
- Backends: DataParallel, DistributedDataParallel, single-GPU, CPU
You can compare speed of convergence by clicking on "Relative" in the TensorBoard and comparing loss and accuracy curves:
<img width="360" alt="Screenshot 2020-04-14 at 21 40 04" src="https://user-images.githubusercontent.com/326577/79289930-baa92f00-7e98-11ea-900f-106f0ef22681.png">
## Results
### DataParallel
```
--model_name_or_path distilbert-base-cased
--task_name mnli
--data_dir ./data/glue_data/MNLI
--output_dir ./models/dataparallel_2_gpu
--overwrite_output_dir
--do_train
--do_eval
--num_train_epochs 1
--per_gpu_train_batch_size 32
--per_gpu_eval_batch_size 128
--logging_steps 100
--logging_dir ./runs/dataparallel_2_gpu
--logging_first_step
--save_steps 1000
--evaluate_during_training
```
1 Epoch = 21 mins
```
04/14/2020 23:16:34 - INFO - __main__ - ***** Eval results mnli *****
04/14/2020 23:16:34 - INFO - __main__ - acc = 0.7406011207335711
04/14/2020 23:16:34 - INFO - __main__ - loss = 0.6281169515389663
04/14/2020 23:17:02 - INFO - __main__ - ***** Eval results mnli-mm *****
04/14/2020 23:17:02 - INFO - __main__ - acc = 0.7507119609438568
04/14/2020 23:17:02 - INFO - __main__ - loss = 0.6062953961201203
```
### DistributedDataParallel
```
python -m torch.distributed.launch --nproc_per_node 2 ./examples/run_glue.py
--model_name_or_path distilbert-base-cased
--task_name mnli
--data_dir ./data/glue_data/MNLI
--output_dir ./models/ddp_2_gpu
--overwrite_output_dir
--do_train
--do_eval
--num_train_epochs 1
--per_gpu_train_batch_size 32
--per_gpu_eval_batch_size 128
--logging_steps 100
--logging_dir ./runs/ddp_2_gpu
--logging_first_step
--save_steps 1000
--evaluate_during_training
```
Speed: about the same as DataParallel on this workload and machine.
Pre-existing issue (to fix in future PR): when using DDP, the eval is not GPU-parallelized.
### single-GPU
`CUDA_VISIBLE_DEVICES=0 python ...`
<details>
<pre>
04/15/2020 00:52:24 - INFO - __main__ - ***** Eval results mnli *****
04/15/2020 00:52:24 - INFO - __main__ - acc = 0.7383596535914416
04/15/2020 00:52:24 - INFO - __main__ - loss = 0.631212914144838
04/15/2020 00:53:16 - INFO - __main__ - ***** Eval results mnli-mm *****
04/15/2020 00:53:16 - INFO - __main__ - acc = 0.7534580960130187
04/15/2020 00:53:16 - INFO - __main__ - loss = 0.6002480050960144
</pre>
</details>
Speed: about twice as slow.
### CPU
`--no_cuda`
Speed: too slow to benchmark
# Regression test on `run_ner.py`
The arguments below:
```
--model_name_or_path bert-base-multilingual-cased
--data_dir ./data/germeval
--labels ./data/germeval/labels.txt
--max_seq_length 128
--output_dir ./models/ner_dp_2_gpu
--overwrite_output_dir
--do_train
--do_eval
--do_predict
--num_train_epochs 3
--per_gpu_train_batch_size 32
--logging_dir ./runs/ner_dp_2_gpu
--logging_steps 100
--evaluate_during_training
--save_steps 750
--seed 1
```
yield the following results, consistent with the ones in the README:
```
04/17/2020 16:12:30 - INFO - __main__ - f1 = 0.8634538152610443
04/17/2020 16:12:30 - INFO - __main__ - loss = 0.07145964812514359
04/17/2020 16:12:30 - INFO - __main__ - precision = 0.8434379457917262
04/17/2020 16:12:30 - INFO - __main__ - recall = 0.8844427823485415
```
Shape of F1 and eval loss:
<img width="951" alt="Screenshot 2020-04-17 at 16 38 50" src="https://user-images.githubusercontent.com/326577/79623824-4c5caa80-80ec-11ea-866c-411600b62bb1.png">
# Regression test on `run_language_modeling.py`
Reproducing the training described in the [how to train blogpost](https://huggingface.co/blog/how-to-train):
```
--train_data_file ./data/oscar.eo.txt
--eval_data_file ./data/oscar.eo.eval.txt
--evaluate_during_training
--output_dir ./models/EsperBERTo-small-v1
--overwrite_output_dir
--mlm
--config_name ./models/EsperBERTo-small
--tokenizer_name ./models/EsperBERTo-small
--do_train
--do_eval
--line_by_line
--logging_first_step
--logging_steps 10
--logging_dir ./runs/EsperBERTo
--num_train_epochs 1
--save_total_limit 2
--save_steps 2000
--per_gpu_train_batch_size 16
--seed 42
```
# Regression test on `run_multiple_choice.py`
```
--model_name_or_path distilroberta-base
--task swag
--data_dir ./data/swag
--output_dir ./models/swag_dp_2_gpu
--overwrite_output_dir
--do_train
--do_eval
--per_gpu_train_batch_size 32
--per_gpu_eval_batch_size 512
--logging_dir ./runs/swag_dp_2_gpu
--logging_steps 100
--logging_first_step
--evaluate_during_training
``` | 04-15-2020 01:34:46 | 04-15-2020 01:34:46 | So, thinking about it more, I've re-run a MNLI training with the previous default max_seq_length of 128 (instead of the "new" default of tokenizer.max_len – _soon to be renamed_), and training is naturally way faster with the smaller sequence length (light blue line below in relative time):
<img width="1473" alt="Screenshot 2020-04-15 at 18 47 23" src="https://user-images.githubusercontent.com/326577/79398369-8ee88080-7f4e-11ea-843f-84b59c8f3ad5.png">
So I'm thinking of reverting the default to 128. Does it make sense? Are people familiar with GLUE mostly training models on shorter sequence length? (@VictorSanh @srush @thomwolf @LysandreJik)
Or do they debug their trainings with short seq lengths, and then train with the model's max length?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3800?src=pr&el=h1) Report
> Merging [#3800](https://codecov.io/gh/huggingface/transformers/pull/3800?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b7cf9f43d259fbad45d899c1769110aafc9f410a&el=desc) will **decrease** coverage by `0.16%`.
> The diff coverage is `63.48%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3800?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3800 +/- ##
==========================================
- Coverage 78.26% 78.10% -0.17%
==========================================
Files 106 111 +5
Lines 17928 18459 +531
==========================================
+ Hits 14032 14417 +385
- Misses 3896 4042 +146
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3800?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.59% <43.59%> (ø)` | |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `71.90% <52.63%> (-3.81%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.86% <78.26%> (+0.86%)` | :arrow_up: |
| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.15% <80.00%> (+2.46%)` | :arrow_up: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `87.69% <80.00%> (-12.31%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `89.23% <89.23%> (ø)` | |
| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `90.19% <90.19%> (ø)` | |
| [...rc/transformers/data/datasets/language\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `91.83% <91.83%> (ø)` | |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.01% <100.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `97.01% <100.00%> (ø)` | |
| ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3800?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3800?src=pr&el=footer). Last update [b7cf9f4...d1db901](https://codecov.io/gh/huggingface/transformers/pull/3800?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>> So, thinking about it more, I've re-run a MNLI training with the previous default max_seq_length of 128 (instead of the "new" default of tokenizer.max_len – _soon to be renamed_), and training is naturally way faster with the smaller sequence length (light blue line below in relative time):
>
> So I'm thinking of reverting the default to 128. Does it make sense? Are people familiar with GLUE mostly training models on shorter sequence length? (@VictorSanh @srush @thomwolf @LysandreJik)
>
> Or do they debug their trainings with short seq lengths, and then train with the model's max length?
I can answer for the MNLI dataset. The sequence pairs in MNLI are quite short in train, dev and test alike, with similar length distributions. The vast majority of sequences are under 128 tokens, so 128 is fine for MNLI.
For QNLI, 256 is more suitable.<|||||>The revamps look awesome! Really looking forward to the merge and can't wait to try out the Trainer modules (nothing against argparse :joy:)<|||||>Ok, updated the PR summary above with regression tests on `run_ner.py` and `run_language_modeling.py` that show that we reproduce the documented results.
So this should be ready to merge! 🎉 |
transformers | 3,799 | closed | Clarification about GPT2LMHeadModel lm_head weights | # ❓ Questions & Help
Each time the GPT2LMHeadModel is loaded from pretrained weights, the following is logged:
```
Weights of GPT2LMHeadModel not initialized from pretrained model: ['lm_head.weight']
```
Just to clarify, is this OK because we tie the output (`lm_head`) weights to the input weights? | 04-15-2020 00:15:09 | 04-15-2020 00:15:09 | yes exactly - this should not be a problem :-) |
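To double-check the tying yourself, something like this quick sketch works (the two tensors should share the same storage):
```python
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
# lm_head.weight is tied to the input token embeddings, so the "uninitialized"
# weight reported in the log is simply the shared embedding matrix.
assert model.lm_head.weight.data_ptr() == model.transformer.wte.weight.data_ptr()
```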
transformers | 3,798 | closed | Error when using run_generation.py to generate texts with long prompts, specifically for models -XLM and Openai-GPT | # 🐛 Bug
## Information
Model I am using (XLM, OPENAI-GPT):
Language I am using the model on (English):
The problem arises when using:
* [ ] The models to generate texts with long prompts
The tasks I am working on is:
* [ ] Text generation
## To reproduce
Steps to reproduce the behavior:
1. cd transformers/
python examples/run_generation.py --model_type xlm --model_name_or_path xlm-mlm-en-2048 \
--prompt "China wants to take a victory lap over its handling of the coronavirus outbreak" \
--repetition 2.2 --k 5 \
--length 500
Error: `RuntimeError: The size of tensor a (513) must match the size of tensor b (512) at non-singleton dimension 3.` This leads to the next error: `RuntimeError: CUDA error: device-side assert triggered.`
## Expected behavior
The expected behavior is a generated piece of text of about 500 words.
## Environment info
I think the problem is that there is a bug somewhere in the input embeddings, input ids or vocabulary, because it seems to go out of index for certain prompts. This may suggest that the vocab list is limited, or maybe not.
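A quick sanity check I could run (a sketch; it assumes the XLM config exposes `max_position_embeddings` like BERT-style configs do):
```python
from transformers import AutoConfig, AutoTokenizer

prompt = "China wants to take a victory lap over its handling of the coronavirus outbreak"
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
config = AutoConfig.from_pretrained("xlm-mlm-en-2048")

n_prompt_tokens = len(tokenizer.encode(prompt))
print("prompt tokens:", n_prompt_tokens, "max positions:", config.max_position_embeddings)
# If n_prompt_tokens + 500 exceeds the max positions, the size mismatch above
# would come from exceeding the context window rather than from the vocabulary.
```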
- `transformers` version:
- Platform: Google Colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?): no
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| 04-14-2020 22:21:14 | 04-14-2020 22:21:14 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,797 | closed | [Config, Serialization] more readable config serialization | Given the discussion in PR #3433, we want to make the serialized model config more readable.
### Problem:
E.g. `bert-base-cased` has the following config on S3:
```
{
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"type_vocab_size": 2,
"vocab_size": 28996
}
```
But when saved, all default params are saved as well (which is unnecessary). The config above is readable (imo), but once it's saved it now looks like this:
```
{
"_num_labels": 2,
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": null,
"do_sample": false,
"early_stopping": false,
"eos_token_id": null,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-12,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"min_length": 0,
"model_type": "bert",
"no_repeat_ngram_size": 0,
"num_attention_heads": 12,
"num_beams": 1,
"num_hidden_layers": 12,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"pruned_heads": {},
"repetition_penalty": 1.0,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 28996
}
```
### Solution:
We should only save the difference between the actual config and either **v1**) the model class' default config or **v2**) the generic `PretrainedConfig()` defaults (which contain most of the unnecessary default params).
This PR implements either **v1**) or **v2**) - up for discussion!
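Conceptually, the serialized dict would be computed as a diff against a reference config, roughly like this (a sketch, not the exact implementation in this PR):
```python
from transformers import BertConfig, PretrainedConfig


def to_diff_dict(config, reference_config):
    """Keep only the attributes of `config` that differ from `reference_config`."""
    config_dict = config.to_dict()
    reference_dict = reference_config.to_dict()
    return {
        key: value
        for key, value in config_dict.items()
        if key not in reference_dict or reference_dict[key] != value
    }


config = BertConfig.from_pretrained("bert-base-cased")
diff_v1 = to_diff_dict(config, BertConfig())        # v1: diff vs. the model class' defaults
diff_v2 = to_diff_dict(config, PretrainedConfig())  # v2: diff vs. the generic defaults
```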
**v1**) for `bert-base-cased` would look like this:
```
{
"architectures": [
"BertForMaskedLM"
],
"vocab_size": 28996
}
```
**v2**) for `bert-base-cased` would look like this:
```
{
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"type_vocab_size": 2,
"vocab_size": 28996
}
```
| 04-14-2020 20:50:21 | 04-14-2020 20:50:21 | I would prefer **v2**) because the parameters saved in each of the model configuration files are important for the models behavior and would be nice to see in the config (also to compare to other models' configs)<|||||>Need to look into it more but on principle this is very nice. (If Configs were `dataclass`-backed it would be even cleaner to implement – might be too big of a change though)
I agree that v2 is probably better, but will think about it some more.
For configs hosted on our S3, what should we do? Update the official ones, but not the user-uploaded ones? Or just do all of them? :)<|||||>I would update all of them by downloading, calling save_pretrained() and re-uploading. I already have a very similar script that I would only need to adapt a tiny bit<|||||>Does this impact the process for changing the default config?<|||||>Sounds good!<|||||>Ok, **v2** is now implemented. I agree with @julien-c that the `save_pretrained()` method should be kept as clean as possible and I think we can keep backward compatibility (for very special edge cases) by allowing a boolean argument to the `to_json_file()` method. <|||||>Awesome, merging this
transformers | 3,796 | closed | Calculated offsets are wrong | This is on the latest `master` (from 2020-04-13):
```Python
import transformers
text = 'A, <mask> AllenNLP sentence.'
t = transformers.AutoTokenizer.from_pretrained("roberta-base", use_fast=True, add_special_tokens=True)
x2 = t.encode_plus(text, return_offsets_mapping=True)
print(repr(t.convert_ids_to_tokens(x2['input_ids'])))
print(repr([text[start:end] for start, end in x2['offset_mapping']]))
```
This prints (with some manual alignment):
```
['<s>', 'ĠA', ',', '<mask>', 'ĠAllen', 'N', 'LP', 'Ġsentence', '.', '</s>']
['', 'A', ',', ', <mask>', ' Alle', 'n', 'NL', 'P sentenc', 'e', '']
``` | 04-14-2020 20:48:24 | 04-14-2020 20:48:24 | It's probably normal since Roberta's tokenizer is a byte-level tokenizer which splits words at the byte level (i.e. possibly smaller than the character unit).
Cc @n1t0 <|||||>There is a bug indeed, the offsets shouldn't be shifted after the `<mask>` token. I should be able to fix this.
I'm not sure I'll be able to have the right offsets for the `<mask>` token though as this one is tricky.<|||||>This is now fixed on the latest `master`, with the output being
```
['<s>', 'ĠA', ',', '<mask>', 'ĠAllen', 'N', 'LP', 'Ġsentence', '.', '</s>']
['', 'A', ',', '<mask>', 'Allen', 'N', 'LP', 'sentence', '.', '']
```
The spaces are not part of the offsets because the `trim_offsets` option is `True` by default. |
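Untested sketch in case someone wants the spaces back: if I understand correctly, the trimming can be disabled at init time (assuming the `trim_offsets` kwarg is forwarded to the fast tokenizer's post-processor):
```python
from transformers import AutoTokenizer

t = AutoTokenizer.from_pretrained("roberta-base", use_fast=True, trim_offsets=False)
enc = t.encode_plus("A, <mask> AllenNLP sentence.", return_offsets_mapping=True)
print(enc["offset_mapping"])  # offsets should now include the leading spaces
```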
transformers | 3,795 | closed | [Pipelines] Clean pipelines test and remove unnecessary code | This PR cleans up pipelines a bit:
1) Fixes non-working pipeline creation test
2) Remove unnecessary code (due to PR #3116) in pipelines as discussed with @thomwolf in PR #3413
Note: Tested on QA pipelines slow tests. | 04-14-2020 20:08:24 | 04-14-2020 20:08:24 | |
transformers | 3,794 | closed | Getting large alloc error while evaluating bert-base on NER task | # 🐛 Bug
## Information
Model I am using (Bert-base-multilingual-cased,):
Language I am using the model on (English):
The problem arises when using:
Evaluation
The tasks I am working on is:
My own custom dataset, which has the same format as the GermEval task.
Running on **Colab**.
I believe this is due to a memory error. But why should there be a memory error **while testing** when things were fine **during training**?
`max_seq_length` during training was 128 and the batch size was 8.
04/14/2020 19:06:13 - INFO - transformers.modeling_utils - loading weights file germeval-model/checkpoint-21750/pytorch_model.bin
04/14/2020 19:07:03 - INFO - __main__ - Loading features from cached file ./cached_dev_bert-base-multilingual-cased_128
04/14/2020 19:07:05 - INFO - __main__ - ***** Running evaluation *****
04/14/2020 19:07:05 - INFO - __main__ - Num examples = 22026
04/14/2020 19:07:05 - INFO - __main__ - Batch size = 2
Evaluating: 0% 3/11013 [00:01<1:07:38, 2.71it/s]tcmalloc: large alloc 1110605824 bytes == 0x3e706000 @ 0x7f207d0051e7 0x7f207a3995e1 0x7f207a3fde88 0x7f207a3fdfa3 0x7f207a49c098 0x7f207a49c8f4 0x7f207a49ca42 0x5678b3 0x5a067e 0x7f207a3e970d 0x50a8af 0x50c5b9 0x508245 0x50a080 0x50aa7d 0x50d390 0x508245 0x58958c 0x5a067e 0x7f207a3e970d 0x50a8af 0x50c5b9 0x508245 0x50a080 0x50aa7d 0x50d390 0x508245 0x50a080 0x50aa7d 0x50d390 0x508245
| 04-14-2020 19:10:38 | 04-14-2020 19:10:38 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I'm having the same issue. Training is fine but I get this error when evaluating. |
transformers | 3,793 | closed | [Bert] remove hard-coded pad token id | tiny change to remove hard coded `pad_token_id` in Bert. | 04-14-2020 18:44:44 | 04-14-2020 18:44:44 | LGTM |