repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 4,493 | closed | Use args.per_gpu_train_batch_size instead of args.train_batch_size in… | … Trainer.
It appears that this is preferred, per https://github.com/huggingface/transformers/blob/master/src/transformers/training_args.py. This also matches the calculation which is printed referring to batch size at https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L432.
As a side note, the GPT-2 example in https://github.com/huggingface/transformers/blob/master/examples/language-modeling/README.md no longer works. There is a default `per_gpu_train_batch_size=8`, which throws OOM on a Tesla V100 with 32GB RAM. I ran it successfully with `--per_gpu_train_batch_size=1`, and it used 7GB of RAM. So we probably want to add that hyperparameter to the example command. | 05-21-2020 03:15:34 | 05-21-2020 03:15:34 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4493?src=pr&el=h1) Report
> Merging [#4493](https://codecov.io/gh/huggingface/transformers/pull/4493?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/865d4d595eefc8cc9cee58fec9179bd182be0e2e&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `0.00%`.
```diff
@@ Coverage Diff @@
## master #4493 +/- ##
=======================================
Coverage 77.90% 77.91%
=======================================
Files 123 123
Lines 20472 20472
=======================================
+ Hits 15949 15950 +1
+ Misses 4523 4522 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4493?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.76% <0.00%> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4493?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4493?src=pr&el=footer). Last update [865d4d5...7ffd712](https://codecov.io/gh/huggingface/transformers/pull/4493?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Things could be clearer, but the only case where train_batch_size is different from per_gpu_train_batch_size is in `nn.DataParallel`.
And in DataParallel, your dataloader's apparent batch size will be scattered amongst the devices, so I believe the `batch_size=self.args.train_batch_size` is correct
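To illustrate the point above, here is a minimal sketch of how the effective batch size is usually derived when `nn.DataParallel` is in play. Variable names mirror `TrainingArguments`, but treat this as an illustration rather than the library's exact code:

```python
import torch

# Hedged sketch: with nn.DataParallel, the DataLoader is built with the total
# batch size, and each replica receives the per-GPU share after scattering.
per_gpu_train_batch_size = 8
n_gpu = torch.cuda.device_count()

train_batch_size = per_gpu_train_batch_size * max(1, n_gpu)
print(f"DataLoader batch size: {train_batch_size}, per GPU: {per_gpu_train_batch_size}")
```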
<|||||>(Note that DataParallel is not really recommended anymore as a way to utilize multiple GPUs, vs. torch.distributed)<|||||>Gotcha, thanks for the context. Is the user expected to pass both `--train_batch_size` and `--per_gpu_train_batch_size` together then? In `examples/run_language_modeling.py` as it stands, the `--train_batch_size` affects the true batch size, but `--per_gpu_train_batch_size` is what is printed to stdout here: https://github.com/huggingface/transformers/blob/a08652772791fdaeed6f263b1a99926ca64be5dc/src/transformers/trainer.py#L419<|||||>No, user would not pass a `--train_batch_size` β was it documented somewhere that they should?<|||||>My misunderstanding then. There are [a few instances](https://grep.app/search?q=--train_batch_size&filter[repo][0]=huggingface/transformers) through the codebase where that arg is expected, but I see that in this example it's a derived property. Thanks for the help, closing the issue. |
transformers | 4,492 | closed | Cannot load reformer-enwik8 tokenizer | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Reformer tokenizer
## To reproduce
Steps to reproduce the behavior:
1. Try to load the pretrained reformer-enwik8 tokenizer with `AutoTokenizer.from_pretrained("google/reformer-enwik8")`
This is the error I got:
```
OSError Traceback (most recent call last)
<ipython-input-51-ab9a64363cc0> in <module>
----> 1 AutoTokenizer.from_pretrained("google/reformer-enwik8")
~/.virtualenvs/sparseref/lib/python3.7/site-packages/transformers-2.9.0-py3.7.egg/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
198 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
199 else:
--> 200 return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
201
202 raise ValueError(
~/.virtualenvs/sparseref/lib/python3.7/site-packages/transformers-2.9.0-py3.7.egg/transformers/tokenization_utils.py in from_pretrained(cls, *inputs, **kwargs)
896
897 """
--> 898 return cls._from_pretrained(*inputs, **kwargs)
899
900 @classmethod
~/.virtualenvs/sparseref/lib/python3.7/site-packages/transformers-2.9.0-py3.7.egg/transformers/tokenization_utils.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1001 ", ".join(s3_models),
1002 pretrained_model_name_or_path,
-> 1003 list(cls.vocab_files_names.values()),
1004 )
1005 )
OSError: Model name 'google/reformer-enwik8' was not found in tokenizers model name list (google/reformer-crime-and-punishment). We assumed 'google/reformer-enwik8' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.
```
I tried with and without `google/`, same result. However, it did print the download progress bar. Trying to load the `crime-and-punishment` Reformer tokenizer works.
- `transformers` version: 2.9.0
- Platform: macOS
- Python version: 3.7
- PyTorch version (GPU?): 1.4.0, no GPU
- Using distributed or parallel set-up in script?: no
| 05-20-2020 23:16:46 | 05-20-2020 23:16:46 | Hi. This is not a bug but is expected: since the model works on the character level, a tokenizer is not "required". You can read more in [the model card](https://huggingface.co/google/reformer-enwik8#reformer-language-model-on-character-level-and-trained-on-enwik8) on how you can encode/decode your data.<|||||>@erickrf can you share how you got to train the "reformer" model. I'm trying to utilize the "google/reformer-enwik8" to train a Portuguese model but I just got the same error of
`Model name 'google/reformer-enwik8' was not found in tokenizers`<|||||>@bratao I answered this in my comment... Open the link that I posted and scroll down. They tell you how to do tokenisation. No need to load a tokenizer as usual. <|||||>@BramVanroy
my code is below
```shell
python examples/seq2seq/finetune_trainer.py --model_name_or_path google/reformer-enwik8 --do_train --do_eval --task translation_en_to_de --data_dir /lustre/dataset/wmt17_en_de/ --output_dir /home2/zhenggo1/checkpoint/reformer_translation --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate
```
and the bug is below, so what is the reason? Thanks!
```shell
Traceback (most recent call last):
File "examples/seq2seq/finetune_trainer.py", line 367, in <module>
main()
File "examples/seq2seq/finetune_trainer.py", line 206, in main
cache_dir=model_args.cache_dir,
File "/home2/zhenggo1/LowPrecisionInferenceTool/examples/pytorch/huggingface_transformers/src/transformers/models/auto/tokenization_auto.py", line 385, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home2/zhenggo1/LowPrecisionInferenceTool/examples/pytorch/huggingface_transformers/src/transformers/tokenization_utils_base.py", line 1760, in from_pretrained
raise EnvironmentError(msg)
OSError: Can't load tokenizer for 'google/reformer-enwik8'. Make sure that:
- 'google/reformer-enwik8' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'google/reformer-enwik8' is the correct path to a directory containing relevant tokenizer files
```
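For reference, the character-level usage mentioned earlier (no tokenizer object needed) boils down to mapping bytes to ids directly. Below is a rough sketch — the `+2` id offset and the helper names are assumptions for illustration; the model card has the authoritative encode/decode code:

```python
import torch

def encode(text: str):
    # shift raw byte values so that low ids stay free for special tokens (assumed offset)
    ids = torch.tensor([[b + 2 for b in text.encode("utf-8")]], dtype=torch.long)
    attention_mask = torch.ones_like(ids)
    return ids, attention_mask

def decode(ids: torch.Tensor) -> str:
    return "".join(chr(i - 2) for i in ids[0].tolist() if i > 1)

input_ids, attention_mask = encode("In 1965, Brooks left IBM to found")
print(decode(input_ids))  # round-trips the input string
```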
<|||||>@LeopoldACC Please post a new issue so that someone can have a look. |
transformers | 4,491 | closed | Windows: Can't find vocabulary file for MarianTokenizer | # 🐛 Bug MarianTokenizer.from_pretrained() fails in Python 3.6.4 in Windows 10
## Information
Occurs when using the example here: https://huggingface.co/transformers/model_doc/marian.html?highlight=marianmtmodel#transformers.MarianMTModel
Model I am using (Bert, XLNet ...): MarianMTModel
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [X ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X ] my own task or dataset: (give details below)
## To reproduce
Paste code from example and run:
```Python
from transformers import MarianTokenizer, MarianMTModel
from typing import List
src = 'fr' # source language
trg = 'en' # target language
sample_text = "où est l'arrêt de bus ?"
mname = f'Helsinki-NLP/opus-mt-{src}-{trg}'
model = MarianMTModel.from_pretrained(mname)
tok = MarianTokenizer.from_pretrained(mname)
batch = tok.prepare_translation_batch(src_texts=[sample_text]) # don't need tgt_text for inference
gen = model.generate(**batch) # for forward pass: model(**batch)
words: List[str] = tok.batch_decode(gen, skip_special_tokens=True) # returns "Where is the the bus stop ?"
print(words)
```
Steps to reproduce the behavior:
1. Run the example
2. Program terminates on `tok = MarianTokenizer.from_pretrained(mname)`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```Python
stdbuf was not found; communication with perl may hang due to stdio buffering.
Traceback (most recent call last):
File "C:\Program Files\Python\lib\site-packages\transformers\tokenization_utils.py", line 1055, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "C:\Program Files\Python\lib\site-packages\transformers\tokenization_marian.py", line 89, in __init__
self._setup_normalizer()
File "C:\Program Files\Python\lib\site-packages\transformers\tokenization_marian.py", line 95, in _setup_normalizer
self.punc_normalizer = MosesPunctuationNormalizer(self.source_lang)
File "C:\Program Files\Python\lib\site-packages\mosestokenizer\punctnormalizer.py", line 47, in __init__
super().__init__(argv)
File "C:\Program Files\Python\lib\site-packages\toolwrapper.py", line 64, in __init__
self.start()
File "C:\Program Files\Python\lib\site-packages\toolwrapper.py", line 108, in start
env=env,
File "C:\Program Files\Python\lib\subprocess.py", line 709, in __init__
restore_signals, start_new_session)
File "C:\Program Files\Python\lib\subprocess.py", line 997, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Development/Research/COVID-19-Misinfo2/src/translate_test_2.py", line 9, in <module>
tok = MarianTokenizer.from_pretrained(mname)
File "C:\Program Files\Python\lib\site-packages\transformers\tokenization_utils.py", line 902, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "C:\Program Files\Python\lib\site-packages\transformers\tokenization_utils.py", line 1058, in _from_pretrained
"Unable to load vocabulary from file. "
OSError: Unable to load vocabulary from file. Please check that the provided vocabulary is accessible and not corrupted.
```
## Expected behavior
prints ["Where is the the bus stop ?"]
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.9.1
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.6.4
- PyTorch version (GPU?): 1.5.0+cu101 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| 05-20-2020 22:47:34 | 05-20-2020 22:47:34 | I cannot reproduce this. This works for me (same environment except Python 3.8 which should not make a difference). Can you try again but force a re-download to overwrite potentially corrupt files?
```python
tok = MarianTokenizer.from_pretrained(mname, force_download=True)
```<|||||>Hi,
I rebased the transformers project just before running this and updated
with "pip install --upgrade ." in the root transformers directory.
Here is the code as run:
```python
from transformers import MarianTokenizer, MarianMTModel
from typing import List
src = 'fr'  # source language
trg = 'en'  # target language
sample_text = "où est l'arrêt de bus ?"
mname = f'Helsinki-NLP/opus-mt-{src}-{trg}'
model = MarianMTModel.from_pretrained(mname, force_download=True)
tok = MarianTokenizer.from_pretrained(mname, force_download=True)
# batch = tok.prepare_translation_batch(src_texts=[sample_text])  # don't need tgt_text for inference
# gen = model.generate(**batch)  # for forward pass: model(**batch)
# words: List[str] = tok.batch_decode(gen, skip_special_tokens=True)  # returns "Where is the the bus stop ?"
```
Here is the terminal output:
2020-05-22 05:45:15.204824: I
tensorflow/stream_executor/platform/default/dso_loader.cc:44]
Successfully opened dynamic library cudart64_101.dll
Downloading: 100%|██████████| 1.13k/1.13k [00:00<00:00, 568kB/s]
Downloading: 100%|██████████| 301M/301M [00:32<00:00, 9.34MB/s]
Downloading: 100%|██████████| 802k/802k [00:00<00:00, 5.85MB/s]
Downloading: 100%|██████████| 778k/778k [00:00<00:00, 5.71MB/s]
Downloading: 100%|██████████| 1.34M/1.34M [00:00<00:00, 6.69MB/s]
Downloading: 100%|██████████| 42.0/42.0 [00:00<00:00, 13.8kB/s]
stdbuf was not found; communication with perl may hang due to stdio
buffering.
Traceback (most recent call last):
File "C:\Program
Files\Python\lib\site-packages\transformers\tokenization_utils.py", line
1055, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "C:\Program
Files\Python\lib\site-packages\transformers\tokenization_marian.py",
line 89, in __init__
self._setup_normalizer()
File "C:\Program
Files\Python\lib\site-packages\transformers\tokenization_marian.py",
line 95, in _setup_normalizer
self.punc_normalizer = MosesPunctuationNormalizer(self.source_lang)
File "C:\Program
Files\Python\lib\site-packages\mosestokenizer\punctnormalizer.py", line
47, in __init__
super().__init__(argv)
File "C:\Program Files\Python\lib\site-packages\toolwrapper.py", line
64, in __init__
self.start()
File "C:\Program Files\Python\lib\site-packages\toolwrapper.py", line
108, in start
env=env,
File "C:\Program Files\Python\lib\subprocess.py", line 709, in
__init__
restore_signals, start_new_session)
File "C:\Program Files\Python\lib\subprocess.py", line 997, in
_execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file
specified
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File
"C:/Development/Research/COVID-19-Misinfo2/src/translate_test_2.py",
line 9, in <module>
tok = MarianTokenizer.from_pretrained(mname, force_download=True)
File "C:\Program
Files\Python\lib\site-packages\transformers\tokenization_utils.py", line
902, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "C:\Program
Files\Python\lib\site-packages\transformers\tokenization_utils.py", line
1058, in _from_pretrained
"Unable to load vocabulary from file. "
OSError: Unable to load vocabulary from file. Please check that the
provided vocabulary is accessible and not corrupted.
Process finished with exit code 1
I also tried this with 'Helsinki-NLP/opus-mt-ROMANCE-en' and had the
same results. I also stepped through the code in the debugger and
manually downloaded the files using my browser and pointed the
*.from_pretrained() methods to that directory. Here is the relevant code:
model_name = 'Helsinki-NLP/opus-mt-ROMANCE-en'
# see tokenizer.supported_language_codes for choices
model = MarianMTModel.from_pretrained("./models/opus-mt-ROMANCE-en/model")
#model.save_pretrained("./models/opus-mt-ROMANCE-en/model")
tokenizer = MarianTokenizer.from_pretrained("./models/opus-mt-ROMANCE-en/model")
#tokenizer.save_pretrained("./models/opus-mt-ROMANCE-en/tokenizer")
And here is the directory list. I've also attached all these files
except the pytorch.model.bin. If there is a problem with these files,
please send me the correct ones and I can try this locally
Directory:
C:\Development\Research\COVID-19-Misinfo2\src\models\opus-mt-ROMANCE-en\model
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 5/20/2020 5:52 PM 1163 config.json
-a---- 5/20/2020 5:52 PM 312086495 pytorch_model.bin
-a---- 5/20/2020 6:05 PM 800087 source.spm
-a---- 5/20/2020 6:08 PM 265 tokenizer_config.json
-a---- 5/20/2020 6:07 PM 1460304 vocab.json
This had the same effect as the remote download
2020-05-22 05:58:34.251856: I
tensorflow/stream_executor/platform/default/dso_loader.cc:44]
Successfully opened dynamic library cudart64_101.dll
dir = C:\Development\Research\COVID-19-Misinfo2\src
Traceback (most recent call last):
File
"C:/Development/Research/COVID-19-Misinfo2/src/translate_test_1.py",
line 15, in <module>
tokenizer =
MarianTokenizer.from_pretrained("./models/opus-mt-ROMANCE-en/model")
File "C:\Program
Files\Python\lib\site-packages\transformers\tokenization_utils.py", line
902, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "C:\Program
Files\Python\lib\site-packages\transformers\tokenization_utils.py", line
1055, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "C:\Program
Files\Python\lib\site-packages\transformers\tokenization_marian.py",
line 84, in __init__
self.spm_target = load_spm(target_spm)
File "C:\Program
Files\Python\lib\site-packages\transformers\tokenization_marian.py",
line 236, in load_spm
spm.Load(path)
File "C:\Program Files\Python\lib\site-packages\sentencepiece.py",
line 118, in Load
return _sentencepiece.SentencePieceProcessor_Load(self, filename)
TypeError: not a string
Process finished with exit code 1
I have downloaded and used the GPT-2 model without these problems using
very similar code
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
Hope this helps,
Phil Feldman
---
On 2020-05-22 05:34, Bram Vanroy wrote:
> I cannot reproduce this. This works for me (same environment except Python 3.8 which should not make a difference). Can you try again but force_overwrite potentially corrupt files?
>
> tok = MarianTokenizer.from_pretrained(mname, force_download=True)
<|||||>Hi @pgfeldman, I initially faced the same error but was able to resolve it by downloading the model to a specified location using the below steps
```
cache_dir = "/home/transformers_files/"
cache_dir_models = cache_dir + "default_models/"
cache_dir_tokenizers = cache_dir + "tokenizers/"
model_name = 'Helsinki-NLP/opus-mt-ROMANCE-en'
tokenizer = MarianTokenizer.from_pretrained(model_name, cache_dir=cache_dir_tokenizers)
model = MarianMTModel.from_pretrained(model_name, cache_dir=cache_dir_models)
```<|||||>Hi! I had the same issue after installing the mosestokenizer (as recommended) on Windows with Python 3.6. After I uninstalled it, it seemed to work fine! I think more investigation is needed there.<|||||>@BramVanroy did it work for you on windows? I also can't reproduce.<|||||>> @BramVanroy did it work for you on windows? I also can't reproduce.
I still cannot reproduce this. I tried uninstall/reinstalling mosestokenizer and it works in both cases.
For everyone having problems, can you run the following and post its output here so that we can find similarities? @jpcorb20 @SAswinGiridhar @pgfeldman
**This requires you to be on the latest master branch (on Windows at least) so install from source!**
```bash
transformers-cli env
```<|||||>I deleted and re-installed transformers and installed from source
Copy-and-paste the text below in your GitHub issue and FILL OUT the two
last points.
- `transformers` version: 2.11.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.6.4
- PyTorch version (GPU?): 1.5.0+cu101 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
I'm also attaching my package list
[deleted by moderator for length]<|||||>Hello, here's mine :
- `transformers` version: 2.11.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.6.7
- PyTorch version (GPU?): 1.5.0+cu101 (True)
- Tensorflow version (GPU?): 2.0.0 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
<|||||>Does
```python
tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
tokenizer.batch_encode_plus(['stuff'])
```
work?<|||||>Yes!
Here's the code as run:
```python
from transformers import XLMRobertaTokenizer
tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
tokenizer.batch_encode_plus(['stuff'])
print("done")
```
Here's the output
"C:\Program Files\Python\python.exe"
C:/Users/Phil/AppData/Roaming/JetBrains/IntelliJIdea2020.1/scratches/transformers_error_2.py
2020-06-08 17:44:17.768004: I
tensorflow/stream_executor/platform/default/dso_loader.cc:44]
Successfully opened dynamic library cudart64_101.dll
Downloading: 100%|██████████| 5.07M/5.07M [00:00<00:00, 9.57MB/s]
done
Process finished with exit code 0
Hope this helps,
Phil
---
On 2020-06-08 17:13, Sam Shleifer wrote:
> Does
>
> tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
> tokenizer.batch_encode_plus(['stuff'])
>
> work?
<|||||>Working for me too<|||||>Can anyone help with this issue: #5040 ?<|||||>> Can anyone help with this issue: #5040 ?
Please don't spam other topics like this in the future. We do our best to help where and when we can. Posting duplicate comments on different topics adds more noise than it is helpful.<|||||>I think this bug may be fixed on master, but I can't verify because I don't have windows. Could 1 person check and post their results? Remember to be up to date with master, your git log should contain `3d495c61e Sam Shleifer: Fix marian tokenizer save pretrained (#5043)`<|||||>Doesn't work on my PC, but I changed the library for the moses tokenizer in _setup_normalizer and it works:
```python
    def _setup_normalizer(self):
        try:
            from sacremoses import MosesPunctNormalizer
            self.punc_normalizer = MosesPunctNormalizer(lang=self.source_lang).normalize
        except ImportError:
            warnings.warn("Recommended: pip install sacremoses")
            self.punc_normalizer = lambda x: x
```<|||||>Hi Sam,
I just rebased, verified the gitlog, and installed using "pip install
--upgrade ." I'm attaching the console record of the install.
I still get the same error(s)
2020-06-17 05:40:43.980254: I
tensorflow/stream_executor/platform/default/dso_loader.cc:44]
Successfully opened dynamic library cudart64_101.dll
stdbuf was not found; communication with perl may hang due to stdio
buffering.
Traceback (most recent call last):
File "C:\Program
Files\Python\lib\site-packages\transformers\tokenization_utils_base.py",
line 1161, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "C:\Program
Files\Python\lib\site-packages\transformers\tokenization_marian.py",
line 81, in __init__
self._setup_normalizer()
File "C:\Program
Files\Python\lib\site-packages\transformers\tokenization_marian.py",
line 87, in _setup_normalizer
self.punc_normalizer = MosesPunctuationNormalizer(self.source_lang)
File "C:\Program
Files\Python\lib\site-packages\mosestokenizer\punctnormalizer.py", line
47, in __init__
super().__init__(argv)
File "C:\Program Files\Python\lib\site-packages\toolwrapper.py", line
64, in __init__
self.start()
File "C:\Program Files\Python\lib\site-packages\toolwrapper.py", line
108, in start
env=env,
File "C:\Program Files\Python\lib\subprocess.py", line 709, in
__init__
restore_signals, start_new_session)
File "C:\Program Files\Python\lib\subprocess.py", line 997, in
_execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file
specified
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File
"C:/Users/Phil/AppData/Roaming/JetBrains/IntelliJIdea2020.1/scratches/transformers_error.py",
line 9, in <module>
tok = MarianTokenizer.from_pretrained(mname)
File "C:\Program
Files\Python\lib\site-packages\transformers\tokenization_utils_base.py",
line 1008, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "C:\Program
Files\Python\lib\site-packages\transformers\tokenization_utils_base.py",
line 1164, in _from_pretrained
"Unable to load vocabulary from file. "
OSError: Unable to load vocabulary from file. Please check that the
provided vocabulary is accessible and not corrupted.
Process finished with exit code 1
Hope this helps
Phil
---
On 2020-06-16 09:50, Sam Shleifer wrote:
> I think this bug may be fixed on master, but I can't verify because I don't have windows. Could 1 person check and post their results? Remember to be up to date with master, your git log should contain 3d495c61e Sam Shleifer: Fix marian tokenizer save pretrained (#5043) - (HEAD -> master, upstream/master) (2 minutes ago)
<|||||>Just upgraded to version 3.0, and everything is working! |
transformers | 4,490 | closed | How to load a pruned Albert model with from_pretrained()? | # β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I pruned Albert during the fine-tuning phase. I was unable to load the pruned model after saving it. I tried using:
```
output_model_file = os.path.join(args.output_dir, "pytorch_model.bin")
model_state_dict = torch.load(output_model_file)
model = model_class.from_pretrained(args.output_dir,state_dict=model_state_dict)
```
but still got the same error:
```
File "run_glue.py", line 526, in main
model = model_class.from_pretrained(args.output_dir,state_dict=model_state_dict)
File "/home/user/.local/lib/python3.7/site-packages/transformers/modeling_utils.py", line 471, in from_pretrained
model.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for AlbertForSequenceClassification:
size mismatch for albert.encoder.albert_layer_groups.0.albert_layers.0.attention.query.weight: copying a param with shape torch.Size([64, 768]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for albert.encoder.albert_layer_groups.0.albert_layers.0.attention.query.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for albert.encoder.albert_layer_groups.0.albert_layers.0.attention.key.weight: copying a param with shape torch.Size([64, 768]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for albert.encoder.albert_layer_groups.0.albert_layers.0.attention.key.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for albert.encoder.albert_layer_groups.0.albert_layers.0.attention.value.weight: copying a param with shape torch.Size([64, 768]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for albert.encoder.albert_layer_groups.0.albert_layers.0.attention.value.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for albert.encoder.albert_layer_groups.0.albert_layers.0.attention.dense.weight: copying a param with shape torch.Size([768, 64]) from checkpoint, the shape in current model is torch.Size([768, 768]).
```
transformers version == 2.2.1
torch-1.4.0
Anybody can help?
| 05-20-2020 22:46:33 | 05-20-2020 22:46:33 | Now I'm using head_mask instead of prune_heads. So, I didn't actually prune heads. |
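To make the head_mask workaround above concrete, here is a hedged sketch (the checkpoint name and the heads being masked are arbitrary; recent `transformers` versions accept a `(num_layers, num_heads)` mask where 0 silences a head):

```python
import torch
from transformers import AlbertForSequenceClassification, AlbertTokenizer

model = AlbertForSequenceClassification.from_pretrained("albert-base-v2")
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")

# 1.0 keeps a head, 0.0 silences it -- no weight shapes change, so
# from_pretrained keeps working on the saved checkpoint.
head_mask = torch.ones(model.config.num_hidden_layers, model.config.num_attention_heads)
head_mask[0, :4] = 0.0  # e.g. silence the first four heads of layer 0

inputs = tokenizer("An example sentence.", return_tensors="pt")
outputs = model(**inputs, head_mask=head_mask)
```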
transformers | 4,489 | closed | bugfix: pass on tokenizer to pipeline in load_graph_from_args | I think I found a small bug in the `load_graph_from_args` function in `convert_graph_to_onnx.py` as it accepts a tokenizer as input but doesn't pass it on to the pipeline inside the function.
Love the library 🤗 | 05-20-2020 18:01:52 | 05-20-2020 18:01:52 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4489?src=pr&el=h1) Report
> Merging [#4489](https://codecov.io/gh/huggingface/transformers/pull/4489?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/14cb5b35faeda7881341656aacf89d12a8a7e07b&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #4489 +/- ##
==========================================
- Coverage 78.04% 78.03% -0.01%
==========================================
Files 123 123
Lines 20477 20477
==========================================
- Hits 15981 15980 -1
- Misses 4496 4497 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4489?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4489?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4489?src=pr&el=footer). Last update [14cb5b3...a5ce320](https://codecov.io/gh/huggingface/transformers/pull/4489?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Indeed ! Thanks for spotting this @RensDimmendaal |
transformers | 4,488 | closed | Make changes to german-bert vocab file more prominent | We have been approached by researchers because the expected behavior of their bert-base-german-cased models changed without code modifications.
- So we wanted to make the changes to the vocab more prominent in the model card
- and also support a solution where people can easily use the old version through https://huggingface.co/deepset/bert-base-german-cased-oldvocab | 05-20-2020 16:01:02 | 05-20-2020 16:01:02 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4488?src=pr&el=h1) Report
> Merging [#4488](https://codecov.io/gh/huggingface/transformers/pull/4488?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6dc52c78d8f1f96ffd9b8f8178e142b7d4a77d14&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #4488 +/- ##
==========================================
+ Coverage 78.02% 78.04% +0.01%
==========================================
Files 123 123
Lines 20477 20477
==========================================
+ Hits 15978 15982 +4
+ Misses 4499 4495 -4
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4488?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4488/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.76% <0.00%> (+0.23%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4488/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.83% <0.00%> (+0.32%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4488/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4488?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4488?src=pr&el=footer). Last update [6dc52c7...c4a85ea](https://codecov.io/gh/huggingface/transformers/pull/4488?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Great, thanks for adding this notice |
transformers | 4,487 | closed | Fix slow gpu tests lysandre | Fixes three tests of the slow + gpu suite cc @sshleifer | 05-20-2020 15:28:02 | 05-20-2020 15:28:02 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4487?src=pr&el=h1) Report
> Merging [#4487](https://codecov.io/gh/huggingface/transformers/pull/4487?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6dc52c78d8f1f96ffd9b8f8178e142b7d4a77d14&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #4487 +/- ##
=======================================
Coverage 78.02% 78.03%
=======================================
Files 123 123
Lines 20477 20477
=======================================
+ Hits 15978 15980 +2
+ Misses 4499 4497 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4487?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4487/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4487/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.76% <0.00%> (+0.23%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4487?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4487?src=pr&el=footer). Last update [6dc52c7...2260280](https://codecov.io/gh/huggingface/transformers/pull/4487?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,486 | closed | tokenizer.vocab has not changed after using add_tokens | # β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
When I use add_tokens, I have the following problem:
```python3
# len(tokenizer) == 30522
tokens_dict = ['[HL]']
num_added_toks = tokenizer.add_tokens(tokens_dict)
# len(tokenizer) == 30523
# But tokenizer.vocab_size == 30522
```
Should I change the dictionary myself?
| 05-20-2020 14:29:23 | 05-20-2020 14:29:23 | This is the expected behaviour: `len(tokenizer)` shows you the actual size (including the added tokens), whereas `.vocab_size` tells you the original size.
https://github.com/huggingface/transformers/blob/a08652772791fdaeed6f263b1a99926ca64be5dc/src/transformers/tokenization_utils.py#L2285-L2290
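A small illustration of the difference, using the numbers from this issue (`bert-base-uncased` assumed):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.vocab_size)     # 30522 -- original vocabulary only
tokenizer.add_tokens(["[HL]"])  # returns 1: one token was actually added
print(tokenizer.vocab_size)     # still 30522
print(len(tokenizer))           # 30523 -- includes the added token
```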
PS: don't forget to update your model's embeddings!
```python
model.resize_token_embeddings(len(tokenizer))
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I find it confusing that the vocab_size doesn't get modified. Also, the Hugging Face documentation describes `tokenizer.add_tokens` as follows:
> Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to it with indices starting from length of the current vocabulary. |
transformers | 4,485 | closed | Can't find vocabulary file or is corrupted for MarianTokenizer | # π Bug
## Information
Model I am using MarianMT:
The problem arises when using the tokenizer with from_pretrained
## To reproduce
```python
from transformers import MarianTokenizer, MarianMTModel
src = 'fr'  # source language
trg = 'en'  # target language
sample_text = "où est l'arrêt de bus ?"
mname = f'Helsinki-NLP/opus-mt-{src}-{trg}'
model = MarianMTModel.from_pretrained(mname)
tok = MarianTokenizer.from_pretrained(mname)
```
```
stdbuf was not found; communication with perl may hang due to stdio buffering.
FileNotFoundError Traceback (most recent call last)
~\Anaconda3\envs\pytorch\lib\site-packages\transformers\tokenization_utils.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1054 try:
-> 1055 tokenizer = cls(*init_inputs, **init_kwargs)
1056 except OSError:
~\Anaconda3\envs\pytorch\lib\site-packages\transformers\tokenization_marian.py in __init__(self, vocab, source_spm, target_spm, source_lang, target_lang, unk_token, eos_token, pad_token, max_len)
88
---> 89 self.punc_normalizer = MosesPunctuationNormalizer(source_lang)
90 except ImportError:
~\Anaconda3\envs\pytorch\lib\site-packages\mosestokenizer\punctnormalizer.py in __init__(self, lang)
46 argv = ["perl", program, "-b", "-l", self.lang]
---> 47 super().__init__(argv)
48
~\Anaconda3\envs\pytorch\lib\site-packages\toolwrapper.py in __init__(self, argv, encoding, start, cwd, stdbuf, stderr, env)
63 if start:
---> 64 self.start()
65
~\Anaconda3\envs\pytorch\lib\site-packages\toolwrapper.py in start(self)
107 cwd=self.cwd,
--> 108 env=env,
109 )
~\Anaconda3\envs\pytorch\lib\subprocess.py in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, encoding, errors, text)
799 errread, errwrite,
--> 800 restore_signals, start_new_session)
801 except:
~\Anaconda3\envs\pytorch\lib\subprocess.py in _execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, unused_restore_signals, unused_start_new_session)
1206 os.fspath(cwd) if cwd is not None else None,
-> 1207 startupinfo)
1208 finally:
FileNotFoundError: [WinError 2] The system cannot find the file specified
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-10-56dab20251f1> in <module>
6 mname = f'Helsinki-NLP/opus-mt-{src}-{trg}'
7 model = MarianMTModel.from_pretrained(mname)
----> 8 tok = MarianTokenizer.from_pretrained(mname)
~\Anaconda3\envs\pytorch\lib\site-packages\transformers\tokenization_utils.py in from_pretrained(cls, *inputs, **kwargs)
900
901 """
--> 902 return cls._from_pretrained(*inputs, **kwargs)
903
904 @classmethod
~\Anaconda3\envs\pytorch\lib\site-packages\transformers\tokenization_utils.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1056 except OSError:
1057 raise OSError(
-> 1058 "Unable to load vocabulary from file. "
1059 "Please check that the provided vocabulary is accessible and not corrupted."
1060 )
OSError: Unable to load vocabulary from file. Please check that the provided vocabulary is accessible and not corrupted.
```
----------------------------------------
I'm working on a windows 10 environment
| 05-20-2020 12:42:24 | 05-20-2020 12:42:24 | Closing in favour of the better formulated question here: https://github.com/huggingface/transformers/issues/4491<|||||>Can anyone help with this issue: #5040 ? |
transformers | 4,484 | closed | Bug using Roberta models in QA Transformers pipeline. | # π Bug
Hello, I cant use any roberta model with pipeline('question-answering'), someone can help me in how to fix this issue?
OBS=This error appears just when I use Roberta models.
ERROR:

| 05-20-2020 11:53:48 | 05-20-2020 11:53:48 | Hi, could you please post a code sample and a textual error, rather than an image? Thanks.<|||||>@LysandreJik yes, its very simple my code I just trying to run a transformers example.
```python
if __name__ == '__main__':
    import ipywidgets as widgets
    from transformers.pipelines import pipeline
    from transformers.modeling_auto import AutoModelForQuestionAnswering
    from transformers.tokenization_auto import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
    model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")
    nlp_qa = pipeline('question-answering', model=model, tokenizer=tokenizer, device=0)
    X = nlp_qa(context="text document.txt", question='What is this project?')
    print(X)
```
And running with this albert or any other albert I got this error:
```
File "c:/Users/tioga/Desktop/Tranformers/transformers_test.py", line 44, in <module>
X = nlp_qa(context=st, question='What is this project?')
File "C:\Python\lib\site-packages\transformers\pipelines.py", line 1042, in __call__
for s, e, score in zip(starts, ends, scores)
File "C:\Python\lib\site-packages\transformers\pipelines.py", line 1042, in <listcomp>
for s, e, score in zip(starts, ends, scores)
KeyError: 0
```
<|||||>I can reproduce this error, but it is working with other models for me. Pinging @tholor who might know what is going on.<|||||>hi guys, anyone managed to understand what the above issue is? I am facing the same issue.
Thanks.<|||||>I believe this was fixed in #4049, which is available in the latest release `v2.10.0`. What are your installed `transformers` versions?<|||||>@LysandreJik I was using 2.7.0, but I still get the same error using 2.10.0<|||||>Using the exact code sample mentioned above? Are you using different code?<|||||>I have the exact issue with one of my Roberta models.. But I tried exact code now
<img width="827" alt="Screen Shot 2020-05-26 at 6 38 24 AM" src="https://user-images.githubusercontent.com/3698879/82908070-50b28980-9f1c-11ea-8a12-ff70f862c46b.png">
<|||||>It's hard for me to test if you give an image. Can you paste the code? If you already have `transformers==2.7.0` installed, your `!pip install transformers==2.10.0` won't work. You need to add the `--upgrade` or `-U` flag.
Can you add
```py
from transformers import __version__
print(__version__)
```
just to make sure?<|||||>@LysandreJik works for me. Thank you.<|||||>Glad I could help!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hi,
With transformers 2.4.0 it works without the error, but with 3.0.2 I get the same error!
So when the context has the answer in it, everything is fine; when it does not, I get the same error.
Example:
```
from transformers.pipelines import pipeline
name="ktrapeznikov/albert-xlarge-v2-squad-v2"
nlp=pipeline('question-answering',model=name,tokenizer=name,device=-1)
```
This example won't cause any errors and I get the right answer:
```
qa_input = {'question': 'Is the company listed on any stock exchange?', 'context': 'Roche Corporate Executive Committee on 31 December 2019. We are dedicated to long-term success. Roche is listed on New York stock exchange.'}
qa_response = nlp(qa_input)
```
This will cause the error:
```
qa_input = {'question': 'Is the company listed on any stock exchange?', 'context': 'Roche Corporate Executive Committee on 31 December 2019. We are dedicated to long-term success.'}
qa_response = nlp(qa_input)
```
Can you verify that it is not working with 3.0.2?
Do you have any solutions, or should I just use older versions for now?
Thanks!
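For contexts that genuinely do not contain the answer (SQuAD-v2 style), the question-answering pipeline has a `handle_impossible_answer` flag; whether it avoids this particular 3.0.2 error is untested here (an assumption on my side), but it may be worth a try:

```python
qa_input = {
    "question": "Is the company listed on any stock exchange?",
    "context": "Roche Corporate Executive Committee on 31 December 2019. We are dedicated to long-term success.",
}
# hedged: lets the pipeline return an "empty" answer instead of failing
qa_response = nlp(qa_input, handle_impossible_answer=True)
```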
|
transformers | 4,483 | open | Trying to add support for GPT2 as decoder in EncoderDecoder model | # π Feature request
Hi,
I am trying to add the option of using GPT2 as the decoder in the EncoderDecoder model, which only support
## Motivation
For a generation problem, it usually better to use GPT2 as the decoder, over BERT.
## Your contribution
I've made the following changes in `modeling_gpt2.py` file:
- Added crossattention layer if the model is a decoder, to the `Block` class:
```python
class Block(nn.Module):
def __init__(self, n_ctx, config, scale=False):
super().__init__()
nx = config.n_embd
self.ln_1 = nn.LayerNorm(nx, eps=config.layer_norm_epsilon)
self.attn = Attention(nx, n_ctx, config, scale)
self.ln_2 = nn.LayerNorm(nx, eps=config.layer_norm_epsilon)
self.mlp = MLP(4 * nx, config)
self.is_decoder = config.is_decoder
if self.is_decoder:
self.crossattention = Attention(nx, n_ctx, config, scale)
...
def forward(self, x, layer_past=None, attention_mask=None, head_mask=None, use_cache=False, encoder_hidden_states=None,
encoder_attention_mask=None):
output_attn = self.attn(
self.ln_1(x),
layer_past=layer_past,
attention_mask=attention_mask,
head_mask=head_mask,
use_cache=use_cache,
)
a = output_attn[0] # output_attn: a, present, (attentions)
outputs = []
if self.is_decoder and encoder_hidden_states is not None:
cross_attention_outputs = self.crossattention(
a, layer_past, attention_mask, head_mask, encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask
)
a = cross_attention_outputs[0]
outputs = outputs + cross_attention_outputs[1:] # add cross attentions if we output attention weights
x = x + a
m = self.mlp(self.ln_2(x))
x = x + m
outputs = [x] + output_attn[1:] + outputs
return outputs # x, present, (attentions)
```
- Added 3 Linear layers instead of the Conv1d layer:
```python
class Attention(nn.Module):
def __init__(self, nx, n_ctx, config, scale=False):
...
# self.c_attn = Conv1D(n_state * 3, nx)
self.query = nn.Linear(n_state, nx)
self.key = nn.Linear(n_state, nx)
self.value = nn.Linear(n_state, nx)
...
```
- Added `encoder_attention_mask` and `encoder_hidden_states` to the forward function of the `Attention` class, and using them for the key and the value if they are provided:
```python
def forward(self, x, layer_past=None, attention_mask=None, head_mask=None, use_cache=False, encoder_hidden_states=None,
encoder_attention_mask=None):
query = self.query(x)
if encoder_hidden_states is not None:
key = self.key(encoder_hidden_states)
value = self.value(encoder_hidden_states)
attention_mask = encoder_attention_mask
else:
key = self.key(x)
value = self.value(x)
query = self.split_heads(query)
key = self.split_heads(key, k=True)
value = self.split_heads(value)
...
```
- Added the `encoder_attention_mask` and `encoder_hidden_states` arguments to the `GPT2Model` forward function, and processed `encoder_attention_mask` same as attention_mask:
```python
class GPT2Model(GPT2PreTrainedModel):
...
def forward(
self,
input_ids=None,
past=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
use_cache=True,
encoder_hidden_states=None,
encoder_attention_mask=None,
):
...
# Encoder attention mask. (same action as for regular attention mask)
if encoder_attention_mask is not None:
assert batch_size > 0, "batch_size has to be defined and > 0"
encoder_attention_mask = encoder_attention_mask.view(batch_size, -1)
encoder_attention_mask = encoder_attention_mask.unsqueeze(1).unsqueeze(2)
encoder_attention_mask = encoder_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility
encoder_attention_mask = (1.0 - encoder_attention_mask) * -10000.0
...
for i, (block, layer_past) in enumerate(zip(self.h, past)):
if self.output_hidden_states:
all_hidden_states = all_hidden_states + (hidden_states.view(*output_shape),)
outputs = block(
hidden_states,
layer_past=layer_past,
attention_mask=attention_mask,
head_mask=head_mask[i],
use_cache=use_cache,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
)
...
```
- Added the `encoder_attention_mask` and `encoder_hidden_states` arguments to the `GPT2LMHeadModel` forward function, as well as `lm_labels` and `masked_lm_labels` for EncoderDecoder model compatibility (probably it's better to use `GPT2DoubleHeadsModel`):
```python
class GPT2LMHeadModel(GPT2PreTrainedModel):
...
def forward(
self,
input_ids=None,
past=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
use_cache=True,
lm_labels=None,
masked_lm_labels=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
):
...
if lm_labels is not None:
if labels is not None:
raise ValueError("You cannot specify both labels and lm_labels at the same time")
labels = lm_labels
transformer_outputs = self.transformer(
input_ids,
past=past,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
use_cache=use_cache,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
)
...
```
My biggest concern is with the second bullet, and I wanted to ask you if this implementation seems right (for now it's look like I am able to train and test an EncoderDecoder with BERT2GPT architecture).
Of course that if needed, I can provide the full code to all of my changes, but all of my changes is listed above.
Most (if not all) of the code I've add is adapted from huggingface `modeling_bert.py`file, so all of the credit goes to them.
Thanks | 05-20-2020 11:24:44 | 05-20-2020 11:24:44 | @dimi1357 out of curiosity, what does training this look like?<|||||>> @dimi1357 out of curiosity, what does training this look like?
This is my training loop:
```python
x, encoder_attention_mask, y, decoder_attention_mask, _ = batch
x = x.to(self.device)
y = y.to(self.device)
encoder_attention_mask = encoder_attention_mask.to(self.device)
decoder_attention_mask = decoder_attention_mask.to(self.device)
model_kwargs = {
"attention_mask": encoder_attention_mask,
"decoder_attention_mask": decoder_attention_mask,
"lm_labels": y
}
self.optimizer.zero_grad()
outputs = self.model(input_ids=x, decoder_input_ids=y, **model_kwargs)
loss = outputs[0]
loss.backward()
self.optimizer.step()
if self.scheduler is not None:
self.scheduler.step()
```
and I create the model this way:
```python
config_decoder = AutoConfig.from_pretrained(decoder_model_name, is_decoder=True)
config_encoder = AutoConfig.from_pretrained(encoder_model_name, is_decoder=False)
config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
res_model = EncoderDecoderModel(config=config)
```<|||||>@dimi1357 Did you finally make it work? Can you provide me the "full changes" in some way? I am also interested in using the GPT2 model as decoder.<|||||>Thanks for the Feature request and the in-detail code! I will think a bit more about how to implement this and get back to you!<|||||>> Thanks for the Feature request and the in-detail code! I will think a bit more about how to implement this and get back to you!
I forgot to add the change I've made to `Block` class forward function (I've also edited the issue):
```python
def forward(self, x, layer_past=None, attention_mask=None, head_mask=None, use_cache=False, encoder_hidden_states=None,
encoder_attention_mask=None):
output_attn = self.attn(
self.ln_1(x),
layer_past=layer_past,
attention_mask=attention_mask,
head_mask=head_mask,
use_cache=use_cache,
)
a = output_attn[0] # output_attn: a, present, (attentions)
outputs = []
if self.is_decoder and encoder_hidden_states is not None:
cross_attention_outputs = self.crossattention(
a, layer_past, attention_mask, head_mask, encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask
)
a = cross_attention_outputs[0]
outputs = outputs + cross_attention_outputs[1:] # add cross attentions if we output attention weights
x = x + a
m = self.mlp(self.ln_2(x))
x = x + m
outputs = [x] + output_attn[1:] + outputs
return outputs # x, present, (attentions)
```<|||||>> @dimi1357 Did you finally make it work? Can you provide me the "full changes" in some way? I am also interested in using the GPT2 model as decoder.
You can add the code above to where you've installed the transformers package, but I'm still not sure that this implementation is correct, so I suggest you wait for an update from huggingface team if this is okay.<|||||>Hey @dimi1357 . So I think the Encoder Decoder roadmap is as follows:
- In ~2 weeks, we will open-source a clean notebook showing how a `Bert2Bert` model can be fine-tuned
- After that, we will take a deeper look into hooking `GPT2` into the `EncoderDecoder` framework.
I will keep your code sample here in mind for this :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>> Hey @dimi1357 . So I think the Encoder Decoder roadmap is as follows:
>
> * In ~2 weeks, we will open-source a clean notebook showing how a `Bert2Bert` model can be fine-tuned
> * After that, we will take a deeper look into hooking `GPT2` into the `EncoderDecoder` framework.
>
> I will keep your code sample here in mind for this :-)
Hi,
Is there any updates regarding to BERT2GPT implementation.
Thanks!<|||||>Hey, I will take a look at BERTGPT2 encoder-decoder probably on Monday next week<|||||>@patrickvonplaten Can you please share a work in progress notebook/colab, or some code. I am willing to help with tests and datasets, in order to improve the BERT2GPT2 model. Thank you :D<|||||>Will finish the PR tomorrow then it should be pretty easy to do BERT2GPT2.<|||||>Hi @patrickvonplaten . I've used your latest commit to train BERT2GPT2 using your BERT2BERT training tutorial. It was straight forward, I only had to replace the "bert" from decoder with "gpt2". The training worked, but at inference time there was a code error in `prepare_inputs_for_generation` at line 299:
> /transformers/modeling_encoder_decoder.py
> 297 # first step
> 298 if type(past) is tuple:
> 299 encoder_outputs, _ = past <----
> 300 else:
> 301 encoder_outputs = (past,)
>
>
> ValueError: too many values to unpack (expected 2)
I do not know if the model requires a different evaluation approach. <|||||>> Will finish the PR tomorrow then it should be pretty easy to do BERT2GPT2.
Thanks for the implementation, I'm going to test it now.<|||||>GPT2 is added and results on summarization look promising. Check out this model (Bert2GPT2 trained on CNN/Daily Mail) including train and eval script: https://huggingface.co/patrickvonplaten/bert2gpt2-cnn_dailymail-fp16 .<|||||>Hi @patrickvonplaten, I used this model card to train on my custom dataset, but again the TypeError `forward() got an unexpected keyword argument 'encoder_hidden_states'` is thrown back;
here is my code
```
import nlp
import logging
from transformers import BertTokenizer, GPT2Tokenizer, EncoderDecoderModel, Trainer, TrainingArguments
logging.basicConfig(level=logging.INFO)
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "gpt2")
# cache is currently not supported by EncoderDecoder framework
model.decoder.config.use_cache = False
bert_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# CLS token will work as BOS token
bert_tokenizer.bos_token = bert_tokenizer.cls_token
# SEP token will work as EOS token
bert_tokenizer.eos_token = bert_tokenizer.sep_token
# make sure GPT2 appends EOS in begin and end
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
outputs = [self.bos_token_id] + token_ids_0 + [self.eos_token_id]
return outputs
GPT2Tokenizer.build_inputs_with_special_tokens = build_inputs_with_special_tokens
gpt2_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# set pad_token_id to unk_token_id -> be careful here as unk_token_id == eos_token_id == bos_token_id
gpt2_tokenizer.pad_token = gpt2_tokenizer.unk_token
# set decoding params
model.config.decoder_start_token_id = gpt2_tokenizer.bos_token_id
model.config.eos_token_id = gpt2_tokenizer.eos_token_id
model.config.max_length = 142
model.config.min_length = 56
model.config.no_repeat_ngram_size = 3
model.early_stopping = True
model.length_penalty = 2.0
model.num_beams = 4
# load train and validation data
train_dataset = nlp.load_dataset('csv', data_files='data.csv',split='train[:80%]')
val_dataset = nlp.load_dataset('csv', data_files='data.csv',split='train[80%:]')
# load rouge for validation
rouge = nlp.load_metric("rouge", experiment_id=1)
encoder_length = 512
decoder_length = 128
batch_size = 16
# map data correctly
def map_to_encoder_decoder_inputs(batch): # Tokenizer will automatically set [BOS] <text> [EOS]
# use bert tokenizer here for encoder
inputs = bert_tokenizer.encode_plus(batch["Patient"], padding="max_length", truncation=True, max_length=encoder_length)
# force summarization <= 128
outputs = gpt2_tokenizer.encode_plus(batch["Doctor"], padding="max_length", truncation=True, max_length=decoder_length)
batch["input_ids"] = inputs.input_ids
batch["attention_mask"] = inputs.attention_mask
batch["decoder_input_ids"] = outputs.input_ids
batch["labels"] = outputs.input_ids.copy()
batch["decoder_attention_mask"] = outputs.attention_mask
# complicated list comprehension here because pad_token_id alone is not good enough to know whether label should be excluded or not
batch["labels"] = [
[-100 if mask == 0 else token for mask, token in mask_and_tokens] for mask_and_tokens in [zip(masks, labels) for masks, labels in zip(batch["decoder_attention_mask"], batch["labels"])]
]
assert all([len(x) == encoder_length for x in inputs.input_ids])
assert all([len(x) == decoder_length for x in outputs.input_ids])
return batch
def compute_metrics(pred):
labels_ids = pred.label_ids
pred_ids = pred.predictions
# all unnecessary tokens are removed
pred_str = gpt2_tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
labels_ids[labels_ids == -100] = gpt2_tokenizer.eos_token_id
label_str = gpt2_tokenizer.batch_decode(labels_ids, skip_special_tokens=True)
rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid
return {
"rouge2_precision": round(rouge_output.precision, 4),
"rouge2_recall": round(rouge_output.recall, 4),
"rouge2_fmeasure": round(rouge_output.fmeasure, 4),
}
# make train dataset ready
train_dataset = train_dataset.map(
map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["Patient", "Doctor"],
)
train_dataset.set_format(
type="torch", columns=["input_ids", "attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
)
# same for validation dataset
val_dataset = val_dataset.map(
map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["Patient", "Doctor"],
)
val_dataset.set_format(
type="torch", columns=["input_ids", "attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
)
# set training arguments - these params are not really tuned, feel free to change
training_args = TrainingArguments(
output_dir="./ambi",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
evaluate_during_training=True,
do_train=True,
do_eval=True,
logging_steps=1000,
save_steps=1000,
eval_steps=1000,
overwrite_output_dir=True,
warmup_steps=2000,
save_total_limit=10,
fp16=True,
)
# instantiate trainer
trainer = Trainer(
model=model,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_dataset,
eval_dataset=val_dataset,
)
# start training
trainer.train()
```
If you look carefully you can see that an argument is missing from the `TrainingArguments` module: I always get an error about `predict_from_generate` being passed. I tried finding that attribute in [`training_args.py`](https://github.com/huggingface/transformers/blob/master/src/transformers/training_args.py), but it seems there is no such attribute available in it. Please clarify which version you are using; if it is above 2.11, please clarify why the above code is throwing this error.<|||||>You need to switch to this branch: https://github.com/huggingface/transformers/tree/more_general_trainer_metric to make the training work. I am trying to integrate this branch into master soon :-) <|||||>Thanks for letting me know.<|||||>Sorry to ask a question after a long period of time :-). I am still not very clear about the effect of the **encoder attention mask** in GPT2.
I understand that it is used only in the decoder of the Encoder-Decoder model to modify the cross-attention weights. Also, I noticed the operation defined in modeling_gpt2.py:
`attention_mask = encoder_attention_mask`
`...`
`w=w+attention_mask`
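Expanded a bit, that masking step is roughly the following (a simplified sketch of the additive-mask logic, not the exact library code):
```python
# encoder_attention_mask has 1s for real encoder tokens and 0s for padding
encoder_attention_mask = encoder_attention_mask[:, None, None, :]   # broadcast over heads / query positions
encoder_attention_mask = (1.0 - encoder_attention_mask) * -10000.0  # 1 -> 0, 0 -> -10000
w = w + encoder_attention_mask  # masked encoder positions get ~zero weight after the softmax
```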
However, I am confused about why we need this **encoder attention mask**. Is that also because the decoder cannot see the whole sequence?
Thanks for help :-)
<|||||>> Hi @patrickvonplaten, I used this model card to train on my custom dataset, but again the TypeError is been thrownback that `forward() got an unexpected keyword argument 'encoder_hidden_states'` here is my code
> If you can see it carefully you can find that an argument is missing in `TrainingArguments` module, I always get an error that why `predict_from_generate` is passed, I tried finding that attribute in [`training_args.py`](https://github.com/huggingface/transformers/blob/master/src/transformers/training_args.py), but it seems there is no such attribute available in it. Please clarify which version are you using, If it is above 2.11 then please clarify why my the above code is throwing this error.
@AmbiTyga @patrickvonplaten Is this error fixed? I have switched to the branch "more_general_trainer_metric." But it seems this error still exists when I am running the code in https://huggingface.co/patrickvonplaten/bert2gpt2-cnn_dailymail-fp16. <|||||>The code is a bit outdated there. You should be able to simply use the https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization example. In order to create a BERT2GPT2 checkpoint, you could use code similar to this one: https://huggingface.co/docs/transformers/v4.17.0/en/model_doc/encoder-decoder#transformers.EncoderDecoderModel.forward
(just replace one BERT by GPT2)
So to summarize,
1. Create a warm-started bert-gpt2 checkpoint
2. save checkpoint
3. use summarization example to fine-tune the checkpoint (steps 1 and 2 are sketched below)
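For steps 1 and 2, a minimal warm-starting sketch could look like this (untested; the save path and token settings are just illustrative assumptions, not an official recipe):
```python
from transformers import AutoTokenizer, EncoderDecoderModel

# step 1: warm-start a BERT encoder + GPT2 decoder (the cross-attention weights are newly initialized)
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "gpt2")

# GPT2 has no pad token by default, so reuse EOS and set the decoding-related special tokens
decoder_tokenizer = AutoTokenizer.from_pretrained("gpt2")
decoder_tokenizer.pad_token = decoder_tokenizer.eos_token

model.config.decoder_start_token_id = decoder_tokenizer.bos_token_id
model.config.eos_token_id = decoder_tokenizer.eos_token_id
model.config.pad_token_id = decoder_tokenizer.pad_token_id

# step 2: save the checkpoint so the summarization example can load it via --model_name_or_path
model.save_pretrained("bert2gpt2-warm-started")
decoder_tokenizer.save_pretrained("bert2gpt2-warm-started")
```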
I'll keep this issue open for now since we should probably create a nice "How-to" guide for this<|||||>> The code is a bit outdated there. You should be able to simply use the https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization example. In order to create a BERT2GPT2 checkpoint, you could a code that is similar to this one: https://huggingface.co/docs/transformers/v4.17.0/en/model_doc/encoder-decoder#transformers.EncoderDecoderModel.forward
>
> (just replace one BERT by GPT2)
>
> So to summarize,
>
> 1. Create a warm-started bert-gpt2 checkpoint
> 2. save checkpoint
> 3. use summarization example to fine-tune the checkpoint
>
> I'll keep this issue open for now since we should probably create a nice "How-to" guide for this
Thanks for your guidance! I tried this method to create and fine-tune a bert2gpt2 model, but it seems that the "tokenizer" would be a problem: I can't load a single suitable tokenizer for this model in the summarization example. So is it necessary for me to define tokenizer1 for BERT and tokenizer2 for GPT2 and then change any code that is related to "tokenizer" in order to fix this problem? @patrickvonplaten <|||||>It's fine to load two tokenizers no? <|||||>>
Yeah, I use 2 tokenizers to replace "tokenizer" in run_summarization.py and also make some other changes; the code can work now (although I don't know whether it is right...). Here are my changes.
1. change the resize_token_embeddings call: `#model.resize_token_embeddings(len(tokenizer))`
`model.encoder.resize_token_embeddings(len(tokenizer1))`
`model.decoder.resize_token_embeddings(len(tokenizer2))`
2. some special token settings according to https://huggingface.co/patrickvonplaten/bert2gpt2-cnn_dailymail-fp16
3. faced a problem like https://github.com/huggingface/transformers/issues/10646#issue-829065330, and used the code in https://github.com/huggingface/transformers/blob/24e2fa1590faac894da3422daf56abf9770c9d81/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L555 (lines 554-555 and lines 147-162)
4. Noticing that in bert base/large "max_position_embeddings" is 512, while the default max_source_length in run_summarization.py is 1024; as a result, if our input sequence length is over 512 we get an error like https://github.com/huggingface/transformers/issues/15081#issue-1097193504. So set max_source_length=512.
5. replaced all code segments using `tokenizer` with `tokenizer2` in run_summarization.py (**Not sure**), as shown in the snippets below
```
# Setup the tokenizer for targets
with tokenizer2.as_target_tokenizer():
labels = tokenizer2(targets, max_length=max_target_length, padding=padding, truncation=True)
# If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore
# padding in the loss.
if padding == "max_length" and data_args.ignore_pad_token_for_loss:
labels["input_ids"] = [
[(l if l != tokenizer2.pad_token_id else -100) for l in label] for label in labels["input_ids"]
]
model_inputs["labels"] = labels["input_ids"]
return model_inputs
```
```
def compute_metrics(eval_preds):
preds, labels = eval_preds
if isinstance(preds, tuple):
preds = preds[0]
decoded_preds = tokenizer2.batch_decode(preds, skip_special_tokens=True)
if data_args.ignore_pad_token_for_loss:
# Replace -100 in the labels as we can't decode them.
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer2.batch_decode(labels, skip_special_tokens=True)
```
```
if trainer.is_world_process_zero():
if training_args.predict_with_generate:
predictions = tokenizer2.batch_decode(
predict_results.predictions, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
predictions = [pred.strip() for pred in predictions]
output_prediction_file = os.path.join(training_args.output_dir, "generated_predictions.txt")
with open(output_prediction_file, "w") as writer:
writer.write("\n".join(predictions))
```
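For reference, the tokenizer loading itself then looks roughly like this (just a sketch; `tokenizer1`/`tokenizer2` are the names used in the list above, not variables from the original script):
```python
from transformers import AutoTokenizer

tokenizer1 = AutoTokenizer.from_pretrained("bert-base-uncased")  # encoder-side tokenizer
tokenizer2 = AutoTokenizer.from_pretrained("gpt2")               # decoder-side tokenizer
tokenizer2.pad_token = tokenizer2.eos_token                      # GPT2 has no pad token by default
```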
> It's fine to load two tokenizers no?
<|||||>Hey everyone,
Did this work go anywhere?
I need a pre-trained gpt2 model based on nn.Linear instead of Conv1D layers for research purpose, Is the implementation above merged anywhere, or there exist some other gpt2 model based on nn.Linear?<|||||>Can I work on this issue as a good first issue or is there no point?<|||||>I don't think there is any point @Forpee |
transformers | 4,482 | closed | Create model card for RuPERTA-base-finetuned-pos | 05-20-2020 11:14:55 | 05-20-2020 11:14:55 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4482?src=pr&el=h1) Report
> Merging [#4482](https://codecov.io/gh/huggingface/transformers/pull/4482?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/efbc1c5a9d96048ab11f8d746fe51107cb91646f&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4482?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4482 +/- ##
=======================================
Coverage 78.03% 78.03%
=======================================
Files 123 123
Lines 20477 20477
=======================================
Hits 15980 15980
Misses 4497 4497
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4482?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4482/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4482/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4482?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4482?src=pr&el=footer). Last update [efbc1c5...82aef0f](https://codecov.io/gh/huggingface/transformers/pull/4482?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 4,481 | closed | Add mecab dependency on slow tests. | Solves the following error:
```
2020-05-19T17:01:17.3352437Z [gw0] linux -- Python 3.7.6 /home/hf/actions-r
2020-05-19T17:01:17.3354221Z @slow
2020-05-19T17:01:17.3354825Z def test_sequence_builders(self):
2020-05-19T17:01:17.3356512Z > tokenizer = self.tokenizer_class.from_pretrained("bert-base-japanese-char")
2020-05-19T17:01:17.3356685Z
2020-05-19T17:01:17.3357374Z tests/test_tokenization_bert_japanese.py:192:
2020-05-19T17:01:17.3359012Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2020-05-19T17:01:17.3360868Z .env/lib/python3.7/site-packages/transformers/tokenization_utils.py:902: in from_pretrained
2020-05-19T17:01:17.3361266Z return cls._from_pretrained(*inputs, **kwargs)
2020-05-19T17:01:17.3363161Z .env/lib/python3.7/site-packages/transformers/tokenization_utils.py:1055: in _from_pretrained
2020-05-19T17:01:17.3363615Z tokenizer = cls(*init_inputs, **init_kwargs)
2020-05-19T17:01:17.3365382Z .env/lib/python3.7/site-packages/transformers/tokenization_bert_japanese.py:139: in __init__
2020-05-19T17:01:17.3366229Z do_lower_case=do_lower_case, never_split=never_split, **(mecab_kwargs or {})
2020-05-19T17:01:17.3367669Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2020-05-19T17:01:17.3367895Z
2020-05-19T17:01:17.3369474Z self = <transformers.tokenization_bert_japanese.MecabTokenizer object at 0x7f433565b9d0>
2020-05-19T17:01:17.3371043Z do_lower_case = False, never_split = None, normalize_text = True
2020-05-19T17:01:17.3371414Z mecab_option = None
2020-05-19T17:01:17.3371564Z
2020-05-19T17:01:17.3373681Z def __init__(self, do_lower_case=False, never_split=None, normalize_text=True, mecab_option: Optional[str] = None):
2020-05-19T17:01:17.3373909Z """Constructs a MecabTokenizer.
2020-05-19T17:01:17.3374082Z
2020-05-19T17:01:17.3374357Z Args:
2020-05-19T17:01:17.3375149Z **do_lower_case**: (`optional`) boolean (default True)
2020-05-19T17:01:17.3375850Z Whether to lower case the input.
2020-05-19T17:01:17.3376666Z **never_split**: (`optional`) list of str
2020-05-19T17:01:17.3377692Z Kept for backward compatibility purposes.
2020-05-19T17:01:17.3378953Z Now implemented directly at the base class level (see :func:`PreTrainedTokenizer.tokenize`)
2020-05-19T17:01:17.3379578Z List of token not to split.
2020-05-19T17:01:17.3380559Z **normalize_text**: (`optional`) boolean (default True)
2020-05-19T17:01:17.3381677Z Whether to apply unicode normalization to text before tokenization.
2020-05-19T17:01:17.3382985Z **mecab_option**: (`optional`) string passed to `MeCab.Tagger` constructor (default "")
2020-05-19T17:01:17.3383398Z """
2020-05-19T17:01:17.3384034Z self.do_lower_case = do_lower_case
2020-05-19T17:01:17.3385088Z self.never_split = never_split if never_split is not None else []
2020-05-19T17:01:17.3385841Z self.normalize_text = normalize_text
2020-05-19T17:01:17.3386284Z
2020-05-19T17:01:17.3386881Z > import MeCab
2020-05-19T17:01:17.3388516Z E ModuleNotFoundError: No module named 'MeCab'
``` | 05-20-2020 08:28:59 | 05-20-2020 08:28:59 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4481?src=pr&el=h1) Report
> Merging [#4481](https://codecov.io/gh/huggingface/transformers/pull/4481?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/384f0eb2f9d42e44094dbfd0917ccf4e6ddb462a&el=desc) will **decrease** coverage by `0.08%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4481?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4481 +/- ##
==========================================
- Coverage 77.96% 77.88% -0.09%
==========================================
Files 120 120
Lines 20140 20140
==========================================
- Hits 15703 15686 -17
- Misses 4437 4454 +17
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4481?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4481/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-5.42%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4481?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4481?src=pr&el=footer). Last update [384f0eb...4deb915](https://codecov.io/gh/huggingface/transformers/pull/4481?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I think 865d4d595eefc8cc9cee58fec9179bd182be0e2e might be a more "correct" way to fix this |
transformers | 4,480 | closed | [Reformer] Include char lm to Trainer | Trainer currently expects every model to have a tokenizer. The reformer: `google/reformer-enwik8` is a char lm which does not require a tokenizer. | 05-20-2020 07:57:07 | 05-20-2020 07:57:07 | @patrickvonplaten Thank you for your awesome work. I'm super excited about a char-LM.
I'm trying to train the "reformer" model, using "google/reformer-enwik8".
But using this script or run_language_modeling.py I get an error about the lack of a tokenizer (which is expected for a char-only LM):
`Model name 'google/reformer-enwik8' was not found in tokenizers`
Can you give me some pointers on how I could train it?<|||||>Hi bratao! Good point, I will consult with our team on how to include models that don't need a tokenizer! Let me get back to you in a couple of days :-) |
transformers | 4,479 | closed | [examples] fix no grad in second pruning in run_bertology | the `new_head_mask` index assignment operation makes it become a non-leaf node in the following gradient computation, resulting in grad is None bug as mentioned in #3895
| 05-20-2020 07:54:24 | 05-20-2020 07:54:24 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4479?src=pr&el=h1) Report
> :exclamation: No coverage uploaded for pull request base (`master@18d233d`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4479?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4479 +/- ##
=========================================
Coverage ? 78.20%
=========================================
Files ? 120
Lines ? 20083
Branches ? 0
=========================================
Hits ? 15705
Misses ? 4378
Partials ? 0
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4479?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `78.74% <0.00%> (ΓΈ)` | |
| [src/transformers/configuration\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX21hcmlhbi5weQ==) | `100.00% <0.00%> (ΓΈ)` | |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `89.23% <0.00%> (ΓΈ)` | |
| [src/transformers/configuration\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3RyYW5zZm9feGwucHk=) | `92.45% <0.00%> (ΓΈ)` | |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.67% <0.00%> (ΓΈ)` | |
| [src/transformers/data/processors/xnli.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMveG5saS5weQ==) | `29.54% <0.00%> (ΓΈ)` | |
| [src/transformers/hf\_argparser.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcmdwYXJzZXIucHk=) | `61.97% <0.00%> (ΓΈ)` | |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <0.00%> (ΓΈ)` | |
| [src/transformers/tokenization\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZmxhdWJlcnQucHk=) | `40.42% <0.00%> (ΓΈ)` | |
| [src/transformers/data/processors/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvX19pbml0X18ucHk=) | `100.00% <0.00%> (ΓΈ)` | |
| ... and [110 more](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4479?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4479?src=pr&el=footer). Last update [18d233d...b3c4f81](https://codecov.io/gh/huggingface/transformers/pull/4479?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This looks reasonable to me. Thanks for looking into it.<|||||>can you run
```
pip uninstall -y isort black
pip install -e .[quality]
make style
```
?
Thanks!<|||||>Thanks for noting how to run code reformatting! <|||||>Thanks! |
transformers | 4,478 | closed | β [TPU] [Trainer] Moving model to device before setting optimizer slow the training | # β Questions & Help
On master, after applying the fix of #4450, the training on 8 TPU cores is much slower.
* Before the fix : **20 min / epoch (8 iterations / s)**
* After the fix : **1h30 / epoch (2~3 iterations / s)**
Of course the training before the fix was not working (loss was not decreasing).
But this slow is not expected : for the TF2 equivalent with the same dataset, it takes **20 min / epoch**.
---
Anyone meeting the same problem ?
| 05-20-2020 06:45:13 | 05-20-2020 06:45:13 | I have the same problem, but I'm not sure if it's really a problem. I thought the extreme speed before the fix was because it wasn't training properly, and that the slower speed now is supposed to be this way, but that's just my guess.<|||||>Hi, you're right @LeonieWeissweiler. Previous to that fix, the optimizer wasn't actually adjusting weights, resulting in a major speed-up (but the script in itself wasn't working).
@Colanim, do you mind specifying what exactly you're training on? When training on TPU there's a lot you should take into account: batch size, sequence length, number of cores being the most important. Do you mind giving a bit of context as to what you're trying to run?
We've also merged this PR https://github.com/huggingface/transformers/pull/4467 which solves quite a few issues with the TPU training. Please make sure to install from source to benefit from that commit.
From my tests, on TPU with 8 cores (v3-8), on MNLI I reach 22 minutes/epoch with a batch size of 8, but **6 minutes/epoch** (with a 2 minute tracing, that isn't necessary for the following epochs) with a batch size of 128 (which does train with a final accuracy of 81% using `bert-base-cased`, single epoch). <|||||>I'm training ELECTRA for Extractive Text Summarization.
I will try to increase the batch size and see the results, thanks for the pointer.
What bother me is that I used TFElectra and I could train the model at ~8 iterations per sec.
Same model and same hyper-parameters on Pytorch and it's slower.
But I realized that in my model, since it's extractive summarization, I'm extracting the [CLS] representation of each sentence. These CLS position varies from sample to sample. Maybe that's why it's slower on pytorch-xla ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,477 | closed | Remove warning of deprecation | Remove warning of deprecated overload of addcdiv_
Fix #4451 | 05-20-2020 06:37:09 | 05-20-2020 06:37:09 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4477?src=pr&el=h1) Report
> Merging [#4477](https://codecov.io/gh/huggingface/transformers/pull/4477?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/efbc1c5a9d96048ab11f8d746fe51107cb91646f&el=desc) will **increase** coverage by `0.08%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4477?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4477 +/- ##
==========================================
+ Coverage 78.03% 78.12% +0.08%
==========================================
Files 123 123
Lines 20477 20477
==========================================
+ Hits 15980 15997 +17
+ Misses 4497 4480 -17
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4477?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/4477/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `96.00% <100.00%> (ΓΈ)` | |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4477/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4477/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4477/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `34.07% <0.00%> (+5.41%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4477?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4477?src=pr&el=footer). Last update [efbc1c5...2e2abdc](https://codecov.io/gh/huggingface/transformers/pull/4477?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,476 | closed | Tokenizer encode to have an option to overflow from left | # π Feature request
Current tokenizer encode variants (encode, batch_encode, batch_encode_plus) handle sequences longer than max_length by overflowing tokens from the right-hand side and thus restricting the length to max_length. This feature request is to allow an option for the tokenizer encode methods to overflow tokens from the left-hand side as well.
## Motivation
For problems dealing with dialog, if one were to train an intent classification or next sentence prediction model and the dialog was longer than max_length, one would like to throw away the tokens from the beginning of the conversation as they are less relevant than the more recent messages.
This motivates the need for an encoder that works well with dialog data where more recent tokens are more valuable.
## Your contribution
I could change the function `truncate_sequences` by adding a new truncation_strategy option that will truncate from left. But want to get feedback from the Huggingface team about this proposal.
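Just to illustrate the idea, a hypothetical helper (not existing library code) would behave like this:
```python
def truncate_from_left(ids, max_length):
    """Keep the most recent tokens (e.g. the latest dialog turns) and overflow from the left."""
    overflowing = ids[:-max_length] if len(ids) > max_length else []
    return ids[-max_length:], overflowing
```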
| 05-20-2020 04:44:50 | 05-20-2020 04:44:50 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@misrasaurabh1 What solution do you use now for this encoding dialog data problem?<|||||>I use something like
`self.tokenizer.encode(input)[-self.block_size:]`
This throws a warning for length overflow so I deactivate it with logging.
Also one has to make attention masks separately as some models require this.<|||||>Indeed, we should add an option to truncate on the left!
cc @n1t0 for our sprint of September.<|||||>perhaps add a truncation_side to https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer to be consistent with padding_side.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@thomwolf @n1t0 Any plan for this? I just saw this because of the bot.
I think I can do this, seems like all the logic is here.
https://github.com/huggingface/transformers/blob/07708793f20ec3a949ccab32cc4fe0c7272dcc4c/src/transformers/tokenization_utils_base.py#L2766
But what about the fast 🤗 Tokenizers? Will I need to also change the Rust code?
And I noticed something that might be a bug, and can be improved:
https://github.com/huggingface/transformers/blob/07708793f20ec3a949ccab32cc4fe0c7272dcc4c/src/transformers/tokenization_utils_base.py#L2816-L2831
Here it loops `num_tokens_to_remove` times to decide how many tokens need to be truncated for each sequence, which can be calculated without looping.
And in case `stride` is not 0, it seems to return up to `stride`*`num_tokens_to_remove` extra tokens to `overflowing_tokens`.
https://github.com/huggingface/transformers/blob/07708793f20ec3a949ccab32cc4fe0c7272dcc4c/src/transformers/tokenization_utils_base.py#L2801-L2803
Also it seems weird to me that `overflowing_tokens` will be mixed with tokens from `ids` and `pair_ids`. Perhaps it should be a tuple of lists if `TruncationStrategy` is `longest_first`.
Note to self: `overflowing_tokens` is used in squad to construct another pair if the doc is too long. `stride` is also used in squad. I can't find other use of `overflowing_tokens`.
https://github.com/huggingface/transformers/blob/969859d5f67c7106de4d1098c4891c9b03694bbe/src/transformers/data/processors/squad.py#L154-L216<|||||>One feedback about what's happening with this facility of left truncation being not available - its harder to use the datasets library and we have to do python Hackery which reduces the benefits of using the datasets library in the first place.<|||||>I recently needed to do exactly this, but ran into this issue so I had to manually truncate the text. Simply doing `encoded_tensor[-max_length:]` would also truncate samples that are less than `max_length` since they are padded to the right.
Here's the approach I used instead:
```python
def encode_right_truncated(tokenizer, text, padding='max_length', max_length=512, add_special_tokens=True):
tokenized = tokenizer.tokenize(text, padding=padding, max_length=max_length, add_special_tokens=add_special_tokens)
if not add_special_tokens:
truncated = tokenized[-max_length:]
else:
truncated = tokenized[0:1] + tokenized[-(max_length-1):]
ids = tokenizer.convert_tokens_to_ids(truncated)
return ids
```
Hope this helps future people finding this from Google/DDG<|||||>For anyone arriving here from search, note that this is now possible by setting [`truncation_side`](https://huggingface.co/docs/transformers/main/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.truncation_side)
```python
# specify when initializing the tokenizer,
tokenizer = AutoTokenizer.from_pretrained(..., truncation_side = "left")
# or modify an already-initialized tokenizer, like
tokenizer.truncation_side = "right"
```
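A small usage sketch for the dialog case discussed above (the model name is just an example):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased", truncation_side="left")
enc = tok("a long dialog history ... most recent turn", truncation=True, max_length=32)
# with truncation_side="left" the overflow is dropped from the beginning,
# so the most recent tokens are the ones that are kept
```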
see https://github.com/huggingface/transformers/pull/12913 |
transformers | 4,475 | closed | Request for hosting model files in a Virtual Hosted-Style S3 buckets | Is there any plans for the s3 buckets currently hosting the model files to migrate from the current "Path-Style Request" format to a "Virtual Hosted-Style Request" format?
Path-Style URLs follow the following format (s3.amazonaws.com/* OR s3.Region.amazonaws.com/*). For example, today the model config file for 'bert_uncased_L-2_H-128_A-2' is accessed via the Path-Style URL:
https://s3.amazonaws.com/models.huggingface.co/bert/google/bert_uncased_L-2_H-128_A-2/config.json
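(For comparison, the same object in virtual-hosted style would look like https://models.huggingface.co.s3.amazonaws.com/bert/google/bert_uncased_L-2_H-128_A-2/config.json; this is given purely as an illustration of the URL format.)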
According to AWS, they will be deprecating Path-Style requests (though there will be legacy support) - but one major reason for the migration to the Virtual Hosted-Style URL (which takes the form bucket-name.s3.amazonaws.com or bucket-name.s3.Region.amazonaws.com) is for security reasons (e.g. if companies/organizations need to whitelist sites in their servers to utilize transformer models, the virtual hosted style will reduce the "blast radius" in cases of security breaches).
More details on "Path" vs "Virtual-Hosted" style requests:
https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html#path-style-access | 05-20-2020 04:08:29 | 05-20-2020 04:08:29 | Hi @rjsaito we are actually moving to serving all our files from the cloudfront powered cdn.huggingface.co<|||||>Awesome! Do you have a current ETA when this change would be in place?<|||||>It's already in place for model weights.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,474 | closed | Remove warning of deprecation | Remove warning of deprecated overload of `addcdiv_`
Fix #4451 | 05-20-2020 00:00:41 | 05-20-2020 00:00:41 | The signature of `addcdiv_` is different between Pytorch 1.4 and 1.5
https://pytorch.org/docs/1.4.0/tensors.html?highlight=addcdiv#torch.Tensor.addcdiv
https://pytorch.org/docs/stable/tensors.html?highlight=addcdiv_#torch.Tensor.addcdiv
I guess as long as Pytorch 1.4 is supported by `transformers` we can just ignore the Warning given when using Pytorch 1.5<|||||>See #4477 for a fix that work for both PT1.4 and PT1.5 |
transformers | 4,473 | closed | Add Fine-tune DialoGPT on new datasets notebook | Here is a tutorial notebook I created for fine-tuning the DialoGPT on a Spanish conversation dataset. It shows how to prepare a dataset that conforms to the necessary style of the original DialoGPT dataset and how to train it using a GPU provided by Google Colab. Sadly it is not using the newer Trainer that Huggingface provides, but I thought it might be useful for others trying to work with conversational AI so wanted to share.
Thanks for the amazing library and hugs to all of y'all π€!
| 05-19-2020 23:33:20 | 05-19-2020 23:33:20 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4473?src=pr&el=h1) Report
> Merging [#4473](https://codecov.io/gh/huggingface/transformers/pull/4473?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/48c3a70b4eaedab1dd9ad49990cfaa4d6cb8f6a0&el=desc) will **decrease** coverage by `0.42%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4473?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4473 +/- ##
==========================================
- Coverage 78.41% 77.98% -0.43%
==========================================
Files 123 123
Lines 20432 20432
==========================================
- Hits 16021 15934 -87
- Misses 4411 4498 +87
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4473?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4473/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4473/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4473?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4473?src=pr&el=footer). Last update [48c3a70...8518af3](https://codecov.io/gh/huggingface/transformers/pull/4473?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Cool notebook! And a great complement to the [model card](https://huggingface.co/ncoop57/DiGPTame-medium)
cc @patrickvonplaten
Maybe you can use the `Trainer` in a v2 of the notebook =)
And you could use the [nlp](https://github.com/huggingface/nlp) library to share the dataset, cc @thomwolf <|||||>Awesome! |
transformers | 4,472 | closed | [gpu slow tests] fix mbart-large-enro gpu tests | 05-19-2020 23:18:39 | 05-19-2020 23:18:39 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4472?src=pr&el=h1) Report
> Merging [#4472](https://codecov.io/gh/huggingface/transformers/pull/4472?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/48c3a70b4eaedab1dd9ad49990cfaa4d6cb8f6a0&el=desc) will **decrease** coverage by `0.41%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4472?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4472 +/- ##
==========================================
- Coverage 78.41% 77.99% -0.42%
==========================================
Files 123 123
Lines 20432 20432
==========================================
- Hits 16021 15936 -85
- Misses 4411 4496 +85
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4472?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4472?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4472?src=pr&el=footer). Last update [48c3a70...9152273](https://codecov.io/gh/huggingface/transformers/pull/4472?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 4,471 | closed | batch_encode_plus returns same lengths when enable pad_to_max_length | # π Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the following script
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
sents = [
"I can eat glass without harm",
"I cannot eat glass"
]
resp = tokenizer.batch_encode_plus(sents, pad_to_max_length=True, return_lengths=True)
print(resp['length'])
# >>> get [8, 8], should be [8, 6]
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The function batch_encode_plus should return the correct lengths of the sentences before they are padded to max length, which should be [8, 6] in the above example. Otherwise, we can just get the length from the last dimension of the mask.
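For example, something along these lines works as a stopgap (just a sketch using the snippet above):
```python
# recover the per-sentence lengths (before padding) from the attention mask
enc = tokenizer.batch_encode_plus(sents, pad_to_max_length=True, return_tensors="pt")
lengths = enc["attention_mask"].sum(dim=1)  # tensor([8, 6]) for the two sentences above
```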
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Ubuntu
- Python version: 3.7
- PyTorch version (GPU?): CPU 1.5
- Tensorflow version (GPU?): No
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 05-19-2020 21:38:17 | 05-19-2020 21:38:17 | This is not a bug but expected behaviour. The length of the tokenized input is only calculated after padding.
https://github.com/huggingface/transformers/blob/a08652772791fdaeed6f263b1a99926ca64be5dc/src/transformers/tokenization_utils.py#L1981-L1982
Perhaps you are right, though, and it would be more useful to get the size before padding!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,470 | closed | Model card for Tereveni-AI/gpt2-124M-uk-fiction | Create model card for "Tereveni-AI/gpt2-124M-uk-fiction" model | 05-19-2020 19:46:42 | 05-19-2020 19:46:42 | by the way, could you add a
```
---
language: ukrainian
---
```
metadata block on top, for the model to be surfaced in search etc.?<|||||>Sure, I'll add it
|
transformers | 4,469 | closed | Better None gradients handling in TF Trainer | Update the TF Trainer to better handle None gradients in order to have something generic and not anymore task dependent. | 05-19-2020 19:38:16 | 05-19-2020 19:38:16 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4469?src=pr&el=h1) Report
> Merging [#4469](https://codecov.io/gh/huggingface/transformers/pull/4469?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5856999a9f2926923f037ecd8d27b8058bcf9dae&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `0.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4469?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4469 +/- ##
==========================================
+ Coverage 77.98% 78.00% +0.01%
==========================================
Files 123 123
Lines 20436 20431 -5
==========================================
Hits 15938 15938
+ Misses 4498 4493 -5
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4469?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `17.92% <0.00%> (+0.41%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (ΓΈ)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4469?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4469?src=pr&el=footer). Last update [5856999...095c8d2](https://codecov.io/gh/huggingface/transformers/pull/4469?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,468 | closed | [Tests, GPU, SLOW] fix a bunch of GPU hardcoded tests in Pytorch | in almost all tests I forgot to put the model on gpu via
`model = model.to(torch_device)` | 05-19-2020 19:06:14 | 05-19-2020 19:06:14 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4468?src=pr&el=h1) Report
> Merging [#4468](https://codecov.io/gh/huggingface/transformers/pull/4468?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5856999a9f2926923f037ecd8d27b8058bcf9dae&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `0.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4468?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4468 +/- ##
==========================================
- Coverage 77.98% 77.98% -0.01%
==========================================
Files 123 123
Lines 20436 20436
==========================================
- Hits 15938 15937 -1
- Misses 4498 4499 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4468?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (ΓΈ)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4468?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4468?src=pr&el=footer). Last update [5856999...7d9fd53](https://codecov.io/gh/huggingface/transformers/pull/4468?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Great! |
transformers | 4,467 | closed | TPU hangs when saving optimizer/scheduler | Similarly to when saving a model state dict, the optimizer and scheduler should be saved using `xm.save` and behind an `xm.rendezvous`.
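A minimal sketch of the intended saving pattern (illustrative only, assuming `optimizer`, `scheduler` and `output_dir` already exist; this is not the exact Trainer code):
```python
import os
import torch_xla.core.xla_model as xm

# all TPU processes meet here, then xm.save writes from the master process only
xm.rendezvous("saving_optimizer_states")
xm.save(optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt"))
xm.save(scheduler.state_dict(), os.path.join(output_dir, "scheduler.pt"))
```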
Additional fix: `pl.ParallelLoader` is not a `torch.utils.data.DataLoader`, and, therefore, must be reinitialized at each epoch. | 05-19-2020 19:03:13 | 05-19-2020 19:03:13 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4467?src=pr&el=h1) Report
> Merging [#4467](https://codecov.io/gh/huggingface/transformers/pull/4467?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/07dd7c2fd8996fec2979555437dfeff0d38cbf28&el=desc) will **decrease** coverage by `0.09%`.
> The diff coverage is `0.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4467?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4467 +/- ##
==========================================
- Coverage 78.07% 77.97% -0.10%
==========================================
Files 123 123
Lines 20436 20439 +3
==========================================
- Hits 15955 15937 -18
- Misses 4481 4502 +21
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4467?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4467/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.48% <0.00%> (-0.28%)` | :arrow_down: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4467/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-4.78%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4467/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4467/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4467/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4467?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4467?src=pr&el=footer). Last update [07dd7c2...11186c1](https://codecov.io/gh/huggingface/transformers/pull/4467?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Solid work, as discussed 👍<|||||>Nice! I tried two days ago and just skipped the checkpoints, but something with the eval seemed to be messing things up (ForMultipleChoice) as well. WandB created 8 runs. Don't know if it's useful, I'm still figuring out a lot of this stuff. |
transformers | 4,466 | closed | Model card for RuPERTa-base fine-tuned for NER | 05-19-2020 17:34:05 | 05-19-2020 17:34:05 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4466?src=pr&el=h1) Report
> Merging [#4466](https://codecov.io/gh/huggingface/transformers/pull/4466?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5856999a9f2926923f037ecd8d27b8058bcf9dae&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4466?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4466 +/- ##
==========================================
- Coverage 77.98% 77.98% -0.01%
==========================================
Files 123 123
Lines 20436 20436
==========================================
- Hits 15938 15937 -1
- Misses 4498 4499 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4466?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4466?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4466?src=pr&el=footer). Last update [5856999...760964e](https://codecov.io/gh/huggingface/transformers/pull/4466?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>nice example |
|
transformers | 4,465 | closed | [ci] Slow GPU tests run daily | Could be useful to fix a few of the warnings and deprecation warnings too | 05-19-2020 17:05:06 | 05-19-2020 17:05:06 | |
transformers | 4,464 | closed | [Longformer] Docs and clean API | This PR:
- adds a documentation page for Longformer. @ibeltagy - it's best to read it using this link I think: https://github.com/huggingface/transformers/pull/4464/files?short_path=3909947#diff-3909947f36862a1731195bf05c85c64c.
- fixes a typo to correctly render the pretrained models doc page
- changes the API of Longformer slightly. I removed the `attention_mode` from Longformer because I don't think it should be used. The mode should always be `Longformer` since it is a `Longformer` model. The user should not be able to create a `RobertaModel` using `LongformerModel`.
For comparisons people should use `RobertaModel` vs `LongformerModel` and not different modes of Longformer which is essentially the same as `RobertaModel` (correct me if I'm wrong here @ibeltagy). | 05-19-2020 16:00:01 | 05-19-2020 16:00:01 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4464?src=pr&el=h1) Report
> Merging [#4464](https://codecov.io/gh/huggingface/transformers/pull/4464?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8f1d0471489352ec01556ae61f8e8246002bbc58&el=desc) will **increase** coverage by `0.04%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4464?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4464 +/- ##
==========================================
+ Coverage 77.93% 77.98% +0.04%
==========================================
Files 123 123
Lines 20430 20426 -4
==========================================
+ Hits 15922 15929 +7
+ Misses 4508 4497 -11
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4464?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <100.00%> (ΓΈ)` | |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `82.94% <100.00%> (+0.18%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+1.80%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4464?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4464?src=pr&el=footer). Last update [8f1d047...e53e5dc](https://codecov.io/gh/huggingface/transformers/pull/4464?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,463 | closed | Adds predict stage for glue tasks, and generate result files which can be submitted to gluebenchmark.com | By simply fine-tuning roberta-large with 3k steps on several tasks, achieved:
Task | Metrics | Score
-- | -- | --
Microsoft Research Paraphrase Corpus | F1 / Accuracy | 91.5/88.6
Semantic Textual Similarity Benchmark | Pearson-Spearman Corr | 90.7/90.2
Quora Question Pairs | F1 / Accuracy | 69.5/87.3
Recognizing Textual Entailment | Accuracy | 82.0
Winograd NLI | Accuracy | 65.1
| 05-19-2020 14:55:50 | 05-19-2020 14:55:50 | Looks good!
I just improved consistency with other scripts we have (in particular, `run_ner.py`) by:
- using an enum instead of two boolean flags (see the sketch below)
- I also always append the actual label name in the predictions file, which removes the need for a new arg
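As a rough sketch of the enum idea from the first bullet (hypothetical names, not the actual code in the script):

```python
from enum import Enum

class Split(Enum):
    train = "train"
    dev = "dev"
    test = "test"

def load_examples(mode: Split):
    # a single mode value replaces two independent boolean flags
    if mode is Split.test:
        return "test examples (no labels)"
    return f"{mode.value} examples (with labels)"

print(load_examples(Split.test))
```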
Let me know if that works for you and I'll merge to master soon<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4463?src=pr&el=h1) Report
> Merging [#4463](https://codecov.io/gh/huggingface/transformers/pull/4463?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8f1d0471489352ec01556ae61f8e8246002bbc58&el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `46.03%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4463?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4463 +/- ##
==========================================
- Coverage 77.93% 77.91% -0.02%
==========================================
Files 123 123
Lines 20430 20474 +44
==========================================
+ Hits 15922 15953 +31
- Misses 4508 4521 +13
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4463?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/4463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.62% <31.81%> (-1.41%)` | :arrow_down: |
| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/4463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.15% <77.77%> (-4.05%)` | :arrow_down: |
| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/4463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <100.00%> (+0.47%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (+1.64%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4463?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4463?src=pr&el=footer). Last update [8f1d047...d172a3b](https://codecov.io/gh/huggingface/transformers/pull/4463?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks Julien for the code improvement! This looks very good to me. <|||||>Thank you for contributing this! |
transformers | 4,462 | closed | add T5 fine-tuning notebook [Community notebooks] | @patrickvonplaten
This is the second notebook, which shows how to fine-tune T5 for multiple tasks with a text-to-text approach (IMDB, emotion classification, SWAG), as discussed in issue #4426. I didn't find the emotion and SWAG datasets in the `nlp` library, so I decided to keep my original dataset code to keep everything unified.
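To give a rough idea of the text-to-text framing (illustrative prompt strings only; the notebook's own dataset code differs):

```python
# every task is cast as "input text -> target text", so one seq2seq model covers all of them
examples = [
    ("imdb review: this movie was great!", "positive"),                      # sentiment
    ("emotion: i feel so happy today", "joy"),                               # emotion classification
    ("swag context: she opened the door ... choices: 1) ... 2) ...", "2"),   # multiple choice
]
for source_text, target_text in examples:
    print(f"{source_text} -> {target_text}")
```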
Also, there's growing interest in `pytorch-lightning`, so I decided to keep the `lightning` trainer. But if you think I should use the HF Trainer then I can add that as well. | 05-19-2020 14:51:08 | 05-19-2020 14:51:08 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4462?src=pr&el=h1) Report
> Merging [#4462](https://codecov.io/gh/huggingface/transformers/pull/4462?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8f1d0471489352ec01556ae61f8e8246002bbc58&el=desc) will **increase** coverage by `0.04%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4462?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4462 +/- ##
==========================================
+ Coverage 77.93% 77.98% +0.04%
==========================================
Files 123 123
Lines 20430 20430
==========================================
+ Hits 15922 15932 +10
+ Misses 4508 4498 -10
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4462?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+1.80%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4462?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4462?src=pr&el=footer). Last update [8f1d047...ca0d2a0](https://codecov.io/gh/huggingface/transformers/pull/4462?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>That's awesome. Thanks @patil-suraj! @mariamabarham @lhoestq - might it be interesting to add `emotion classification` and `swag` to `nlp`? <|||||>Reworded the description a bit - hope that's ok @patil-suraj <|||||>> Reworded the description a bit - hope that's ok @patil-suraj
@patrickvonplaten yes, it's more clear now. Thank you! |
transformers | 4,461 | closed | ProjectedAdaptiveLogSoftmax.log_prob raises Exception | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Transformer-XL
Language I am using the model on (English, Chinese ...): WikiText-103 (English)
The problem arises when using:
* [x] the official example scripts: `run_transfo_xl.py` (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official task: WikiText-103
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Fix `run_transfo_xl.py` by removing the unused `work_dir` argument (line 52) and changing `lm_labels=target` to `labels=target` (line 108).
2. Add `logits = self.crit.log_prob(pred_hid.flatten(0, 1))` right before the model output, e.g. in line 919 of `modeling_transfo_xl.py`.
3. Run `run_transfo_xl.py`.
4. Look at error message:
```
The size of tensor a (1280) must match the size of tensor b (20000) at non-singleton dimension 1
File ".../transformers/src/transformers/modeling_transfo_xl_utilities.py", line 246, in log_prob
logprob_i = head_logprob[:, -i] + tail_logprob_i
File ".../transformers/src/transformers/modeling_transfo_xl.py", line 920, in forward
logits = self.crit.log_prob(pred_hid.flatten(0, 1))
File ".../run_transfo_xl.py", line 107, in evaluate
ret = model(data, labels=target, mems=mems)
File ".../run_transfo_xl.py", line 124, in main
test_loss = evaluate(te_iter)
File ".../run_transfo_xl.py", line 143, in <module>
main()
```
## Expected behavior
`log_prob` should return the log probabilities instead of raising an Exception.
## Environment info
- `transformers` version: Master (commit 384f0eb)
- Platform: Ubuntu
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (CUDA 10.1, CuDNN 7.6.3)
- Tensorflow version (GPU?): -
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 05-19-2020 14:41:12 | 05-19-2020 14:41:12 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,460 | closed | Attempt to do some optimizations for BERT models | - Use functional as much as possible instead of creating a class instance every time
- Precompute and store the attention scaling factor to avoid `1/sqrt(...)` every forward
- Refactor Self-Attention to group QKV weights and increase hardware density (a short illustrative sketch follows below) | 05-19-2020 14:36:50 | 05-19-2020 14:36:50 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
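A minimal, illustrative sketch of the grouped-QKV idea from the PR description above (not the actual diff):

```python
import torch
import torch.nn as nn

hidden_size, num_heads = 768, 12
fused_qkv = nn.Linear(hidden_size, 3 * hidden_size)  # one weight matrix for Q, K and V

x = torch.randn(2, 16, hidden_size)                   # (batch, seq_len, hidden)
q, k, v = fused_qkv(x).chunk(3, dim=-1)               # a single matmul instead of three
q = q.view(2, 16, num_heads, hidden_size // num_heads).transpose(1, 2)
```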
|
transformers | 4,459 | closed | Pretrained Transformer-XL gives unreasonable result on WikiText-103 | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Transformer-XL (`transfo-xl-wt103`)
Language I am using the model on (English, Chinese ...): WikiText-103 (English)
The problem arises when using:
* [x] the official example scripts: `run_transfo_xl.py` (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official task: WikiText-103 (not GLUE/SQUaD)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Fix `run_transfo_xl.py` by removing the unused `work_dir` argument (line 52) and changing `lm_labels=target` to `labels=target` (line 108).
2. Run `run_transfo_xl.py`.
3. Observe the result: `test loss 10.20 | test ppl 26951.114`
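A quick sanity check on those numbers (perplexity is just the exponential of the loss):

```python
import math

print(math.exp(10.20))  # ≈ 26903, same order as the reported "test ppl 26951"
print(math.log(18.3))   # ≈ 2.91, the loss one would expect for the paper's ~18.3 PPL
```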
## Expected behavior
A reasonable result around the order of PPL=18.3, as reported in the paper. I know that the result will not be exactly the same, but something is definitely wrong here.
## Environment info
- `transformers` version: Both 2.9.1 and Master (commit 384f0eb)
- Platform: Ubuntu
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (CUDA 10.1, CuDNN 7.6.3)
- Tensorflow version (GPU?): -
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 05-19-2020 14:33:04 | 05-19-2020 14:33:04 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,458 | closed | layer name change to match compatibility with pytorch layer name in BertForQuestionAnswering | As pointed out in #438 When using BertForQuestionAnswering to load a tensorflow model using from_pretrained, one runs into an error
`AttributeError: 'BertForQuestionAnswering' object has no attribute 'classifier'`
As pointed out in the thread it should be "qa_outputs" and not "classifier" for this functionality to work as expected.
| 05-19-2020 13:30:46 | 05-19-2020 13:30:46 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4458?src=pr&el=h1) Report
> Merging [#4458](https://codecov.io/gh/huggingface/transformers/pull/4458?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/384f0eb2f9d42e44094dbfd0917ccf4e6ddb462a&el=desc) will **decrease** coverage by `0.08%`.
> The diff coverage is `0.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4458?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4458 +/- ##
==========================================
- Coverage 77.96% 77.88% -0.09%
==========================================
Files 120 120
Lines 20140 20140
==========================================
- Hits 15703 15686 -17
- Misses 4437 4454 +17
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4458?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4458/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.82% <0.00%> (ΓΈ)` | |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4458/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-5.42%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4458/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4458/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4458?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4458?src=pr&el=footer). Last update [384f0eb...568d3f1](https://codecov.io/gh/huggingface/transformers/pull/4458?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I am not quite sure who the right person to review and include this PR would be. But commenting to open it up back nonetheless. |
transformers | 4,457 | closed | FastTokenizer add_special_tokens also adding individual characters for multi character tokens | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BERT FastTokenizer
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] my own task or dataset: (give details below)
As far as I can tell, adding tokens (e.g. '[EOS]') to the FastTokenizer will result in all the single characters of the token to be added to the tokenizer ('E', 'O', 'S'). This seems to only occur when using tokenizer.add_special_tokens() as described [here](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.add_special_tokens). However, adding the tokens via the constructor seem to work just fine.
This bug results in problems using the uncased models, as we don't want uppercase letters. Also, the unwanted additions to the vocab don't seem to show up in len(tokenizer), which results in index out of range errors when feeding into BERT.
This would be a breaking case:
```
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=True)
print('Tokenizer len before: {}'.format(len(tokenizer)))
num_added = tokenizer.add_special_tokens({'eos_token': '[EOS]', 'bos_token': '[BOS]'})
print('Tokenizer len after: {}'.format(len(tokenizer)))
print('Number tokens added: ', num_added)
print(tokenizer.bos_token)
print(tokenizer.eos_token)
# We can see that the tokens have been added successfully
# However, encoding the same sequence as before, we run into problems:
encoded = tokenizer.encode('This is a big S!')
print(encoded)
print(tokenizer.convert_ids_to_tokens(encoded))
# If you look carefully, you can see that the 'S' in the sequence is not lowercase.
# Also the id in the line above (30526) should not be higher than the tokenizer len (30524)
# If we feed this into bert (after model.resize_token_embeddings(len(tokenizer))) this will crash
with an index out of range exception.
```
Outputs:
```
Tokenizer len before: 30522
Tokenizer len after: 30524
Number tokens added: 2
[BOS]
[EOS]
[101, 2023, 2003, 1037, 2502, 30526, 999, 102]
['[CLS]', 'this', 'is', 'a', 'big', 'S', '!', '[SEP]']
```
Edit: My proposed workaround of adding the special tokens via the constructor also does not work. The tokens are accessible via tokenizer.<eos/bos>_token but adding tokens this way does not change the number of tokens in the vocab. I.e. using len(tokenizer) doesn't reflect the newly added tokens.
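For comparison, a minimal check with the slow (pure-Python) tokenizer, where the vocab count grows as expected (illustrative sketch; the numbers match the outputs above):

```python
from transformers import BertTokenizer

slow = BertTokenizer.from_pretrained("bert-base-uncased")
print(len(slow))                                                     # 30522
slow.add_special_tokens({"eos_token": "[EOS]", "bos_token": "[BOS]"})
print(len(slow))                                                     # 30524
print(slow.convert_ids_to_tokens(slow.encode("This is a big S!")))   # lowercased, no out-of-range ids
```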
## To reproduce
Steps to reproduce the behavior:
Please find this [colab notebook](https://colab.research.google.com/drive/1hMEr0gpbyGJCZvIzFB22eKlmb9I-vKuu?usp=sharing) investigating the bug
## Expected behavior
add_special_tokens() of the fast tokenizer should behave the same as for the regular tokenizer: Only adding the full special tokens '[EOS]' and not also single characters of it. Furthermore, I would expect that if something is added, this would also be reflected in the length of the tokenizer.
## Environment info
- `transformers` version: 2.9.1
- Platform: Colab, linux
- Python version: 3.7
- PyTorch version (GPU?): 1.5, gpu
- Tensorflow version (GPU?):
- Using GPU in script?: tried both, no difference
- Using distributed or parallel set-up in script?: No
| 05-19-2020 13:00:40 | 05-19-2020 13:00:40 | Thanks for the extensive bug report! I can confirm that you are correct and that the issue does not occur when using the slow option.
Pinging @n1t0 |
transformers | 4,456 | closed | Problems About Using the Run_language_modeling with Tf2. | # 🐛 Bug
### Model I am using Bert.
### The problem arises when using:
I'm using the run_language_modeling.py script to fine-tune with TF 2.0.0/TF 2.2.0. Of course I modified the script.
I use TFAutoModelWithLMHead and TFTrainer from this repo to build my script.
When I'm training, here is the problem.
`ValueError: Trying to create optimizer slot variable under the scope for tf.distribute.Strategy (<tensorflow.python.distribute.one_device_strategy.OneDeviceStrategy object at 0x15b49bed0>), which is different from the scope used for the original variable (<tf.Variable 'tf_bert_for_masked_lm/bert/embeddings/word_embeddings/weight:0' shape=(21128, 768) dtype=float32, numpy=array(),dtype=float32)>). Make sure the slot variables are created under the same strategy scope. This may happen if you're restoring from a checkpoint outside the scope`
I moved the model initialization into TFTrainer().args.strategy.scope() and it works well!
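A minimal sketch of what that fix looks like (the checkpoint name and dataset variable are placeholders):

```python
from transformers import TFAutoModelWithLMHead, TFTrainer, TFTrainingArguments

training_args = TFTrainingArguments(output_dir="./out")

# build the model inside the same strategy scope the trainer will use
with training_args.strategy.scope():
    model = TFAutoModelWithLMHead.from_pretrained("bert-base-chinese")  # placeholder checkpoint

trainer = TFTrainer(model=model, args=training_args, train_dataset=train_dataset)  # train_dataset assumed to exist
trainer.train()
```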
### Something to say
- It seems that when using TF 2.0 there are some problems with the TFTrainer's design. Will you improve this design so that one can use TF 2.0 more conveniently in this repo?
- Thanks for this repo and your contribution.
| 05-19-2020 12:57:43 | 05-19-2020 12:57:43 | cc @jplu, might be of interest :)<|||||>Hello!
Like this there are not enough details to see what the issue is. Can you provide the piece of code you are trying to run, please? :)
Also if you are looking for how to properly use the trainer I suggest you to look at the already existing examples:
- [Question-answering](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_tf_squad.py)
- [Token classification](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_tf_ner.py)
- [Sequence classification](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_tf_glue.py)
- [Multiple Choice](https://github.com/huggingface/transformers/blob/master/examples/multiple-choice/run_tf_multiple_choice.py)
They are all working well with CPU/Single GPU/Multiple GPU, only TPU need to be further tested for now.
Basically the scope is created the first time you call the `strategy` property of the `tf_training_args.py`. Be careful if you try to translate the PT script to TF: there are quite a lot of differences to care about. The training arguments are one of them.
Nevertheless, thanks a lot for trying to make a language modeling with TF2 and will be happy to help in case you need some.<|||||>Thanks for your reply! @jplu
Sorry, I hadn't looked at the examples. In the examples, the model is initialized with training_args.strategy.scope(), and in my script, I initialize the model in TFTrainer.\__init\__ with self.args.strategy.scope(). It looks the same.
If we initialize the model in training_args.strategy.scope(), it's OK! In the first place, I think we should pass the model_args to TFTrainer, not the model, and initialize the model in \__init\__. Therefore I think maybe we should improve the design.
Thanks again for your reply!
|
transformers | 4,455 | closed | get output from a particular layer of pre-trained transformer (xlnet) | As the title, how can I do this in PyTorch version of pre-trained transformer? | 05-19-2020 11:15:54 | 05-19-2020 11:15:54 | Sure, these are called hidden states! Here's the [documentation of the XLNet model](https://huggingface.co/transformers/model_doc/xlnet.html?highlight=output_hidden_states#transformers.XLNetModel).
Please note the third return:
> **hidden_states (tuple(torch.FloatTensor), optional, returned when config.output_hidden_states=True):**
>
> Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
>
> Hidden-states of the model at the output of each layer plus the initial embedding outputs.
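A minimal sketch of grabbing one particular layer's output (assuming a version where `output_hidden_states` can be passed to `from_pretrained`):

```python
import torch
from transformers import XLNetModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetModel.from_pretrained("xlnet-base-cased", output_hidden_states=True)

input_ids = torch.tensor([tokenizer.encode("Hello, world!")])
outputs = model(input_ids)
hidden_states = outputs[-1]      # tuple: embedding output + one tensor per layer
layer_output = hidden_states[5]  # the particular layer you are interested in
```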
|
transformers | 4,454 | closed | DMOZ - web page classification / multi-language | Hi,
Hope you are all well !
Still quite a newbie with transformers, I wanted to know how it could be possible to build a web page classifier from the DMOZ dump and classify pages into categories in several languages.
Thanks in advance for any insights or inputs on that question.
Cheers,
X | 05-19-2020 10:59:51 | 05-19-2020 10:59:51 | Can you check the Dmoz[Dmoz]
the database in sql he cost 1000$ to change it (https://idmoz.org) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,453 | closed | Bug - TFBertForSequenceClassification on SQUaD data | # 🐛 Bug
## Information
I'm using TFBertForSequenceClassification on SQuAD v1 data.
The problem arises when using:
* [ ] Both official example scripts and my own modified scripts
The tasks I am working on is:
* [ ] an official SQUaD v1 data and my own SQUaD v1 data.
## To reproduce
### Try 1 - with official squad via `tensorflow_datasets.load("squad")`, trying to mimic the following official reference -
https://github.com/huggingface/transformers#quick-tour-tf-20-training-and-pytorch-interoperability
```python
import tensorflow as tf
from transformers import TFBertForSequenceClassification, BertTokenizer, \
squad_convert_examples_to_features, SquadV1Processor
import tensorflow_datasets
model = TFBertForSequenceClassification.from_pretrained("bert-base-cased")
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
data = tensorflow_datasets.load("squad")
processor = SquadV1Processor()
examples = processor.get_examples_from_dataset(data, evaluate=False)
dataset_features = squad_convert_examples_to_features(examples=examples, tokenizer=tokenizer, max_seq_length=384, doc_stride=128, max_query_length=64, is_training=True, return_dataset='tf')
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=opt,
loss={'start_position': loss_fn, 'end_position': loss_fn},
loss_weights={'start_position': 1., 'end_position': 1.},
metrics=['accuracy'])
model.fit(dataset_features, epochs=3)
```
**Stacktrace:** - the bug is at the `squad_convert_examples_to_features` part
```python
convert squad examples to features: 0%| | 0/10570 [00:00<?, ?it/s]
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/data/processors/squad.py", line 95, in squad_convert_example_to_features
cleaned_answer_text = " ".join(whitespace_tokenize(example.answer_text))
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/tokenization_bert.py", line 112, in whitespace_tokenize
text = text.strip()
AttributeError: 'NoneType' object has no attribute 'strip'
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/examples_git/huggingface_tf_example_squad.py", line 18, in <module>
dataset_features = squad_convert_examples_to_features(examples=examples, tokenizer=tokenizer, max_seq_length=384, doc_stride=128, max_query_length=64, is_training=True, return_dataset='tf')
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/data/processors/squad.py", line 327, in squad_convert_examples_to_features
disable=not tqdm_enabled,
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/multiprocessing/pool.py", line 320, in <genexpr>
return (item for chunk in result for item in chunk)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/multiprocessing/pool.py", line 735, in next
raise value
AttributeError: 'NoneType' object has no attribute 'strip'
```
### Try 2 - reading data from file, trying to mimic the following official reference- https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb
```python
import tensorflow as tf
from transformers import TFBertForSequenceClassification, BertTokenizer, \
squad_convert_examples_to_features, SquadV1Processor
import tensorflow_datasets
model = TFBertForSequenceClassification.from_pretrained("bert-base-cased")
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
data = tensorflow_datasets.load("squad", data_dir='/data/users/yonatab/zero_shot_data/datasets_refs')
processor = SquadV1Processor()
examples = processor.get_examples_from_dataset(data, evaluate=True)
dataset_features = squad_convert_examples_to_features(examples=examples, tokenizer=tokenizer, max_seq_length=384, doc_stride=128, max_query_length=64, is_training=True, return_dataset='tf')
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=opt,
loss={'start_position': loss_fn, 'end_position': loss_fn},
loss_weights={'start_position': 1., 'end_position': 1.},
metrics=['accuracy'])
model.fit(dataset_features, epochs=3)
```
**Stacktrace:** - the bug is at the `fit` method
```python
Traceback (most recent call last):
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/examples_git/minimal_example_for_git.py", line 97, in <module>
main()
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/examples_git/minimal_example_for_git.py", line 69, in main
history = model.fit(tfdataset, epochs=1, steps_per_epoch=3)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 819, in fit
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 235, in fit
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 593, in _process_training_inputs
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 706, in _process_inputs
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py", line 702, in __init__
x = standardize_function(x)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 660, in standardize_function
standardize(dataset, extract_tensors_from_dataset=False)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2360, in _standardize_user_data
self._compile_from_inputs(all_inputs, y_input, x, y)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2580, in _compile_from_inputs
target, self.outputs)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 1341, in cast_if_floating_dtype_and_mismatch
if target.dtype != out.dtype:
AttributeError: 'str' object has no attribute 'dtype'
```
### Try 3
```python
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
processor = SquadV1Processor()
examples = processor.get_train_examples(args.data_dir, filename=args.train_file)
dataset_features = squad_convert_examples_to_features(examples=examples, tokenizer=tokenizer, max_seq_length=384,
doc_stride=128, max_query_length=64, is_training=True,
return_dataset='tf')
model = TFBertForQuestionAnswering.from_pretrained("bert-base-cased")
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=opt,
loss={'start_position': loss_fn, 'end_position': loss_fn},
loss_weights={'start_position': 1., 'end_position': 1.},
metrics=['accuracy'])
history = model.fit(dataset_features, epochs=1)
```
**Stacktrace:** - the bug is at the `fit` method
```python
Traceback (most recent call last):
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/examples_git/reading_from_file.py", line 39, in <module>
main()
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/examples_git/reading_from_file.py", line 32, in main
history = model.fit(dataset_features, epochs=1)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 819, in fit
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 235, in fit
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 593, in _process_training_inputs
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 706, in _process_inputs
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py", line 702, in __init__
x = standardize_function(x)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 660, in standardize_function
standardize(dataset, extract_tensors_from_dataset=False)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2360, in _standardize_user_data
self._compile_from_inputs(all_inputs, y_input, x, y)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2580, in _compile_from_inputs
target, self.outputs)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 1341, in cast_if_floating_dtype_and_mismatch
if target.dtype != out.dtype:
AttributeError: 'str' object has no attribute 'dtype'
```
### Try 4 - (after first comment here)
I'm using the code of `run_tf_squad.py`, and instead of the `TFTrainer` I'm trying to use `fit`.
This is the only change I made - same dataset, same examples, same features. Just trying to use `fit`.
```python
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=opt,
loss={'output_1': loss_fn, 'output_2': loss_fn},
loss_weights={'output_1': 1., 'output_2': 1.},
metrics=['accuracy'])
history = model.fit(train_dataset, validation_data=eval_dataset, epochs=1)
```
And it's the same problem that occurs:
```python
Traceback (most recent call last):
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/run_squad_tf.py", line 257, in <module>
main()
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/run_squad_tf.py", line 242, in main
history = model.fit(train_dataset, validation_data=eval_dataset, epochs=1)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 819, in fit
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 235, in fit
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 593, in _process_training_inputs
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 706, in _process_inputs
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py", line 702, in __init__
x = standardize_function(x)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 660, in standardize_function
standardize(dataset, extract_tensors_from_dataset=False)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2360, in _standardize_user_data
self._compile_from_inputs(all_inputs, y_input, x, y)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2580, in _compile_from_inputs
target, self.outputs)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 1341, in cast_if_floating_dtype_and_mismatch
if target.dtype != out.dtype:
AttributeError: 'str' object has no attribute 'dtype'
```
## Expected behavior
I want to be able to use fit on my own squad data.
## Environment info
- `transformers` version: 2.9.1
- Platform: Linux
- Python version: 3.6.6
- PyTorch version (GPU?): - Using tensorflow
- Tensorflow version (GPU?): 2.1.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
Edit:
Keras has a new tutorial for it:
https://keras.io/examples/nlp/text_extraction_with_bert/
| 05-19-2020 10:02:29 | 05-19-2020 10:02:29 | Hello!
If you want to train over SQuAD I suggest you to use the [run_tf_squad.py](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_tf_squad.py) example that uses the TF Trainer or to check the following [Colab](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb#scrollTo=kxZQ9Ms_vSV1) that uses the new `nlp` framework with a `.fit()` method.<|||||>Hey.
Did you see my examples?
At "Try 2" I explained the problems using the new `nlp` framework with a `.fit()` method
I need to use a custom dataset.
Regarding `run_tf_squad.py`, I still have problems with it.
I want to use the `fit` method.
I'm using this code instead of the `VFTrainer` in the same file `run_tf_squad.py`.
This is the only change I made - same dataset, same examples, same features. Just trying to use `fit`.
```python
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=opt,
loss={'output_1': loss_fn, 'output_2': loss_fn},
loss_weights={'output_1': 1., 'output_2': 1.},
metrics=['accuracy'])
history = model.fit(train_dataset, validation_data=eval_dataset, epochs=1)
```
And it's the same problem that occurs:
```python
Traceback (most recent call last):
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/run_squad_tf.py", line 257, in <module>
main()
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/run_squad_tf.py", line 242, in main
history = model.fit(train_dataset, validation_data=eval_dataset, epochs=1)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 819, in fit
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 235, in fit
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 593, in _process_training_inputs
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 706, in _process_inputs
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py", line 702, in __init__
x = standardize_function(x)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 660, in standardize_function
standardize(dataset, extract_tensors_from_dataset=False)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2360, in _standardize_user_data
self._compile_from_inputs(all_inputs, y_input, x, y)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2580, in _compile_from_inputs
target, self.outputs)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 1341, in cast_if_floating_dtype_and_mismatch
if target.dtype != out.dtype:
AttributeError: 'str' object has no attribute 'dtype'
```
I will add this to the post as a failing example - Try 4<|||||>Sorry, misunderstanding, what I meant is that I proposed you to check how the features are built, if you want to use `.fit()` the features have to be built differently than in `squad_convert_examples_to_features`, also you have to use TF 2.2. Otherwise if you want to use this method, you have to pass by the trainer.
Also why using `TFBertForSequenceClassification` instead of `TFBertForQuestionAnswering`?<|||||>Thank you for the answer. I prefare to use `fit`, you dont support it?
Anyway, this is the status with the `VFTrainer`:
I've used tensorflow 2.1.0 and I've now upgradeed to 2.2.0.
I still have problems:
```python
trainer = TFTrainer(model=model, args=training_args, train_dataset=train_dataset, eval_dataset=dev_dataset)
print(f"Created TFTrainer")
trainer.train()
```
It does create the `TFTrainer`, but when getting to the `.train()` cmd it fails:
```python
Created TFTrainer
WARNING:tensorflow:From /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py:364: StrategyBase.experimental_run_v2 (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
renamed to `run`
WARNING:tensorflow:From /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:1817: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/indexed_slices.py:434: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
Traceback (most recent call last):
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/my_squad_tf_with_trainer.py", line 112, in <module>
main()
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/my_squad_tf_with_trainer.py", line 34, in main
trainer.train()
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py", line 277, in train
for training_loss in self._training_steps():
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py", line 323, in _training_steps
self._apply_gradients()
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
result = self._call(*args, **kwds)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 627, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 506, in _initialize
*args, **kwds))
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2446, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2777, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2667, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 981, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 441, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3299, in bound_method_wrapper
return wrapped_fn(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 968, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py:329 _apply_gradients *
self.args.strategy.experimental_run_v2(self._step)
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py:343 _step *
self.optimizer.apply_gradients(list(zip(gradients, vars)))
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/optimization_tf.py:135 apply_gradients *
return super(AdamWeightDecay, self).apply_gradients(zip(grads, tvars), name=name,)
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:478 apply_gradients **
self._create_all_weights(var_list)
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:663 _create_all_weights
self._create_slots(var_list)
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/adam.py:156 _create_slots
self.add_slot(var, 'm')
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:716 add_slot
.format(strategy, var))
ValueError: Trying to create optimizer slot variable under the scope for tf.distribute.Strategy (<tensorflow.python.distribute.one_device_strategy.OneDeviceStrategy object at 0x7f2d141aec50>), which is different from the scope used for the original variable (<tf.Variable 'tf_bert_for_question_answering/bert/embeddings/word_embeddings/weight:0' shape=(28996, 768) dtype=float32, numpy=
array([[-0.00054784, -0.04156886, 0.01308366, ..., -0.0038919 ,
-0.0335485 , 0.0149841 ],
[ 0.01688265, -0.03106827, 0.0042053 , ..., -0.01474032,
-0.03561099, -0.0036223 ],
[-0.00057234, -0.02673604, 0.00803954, ..., -0.01002474,
-0.0331164 , -0.01651673],
...,
[-0.00643814, 0.01658491, -0.02035619, ..., -0.04178825,
-0.049201 , 0.00416085],
[-0.00483562, -0.00267701, -0.02901638, ..., -0.05116647,
0.00449265, -0.01177113],
[ 0.03134822, -0.02974372, -0.02302896, ..., -0.01454749,
-0.05249038, 0.02843569]], dtype=float32)>). Make sure the slot variables are created under the same strategy scope. This may happen if you're restoring from a checkpoint outside the scope
```
Thank you<|||||>This error means that you haven't created the model in the proper scope. Did you use the scope created in the TrainerArgs?
What gives you the following command line without touching to the initial code:
```
python examples/question-answering/run_tf_squad.py \
--model_name_or_path bert-base-uncased \
--output_dir model \
--max-seq-length 384 \
--num_train_epochs 2 \
--per_gpu_train_batch_size 8 \
--per_gpu_eval_batch_size 16 \
--do_train \
--logging_dir logs \
--mode question-answering \
--logging_steps 10 \
--learning_rate 3e-5 \
--doc_stride 128 \
--optimizer_name adamw
```<|||||>That code works, **but** I need one extra thing: evaluation/prediction on test dataset, and it doesn't work for me.
I took the `run_tf_squad.py` and added simple changes:
```python
test_examples = processor.get_dev_examples(data_args.data_dir, filename='test-v1.1.json')
test_dataset = (
squad_convert_examples_to_features(
examples=test_examples,
tokenizer=tokenizer,
max_seq_length=data_args.max_seq_length,
doc_stride=data_args.doc_stride,
max_query_length=data_args.max_query_length,
is_training=False,
return_dataset="tf",
)
)
```
That is, only adding the test dataset.
Now I want to evaluate my final model on it. I tried with both predict and evaluate and it doesn't work.
Try 1 -
```python
results = trainer.evaluate(test_dataset)
```
Trace:
```python
05/24/2020 10:55:39 - INFO - transformers.trainer_tf - ***** Running Evaluation *****
05/24/2020 10:55:39 - INFO - transformers.trainer_tf - Batch size = 16
Traceback (most recent call last):
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/my_run_squad_tf_with_trainer.py", line 208, in <module>
main()
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/my_run_squad_tf_with_trainer.py", line 203, in main
# results = trainer.evaluate(test_dataset)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py", line 246, in evaluate
output = self._prediction_loop(eval_dataset, description="Evaluation")
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py", line 195, in _prediction_loop
loss, logits = self._evaluate_steps(features, labels)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
result = self._call(*args, **kwds)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 627, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 506, in _initialize
*args, **kwds))
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2446, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2777, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2667, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 981, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 441, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3299, in bound_method_wrapper
return wrapped_fn(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 968, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py:171 _evaluate_steps *
per_replica_loss, per_replica_logits = self.args.strategy.experimental_run_v2(
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py:400 _run_model *
logits = self.model(features, training=training)
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/modeling_tf_bert.py:1163 call *
outputs = self.bert(inputs, **kwargs)
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/modeling_tf_bert.py:548 call *
extended_attention_mask = attention_mask[:, tf.newaxis, tf.newaxis, :]
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py:984 _slice_helper
name=name)
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py:1150 strided_slice
shrink_axis_mask=shrink_axis_mask)
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py:10179 strided_slice
shrink_axis_mask=shrink_axis_mask, name=name)
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:744 _apply_op_helper
attrs=attr_protos, op_def=op_def)
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py:595 _create_op_internal
compute_device)
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/ops.py:3327 _create_op_internal
op_def=op_def)
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/ops.py:1817 __init__
control_input_ops, op_def)
/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/ops.py:1657 _create_c_op
raise ValueError(str(e))
ValueError: Index out of range using input dim 1; input has only 1 dims for '{{node tf_bert_for_question_answering/bert/strided_slice}} = StridedSlice[Index=DT_INT32, T=DT_INT32, begin_mask=9, ellipsis_mask=0, end_mask=9, new_axis_mask=6, shrink_axis_mask=0](per_replica_features, tf_bert_for_question_answering/bert/strided_slice/stack, tf_bert_for_question_answering/bert/strided_slice/stack_1, tf_bert_for_question_answering/bert/strided_slice/stack_2)' with input shapes: [128], [4], [4], [4] and with computed input tensors: input[3] = <1 1 1 1>.
```
Try 2:
```python
predictions = trainer.predict(test_dataset)
```
Trace:
```python
05/24/2020 11:06:50 - INFO - transformers.trainer_tf - ***** Running Prediction *****
05/24/2020 11:06:50 - INFO - transformers.trainer_tf - Batch size = 16
Traceback (most recent call last):
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/my_run_squad_tf_with_trainer.py", line 208, in <module>
main()
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/my_run_squad_tf_with_trainer.py", line 201, in main
predictions = trainer.predict(test_dataset)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py", line 430, in predict
return self._prediction_loop(test_dataset, description="Prediction")
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py", line 213, in _prediction_loop
preds = logits.numpy()
AttributeError: 'tuple' object has no attribute 'numpy'
```<|||||>That's normal: evaluation/prediction is not implemented yet. I have to make the example compliant with the SQuAD metric from the `nlp` framework. It means that for now only training is possible.
But if you want to make this integration yourself and do a PR, you are very welcome to do it :) Otherwise I think I will be able to do it in the next two weeks. Really sorry for that.<|||||>Thank you for the answers.
That's why I tried to use the normal tensorflow `fit` and `predict` methods as shown here https://blog.tensorflow.org/2019/11/hugging-face-state-of-art-natural.html.
Basically I just want to do training and evaluation during training, and then testing on the test dataset.
I succeeded in doing it with the PyTorch model (`run_squad.py`), and I have now tried to do it with the TensorFlow model as well. If it is implemented in the future that will be great; I will wait.
Thanks :) <|||||>I very quickly coded this so it is not really tested, but it can give you an idea of how to use the `.fit()` method. It is based on the Colab version proposed for the `nlp` framework.
```python
from transformers import (
BertTokenizerFast,
TFBertForQuestionAnswering,
)
import tensorflow_datasets as tfds
import tensorflow as tf
ds = tfds.load("squad")
tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = TFBertForQuestionAnswering.from_pretrained("bert-base-cased")
def get_correct_alignement(context, gold_text, start_idx):
end_idx = start_idx + len(gold_text)
if context[start_idx:end_idx] == gold_text:
return start_idx, end_idx # When the gold label position is good
elif context[start_idx-1:end_idx-1] == gold_text:
return start_idx-1, end_idx-1 # When the gold label is off by one character
elif context[start_idx-2:end_idx-2] == gold_text:
return start_idx-2, end_idx-2 # When the gold label is off by two character
else:
raise ValueError()
def convert_to_tf_features(example, training=True):
encodings = tokenizer.encode_plus(example["context"].numpy().decode("utf-8"), example["question"].numpy().decode("utf-8"), pad_to_max_length=True, max_length=512)
start_positions, end_positions = [], []
if training:
start_idx, end_idx = get_correct_alignement(example["context"].numpy().decode("utf-8"), example["answers"]["text"][0].numpy().decode("utf-8"), example["answers"]["answer_start"][0].numpy())
start = encodings.char_to_token(0, start_idx)
end = encodings.char_to_token(0, end_idx-1)
if start is None or end is None:
return None, None
start_positions.append(start)
end_positions.append(end)
else:
        for i, (start, text) in enumerate(zip(example["answers"]["answer_start"], example["answers"]["text"])):
            start_idx, end_idx = get_correct_alignement(example["context"].numpy().decode("utf-8"), text.numpy().decode("utf-8"), start.numpy())
start = encodings.char_to_token(0, start_idx)
end = encodings.char_to_token(0, end_idx-1)
if start is None or end is None:
return None, None
start_positions.append(start)
end_positions.append(end)
if start_positions and end_positions:
encodings.update({'output_1': start_positions,
'output_2': end_positions})
return encodings, {'output_1': start_positions, 'output_2': end_positions}
train_features = {}
train_labels = {}
for item in ds["train"]:
feature, label = convert_to_tf_features(item)
if feature is not None and label is not None:
for k, v in feature.items():
train_features.setdefault(k, []).append(v)
for k, v in label.items():
train_labels.setdefault(k, []).append(v)
train_tfdataset = tf.data.Dataset.from_tensor_slices((train_features, train_labels)).batch(8)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=opt,
loss={'output_1': loss_fn, 'output_2': loss_fn},
loss_weights={'output_1': 1., 'output_2': 1.},
metrics=['accuracy'])
model.fit(train_tfdataset, epochs=1, steps_per_epoch=3)
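# Inference would follow the same pattern; a rough, untested sketch (assuming a hypothetical
# `test_tfdataset` built the same way as `train_tfdataset`, just without the label dict):
# start_logits, end_logits = model.predict(test_tfdataset)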
```<|||||>Thanks for the help :)
I've succeeded in using your code as a reference with my dataset, converting examples to features:
```python
def get_tf_dataset(args, processor, tokenizer, dataset_type):
filename_by_case = {'train': args.train_file, 'dev': args.dev_file, 'test': args.test_file}
func_by_case = {'train': processor.get_train_examples, 'dev': processor.get_dev_examples, 'test': processor.get_dev_examples}
examples = func_by_case[dataset_type](args.data_dir, filename=filename_by_case[dataset_type])
train_features = {}
train_labels = {}
for item in examples:
feature, label = convert_to_tf_features(item, tokenizer)
if feature is not None and label is not None:
for k, v in feature.items():
train_features.setdefault(k, []).append(v)
for k, v in label.items():
train_labels.setdefault(k, []).append(v)
tfdataset = tf.data.Dataset.from_tensor_slices((train_features, train_labels)).batch(8)
return tfdataset
def convert_to_tf_features(example, tokenizer, training=True):
context = example.context_text # example["context"].numpy().decode("utf-8")
question = example.question_text # example["question"].numpy().decode("utf-8")
encodings = tokenizer.encode_plus(context, question, pad_to_max_length=True, max_length=512)
start_positions, end_positions = [], []
first_answer = example.answers[0] if len(example.answers) > 0 else "" # example["answers"]["text"][0].numpy().decode("utf-8")
first_answer_start = example.start_position # example["answers"]["answer_start"][0].numpy()
start_idx, end_idx = get_correct_alignement(context,
first_answer,
first_answer_start)
start = encodings.char_to_token(0, start_idx)
end = encodings.char_to_token(0, end_idx - 1) if end_idx > 0 else 0
if start is None or end is None:
return None, None
start_positions.append(start)
end_positions.append(end)
if start_positions and end_positions:
encodings.update({'output_1': start_positions,
'output_2': end_positions})
return encodings, {'output_1': start_positions, 'output_2': end_positions}
```
I will check how to deal with the impossible answers using other references. In this example it's an empty string "" when there is no answer and `end_position = 0`. Thanks.<|||||>Hi! How did you solve the **Try 1** problem?
AttributeError: 'NoneType' object has no attribute 'strip' |
transformers | 4,452 | closed | Value matrix of self-attention | 05-19-2020 07:39:10 | 05-19-2020 07:39:10 | ||
transformers | 4,451 | closed | ❓ Warning : This overload of addcdiv_ is deprecated | # ❓ Questions & Help
When running the [official Colab example of GLUE](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/trainer/01_text_classification.ipynb), during training I receive a `UserWarning`:
```
/pytorch/torch/csrc/utils/python_arg_parser.cpp:756: UserWarning: This overload of addcdiv_ is deprecated:
addcdiv_(Number value, Tensor tensor1, Tensor tensor2)
Consider using one of the following signatures instead:
addcdiv_(Tensor tensor1, Tensor tensor2, *, Number value)
```
---
**Is it expected?**
| 05-19-2020 05:36:26 | 05-19-2020 05:36:26 | Not expected, but shouldn't be an issue. Feel free to open a PR swapping args in https://github.com/huggingface/transformers/blob/31eedff5a0fc47d60609089627af6698c21da88d/src/transformers/optimization.py#L165 |
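A hedged sketch of the argument swap the warning asks for (the tensor names are illustrative stand-ins for the optimizer state, not the repository's actual code):
```python
import torch

p, exp_avg, denom, step_size = torch.randn(3), torch.randn(3), torch.rand(3) + 1e-8, 1e-3
# Deprecated positional form that triggers the warning:
# p.addcdiv_(-step_size, exp_avg, denom)
# Keyword form suggested by the warning message:
p.addcdiv_(exp_avg, denom, value=-step_size)
```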
transformers | 4,450 | closed | [Trainer] move model to device before setting optimizer | Fixes #4240
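An illustration of the ordering the title describes (a sketch with placeholder model and hyperparameters, not the PR's actual diff):
```python
import torch
from transformers import AdamW, BertForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

model.to(device)                                # move the model to the target device first
optimizer = AdamW(model.parameters(), lr=5e-5)  # then build the optimizer over the moved parameters
```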
Thanks @shaoyent for diagnosing the issue | 05-19-2020 03:09:00 | 05-19-2020 03:09:00 | |
transformers | 4,449 | closed | [Questions & Help] The loss doesn't decrease correctly while training BERT from scratch | # ❓ Questions & Help
## Details
I am now using [huggingface/transformers](https://github.com/huggingface/transformers) to train a BERT model on **1m** wiki data **from scratch**, but the training loss looks odd. Before showing the details of the training process, I will first share the scripts and configs I used:
```
python run_language_modeling.py --output_dir $OUTPUT_DIR \
--model_type bert \
--mlm \
--config_name $CONFIG_AND_DATA_DIR \
--tokenizer_name $CONFIG_AND_DATA_DIR \
--do_train \
--do_eval \
--num_train_epochs 20 \
--learning_rate 1e-4 \
--save_steps 250 \
--per_gpu_train_batch_size 64 \
--evaluate_during_training \
--seed 404 \
--block_size 256 \
--train_data_file $DATA_DIR/train.txt \
--eval_data_file $DATA_DIR/valid.txt \
--evaluate_during_training \
--logging_steps 250 > log.bert
```
where [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) is the python script provided by huggingface. I didn't change the config of BERT, except for the vocabulary size. The vocabulary, or the tokenizer, was trained using [huggingface/tokenizers](https://github.com/huggingface/tokenizers).
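For reference, a rough sketch of how such a WordPiece vocabulary can be built with huggingface/tokenizers (the settings below are illustrative, not the ones actually used, and the exact save call depends on the tokenizers version):
```python
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer(lowercase=True)
tokenizer.train(
    files=["train.txt"],
    vocab_size=30000,
    min_frequency=2,
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
)
tokenizer.save_model(".")  # writes vocab.txt; older tokenizers releases expose a different save method
```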
I put the log of the training loss [here](https://gist.github.com/ghrua/01fd859707923f80f1e16af5c2bd3f6a). We can see that, after 20 epochs, the training loss decreases only from `7.84` to `7.60`. That is odd: since the raw data is just 1 million lines, the training loss should decrease sharply with this amount of data. Note that I also used the same Python script to train a GPT-2 from scratch on the same data, and it worked very well; the loss decreased as expected.
I have tried several ways to address this issue:
1. Set the batch size as large as the GPU can afford. Since BERT only predicts 15% of the tokens at each step, a larger batch size may give the model more error signal during training. However, it didn't help.
2. Maybe the learning rate is too small. I tried adjusting the `learning_rate` to 5e-4; unfortunately, the converged loss became even worse.
3. Maybe the `vocab.txt`, which was extracted from my training data using the toolkit [huggingface/tokenizers](https://github.com/huggingface/tokenizers), has some problem. So I used the `vocab.txt` and `config.json` downloaded from this repo to run the Python script instead, but it hit the same problem.
4. I also ran the same script on 2k examples. I trained the BERT model for 200 epochs, and the converged training loss was around `6.8`; the model cannot even overfit a toy dataset.
Thanks for your kind help!
**A link to original question on Stack Overflow**:
[Link to the question asked on SO](https://stackoverflow.com/questions/61873435/the-loss-doesnt-decrease-correctly-while-training-bert-from-scratch)
| 05-19-2020 03:00:42 | 05-19-2020 03:00:42 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,448 | closed | Correct TF formatting to exclude LayerNorms from weight decay | Fixes #4360
Layer Norm is formatted in the wrong way for TensorFlow. This causes it not to be excluded from weight decay in the [run_tf_glue.py](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_tf_glue.py) script.
This PR simply formats the string to fit the TensorFlow naming.
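A rough illustration of the kind of pattern matching involved (the patterns and the variable name below are examples, not the PR's actual strings):
```python
import re

def excluded_from_weight_decay(var_name, patterns=("LayerNorm", "layer_norm", "bias")):
    # weight decay is skipped for any variable whose name matches one of the patterns
    return any(re.search(p, var_name) is not None for p in patterns)

print(excluded_from_weight_decay("tf_bert_model/bert/encoder/layer_._0/output/LayerNorm/gamma:0"))
```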
Not sure if you want a test for this? One option would be to check that all elements in `_exclude_from_weight_decay` trigger a regexp match, but that seems a bit overkill. | 05-19-2020 02:46:46 | 05-19-2020 02:46:46 | Hello!
Thanks for the fix! Just one suggestion above :)<|||||>Thanks for the feedback! Sounds like a good idea, added that. Are the failing ci tests an issue? Everything passes on my machine. <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4448?src=pr&el=h1) Report
> Merging [#4448](https://codecov.io/gh/huggingface/transformers/pull/4448?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2d184cb553ee20943b03b253f44300e466357871&el=desc) will **increase** coverage by `0.84%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4448?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4448 +/- ##
==========================================
+ Coverage 77.30% 78.14% +0.84%
==========================================
Files 120 120
Lines 20027 20027
==========================================
+ Hits 15481 15651 +170
+ Misses 4546 4376 -170
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4448?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `79.24% <ΓΈ> (ΓΈ)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.08% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.25% <0.00%> (+1.10%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <0.00%> (+1.37%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.73% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.81% <0.00%> (+2.62%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4448?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4448?src=pr&el=footer). Last update [2d184cb...d99d65c](https://codecov.io/gh/huggingface/transformers/pull/4448?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Awesome! LGTM :)
/cc @julien-c and @LysandreJik |
transformers | 4,447 | closed | TF Beam Search generation seems to be flaky sometimes | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): ALL TF generate models
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
* [ ] all generate beam search tests in TF
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Some commits are failing due to `Beam size should alway be full` in circle ci - this should actually never happen. See a failed circle ci here: https://circleci.com/gh/huggingface/transformers/39780?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link .
## Expected behavior
Circle ci should not fail with this message. | 05-18-2020 23:12:20 | 05-18-2020 23:12:20 | |
transformers | 4,446 | closed | Make get_last_lr in trainer backward compatible | Fixes https://github.com/huggingface/transformers/issues/3959 .
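A rough illustration of the kind of compatibility shim the title refers to (a sketch, not the PR's actual diff):
```python
import torch
from packaging import version

def last_learning_rate(scheduler):
    # get_last_lr() only exists in newer PyTorch releases; older ones expose get_lr()
    if version.parse(torch.__version__) >= version.parse("1.4.0"):
        return scheduler.get_last_lr()[0]
    return scheduler.get_lr()[0]
```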
@julien-c | 05-18-2020 23:08:18 | 05-18-2020 23:08:18 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4446?src=pr&el=h1) Report
> Merging [#4446](https://codecov.io/gh/huggingface/transformers/pull/4446?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/42e8fbfc51ae4990b24a3c92fa0c5d3481dfc821&el=desc) will **increase** coverage by `0.85%`.
> The diff coverage is `50.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4446?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4446 +/- ##
==========================================
+ Coverage 77.16% 78.02% +0.85%
==========================================
Files 120 120
Lines 20087 20088 +1
==========================================
+ Hits 15501 15673 +172
+ Misses 4586 4415 -171
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4446?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `41.60% <50.00%> (+1.22%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.51% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <0.00%> (+1.37%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.73% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.81% <0.00%> (+2.62%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4446?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4446?src=pr&el=footer). Last update [42e8fbf...740126d](https://codecov.io/gh/huggingface/transformers/pull/4446?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks! |
transformers | 4,445 | closed | Generation with EncoderDecoder Model | # ❓ Questions & Help
Hi,
I am using the EncoderDecoder model and I would like to use the generate method for sequence generation. As I have read in the docs, the generate method can be used with any pretrained HF model with an LM head on top.
So wrapping a pretrained LM model (e.g. GPT2LMHeadModel, BertForMaskedLM), together with an encoder, in the Encoder-Decoder class gives a null bos_token_id, and the generate method does not work properly. However, using only a pretrained LM model (without wrapping it in the Encoder-Decoder class) gives a valid bos_token_id (because the config file contains bos_token_id).
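For illustration, this is roughly the kind of call I am attempting (the model names and token choices are placeholders, and the exact arguments depend on the transformers version):
```python
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "bert-base-cased")

input_ids = tokenizer.encode("Some source text to rewrite.", return_tensors="pt")
# Without an explicit start token, generate() falls back to config.bos_token_id, which is None here
generated = model.generate(input_ids, decoder_start_token_id=tokenizer.cls_token_id)
```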
How should I handle the above issue?
Thank you in advance. | 05-18-2020 19:00:52 | 05-18-2020 19:00:52 | Do we have any updates on this issue?<|||||>I will take a look at this at the end of next week - will get to you! <|||||>> I will take a look at this at the end of next week - will get to you!
Thanks a lot!<|||||>Hi @manzar96,
Multiple bugs were fixed in #4680 . Can you please take a look whether this error persists?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,444 | closed | model.save() does not save keras model that includes DIstillBert layer | # 🐛 Bug
## Information
I am trying to build a Keras model in which I use DistilBERT as a non-trainable embedding layer. The model compiles and fits well, and even the predict method works. But when I want to save it using model.save('model.h5'), it fails and shows the following error:
```
> ---------------------------------------------------------------------------
> NotImplementedError Traceback (most recent call last)
> <ipython-input-269-557c9cec7497> in <module>
> ----> 1 model.get_config()
>
> /usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py in get_config(self)
> 966 if not self._is_graph_network:
> 967 raise NotImplementedError
> --> 968 return copy.deepcopy(get_network_config(self))
> 969
> 970 @classmethod
>
> /usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py in get_network_config(network, serialize_layer_fn)
> 2117 filtered_inbound_nodes.append(node_data)
> 2118
> -> 2119 layer_config = serialize_layer_fn(layer)
> 2120 layer_config['name'] = layer.name
> 2121 layer_config['inbound_nodes'] = filtered_inbound_nodes
>
> /usr/local/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py in serialize_keras_object(instance)
> 273 return serialize_keras_class_and_config(
> 274 name, {_LAYER_UNDEFINED_CONFIG_KEY: True})
> --> 275 raise e
> 276 serialization_config = {}
> 277 for key, item in config.items():
>
> /usr/local/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py in serialize_keras_object(instance)
> 268 name = get_registered_name(instance.__class__)
> 269 try:
> --> 270 config = instance.get_config()
> 271 except NotImplementedError as e:
> 272 if _SKIP_FAILED_SERIALIZATION:
>
> /usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py in get_config(self)
> 965 def get_config(self):
> 966 if not self._is_graph_network:
> --> 967 raise NotImplementedError
> 968 return copy.deepcopy(get_network_config(self))
> 969
>
> NotImplementedError:
```
The language I am using the model in is English.
The problem arises when using my own modified scripts: (give details below)
```
from transformers import DistilBertConfig, TFDistilBertModel, DistilBertTokenizer
max_len = 8
distil_bert = 'distilbert-base-uncased'
config = DistilBertConfig(dropout=0.2, attention_dropout=0.2)
config.output_hidden_states = False
transformer_model = TFDistilBertModel.from_pretrained(distil_bert, config = config)
input_word_ids = tf.keras.layers.Input(shape=(max_len,), dtype = tf.int32, name = "input_word_ids")
distill_output = transformer_model(input_word_ids)[0]
cls_out = tf.keras.layers.Lambda(lambda seq: seq[:, 0, :])(distill_output)
X = tf.keras.layers.BatchNormalization()(cls_out)
X = tf.keras.layers.Dense(256, activation='relu')(X)
X = tf.keras.layers.Dropout(0.2)(X)
X = tf.keras.layers.BatchNormalization()(X)
X = tf.keras.layers.Dense(128, activation='relu')(X)
X = tf.keras.layers.Dropout(0.2)(X)
X = tf.keras.layers.BatchNormalization()(X)
X = tf.keras.layers.Dense(64, activation='relu')(X)
X = tf.keras.layers.Dropout(0.2)(X)
X = tf.keras.layers.Dense(2)(X)
model = tf.keras.Model(inputs=input_word_ids, outputs=X)
for layer in model.layers[:3]:
layer.trainable = False
```
The task I am working on uses my own dataset.
## To reproduce
Steps to reproduce the behavior:
1. Run the above code
2. You will get the error when saving the model as
```
model.save('model.h5')
```
You can get the same error if you try:
```
model.get_config()
```
**_An interesting observation:_**
If you save the model without specifying ".h5", like
```
model.save('./model')
```
it saves the model in the TensorFlow saved_model format and creates folders (assets (empty), variables, and some index files). But if you try to load the model, it produces different errors related to DistilBert/Bert. This may be due to some naming inconsistency (input_ids vs. inputs, see below) inside the DistilBert model.
```
new_model = tf.keras.models.load_model('./model')
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/usr/local/lib/python3.7/site-packages/tensorflow/python/util/nest.py in assert_same_structure(nest1, nest2, check_types, expand_composites)
377 _pywrap_utils.AssertSameStructure(nest1, nest2, check_types,
--> 378 expand_composites)
379 except (ValueError, TypeError) as e:
ValueError: The two structures don't have the same nested structure.
First structure: type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}
Second structure: type=TensorSpec str=TensorSpec(shape=(None, 8), dtype=tf.int32, name='inputs')
More specifically: Substructure "type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}" is a sequence, while substructure "type=TensorSpec str=TensorSpec(shape=(None, 8), dtype=tf.int32, name='inputs')" is not
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-229-b46ed71fd9ad> in <module>
----> 1 new_model = tf.keras.models.load_model(keras_model_path)
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/saving/save.py in load_model(filepath, custom_objects, compile)
188 if isinstance(filepath, six.string_types):
189 loader_impl.parse_saved_model(filepath)
--> 190 return saved_model_load.load(filepath, compile)
191
192 raise IOError(
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py in load(path, compile)
114 # TODO(kathywu): Add saving/loading of optimizer, compiled losses and metrics.
115 # TODO(kathywu): Add code to load from objects that contain all endpoints
--> 116 model = tf_load.load_internal(path, loader_cls=KerasObjectLoader)
117
118 # pylint: disable=protected-access
/usr/local/lib/python3.7/site-packages/tensorflow/python/saved_model/load.py in load_internal(export_dir, tags, loader_cls)
602 loader = loader_cls(object_graph_proto,
603 saved_model_proto,
--> 604 export_dir)
605 root = loader.get(0)
606 if isinstance(loader, Loader):
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py in __init__(self, *args, **kwargs)
186 self._models_to_reconstruct = []
187
--> 188 super(KerasObjectLoader, self).__init__(*args, **kwargs)
189
190 # Now that the node object has been fully loaded, and the checkpoint has
/usr/local/lib/python3.7/site-packages/tensorflow/python/saved_model/load.py in __init__(self, object_graph_proto, saved_model_proto, export_dir)
121 self._concrete_functions[name] = _WrapperFunction(concrete_function)
122
--> 123 self._load_all()
124 self._restore_checkpoint()
125
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py in _load_all(self)
213
214 # Finish setting up layers and models. See function docstring for more info.
--> 215 self._finalize_objects()
216
217 @property
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py in _finalize_objects(self)
504 layers_revived_from_saved_model.append(node)
505
--> 506 _finalize_saved_model_layers(layers_revived_from_saved_model)
507 _finalize_config_layers(layers_revived_from_config)
508
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py in _finalize_saved_model_layers(layers)
675 call_fn = _get_keras_attr(layer).call_and_return_conditional_losses
676 if call_fn.input_signature is None:
--> 677 inputs = infer_inputs_from_restored_call_function(call_fn)
678 else:
679 inputs = call_fn.input_signature[0]
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py in infer_inputs_from_restored_call_function(fn)
919 for concrete in fn.concrete_functions[1:]:
920 spec2 = concrete.structured_input_signature[0][0]
--> 921 spec = nest.map_structure(common_spec, spec, spec2)
922 return spec
923
/usr/local/lib/python3.7/site-packages/tensorflow/python/util/nest.py in map_structure(func, *structure, **kwargs)
609 for other in structure[1:]:
610 assert_same_structure(structure[0], other, check_types=check_types,
--> 611 expand_composites=expand_composites)
612
613 flat_structure = [flatten(s, expand_composites) for s in structure]
/usr/local/lib/python3.7/site-packages/tensorflow/python/util/nest.py in assert_same_structure(nest1, nest2, check_types, expand_composites)
383 "Entire first structure:\n%s\n"
384 "Entire second structure:\n%s"
--> 385 % (str(e), str1, str2))
386
387
ValueError: The two structures don't have the same nested structure.
First structure: type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}
Second structure: type=TensorSpec str=TensorSpec(shape=(None, 8), dtype=tf.int32, name='inputs')
More specifically: Substructure "type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}" is a sequence, while substructure "type=TensorSpec str=TensorSpec(shape=(None, 8), dtype=tf.int32, name='inputs')" is not
Entire first structure:
{'input_ids': .}
Entire second structure:
.
```
## Expected behavior
I expect to have a normal saving and loading of the model.
## Environment info
- `transformers` version: 2.9.1
- Platform:
- Python version: 3.7.6
- Tensorflow version (CPU): 2.2.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 05-18-2020 18:00:40 | 05-18-2020 18:00:40 | Same issue<|||||>Hi, we don't fully support saving/loading these models using keras' save/load methods (yet). In the meantime, please use `model.from_pretrained` or `model.save_pretrained`, which also saves the configuration file.<|||||>Hello @LysandreJik ,
Thank you for the information.
Could you point me in a direction and tell me a little more about the implementation procedure, so that I can do some research and possibly implement the methods? If everything goes well, I could make a pull request that might benefit others as well.
Sabber<|||||>I had this exact error. I got around it by saving the weights and the code that creates the model. After training your model, run `model.save_weights('path/savefile')`. Note there is no .h5 on it.
When you want to reuse the model later, run your code until `model.compile()`. Then, `model.load_weights('path/savefile')`. <|||||>Thanks, works perfectly<|||||>Does this work now with newer versions?<|||||>I am also facing same issue. Any solution.<|||||>The issue still occurs on TF 2.6.0 which is very disappointing.
I tried training on Colab's TPU and on GPU.
- For the TPU case I did not find a way to save and then load the model properly;
- For the GPU case model.save() throws a 'NotImplemented' error. However, saving weights and then loading them into a compiled model works (a rough sketch follows the list):
1. Save weights, either with callbacks or with `model.save_weights`;
2. When you need the model for inference, firstly create the model of the same architecture that was used for training (I packed everything into a create_model() function to ensure the architecture is the same)
3. Compile the model
4. Use `model.load_weights`
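A minimal sketch of that workaround (the `create_model()` helper, the compile arguments, and the file path are placeholders):
```python
model = create_model()                        # rebuild the exact architecture used for training
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.load_weights("path/savefile")           # weights saved earlier with model.save_weights("path/savefile")
```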
<|||||>cc @Rocketknight1 <|||||>This still occurs, not only with distilbert but also many others. I don't see why this issue was closed - The described workaround is quite cumbersome and error-prone, and I don't see why this cannot be implemented inside the library, given that the configuration should already be in place to allow overriding get_config / from_config methods?<|||||>Hi, TF maintainer here! You're right, and we're going to reopen this one. We're very constrained on time right now, though - I'll try to investigate it as soon as I get the chance.<|||||>Thanks for reopening this. I think i was able to work around it by using the model.distilbert property, which itself is the base layer. Maybe it would be as simple as returning the base layers get_config/from_config with some tweaks?<|||||>@Zahlii You are correct - the underlying issue is simply that `get_config` and `from_config` were never implemented correctly for most Transformers models! We only got away with it for this long because a lot of the standard training setups never called them. We're working on a PR right now.<|||||>We've attempted a patch at #14361 - if anyone has any suggestions, or wants to try it out, please let us know! You can test the PR branch with `pip install git+https://github.com/huggingface/transformers.git@add_get_config`<|||||>The patch has now been merged. It'll be in the next release, or if anyone else is encountering this issue before then, you can install from master with `pip install git+https://github.com/huggingface/transformers.git`<|||||>Since the patch in https://github.com/huggingface/transformers/pull/14361 has been reverted, is there a timeline for a fix? (Or is there a known workaround one could use?) Thanks :) <|||||>@skbaur Although that patch was reverted, we quickly followed up with a fixed one at https://github.com/huggingface/transformers/pull/14415 , so the issue should now be resolved. If you're still encountering this issue after updating to the most recent version of Transformers, please let me know!<|||||>> @skbaur Although that patch was reverted, we quickly followed up with a fixed one at #14415 , so the issue should now be resolved. If you're still encountering this issue after updating to the most recent version of Transformers, please let me know!
Hi @Rocketknight1 , thanks for your reply! You are right, it does work when saving in the tensorflow format (not hdf5). This does solve the issue I was facing.
What did not work for me was this (minimal example adapted from https://github.com/huggingface/transformers/issues/14430 ):
```
import tensorflow as tf
import transformers
import sys
print(sys.version)
print(tf.__version__)
print(transformers.__version__)
bert = transformers.TFBertModel(transformers.BertConfig())
input_ids = tf.keras.layers.Input(shape=(512,), dtype=tf.int32)
model = tf.keras.Model(inputs=[input_ids], outputs=[bert(input_ids).last_hidden_state])
model.compile()
# tf.keras.models.save_model(model, "model_tf", save_format='tf') # This works
tf.keras.models.save_model(model, "model_h5.h5", save_format='h5') # This fails
```
Output:
```
3.6.9 (default, Oct 8 2020, 12:12:24)
[GCC 8.4.0]
2.4.4
4.12.5
```
and then it fails with
```
~/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/functional.py in get_network_config(network, serialize_layer_fn)
1347 filtered_inbound_nodes.append(node_data)
1348
-> 1349 layer_config = serialize_layer_fn(layer)
1350 layer_config['name'] = layer.name
1351 layer_config['inbound_nodes'] = filtered_inbound_nodes
~/.local/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py in serialize_keras_object(instance)
248 return serialize_keras_class_and_config(
249 name, {_LAYER_UNDEFINED_CONFIG_KEY: True})
--> 250 raise e
251 serialization_config = {}
252 for key, item in config.items():
~/.local/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py in serialize_keras_object(instance)
243 name = get_registered_name(instance.__class__)
244 try:
--> 245 config = instance.get_config()
246 except NotImplementedError as e:
247 if _SKIP_FAILED_SERIALIZATION:
~/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in get_config(self)
2247
2248 def get_config(self):
-> 2249 raise NotImplementedError
2250
2251 @classmethod
NotImplementedError:
```
<|||||>Hi @skbaur, your code runs fine for me! Here's my outputs:
```
3.9.6 (default, Aug 18 2021, 19:38:01)
[GCC 7.5.0]
2.6.0
4.13.0.dev0
```
Can you try, in order:
1) Installing transformers from master with `pip install git+https://github.com/huggingface/transformers.git`
2) Updating TF to version 2.6 or 2.7
and let me know if either of those fixes it for you?<|||||>> Hi @skbaur, your code runs fine for me! Here's my outputs:
>
> ```
> 3.9.6 (default, Aug 18 2021, 19:38:01)
> [GCC 7.5.0]
> 2.6.0
> 4.13.0.dev0
> ```
>
> Can you try, in order:
>
> 1. Installing transformers from master with `pip install git+https://github.com/huggingface/transformers.git`
> 2. Updating TF to version 2.6 or 2.7
>
> and let me know if either of those fixes it for you?
Option 1. already seems to work (Installing transformers from master with pip install git+https://github.com/huggingface/transformers.git , but not updating TF).
The error reappears when downgrading back to transformers 4.12.5.<|||||>@skbaur It seems like one of the relevant PRs didn't make it into the release, in that case - please use the master version for now, and hopefully once 4.13 is released you can just use that instead! |
transformers | 4,443 | closed | Issues with the EncoderDecoderModel for sequence to sequence tasks | # ❓ Questions & Help
I have been attempting with various models to try to build an encoder-decoder, sequence to sequence transformer model. For the most part, I have been using BERT (bert-base-cased), but have encountered issues with various models.
The model is intended for an English to English sequence to sequence problem.
For reference, I had been trying to use the seq2seq example in this pull request as a template :
https://github.com/huggingface/transformers/pull/3402
But have needed to make some modifications to it to account for other recent changes in the EncoderDecoderModel class.
I have a hit a few main issues, three are posted here. I think at least some of them are possibly bugs in the EncoderDecoderModel code.
1. A recent commit made some major changes to the forward method, and I've been hitting issues with the section that defines the decoder_outputs (around line 253 of modeling_encoder_decoder.py.) The example in the pull request I linked does not provide decoder_input_ids when setting up the model, but that is now required by this code in your recent commit. When training, I modified the code to provide decoder_token_ids as the target tokens shifted one to the right with a PAD token in front, as described in various papers. However, I don't understand why this is required when in eval mode -- shouldn't the model not have any decoder input tokens when in test/eval mode, and only be able to see what the previous tokens it actually output were? I don't understand what I'm supposed to provide as decoder_input_ids when in evaluation mode, and haven't been able to find documentation on it.
The code I'm currently using for training looks something like this :
```
for step, batch in enumerate(epoch_iterator):
# Skip past any already trained steps if resuming training
if steps_trained_in_current_epoch > 0:
steps_trained_in_current_epoch -= 1
continue
model.train()
batch = tuple(t.to(args.device) for t in batch)
input_ids, output_ids, input_mask, output_mask, _, decoder_ids = batch
# add other inputs here, including kwargs
**inputs = {"input_ids": input_ids, "attention_mask": input_mask, 'decoder_input_ids': decoder_ids}**
# The output tuple structure depends on the model used and the arguments invoked
# For BERT-type models, this is
# decoder_predictions, encoded_embeddings, encoded_attention_mask = model(**inputs)
# For GPT2-type models, this at least starts with the decoder predictions
# See the EncoderDecoderModel class for more details
**output = model(**inputs)**
```
More context is given in the linked pull request, since again this is being copied from there. The initial pull request does not provide the 'decoder_input_ids' parameter, but it seems that is now required. My code is similar in eval mode, but without decoder_input_ids, and this code fails :
```
**for batch in tqdm(eval_dataloader, desc="Evaluating"):
batch = tuple(t.to(args.device) for t in batch)
input_ids, output_ids, input_mask, output_mask, _, decoder_ids = batch
with torch.no_grad():
inputs = {"input_ids": input_ids, "attention_mask": input_mask}
# The output tuple structure depends on the model used and the arguments invoked
# For BERT-type models, this is
# decoder_predictions, encoded_embeddings, encoded_attention_mask = model(**inputs)
# For GPT2-type models, this at least starts with the decoder predictions
# See the EncoderDecoderModel class for more details
output = model(**inputs)**
```
This code fails in modeling_encoder_decoder, line 283 with
ValueError: You have to specify either input_ids or inputs_embeds
2. The pull request uses a GPT2 model as an example, but that no longer works because the code mentioned from #1 requires some parameters like encoder_hidden_states that GPT2 does not take at initialization. When I try to create a GPT2 model I get exceptions regarding this extra parameter. In other words, when I switch from a bert-bert model to a gpt2-gpt2 model, the code posted above fails in the "forward" method of the EncoderDecoderModel (line 283 of modeling_encoder_decoder) because "encoder_hidden_states" is an unexpected param for GPT2. Is this intended / is GPT2 no longer supported for an encoder decoder architecture using this code?
3. This one is just more of a general question... but since I'm posting the above 2 as issues anyways, I figured I'd add it here in case anybody can clarify and save a separate issue being created..
I believe I'm doing this part correctly, but it was not handled in the example code so want to verify if possible... For the attention mask for the decoder, during training all non-PAD tokens are expected to be unmasked, and during evaluation no mask should be provided and a default causal mask will be used, right?
@patrickvonplaten , tagging you in this issue as requested.
Thank you for your time!! Let me know if you need more code, again my code is 95% or so identical to the run_seq2seq.py example in the linked PR, just with some changes to account for recent modifications in modeling_encoder_decoder.py | 05-18-2020 17:36:26 | 05-18-2020 17:36:26 | Update -- after hours working with this code, I somehow only now realized that the PR I linked had updates to modeling_encoder_decoder.py that fixed the issues I'm describing in part 2 of my issue, which is why the example works there.
I am still confused about part 1 (and 3) however, since it does not look like that PR changed anything about the input_ids for the decoder.<|||||>Yeah sorry, we changed the code base quite a bit since the PR you linked. So in general at the moment GPT2 cannot be used as a decoder because it is missing cross attention layers.
The only encoder-decoder model supported atm is a Bert-2-Bert model (this also included all models that inherit from BERT though: Roberta, ...). Do you currently use a Bert-2-Bert model?<|||||>Thanks Patrick. I did get a bert-2-bert model working for sequence to sequence but it really did not perform well on dummy tasks such as the one in the PR I linked. I am not sure I understand how a Bert-2-Bert model is supposed to work, isn't BERT an encoder architecture? How is it used as a decoder? (I was able to get the code working, but don't understand the theory behind a bert-2-bert model, and am wondering if that explains the poor performance with this model type.)<|||||>Can you link your code of your working Bert-2-Bert model here? Just a link to a GitHub repo or post it in the issue here would be great :-)<|||||>@patrickvonplaten My code was almost totally copied from that example in the pull request. I've been experimenting a bunch so it hasn't been constant, but I tried a bert2bert model again last night and while it looked like it was training properly etc, the model did not produce any results in the end.
I've pushed the code to a new repo here that you can look at https://github.com/dbaxter240/bert2bertexample
Since my original raising of this issue, I ended up cloning the transformers repo to manually make some of the changes that were in the pull request I linked. Since then, I've been able to get a GPT2 model to actually work reasonably well on the dummy problem, but Bert2Bert still fails.
The repo contains my modified copy of modeling_encoder_decoder.py so you can see what's going on. It's essentially a few of the same changes made to the file in the PR I linked.
I'm not sure if this now falls out of your realm to investigate since I've modified the source code now, but the Bert2Bert model should be working exactly as it was prior to me tweaking the source code. I've been reading into your documentation on how to use BERT as a decoder, and as far as I can tell I'm (or the existing source code is) providing the expected parameters correctly.
Thanks!<|||||>Hi @dbaxter240,
Multiple bugs were fixed in #4680. Can you please take a look whether this error persists?
I think ideally you should not copy paste old encoder decoder code into another repo since the code quickly becomes outdated and is hard to debug for us. The `EncoderDecoderModel` is still a very premature feature of this library and prone to change quickly. It would be great if you could try to use as much up-to-date code of this library as possible.
I'm very sorry, for only finding this big bug now! It seems like you have invested quite a lot of energy into your code. I will soon (~2 weeks) open-source a notebook giving a nice example of how the `EncoderDecoderModel` can be leverage to fine-tune a Bert2Bert model.
Also note that this PR #3402 is rather outdated and since we don't provide `EncoderDecoderModel` support for GPT2 at the moment still not possible.
<|||||>@patrickvonplaten Thank you very much for your time with this!
I haven't had too much time to play with the code including your change yet, but it looks like there are some differences in the behavior, so perhaps I will have better results once I'm able to put more time into training up the model!
I think the main question/issue I'm still hitting in my limited time toying with it is my question #1 from above. A main reason behind me trying to modify the source code originally was the required parameter of either decoder_input_ids or decoder_input_embeds, and not totally understanding what to provide there (at training vs. evaluation time.)
I'd taken a hint from the PR I'd mentioned which just passed the encoder hidden states as the decoder_input_embeds, so that's what I was trying to achieve. Using the code including your change, those parameters are required again and I can't quite use that approach.
It looks like the encoder hidden states **are** being passed into the decoder in the EncoderDecoderModel.forward() method via the encoder_hidden_states parameter, so that looks good, but then as mentioned in question 1 I'm not sure I understand what the expected input for decoder_input_ids or decoder_input_embeds is. Is the idea that you provide decoder_input_ids as the expected output token ids (shifted right with a PAD token) during training so the model has the expected output while training, but then completely mask those tokens during evaluation so your model can't "see the answer"?
I will keep playing with it to see if I can figure that piece out, but if you have any tips or input I would greatly appreciate it!
Thank you again for your help with this!<|||||>Maybe https://github.com/huggingface/transformers/issues/4647#issuecomment-636306986 might help as well here<|||||>@patrickvonplaten Thanks Patrick, that did clear up a fair bit for me (especially regarding not needing to shift the tokens, but I'm still not sure I understand the answer to my main question in #1 above.
In the issue you linked, you are providing the target sequence (converted to token ids) as decoder_input_ids for training. This makes sense to me, since the underlying code is shifting the tokens right by one for us. What I still don't understand is what to provide as the decoder_input_ids when doing evaluation.
1. If I do that same thing with my test set (feed the target sequence as decoder_input_ids), then I'm just basically feeding the answer to my model. I tested that it is in fact "cheating" and using this information by putting some crazy things in my test set which the model managed to classify accurately (it definitely should not have been able to.)
2. If I instead feed the source sequence converted to token ids during evaluation (as I've seen in some documentation) then I'm giving my model different information during training and evaluation.
3. If I try to not provide any decoder_input_ids during evaluation (after calling model.eval() ), then I get a "ValueError: You have to specify either input_ids or input_embeds."
My expectation was that during training, I would feed it the target sequence as decoder_input_ids and then during evaluation, I would not input decoder_input_ids and the model would only use the previous tokens it had generated. If I provide the target sequence as decoder_input_ids during training, what am I supposed to be providing as decoder_input_ids during evaluation?
Thank you again for your help!<|||||>Disregard the above comment -- as you hinted above I was confusing myself by looking at some outdated examples :)
I'm now generating my predictions with
`decoder_predictions = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id)`
I haven't been able to get too great of results yet, but haven't been able to dig into why with recent updates. I will be continuing to test over the next few days and will let you know if the bug fixes you mention above made the difference!<|||||>> I'm very sorry, for only finding this big bug now! It seems like you have invested quite a lot of energy into your code. I will soon (~2 weeks) open-source a notebook giving a nice example of how the `EncoderDecoderModel` can be leverage to fine-tune a Bert2Bert model.
@patrickvonplaten Any updates on the notebook or any other examples to fine-tune a Bert2Bert model? I find myself unsure of how to go about it and the examples would be a good starting point to understand the same. I have picked up some things from other issues (https://github.com/huggingface/transformers/issues/4647) regarding this but not sure if I am doing the right thing.
<|||||>@mitesh-mutha For what it's worth, I was able to get a model up and running with pretty reasonable results going off of the code linked in the last comment of that work item. Not sure if you when an official example will be available, but that code helped me a lot if you haven't looked at that code much yet.<|||||>@mitesh-mutha - very bad time estimation from my part again :D Next week (promise!), I will start working on notebook training / fine-tuning Bert2Bert on summarization. But the core code should not differ very much from the code I posted in the other comment.<|||||>@patrickvonplaten Hello Patrick. I have tried to fine tune a Bert2Bert model. The input to the model is a string of concatenated sentences and the output are the sentences reformulated in a paragraph. So far I implemented the model in Colab but the results are not that good. Here is my working code https://colab.research.google.com/drive/19G_wRPsc6FvXxxeoQZ3WzYaEkwm9YByv?usp=sharing .
It would be so nice if you can make a small tutorial on how to fine-tune a Bert2Bert model with a good result, such that I can find out where the problem lies in the code. Thank you :)<|||||>Great! Thanks, @patrickvonplaten!
I did look into the code that you and @dbaxter240 have mentioned. I implemented a similar thing, however, I am not getting great results. My code is similar to what @iliemihai has provided. Just for a quick try, I tried to fine-tune it to generate the same sentence but, as I mentioned, results were not good for me.
Looking at a sample tutorial or example would help me iron out any problems I might have in my code.
<|||||>Hey, as usual I'm very late on my own timelines, but I started working on a Bert2Bert tutorial for summarization yesterday :-).
It's still work in progress, but it will be ready by next week.
The code should work as it is - I have to fine-tune the hyper parameters and want to add some nicer metrics to measure the performance during training.
If you want to follow the work live :D here is the google colab I'm working on at the moment:
https://colab.research.google.com/drive/13RXRepDN7bOJgdxPuIziwbcN8mpJLyNo?usp=sharing
@iliemihai, one thing I can directly see from your notebook is that I think you are not masking the loss on padded tokens, so the loss for all pad token ids is backpropagated through the network.
Since your `decoder_input_ids` are in PyTorch I think you can do the following for your `labels`:
```python
labels = decoder_input_ids.clone()
# mask loss for padding
labels[labels == tokenizer.pad_token_id] = -100
```<|||||>Thank you @patrickvonplaten I will watch into it. Think that I might have to tune the hyperparameters. Also my dataset is small (1000-2000 pairs of paragraphs with under 128 words) compared to other datasets.<|||||>Hey guys, small update from my side.
I have trained a Bert2Bert on summarization (without real hyper parameter search) and the results are quite promising.
You can check it out here: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16
The training code to reproduce the results and some examples can be found in the model card.
Hope this helps for now. Will be off for two weeks, but plan on a bit more sophisticated training + clean notebook and docs for the `EncoderDecoder` framework with @sshleifer afterward. <|||||>Hi!
I was studying this tutorial: https://huggingface.co/patrickvonplaten/bert2gpt2-cnn_dailymail-fp16?text=The+goal+of+life+is+%3Cmask%3E. and I noticed that on the GPT2 tokenizer the pad_token, the unk_token, the bos_token and the eos_token are set as "<|endoftext|>". My question is why did you use "<|endoftext|>" for padding and unknown token?
Thank you in advance.<|||||>Hmm, there is no real reason behind it. Both `unk_token` and `pad_token` are not really important. On the pad_token the loss is never calculated and it does not matter for inference with batch_size=1. The unk_token does not really matter<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,442 | closed | [Communtiy notebooks] Fine-tuning / Training | Proposal of how notebooks for training could be added. | 05-18-2020 15:48:58 | 05-18-2020 15:48:58 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4442?src=pr&el=h1) Report
> Merging [#4442](https://codecov.io/gh/huggingface/transformers/pull/4442?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9ece8233d584cdc2eeae5165dd3329328fae328&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4442?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4442 +/- ##
==========================================
+ Coverage 78.14% 78.16% +0.01%
==========================================
Files 120 120
Lines 20087 20087
==========================================
+ Hits 15697 15701 +4
+ Misses 4390 4386 -4
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4442?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.51% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (+0.32%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4442?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4442?src=pr&el=footer). Last update [d9ece82...8ae2c5c](https://codecov.io/gh/huggingface/transformers/pull/4442?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Actually closing this, I think community notebooks should only be added in a single place. |
transformers | 4,441 | closed | [Community notebooks] General notebooks | A proposal how we could link community notebooks.
I'm using the awesome notebook of @patil-suraj (`nlp` + `Trainer` + `transformers` :-)) as an example of how community models can be added.
@patil-suraj - could you maybe review the PR and see whether it's ok for you? | 05-18-2020 15:38:31 | 05-18-2020 15:38:31 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4441?src=pr&el=h1) Report
> Merging [#4441](https://codecov.io/gh/huggingface/transformers/pull/4441?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/590adb130be8e99eb638bb22136dda537b2da71d&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4441?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4441 +/- ##
=======================================
Coverage 78.14% 78.15%
=======================================
Files 120 120
Lines 20087 20087
=======================================
+ Hits 15697 15698 +1
+ Misses 4390 4389 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4441?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4441?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4441?src=pr&el=footer). Last update [590adb1...df8c3a1](https://codecov.io/gh/huggingface/transformers/pull/4441?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@patrickvonplaten This sounds good! Having a separate table for community models makes sense. |
transformers | 4,440 | closed | Reformer training error | # π Bug
When training a Reformer model from scratch, I got the following error:
**TypeError: forward() got an unexpected keyword argument 'masked_lm_labels'**
## Information
I am trying to train a Reformer model from scratch on English documents. My data is one document per line:
The problem arises when using:
* [ x ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ x ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Split my training documents on '.' to create a corpus of sentences. Use Google SentencePiece script to train a tokenization model.
2. Use run_language_modeling.py with --mlm --tokenizer_name=path/to/pretrained_SP_tokenizer to train the model.
3. My config.json looks like this:
```
{
"architectures": [
"ReformerModelWithLMHead"
],
"model_type": "reformer",
"vocab_size": 32000
}
```
File "/Users/a9dvzzz/.virtualenvs/cf-mlc/lib/python3.7/site-packages/transformers/trainer.py", line 506, in _training_step
outputs = model(**inputs)
File "/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'masked_lm_labels'
## Expected behavior
The training process produces a saved Reformer model.
## Environment info
- `transformers` version: 2.9.1
- Platform: Darwin-19.4.0-x86_64-i386-64bit
- Python version: 3.7.4
- PyTorch version (GPU?): 1.3.0 (False)
- Tensorflow version (GPU?): 2.0.0 (False)
- Using GPU in script?: no, but it doesn't seem to matter, both failed.
- Using distributed or parallel set-up in script?: no
| 05-18-2020 15:15:00 | 05-18-2020 15:15:00 | Reformer does not support `mlm` training at the moment. Please make sure you use `lm` training :-) <|||||>@patrickvonplaten thanks! Is there a plan to support mlm in the near future?
I assume I can just remove the mlm flag to do lm training, right? How can I tell the script to pad the input sequences to a certain length as reformer requires the sequence length to be a multiple of least common multiple chunk_length? Thanks!<|||||>Yes, there are plans to add a `MaskedLM` version for Reformer. I will release a notebook this week (probs on Friday) on how to train the Reformer :-) |
transformers | 4,439 | closed | Avoid abort due to missing paths in case of '--save_total_limit' argument | Checkpoint path will be deleted when using --save_total_limit. torch.save() would not be able to store and abort. | 05-18-2020 15:07:00 | 05-18-2020 15:07:00 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,438 | closed | BERT Fine-tuning problems | # β Questions & Help
## Details
Hello. I'm going to fine-tune BERT-base-uncased using a QA dataset I made. However, the following error occurs. Could you tell me how to solve this problem?
```
Traceback (most recent call last):
File "./examples/question-answering/run_squad.py", line 830, in <module>
main()
File "./examples/question-answering/run_squad.py", line 768, in main
train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False, output_examples=False)
File "./examples/question-answering/run_squad.py", line 452, in load_and_cache_examples
examples = processor.get_train_examples(args.data_dir, filename=args.train_file)
File "/home/address/anaconda3/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 525, in get_train_examples
return self._create_examples(input_data, "train")
File "/home/address/anaconda3/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 552, in _create_examples
title = entry["title"]
TypeError: string indices must be integers
Traceback (most recent call last):
File "/home/address/anaconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/address/anaconda3/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/address/anaconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 253, in <module>
main()
File "/home/address/anaconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 249, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/home/address/anaconda3/bin/python', '-u', './examples/question-answering/run_squad.py', '--local_rank=1', '--model_type', 'bert', '--model_name_or_path', 'bert-base-uncased', '--do_train', '--do_eval', '--train_file', '/home/address/Desktop/address/train_split.json', '--predict_file', '/home/address/Desktop/address/val_split.json', '--learning_rate', '3e-5', '--num_train_epochs', '2', '--max_seq_length', '384', '--doc_stride', '128', '--output_dir', '../models/wwm_uncased_finetuned_squad/', '--per_gpu_eval_batch_size=3', '--per_gpu_train_batch_size=3']' returned non-zero exit status 1.
``` | 05-18-2020 14:47:31 | 05-18-2020 14:47:31 | Is your dataset following the SQuAD dataset format? It seems that what's making it crash is that there's no `title` entry.
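For reference, a minimal sketch of the nesting the SQuAD processor expects (written as a Python literal here; every value is just a placeholder):
```python
squad_style = {
    "version": "1.1",
    "data": [
        {
            "title": "Some document title",  # <-- the "title" field the error is complaining about
            "paragraphs": [
                {
                    "context": "The passage the answer is taken from ...",
                    "qas": [
                        {
                            "id": "unique-question-id",
                            "question": "A question about the passage?",
                            "answers": [{"text": "answer span", "answer_start": 4}],
                        }
                    ],
                }
            ],
        }
    ],
}
```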
You can take a look at how SQuAD is setup [here](https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,437 | closed | Added model cards for Romanian BERT models | Added model card for ``dumitrescustefan/bert-base-romanian-cased-v1`` and ``dumitrescustefan/bert-base-romanian-uncased-v1`` | 05-18-2020 13:50:49 | 05-18-2020 13:50:49 | Awesome, thanks for sharing
https://huggingface.co/dumitrescustefan/bert-base-romanian-cased-v1
I've added a filter for π·π΄ here: https://huggingface.co/models?filter=romanian |
transformers | 4,436 | closed | [T5 fp16] Fix fp16 in T5 | This PR fixes the issue: #4287.
- A test for T5 is added.
- The function self.invert_attention_mask included a if statement now so that no errors will occur when using the function in `fp16` mode. | 05-18-2020 13:49:42 | 05-18-2020 13:49:42 | > Bart doesn't use this method yet, but LGTM!
Yeah, I noticed that as well - it's Bert that is using it. |
transformers | 4,435 | closed | added model card for german-sentiment-bert | I added a description for my german sentiment model. If you have any feedback or questions please let me know. | 05-18-2020 13:18:08 | 05-18-2020 13:18:08 | Awesome model card. Link: https://huggingface.co/oliverguhr/german-sentiment-bert<|||||>Thanks a lot :+1: |
transformers | 4,434 | closed | albertModel object has no attribute bias | transformers version:2.9.0
model = AlbertModel.from_pretrained("xxx", from_tf=True)
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'key', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'query', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'query', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'value', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'value', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn_output', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn_output', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel
Initialize PyTorch weight ['albert', 'pooler', 'bias'] from bert/pooler/dense/bias
Initialize PyTorch weight ['albert', 'pooler', 'kernel'] from bert/pooler/dense/kernel
Traceback (most recent call last):
File "d:\python_workbase\project\transformers_test\test.py", line 7, in <module>
model = AlbertModel.from_pretrained("D:\work\model\\albert_tiny_zh_google", from_tf=True)
File "D:\Programs\Python\Python37\lib\site-packages\transformers\modeling_utils.py", line 640, in from_pretrained
model = cls.load_tf_weights(model, config, resolved_archive_file[:-6]) # Remove the '.index'
File "D:\Programs\Python\Python37\lib\site-packages\transformers\modeling_albert.py", line 139, in load_tf_weights_in_albert
pointer = getattr(pointer, "bias")
File "D:\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 591, in __getattr__
type(self).__name__, name))
AttributeError: 'AlbertModel' object has no attribute 'bias' | 05-18-2020 11:58:07 | 05-18-2020 11:58:07 | Hi! What is the `xxx` model? Is it one of our pre-trained checkpoints, is it an original TF checkpoint? Did this error happen in previous versions? Would you mind giving a bit of context?<|||||>Hi, I tried this model https://storage.googleapis.com/albert_models/albert_base_zh.tar.gz , which is new model release by Google on 2019 Dec. 30 on Albert's official [github page](https://github.com/google-research/albert). And I encountered the same error. My code is:
```Python
from transformers import AlbertModel, AlbertConfig
config = json.load(open('albert_base/albert_config.json'))
config = AlbertConfig(**config)
model = AlbertModel.from_pretrained('albert_base/model.ckpt-best', config=config, from_tf=True)
```
Thank you!<|||||>Thanks, I'll take a look.<|||||>That's because you're trying to load a checkpoint without first converting it. You should run the conversion script under `src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py`:
```
python src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py \
--tf_checkpoint_path=$PATH_TO_ALBERT/albert_base_chinese/model.ckpt-best \
--albert_config_file=$PATH_TO_ALBERT//albert_base_chinese/albert_config.json \
--pytorch_dump_path=$PATH_TO_ALBERT/albert_chinese.pt
```<|||||>It works, thank you!
|
transformers | 4,433 | closed | Create README.md | 05-18-2020 09:26:50 | 05-18-2020 09:26:50 | Also it could be interesting to convert and also upload PyTorch weights<|||||>I've tried - but the script unfortunately only works for TF 1.4 - would be glad share though! |
|
transformers | 4,432 | closed | Tag onnx export tests as slow | The TensorFlow ONNX export test is very slow as it makes many, many optimization passes over the graph.
This PR marks both PyTorch & TensorFlow as slow, and keeps all the others (fast) as non-slow. | 05-18-2020 08:53:08 | 05-18-2020 08:53:08 | |
transformers | 4,431 | closed | Adding optimizations block from ONNXRuntime. | cc @EmmaNingMS | 05-18-2020 08:45:12 | 05-18-2020 08:45:12 | cc @tianleiwu <|||||>use_external_data_format has some side-effect we'd like to mitigate here, I set to False by default and let the possibility for the user to override through CLI args. |
transformers | 4,430 | closed | π Weird learning rate with TPU Trainer | # π Bug
## Information
Model I am using (Bert, XLNet ...): **ELECTRA**
Language I am using the model on (English, Chinese ...): **English**
The problem arises when using:
* [ ] the official example scripts:
* [x] my own modified scripts:
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task:
* [x] my own task or dataset: CNN/DM
## To reproduce
I use `run_glue.py` as example to build a training script for TPU, with the Trainer API. My task is sequence classification with CNN/DM dataset.
I initialized the Trainer with following optimizer / scheduler :
```python
optimizer = AdamW(optimizer_grouped_parameters, lr=training_args.learning_rate, eps=training_args.adam_epsilon)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=training_args.warmup_steps, num_training_steps=287113)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=cnn_dm["train"] if training_args.do_train else None,
eval_dataset=cnn_dm["validation"],
data_collator=MyCollator(),
prediction_loss_only=True,
optimizers=(optimizer, scheduler),
)
```
Now, the training procedure is working: the code runs fine on all 8 TPU cores.
**But the loss is not decreasing.**
After looking into Tensorboards logs, I found the learning rate to be very weird :

A few points to note :
* I specified a learning rate of **1e-4** (with the command argument `--learning_rate 1e-4`), but as you can see, the maximum value for the learning rate is **3.5e-6**.
* The shape of the learning rate curve is not what I expected: after warmup, the learning rate is supposed to decrease linearly, but instead it stays fixed.
I don't know why the learning rate is behaving like this. Any idea what I might be doing wrong?
_I can't share my notebook, but this seems to be the exact same issue with the official script `run_glue.py`, as described in #4358_
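For reference, a standalone sketch of the curve I expected these arguments to produce (this is just the linear-warmup + linear-decay rule that `get_linear_schedule_with_warmup` implements; the warmup value below is an assumption, the real one comes from `training_args.warmup_steps`):
```python
def expected_lr(step, base_lr=1e-4, warmup_steps=500, total_steps=287113):
    if step < warmup_steps:  # linear warmup up to base_lr
        return base_lr * step / max(1, warmup_steps)
    # then linear decay down to 0 at total_steps
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print([expected_lr(s) for s in (0, 250, 500, 10_000, 287_113)])
```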
## Environment info
- `transformers` version: **2.9.1**
- Platform: **Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic**
- Python version: **3.6.9**
- PyTorch version (GPU?): **1.6.0a0+176174a (False)**
- Tensorflow version (GPU?): **2.2.0 (False)**
- Using GPU in script?: **No**
- Using distributed or parallel set-up in script?: **Yes : `xla_spawn.py`**
@julien-c | 05-18-2020 08:25:03 | 05-18-2020 08:25:03 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,429 | closed | mbart config.json missing | I downloaded mbart from fairseq, there are dict.txt, model.pt, sentence.bpe.model in it but no config.json, where can we get it (and other necessary missing files)?
I use mbart as pretrained model.
Is bart-large trained on multilingual data?
Anyone has compared bart with t5? | 05-18-2020 07:28:46 | 05-18-2020 07:28:46 | fairseq doesn't use config.json.
`BartConfig.from_pretrained('mbart-large-en-ro').to_json_file('config.json')` gets the config.json for English-Romanian, which is the only mbart checkpoint that's usable in this repository. |
transformers | 4,428 | closed | How to extract the best candidate after token classification? | Let's assume the model predicts the following for an input sequence.
```
The O
creation. O
date O
is O
27 B-DATE
Aug I-DATE
2020 I-DATE
and O
update. O
date. O
is O
01-09-2020 B-DATE
```
How do you pick the best candidate for **creation date** from logist values? | 05-18-2020 05:55:51 | 05-18-2020 05:55:51 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,427 | closed | Refactored the README.md file | 05-18-2020 04:46:46 | 05-18-2020 04:46:46 | Ok for you @savasy?<|||||>Ya for sure @julien-c
thanks a lot<|||||>Thanks @girishponkiya!<|||||>Thanks @girishponkiya ! |
|
transformers | 4,426 | closed | Lack of finetune examples for T5 model | # 🚀 Feature request
It seems like the examples under transformers/examples don't support T5 except for translation.
## Motivation
We need more examples! It should be easy for some simple benchmarks.
## Your contribution
None currently.. But I am working on it!
| 05-18-2020 03:22:57 | 05-18-2020 03:22:57 | I've setup T5 fine-tuning using lightning and also HF's new Trainer. I can submit a PR for that. Would like to hear from @patrickvonplaten <|||||>It would be awesome if you could open a PR for this! <|||||>Great! I'll organize my examples and submit PR as soon as I finish it.
<|||||>@Chenyzsjtu @patrickvonplaten Could you please suggest me a good task for this ? I've fine-tuned T5 on mostly non-generative tasks (IMDB sentiment, Emotion classification, SWAG multiple choice, SQuAD1.1) and 2 generative tasks, cnn/dm and question generation. Which tasks should I consider adding ?<|||||>The GLUE and SuperGLUE tasks would be an obvious choice (mainly classification though). The [DecaNLP](http://decanlp.com/) tasks also have a nice mix of classification and generation.<|||||>> @Chenyzsjtu @patrickvonplaten Could you please suggest me a good task for this ? I've fine-tuned T5 on mostly non-generative tasks (IMDB sentiment, Emotion classification, SWAG multiple choice, SQuAD1.1) and 2 generative tasks, cnn/dm and question generation. Which tasks should I consider adding ?
There are many benchmarks tested in the original paper. Since we only need an example for demonstration purposes, a single task in GLUE or SuperGLUE should be enough.
Maybe MRPC? It needs fewer training steps, and was fine-tuned by itself rather than via the GLUE mixture as described in the paper. Plus, it is also the example for BERT here in examples/text-classification.<|||||>@ghomasHudson @Chenyzsjtu
DecaNLP sounds good. So we can include one generative task and one non-generative.
Let's see what @patrickvonplaten says then I'll move ahead with this.
Till then can you check my fine-tuning examples and give me some feedback. Here are the notebooks.
For SQuAD [here](https://colab.research.google.com/drive/176NSaYjc2eeI-78oLH_F9-YV3po3qQQO?usp=sharing)
For (IMDB sentiment, Emotion classification, SWAG multiple choice) [here](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)<|||||>That's a great notebook!
Also note that you can now also use our `nlp` library, here: https://github.com/huggingface/nlp which will reduce your whole data preprocessing code to just a couple of lines. I think we have all the datasets you are using in your notebook(s) in the library :-).
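Something along these lines (just a rough sketch, assuming a `tokenizer` is already loaded; the dataset and columns are only an illustration):
```python
import nlp

train = nlp.load_dataset("imdb", split="train")
train = train.map(
    lambda batch: tokenizer.batch_encode_plus(batch["text"], max_length=512, pad_to_max_length=True),
    batched=True,
)
train.set_format(type="torch", columns=["input_ids", "attention_mask"])
```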
I think @sshleifer and @julien-c have worked more on the examples lately, so they probably would know better how to integrate it. @julien-c, @sshleifer - do you think we can add a pytorch lightning T5 notebook to our examples? <|||||>@patrickvonplaten
Yes, using nlp library makes more sense. The SQuAD notebook above uses nlp library for data processing. Just ~10 lines of data processing code, and also uses HF trainer instead of lightning. So I have both the trainers ready, lightning as well as HF trainer.
IMO we should use HF trainer instead of lightning since most of the examples now use HF trainer. Converting above tasks in HF trainer is fairly easy.<|||||>Only just saw the SQuAD notebook - amazing!
Ok, we had some internal discussions on how to add notebooks and decided to add a table to the README as shown in this PR: https://github.com/huggingface/transformers/pull/4441. @patil-suraj I use your SQuAD notebook as an example of how a notebook could be added. Can you maybe check if that's ok for you?
If that's fine for you I'll merge the PR and you can also add the other notebook for IMDB, Emotion classification, ... in a new PR - I would be awesome if you could also use `nlp` there, but you don't have to add it. Everything that's useful is welcome :-) <|||||>@patrickvonplaten
Thank you for considering this! This sounds good to me.
I'll also use the `nlp` library in the other notebook and open another PR for that.<|||||>> @patrickvonplaten
> Thank you for considering this! This sounds good to me.
> I'll also use the `nlp` library in the other notebook and open another PR for that.
That sounds awesome :-) <|||||>Iβve also worked on an example notebook for tweet sentiment span extraction with T5 that I can share around this weekend (kaggle compe dataset).
Would it be ok to PR this as well? Would I have to add the dataset to nlp? π<|||||>For sure feel free to open a PR :-) It would be nice if you use `nlp`, but that's definitely not a must!
We are happy about every community notebook :-) <|||||>@patil-suraj
Thanks a lot for your contribution of fine-tuning notebook!
I notice that in the notebook your final performance for SQuAD1.1 on t5-base is:
"{'exact_match': 81.56102175969725, 'f1': 89.96016967193422}"
but in the paper it is: F1/EM = 92.08/85.44
It seems that there is something we need to take care of here.
<|||||>@Chenyzsjtu
The goal of the notebook was to get T5 working on TPU and show how we can fine-tune it for QA. So I didn't pay much attention to exact metrics. You can train it by following the learning rate and number of epochs used in the paper. That might increase it. <|||||>> @Chenyzsjtu
> The goal of the notebook was to get T5 working on TPU and show how we can fine-tune it for QA. So I didn't pay much attention to exact metrics. You can train it by following the learning rate and number of epochs used in the paper. That might increase it.
I will have a try. Thanks!<|||||>> @Chenyzsjtu
> The goal of the notebook was to get T5 working on TPU and show how we can fine-tune it for QA. So I didn't pay much attention to exact metrics. You can train it by following the learning rate and number of epochs used in the paper. That might increase it.
There is one more tiny problem...
Have you tried evaluating the very first checkpoint (the pretrained model itself) on SQuAD?
It seems that your posted finetune-performance
"{'exact_match': 81.56102175969725, 'f1': 89.96016967193422}"
is worse than that of the pretrained model, which is
83.122/90.958
<|||||>Hmm, interesting. I'll have a look. <|||||>@patil-suraj hi, I'm very new to `t5`. How can use `t5` for sentiment classification (simply just binary). I want to try on [this data sets](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) but don't know how to approach. I have bit understanding in nlp. Would anyone please suggest. AFAIK, `t5` performs `text-to-text`, so if I want to make binary (numeric), I've to map the 1 and 0 as positive and negative. <|||||>Hi @Lincoln93
You are right, you can map 0 and 1 as positive and negative and ask the model to predict the text.
Have a look at [this](https://colab.research.google.com/drive/176NSaYjc2eeI-78oLH_F9-YV3po3qQQO?usp=sharing) notebook. It shows how to fine-tune t5 for binary as well as multi-class classification. <|||||>We have a bunch of T5 notebooks now thanks to you guys :-) Closing the issue...<|||||>@patil-suraj Very cool notebooks indeed!<|||||>Hi @patil-suraj awesome notebooks! I noticed you always call `model.generate(...)` to evaluate, I wonder, is there a reason for this, and is that really necessary for `t5`? why not just use simple inference? `model(**inputs)` like BERT and others do?
<|||||>> Hi @patil-suraj awesome notebooks! I noticed you always call `model.generate(...)` to evaluate, I wonder, is there a reason for this, and is that really necessary for `t5`? why not just use simple inference? `model(**inputs)` like BERT and others do?
You may need n-gram generation for more correct sentencesοΌ<|||||>> Hi @patil-suraj awesome notebooks! I noticed you always call `model.generate(...)` to evaluate, I wonder, is there a reason for this, and is that really necessary for `t5`? why not just use simple inference? `model(**inputs)` like BERT and others do?
Hi @saareliad , BERT models are mostly used for discriminative tasks i.e (classification, token classification, span extraction), so you just need to call the `model(**input)` only once. Where as T5 is a seq-to-seq generative model, which generates a single token at a time.
So to sample a sequence without `.generate`
1. feed in the start token as `input_ids` to `forward`
2. sample the next token by `argmax`
3. add that token to `input_ids`
4. repeat until you reach max len or sample `eos`
this quickly becomes complicated if you want beam search, or other sampling methods like top-k, top-p, temperature etc. So `.generate` is actually a powerful wrapper for all SOTA decoding methods.
Check [this](https://huggingface.co/blog/how-to-generate) awesome blog post by @patrickvonplaten to see what `.generate` has to offer<|||||>Thanks @patil-suraj ,
If we reduce the problem just to SQUAD, If I'm not wrong the extra `.generate` features are not used there at all?
For example, according the the code of your squad example:
```
answers = []
for batch in tqdm(dataloader):
outs = model.generate(input_ids=batch['input_ids'],
attention_mask=batch['attention_mask'],
max_length=16,
early_stopping=True)
outs = [tokenizer.decode(ids) for ids in outs]
answers.extend(outs)
```
since I didn't see there are beams for squad, `early_stopping=True` is not needed, and what happens is, more or less, the loop you described?
I ask because I experience similar problem to what you had with TPU - I have to choose between running generate on CPU or running the aforementioned simplistic version on many (8-40) GPUs, which of course will be much faster even without using cache/past.<|||||>Hi,
Is there an example showing T5 is finetuned on multiple tasks? with allowing to access the model architecture? thanks<|||||>Hi @rabeehk
by multiple tasks do you mean multitask or different tasks ?
if it's the latter, the this community [notebook ](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) shows how to fine-tune T5 for different tasks.
If multitask, then have a look at this [project ](https://github.com/patil-suraj/question_generation) which fine-tunes T5 for question generation, QA and answer extraction.<|||||>Hi
I mean a mixture of multiple tasks like in the original T5 paper, on TPU, so it runs efficiently for large-scale datasets. Is there an example/script by huggingface showing it? Thanks a lot.
<|||||>I'm pretty sure there isn't any examples which replicate the multitask training used by t5. [This Notebook](https://colab.research.google.com/github/zphang/zphang.github.io/blob/master/files/notebooks/Multi_task_Training_with_Transformers_NLP.ipynb) would be good start (You'd need to select the right tasks, format them in the t5 style, use `T5ForConditionalGeneration` as the model, and adjust everything so it's doing things with a single seq2seq model).
I'm doing something similar as part of my research so I might have something closer at some point.<|||||>I see some script in the original author's repo, but it is not with data parallelism... so not usable at scale...
<|||||>Hi @ghomasHudson, I've tried T5 multitask training using task prefixes for my question generation project, with pretty good results.
@rabeehk
You can use the [Seq2SeqTrainer](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/seq2seq_trainer.py) which supports TPU training, just build your own multitask dataset using task prefixes.
This [script](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_trainer.py) shows how to use `Seq2SeqTrainer`, it should be easy to modify using your own dataset.
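To make the task-prefix idea concrete, the training examples could look something like this (a rough sketch; the prefixes and texts are only illustrative):
```python
# Each example is plain text-to-text; the prefix tells T5 which task it is.
multitask_examples = [
    {"source": "summarize: <long article text>",            "target": "<short summary>"},
    {"source": "question: <question>  context: <passage>",  "target": "<answer span>"},
    {"source": "binary classification: <review text>",      "target": "positive"},
]
```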
As T5 treats every task as text generation, we don't need any special changes to model. We just need a multi-task dataset and right sampling strategy if the number of examples are not balanced between different tasks.<|||||>Good to hear @patil-suraj - I've been getting good results out of some very basic scripts I wrote using `transformers + datasets + pytorch_lightning`.
I'd love to see multitask learning have proper support in huggingface with options for multiple sampling methods etc... (#4426 #4340 #6872 #7270 huggingface/datasets#217). Getting the implementation right is not trivial though.<|||||>do you think this script is working fine?
https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/models/hf_model.py
This seems to be adding it, but not with data parallelism... I would greatly appreciate it if the huggingface group could have a look and try to add this script to their repository, with data parallelism. Thanks.
<|||||>Hi Everyone,
I am looking for an example showing how to train T5 on multiple datasets using the huggingface repo, hopefully at scale. This example https://colab.sandbox.google.com/github/zphang/zphang.github.io/blob/master/files/notebooks/Multi_task_Training_with_Transformers_NLP.ipynb does not show it with T5, and this example does not work: https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/models/hf_model.py. I would appreciate suggestions for an example showing this in the huggingface repo, thanks.
Best
Rabeeh
On Fri, Oct 23, 2020 at 3:39 PM Rabeeh Karimi <[email protected]> wrote:
> do you think this script is working fine?
> https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/models/hf_model.py
> this seems to be adding this, but not with data parallelism, I would
> greatly appreciate it if huggingface group could have a look,
> and trying to add this script to their repository, with data
> parallelism thanks
>
> On Fri, Oct 23, 2020 at 12:59 PM Thomas Hudson <[email protected]>
> wrote:
>
>> Good to hear @patil-suraj <https://github.com/patil-suraj> - I've been
>> getting good results out of some very basic scripts I wrote using transformers
>> + datasets + pytorchlightning.
>>
>> I'd love to see multitask learning have proper support in huggingface
>> with options for multiple sampling methods etc... (#4426
>> <https://github.com/huggingface/transformers/issues/4426> #4340
>> <https://github.com/huggingface/transformers/issues/4340> #6872
>> <https://github.com/huggingface/transformers/issues/6872> #7270
>> <https://github.com/huggingface/transformers/issues/7270>
>> huggingface/datasets#217
>> <https://github.com/huggingface/datasets/issues/217>). Getting the
>> implementation right is not trivial though.
>>
>> β
>> You are receiving this because you were mentioned.
>> Reply to this email directly, view it on GitHub
>> <https://github.com/huggingface/transformers/issues/4426#issuecomment-715269787>,
>> or unsubscribe
>> <https://github.com/notifications/unsubscribe-auth/ABP4ZCF7EIGT4AYWJ26YRYDSMFOYVANCNFSM4NDWJKVA>
>> .
>>
>
<|||||>> Hi ghomasHudson, could you tell me please why getting implementation right is not trivial in your view? what are the existing difficutlies? thanks
Well if you look at huggingface/datasets#217, that's probably the most complete discussion.
My impression is that part of the difficulty is finding the right level of abstraction for this functionality (should it be a dataset-level feature? Dataloader? Something else?). My current intuition is that it belongs at the `DataLoader`-level as you need the concept of batches. Deciding on the API is a little tricky as we have to try and allow a range of sampling methods, and create a general enough framework that we don't over-fit to the ideas of t5.
It's fairly trivial to get this working for an individual example, but a little harder to implement this properly into the library in a general way.<|||||>Hi
Thanks for the reply. I read it. For now just being able to move forward
using huggingface repo for training T5, is there any avilable codes showing
how to handle mixture of tasks with simplified manner at least? thanks a
lot for your help
On Sat, Oct 24, 2020, 2:12 PM Thomas Hudson <[email protected]>
wrote:
> Hi ghomasHudson, could you tell me please why getting implementation right
> is not trivial in your view? what are the existing difficutlies? thanks
>
> Well if you look at the discussion in huggingface/datasets#217
> <https://github.com/huggingface/datasets/issues/217>, that's probably the
> most complete discussion.
>
> My impression is that part of the difficulty is finding the right level of
> abstraction for this functionality (should it be a dataset-level feature?
> Dataloader? Something else?). My current intuition is that it belongs at
> the DataLoader-level as you need the concept of batches. Deciding on the
> API is a little tricky as we have to try and allow a range of sampling
> methods, and create a general enough framework that we don't over-fit to
> the ideas of t5.
>
> It's fairly trivial to get this working for an individual example, but a
> little harder to implement this properly into the library in a general way.
>
> β
> You are receiving this because you were mentioned.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/4426#issuecomment-715906142>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/ABP4ZCFT7PECYHNFAKNLNZLSMLADTANCNFSM4NDWJKVA>
> .
>
<|||||>Hi @ghomasHudson, would it be possible for me to set a quick 30 minutes chat with you? I could not find your email to contact you directly, I appreciate if I could ask more on dataset handling. it would be really helpful for me. thanks. |
transformers | 4,425 | closed | BERT and other models pretraining from scratch example | Hi,
I've been finetuning lots of tasks using this repo. Thanks :)
But I couldn't find any pretraining from scratch examples.
Please let me know if you guys have any advices on that.
It would be very helpful for me to do my research.
| 05-18-2020 03:12:46 | 05-18-2020 03:12:46 | https://huggingface.co/blog/how-to-train<|||||>Thank you for your swift reply :)
How about Electra model? Is it possible to pretrain from scratch as well?<|||||>Did you read the article? Section 3<|||||>Yup, I've read Section 3. :)
As long as I know Electra uses replaced token detection with discriminator and generator (GAN style).
That's why I thought that there could be something different from BERT-like masked lm.
And I found the open issue below as well.
https://github.com/huggingface/transformers/issues/3878
<|||||>I modified https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py script few days ago for training electra from scratch. But there were some problems(maybe bugs) i had to solve for this task.
Currently IΒ΄m setting up a clean running version for training a electra language model from scratch with an additional document classification head based on the script.<|||||>I got it. Thank you for your effort!<|||||>@miketrimmel Hi, Is there still a bug if I try to train electra from scratch using run_language_modeling.py or it is available now? Thanks!<|||||>I had issues with the tb_writer. i tried it for new now and there were no issues with the writer any more(maybe I had an old version).
If you're using a pretrained tokenizer it should work now. Training a new tokenizer is not supported. I have to say I'm new to the tokenization things. I'm training a Twitter language model from scratch, so I wasn't sure if the model would perform as well with the pretrained tokenizer (it could be that a lot of vocabulary is missing because of the "Twitter slang"). So I trained a custom tokenizer. I will verify the different tokenizers in the next days. I will also provide the model and tokenizer when it's finished if someone wants to fine-tune it on their Twitter task.
Could i use a tokenizer from `https://github.com/huggingface/tokenizers` for initiation? I'd like to train a model from scratch.
<|||||>Yes you could use a tokenizer from https://github.com/huggingface/tokenizers. But there is no batch_encode_plus method. I used the solution from another issue https://github.com/huggingface/tokenizers/issues/259 here. The solution with the wrapper from @theblackcat102 worked for me.<|||||>There is code for training ELECTRA from scratch still undergoing testing here https://github.com/huggingface/transformers/pull/4656
It's still under development but it pretty stable now.<|||||>> I modified https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py script few days ago for training electra from scratch. But there were some problems(maybe bugs) i had to solve for this task.
>
> Currently IΒ΄m setting up a clean running version for training a electra language model from scratch with an additional document classification head based on the script.
Any chance you could share the code? I've been trying to do this myself, but am failing at getting results (whether in finetuning, or in running electra with TF in HF). Thanks!<|||||>can you give me some advices about how to pretrain the bart model on my own dataset? thank you soooooo much<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Detailed Explanation
https://mlcom.github.io/<|||||>> I modified https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py script few days ago for training electra from scratch. But there were some problems(maybe bugs) i had to solve for this task.
>
> Currently IΒ΄m setting up a clean running version for training a electra language model from scratch with an additional document classification head based on the script.
location is currently not available...please share the exact location<|||||>> Detailed Explanation
> https://mlcom.github.io/Create-Language-Model/
location is currently not available...please share the exact location<|||||>> > Detailed Explanation
> > https://mlcom.github.io/Create-Language-Model/
>
> location is currently not available...please share the exact location
mlcom.github.io |
transformers | 4,424 | closed | Update README.md (model_card) | - add a citation.
- modify the table of the BLUE benchmark.
The table of the first version was not displayed correctly on https://huggingface.co/seiya/oubiobert-base-uncased.
Could you please confirm that this fix will allow you to display it correctly? | 05-18-2020 02:22:12 | 05-18-2020 02:22:12 | Yes, looks good now |
transformers | 4,423 | closed | How to change transformers model embedding layer weights | I trained my own tokenizer and added new words. Now I need to change the embedding size from the pretrained model. What I do is like this:
```
import transformers as tfm
import tensorflow as tf
backbone = tfm.TFRobertaModel.from_pretrained(PRETRAINED_PATH, output_hidden_states=True)
add_emb = tf.random.uniform(shape=(new_vocab_size, 768), minval=-1., maxval=1.)
new_emb = tf.concat((backbone.roberta.embeddings.word_embeddings, add_emb), 0)
backbone.roberta.weights[194] = new_emb
```
However, the shape of embedding weight is still the original vocab size.
But if I do
```
backbone = tfm.TFRobertaModel.from_pretrained(PRETRAINED_PATH, output_hidden_states=True)
add_emb = tf.random.uniform(shape=(new_vocab_size, 768), minval=-1., maxval=1.)
backbone.roberta.embeddings.word_embeddings= tf.concat((backbone.roberta.embeddings.word_embeddings, add_emb), 0)
```
Then the embedding weights will be removed from model ```trainable_weights``` and has only 198 elements instead of 199 in the original list.
Am I doing something wrong to change the embedding weights? Thanks!
The original stack overflow question is also posted:
https://stackoverflow.com/questions/61860156/how-to-change-transformers-model-embedding-layer-weights | 05-17-2020 23:32:45 | 05-17-2020 23:32:45 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,422 | closed | [T5 Conf] rename docstring to acuatly argument names | As mentioned in issue: #4139, the docstring names of the T5 Config are confusing since those names cannot be used to set the arguments.
This PR renames the arguments in the docstring and adds an explanation that those arguments can also be accessed via their properties.
To not break backward compatibility, renaming the docstring is better than renaming the actual variables IMO. | 05-17-2020 23:05:12 | 05-17-2020 23:05:12 | |
transformers | 4,421 | closed | [test_pipelines] Mark tests > 10s @slow, small speedups | - pass in num_beams=2 to `SummarizationPipelines` | 05-17-2020 22:54:10 | 05-17-2020 22:54:10 | |
transformers | 4,420 | closed | BERT Tokenization problem when the input string has a "." in the string, like floating number | # π Bug
## Information
Model I am using (Bert, XLNet ...): BERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using: Tokenizer
* [ ] the official example scripts: (give details below) N/A
* [ ] my own modified scripts: (give details below) N/A
The tasks I am working on is: Any
* [ ] an official GLUE/SQUaD task: (give the name) N/A
* [ ] my own task or dataset: (give details below) N/A
## To reproduce
Steps to reproduce the behavior:
1. Load any BERT tokenizer
2. Tokenize something with a "." in between
3. Decode these ids, you will find it mismatch
```
x = tokenizer.encode('AN.C', add_special_tokens=False)
z = tokenizer.decode(x)
```
It prints:
```
AN. C
```
## Expected behavior
```
AN.C
```
## Environment info
- `transformers` version:
- Platform: Ubuntu
- Python version: 3.6
- PyTorch version (GPU?): GPU
- Tensorflow version (GPU?): GPU
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
| 05-17-2020 22:37:40 | 05-17-2020 22:37:40 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,419 | closed | [TF generate] Fix issue for batch output generation of different output length. | This PR fixes the issue: #4088.
A wrong variable was used in TF generate to determine the sentence length in the case when multiple outputs have different sentence lengths and the max sentence lengths is < the user defined `max_length`.
Also, both PT and TF generate are refactored a bit so that the `cur_length` variable is incremented directly after the `input_ids` are incremented. | 05-17-2020 21:45:52 | 05-17-2020 21:45:52 | |
transformers | 4,418 | closed | Scaling text classification / reusing models | If I have a system, where I want to train many text classifiers for many users, how could I go about it with the transformers library in a scalable way?
Right now I would have to run let's say a 10min training per user on a RTX 2080 ti for Albert for the dataset I have. That doesn't scale if I have thousands of users.
If I understand correctly, in the sequence classification models, the whole transformer model is being trained, so the backpropagation happens through the whole network.
However, if I now want to reuse the model for another user, maybe just passing in a bit more labeled data to customize a base classifier, how could I go about that?
It seems to me, that I would have to basically "freeze" the whole "Bert" model, no matter which one I would use, and then only train a thin layer on top.
One possibility I see would be KNN using sentence transformers, I already asked in the repo there https://github.com/UKPLab/sentence-transformers/issues/209
Maybe someone here has an idea which approach would make sense for such a situation.
Thanks! | 05-17-2020 20:09:50 | 05-17-2020 20:09:50 | You can pretty easily "freeze" parameters you don't want to backpropagate against, in PyTorch:
```python
for param in parameters:
param.requires_grad = False
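# ("parameters" above is whatever subset you want to freeze; a rough sketch for the
#  use case in the question, assuming an AutoModelForSequenceClassification:
#      model = AutoModelForSequenceClassification.from_pretrained("albert-base-v2", num_labels=2)
#      for param in model.base_model.parameters():  # freeze the shared transformer body
#          param.requires_grad = False
#  only the small classification head then receives gradient updates)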
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,417 | closed | TypeError: add_() takes 1 positional argument but 2 were given | # π Bug
## Information
I was trying to reproduce the GLUE fine-tuning example (https://huggingface.co/transformers/examples.html#fine-tuning-example) when I ran into this error:
```
File "~/anaconda3/lib/python3.7/site-packages/transformers/optimization.py", line 155, in step
exp_avg.mul_(beta1).add_(grad, 1.0 - beta1)
TypeError: add_() takes 1 positional argument but 2 were given
```
The problem arises when using:
* [x] the official example scripts: (give details below)
The task I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
## To reproduce
Steps to reproduce the behavior:
Follow the official guide example here: https://huggingface.co/transformers/examples.html#fine-tuning-example
## Expected behavior
## Environment info
- `transformers` version: 2.9.1
- Platform: Linux 5.4.0-29-generic x86_64 Ubuntu 20.04 LTS
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.0 (Yes)
- Tensorflow version (GPU?):
- Using GPU in script?: Yes, 1
- Using distributed or parallel set-up in script?:
| 05-17-2020 19:25:49 | 05-17-2020 19:25:49 | This was fixed on master on Friday, can you try pulling from master again? |
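For anyone landing here on an older checkout: PyTorch 1.5 stopped accepting the scalar multiplier as a positional argument to `add_`, and the fix on master passes it as the `alpha` keyword instead. A self-contained illustration (the variable names are just for the demo):
```python
import torch

beta1, grad, exp_avg = 0.9, torch.ones(3), torch.zeros(3)

# fails on torch >= 1.5: exp_avg.mul_(beta1).add_(grad, 1.0 - beta1)
exp_avg.mul_(beta1).add_(grad, alpha=1.0 - beta1)  # keyword form is the supported spelling
```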
transformers | 4,416 | closed | Fixed spelling of training | Spelling of training was incorrect. Fixed it. Sorry for such a bad PR :( | 05-17-2020 18:15:40 | 05-17-2020 18:15:40 | |
transformers | 4,415 | closed | GPT2 perplexity rolling/striding way for evaluating a document. | As I understand GPT2 uses TextDataset as a loader and it produces the example list in block sizes. So say we have a sentence "**we are in a climate crisis**" and have block size 3. So this will produce the example list as
`ex = [["we","are","in"],["a","climate","crisis"]]`
So in such a scenario, when calculating the overall perplexity for the document, the word "a" has no previous context and "climate" only has "a" as context. What I would ideally want is for the context to be built in a rolling/striding way. So I edited the text loader to produce a list like:
`ex = [["we","are","in"],["are","in","a"],["in","a","climate"],["a","climate","crisis"]]`
Now, if I calculate perplexity for this ex list, surely a lot of words will be counted multiple times, as they appear in multiple lists. But for every instance from the 2nd onwards (ex[1] above) I would only want to consider the last word for the perplexity/loss calculation. So my query is how to tackle this: should I use an attention mask so that, say, in the case of
`ex[1] =["are","in","a"] `
with a mask of [0,0,1], for the loss calculation (and hence perplexity) it only takes "a" into account, but for getting the context of "a" it still takes the previous 2 words into account?
Any help on the best way to approach this problem will be much appreciated. | 05-17-2020 17:47:43 | 05-17-2020 17:47:43 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
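One way to get the behaviour described above is not an attention mask but the loss labels: positions labelled -100 are ignored by the loss, so every window still attends to its full context while only the final token is scored. A minimal sketch — the checkpoint, block size and the exp-of-mean-NLL at the end are illustrative assumptions:
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tokenizer.encode("we are in a climate crisis", return_tensors="pt")[0]
block, nlls = 3, []
with torch.no_grad():
    for i in range(block, ids.size(0) + 1):
        window = ids[i - block:i].unsqueeze(0)   # last `block` tokens up to position i
        labels = window.clone()
        labels[:, :-1] = -100                    # earlier tokens are context only, not scored
        nlls.append(model(window, labels=labels)[0])
ppl = torch.exp(torch.stack(nlls).mean())        # mean NLL of the scored (last) tokens
```
Note that the first `block - 1` tokens are never scored in this sketch; handle the first window separately if every token needs to be counted.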
|
transformers | 4,414 | closed | Get BERT sentence encoding | I am trying to access the encodings of sentences in the various layers of a pre-trained BERT model.
So it should be something like this:
```
sentence = 'We bought a new car'
bert_encoder = load_encoder('bert-base-uncased')
enc = bert_encoder.encode(sentence)
enc.get_layer[0] #this is the first layer
enc.get_layer[-1] #this is the last layer
```
What is the best way to do it?
Thanks! | 05-17-2020 15:33:36 | 05-17-2020 15:33:36 | I believe you can't do it like that; you have to run the model as is, with all the necessary inputs (pertaining to the sentence) as mentioned in the docs: https://huggingface.co/transformers/model_doc/bert.html#transformers.BertModel
and then set the configuration `config.output_hidden_states=True` to get the embeddings from each intermediate encoder layer. <|||||>@Sriharsha-hatwar Thanks, do you have a code sample maybe?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
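Since a code sample was asked for, here is a minimal sketch of the `output_hidden_states` route (the checkpoint name is just an example, and the tuple index assumes attentions are not requested):
```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True).eval()

input_ids = tokenizer.encode("We bought a new car", return_tensors="pt")
with torch.no_grad():
    outputs = model(input_ids)

hidden_states = outputs[2]        # tuple: embedding output + one tensor per encoder layer
first_layer = hidden_states[1]    # (batch, seq_len, hidden) output of the first layer
last_layer = hidden_states[-1]    # output of the last layer
```
You can then pool over the token dimension however you like to get a single sentence vector.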
|
transformers | 4,413 | closed | Modify example of usage | I followed the Google usage example for its ELECTRA small model, but I found it was not meaningful, so I created a better example. | 05-17-2020 15:16:17 | 05-17-2020 15:16:17 | 
transformers | 4,412 | closed | Tensorflow NER Training script Not working | # π Bug
Tensorflow NER Training script Not working
## Information
I am following the exact guide at https://github.com/huggingface/transformers/tree/master/examples/token-classification
Model I am using (Bert, XLNet ...): BERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
python run_tf_ner.py --data_dir ./ \
--labels ./labels.txt \
--model_name_or_path $BERT_MODEL \
--output_dir $OUTPUT_DIR \
--max_seq_length $MAX_LENGTH \
--num_train_epochs $NUM_EPOCHS \
--per_device_train_batch_size $BATCH_SIZE \
--save_steps $SAVE_STEPS \
--seed $SEED \
--do_train \
--do_eval \
--do_predict
The task I am working on is:
Training NER on germeval data
Steps to reproduce the behavior:
Follow the official guide at https://github.com/huggingface/transformers/tree/master/examples/token-classification
ValueError: Variable <tf.Variable 'tf_bert_for_token_classification/bert/pooler/dense/kernel:0' shape=(768, 768) dtype=float32> has `None` for gradient. Please make sure that all of your ops have
a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
## Expected behavior
## Environment info
- `transformers` version: 2.9.1
- Platform: Linux-3.10.0-1062.9.1.el7.x86_64-x86_64-with-centos-7.7.1908-Core
- Python version: 3.6.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: Yes, both Single and multi-GPU training
- Using distributed or parallel set-up in script?: Yes
I guess the documentation is not updated for TensorFlow training; an additional parameter "logging_dir" is required in the TF case.
| 05-17-2020 14:28:21 | 05-17-2020 14:28:21 | - `transformers` version: 2.9.1
- Platform: Linux-3.10.0-1062.9.1.el7.x86_64-x86_64-with-centos-7.7.1908-Core
- Python version: 3.6.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: Yes both single gpu and multi-gpu training
- Using distributed or parallel set-up in script?: Yes<|||||>Error log:
ValueError: Variable <tf.Variable 'tf_bert_for_token_classification/bert/pooler/dense/kernel:0' shape=(768, 768) dtype=float32> has `None` for gradient. Please make sure that all of your ops have
a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.<|||||>Tried with TF 2.0:
Error Message: AttributeError: module 'tensorflow_core._api.v2.config' has no attribute 'list_physical_devices'
TF 2.1: Same as TF 2.2 i.e. ValueError: Variable <tf.Variable 'tf_bert_for_token_classification/bert/pooler/dense/kernel:0' shape=(768, 768) dtype=float32> has `None` for gradient. Please make sure that all of your ops have
a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.<|||||>I had same issue.
add ```--mode token-classification``` to the command
reference code https://github.com/huggingface/transformers/blob/18d233d52588b4e08dc785fbfecd77529e9effa6/src/transformers/trainer_tf.py#L380<|||||>Thanks @linhx13 ..Works fine.. |
transformers | 4,411 | closed | Pipeline for Conditional Generation (T5 type models) | As text-to-text models (like T5) increase the accessibility of multi-task learning, it also makes sense to have a flexible "Conditional Generation" pipeline.
For example, I should be able to use this pipeline for a multitude of tasks depending on how I format the text input (examples in Appendix D of the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf)). As a baseline, this should be able to work on `T5ForConditionalGeneration` and allow for any of the tasks that are learned by the open sourced T5 model.
Since T5 isn't usable for `TextGenerationPipeline`, I propose we add a `ConditionalGenerationPipeline`.
Please do let me know if there is an existing way to perform the above via pipelines, or if adding a pipeline doesn't makes sense for this; otherwise, I can submit a PR for the above `ConditionalGenerationPipeline` π | 05-17-2020 12:10:22 | 05-17-2020 12:10:22 | Yes having a "Conditional Generation" pipeline makes sense given that variety of tasks can be solved using it. We can use T5, BART for these tasks as well as the new Encoder-Decoder. I would like to call it `TextToTextPipeline` though, since we can solve non-generative tasks also as demonstrated in the T5 paper. I think this pipeline will be really useful.<|||||>Technically, any task using Text-To-Text is generative in nature right? But yeah, agree `TextToTextPipeline` will make the use case clearer :smile:
Hoping to get feedback from @patrickvonplaten before attempting this<|||||>Yeah. To be honest, I'm not sure whether this is a good idea. The pipelines are supposed to be directly related to a task such as `translation`, `summarization` which are specific cases of `text2text` applications.
I think for every task we should introduce a new `pipeline` before starting to have different levels of abstraction in `pipelines`. A `TextToTextPipeline` could become quite a mess regarding different possible input formats, different prefixes (for T5), etc. For general tasks such as these I'd prefer to just implement your own code using the `.generate()` function.
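For reference, a rough sketch of that do-it-yourself `.generate()` route with T5 — the checkpoint, task prefix and generation settings are purely illustrative:
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base").eval()

# any task the released checkpoints learned is triggered by its text prefix
text = "summarize: studies have shown that owning a dog is good for you ..."
input_ids = tokenizer.encode(text, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(input_ids, max_length=40, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
Swapping the prefix (e.g. a translation prefix instead of `summarize:`) is what lets the same model cover different tasks.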
@LysandreJik - what do you think? <|||||>I think from a high level, more than just thinking about `text2text`, I'm foreseeing the future where multi-task learning becomes a standard way of deploying ML models. Having a pipeline to introduce this can be one step to accelerating that future.
Although, I do understand that `text2text` is just one approach to doing this, but in my opinion, it's the most promising one at the moment, so it's a good interface to start with for a multi task model pipeline.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I'm not sure that T5 is the most promising place to do a multi-task pipeline, since their results in that paper suggested it was hard to significantly beat the baseline of just fine tuning on the target task.
The recent AdapterHub library built off of HuggingFace seems a better place for building out multitask systems/pipelines imo. But of course the library designers have more intuition on this.<|||||>I'm don't think anyone is arguing for the T5 model specifically, just that there is a trend towards `text2text` as a common method of doing multitask learning for NLP (GPT-3 frames tasks like this too for example).<|||||>> I'm don't think anyone is arguing for the T5 model specifically, just that there is a trend towards `text2text` as a common method of doing multitask learning for NLP (GPT-3 frames tasks like this too for example).
Fair enough. I'm not one to argue against a feature, even if I wouldn't use it much myself. I've been using `text2text` myself for multiple tasks.
Mostly I just meant the multitask part of `text2text` is going to be a little tricky to abstract away conveniently into a pipeline. The main complexity there is mixing the proportion of each task / batch correctly. The T5 paper suggests performance and weights are very specific to the multitask learning, and if its not tuned properly the performance will be hurt by using multitasks. Uniform mixing for example performs quite poorly. I suspect that problem would apply to most `text2text` paradigms.
What I've been doing myself is using a custom DataLoader class that handles the mixing of batch proportions of each task. A pipeline that can integrate something like that would be terrific to have.<|||||>Hey everybody, after thinking a bit more about it, I think it does make sense to add a `ConditionalTextGeneration` pipeline which will be the equivalent of `TextGenerationPipeline` for all models in `AutoModelForSeq2Seq`. It should look very similar to the `TextGenerationPipeline` (probably we more or less the same at the moment), but it will give us more freedom in the future (for example when we add `decoder_input_ids` to the generation).
@sshleifer , @yjernite , @LysandreJik - what are your thoughts on this?<|||||>@patrickvonplaten happy to work on a PR for this if team agrees it makes sense :smile:<|||||>I think we definitely need something like that.
I'd probably go with a more explicit name though: e.g. `TextToTextPipeline` or `Text2TextGenerationPipeline`. `ConditionalTextGeneration` might cover other uses in the future (e.g. multiple input texts or multimodal inputs)<|||||>Such a pipeline would be very welcome, indeed!<|||||>Awesome, will send a PR in the next week or so :smile:<|||||>I also want to work on this, @enzoampil let me know if you want to collaborate on the PR :)<|||||>Sure thing, maybe we can collab on the same fork? :) |
transformers | 4,410 | closed | Remove pytorch codes in Class TFXLNetMainLayer | This PR removes pytorch codes in Class TFXLNetMainLayer. | 05-17-2020 12:05:55 | 05-17-2020 12:05:55 | Sorry I don't really understand how this PR removes pytorch code in `TFXLNetMainLayer` - can you explain a bit more in-detail?<|||||>@patrickvonplaten
The code in modeling_tf_xlnet.py is the TensorFlow implementation of the XLNet model, so the parameter **head_mask** should have the type tf.Tensor or NumPy array (see line 780).
But in lines 643-650, the parameter **head_mask** uses PyTorch methods such as "expand" or "unsqueeze", which cannot be applied to a tf.Tensor or NumPy array.
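For illustration only (this is not part of the PR): if such reshaping were actually needed on the TF side, the counterparts would be `tf.expand_dims` and `tf.broadcast_to`:
```python
import tensorflow as tf

head_mask = tf.ones((4,))                      # a tf.Tensor has no .unsqueeze() / .expand()
expanded = tf.expand_dims(head_mask, axis=0)   # TF counterpart of torch's unsqueeze(0)
tiled = tf.broadcast_to(expanded, (2, 4))      # TF counterpart of torch's expand(2, 4)
```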
Actually these codes are copied from modeling_xlnet.py by mistake and should be removed.<|||||>Perfect thanks a lot!
@LysandreJik - the RUN_SLOW=1 tests all pass for TFXLNetMainLayer.
Good to merge for me! |
transformers | 4,409 | closed | add model card for t5-base-squad | Model card for https://huggingface.co/valhalla/t5-base-squad | 05-17-2020 07:16:30 | 05-17-2020 07:16:30 | Nice! cc @patrickvonplaten |
transformers | 4,408 | closed | Request to add MobileBert | # π New model addition
MobileBERT
## Model description
MobileBERT is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks.
## Open source status
* [ ] the model implementation is available: (give details)
https://github.com/google-research/google-research/tree/master/mobilebert
* [ ] the model weights are available: (give details)
https://storage.googleapis.com/cloud-tpu-checkpoints/mobilebert/uncased_L-24_H-128_B-512_A-4_F-4_OPT.tar.gz
* [ ] who are the authors: (mention them, if possible by @gh-username)
Google LLC
Xiaodan Song
Zhiqing Sun
Hongkun Yu
Denny Zou
| 05-17-2020 06:33:25 | 05-17-2020 06:33:25 | Duplicate of #4185 |
transformers | 4,407 | closed | fix(run_language_modeling): use arg overwrite_cache | In `run_language_modeling.py`, arg `overwrite_cache` was unused. | 05-17-2020 04:57:00 | 05-17-2020 04:57:00 | |
transformers | 4,406 | closed | Summarization Fine Tuning | # β Questions & Help
## Details
I tried using T5 and BART, but the abstractive summarization on scientific texts does not seem to give the results I want, since I think they are both trained on news corpora. I have scraped all of the free PMC articles and I am thinking about fine-tuning a seq2seq model between the articles and their abstracts to make an abstractive summarizer for scientific texts. This Medium article (https://medium.com/huggingface/encoder-decoders-in-transformers-a-hybrid-pre-trained-architecture-for-seq2seq-af4d7bf14bb8) provides a bit of an introduction to how to approach this but does not quite go into detail, so I am wondering how to approach it.
I'm not really asking for help because I'm stuck; I just don't really know how to approach this problem.
**A link to original question on Stack Overflow**:
https://stackoverflow.com/questions/61826443/train-custom-seq2seq-transformers-model
| 05-17-2020 01:50:39 | 05-17-2020 01:50:39 | First thing you can try is fine-tune T5/BART for summarization on your corpus and see how it performs.<|||||>@patil-suraj where can I find a guide to this? I'm a bit confused by the documentation. <|||||>[Here's](https://github.com/huggingface/transformers/tree/master/examples/summarization/bart) the official example which fine-tunes BART on CNN/DM, you can just replace the cnn/dm dataset with your own summerization dataset.<|||||>@patil-suraj Thanks for the example. I'm wondering if there is any simpler way to get started since I'm planning on training it in a Kaggle notebook due to GPU constraints, because otherwise I may need to copy paste entire folder into a Kaggle notebook.<|||||>@kevinlu1248
This [colab](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) shows how to fine-tune T5 with lightening. This is just the self-contained version of official example. You should be able to use the same `Trainer`, just replace the model with BART and use you own dataset.<|||||>@patil-suraj Thanks, I'll look into it.<|||||>> [Here's](https://github.com/huggingface/transformers/tree/master/examples/summarization/bart) the official example which fine-tunes BART on CNN/DM, you can just replace the cnn/dm dataset with your own summerization dataset.
Hi @patil-suraj, I am following that example and have my data in that format, and I can see the process using GPU/CPU, but I can't get tensorboard working. Do you have any hints? I am happy to contribute to documentation once I get it working.<|||||>@sam-qordoba lightning handles logging itself and by default the tensorboard logs are saved in lightning_logs directory. So you should be able see the logs by passing lightning_logs as the logdir to tensorboard command.<|||||>Thanks @patil-suraj <|||||>Hey @patil-suraj, I had OOM issues on Colab, so moved to a VM with 56GB RAM, and the behaviour is the same as on Colab: memory usage grows, until it uses up everything available (I even added 32GB of swap, so, it's a really impressive amount of memory usage), until I get locked out of the machine... and the only time it writes to `lightning_logs` is right when it starts.
```sh
jupyter@pytorch-20200529-155153:~/lightning_logs$ tree
.
βββ version_0
βββ events.out.tfevents.1590794134.pytorch-20200529-155753.8733.0
βββ hparams.yaml
1 directory, 2 files
```
`nvidia-smi` looks like this:
```
jupyter@pytorch-20200529-155753:~$ nvidia-smi
Sat May 30 00:07:12 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.87.01 Driver Version: 418.87.01 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
| N/A 77C P0 35W / 70W | 2579MiB / 15079MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 8733 C /opt/conda/bin/python 2569MiB |
+-----------------------------------------------------------------------------+
```
The cell `trainer.fit(model)` outputs the model definition, but no progress bar on anything,
```
| Name | Type | Params
-----------------------------------------------------------------------------------------------------------------
0 | model | T5ForConditionalGeneration | 222 M
1 | model.shared | Embedding | 24 M
2 | model.encoder | T5Stack | 109 M
...
514 | model.decoder.block.11.layer.2.dropout | Dropout | 0
515 | model.decoder.final_layer_norm | T5LayerNorm | 768
516 | model.decoder.dropout | Dropout | 0
517 | model.lm_head | Linear | 24 M
Selected optimization level O1: Insert automatic casts around Pytorch functions and Tensor methods.
Defaults for this optimization level are:
enabled : True
opt_level : O1
cast_model_type : None
patch_torch_functions : True
keep_batchnorm_fp32 : None
master_weights : None
loss_scale : dynamic
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled : True
opt_level : O1
cast_model_type : None
patch_torch_functions : True
keep_batchnorm_fp32 : None
master_weights : None
loss_scale : dynamic
```
Sorry to keep bothering you, but do you have any hints? It's hard to know what's going on because it doesn't seem to log<|||||>It shouldn't take that much memory, did you try reducing the batch size ?
Also seems that you are using fp16 here. I haven't tried it with fp16 yet.
tagging @sshleifer <|||||>Ok, I tried fp16 as a "maybe this will use less memory" experiment, I will try without. I tried batch size of 4, could go lower I guess. Should I just double the learning rate each time I halve the batch size, or are other changes needed?<|||||>Could somebody who has fine-tuned BART give me an estimate of how long it takes / how many epochs until convergence? Also any tricks to speed it up (weight freezing etc)?
1 epoch takes c. 150 hrs for my dataset so wondering how many I need...<|||||>Sounds like you have a huge dataset?
It's tough to know exactly how many you will need, but for xsum and cnn most of the models I've trained have required 4-6 epochs to converge.
The [original authors](https://github.com/pytorch/fairseq/blob/master/examples/bart/README.summarization.md#4-fine-tuning-on-cnn-dm-summarization-task) say 15-20K steps.
I have had to go down to batch size=1 or 2 on some occasions.
You can use `--gradient_accumulation_steps` to keep the "effective" batch size (how many examples your model processes per backward pass) consistent.
@sam-qordoba is your `Dataset/DataLoader` putting all the examples in memory before training? That could be an issue on a large dataset.
<|||||>You can also freeze the `BartForConditionalGeneration.model.encoder` using the function below to reduce memory cost.
```
import torch.nn as nn

def freeze_part(model: nn.Module):  # e.g. freeze_part(bart.model.encoder)
    for par in model.parameters():
        par.requires_grad = False
```
You can also use `val_check_interval` in lightning to check validation statistics more frequently, but unfortunately your checkpoints will still be saved at the end of every epoch.<|||||>@sshleifer thanks for coming back with this- all very helpful.
Yes- essentially I am just trying out using BART for longer docs (arXiv/PubMed) as a baseline to compare more sophisticated methods against. This means the training set has 300k samples and only 1 sample fits on the GPU at once (12Gb- using 1,024 input length).
Lots for me to play around with and see what works well. Thanks for your help.<|||||>> Yes- essentially I am just trying out using BART to for longer docs (arXiv/PubMed) as a baseline to compare more sophisticated methods against
@alexgaskell10 If you are interested in using BART for long documents then keep an eye here.
https://github.com/patil-suraj/longbart
I'm trying to convert BART to it's long version using longformer's sliding-window attention.
I've been able to replace BART encoder's `SelfAttention` with `LongformerSelfAttention` with 4096 max length. Now I'm working on adding gradient checkpointing to allow it to train on smaller GPU's. Hope to finish it soon.
gradient checkpointing and fp16 with '02' opt level should allow to use larger batch size<|||||>@patil-suraj thanks for this- adapting BART for LongformerSelfAttention was actually something I was going to start looking into over the next couple of weeks. Thanks for sharing- I'll be sure to give it a go soon.<|||||>Hey @patil-suraj, any updates on your latest progress on LongBART? Thinking about diving into a similar block of work: expanding BART via Longformer<|||||>Hi @virattt , I've been able to replace bart encoder's self attention with sliding window attention. Also added gradient checkpoiting in the encoder.
Gradient checkpoiting in decoder is not working so going to remove it for now. Will update the repo this weekend and will put some instructions in the readme.<|||||>Sounds great, thanks @patil-suraj <|||||>Would love to hear `LongBart` experimental results whenever they are available!<|||||>@sshleifer I have been playing around with `LongBart` recently and have some preliminary experimental results. This is using @patil-suraj 's longbart repo fine-tuned on the PubMed dataset using the hf summarization finetune.py script.
The best result so far is ROUGE-1 = 36.8 (for comparison, fine-tuning vanilla `BART` on PubMed and truncating articles at 1024 tokens I got 42.3 ROUGE-1). I have only run a few configs so far and will be running many more so I expect this to improve. Next steps:
- Have been only using a 12Gb GPU so far so have frozen the embeddings and encoder otherwise too large. I have a much larger cluster I can move to so will start running trials on this soon which will give more freedom to try different configs
- I am only fine-tuning at the moment. Might explore doing some pre-training although this may be too expensive.
Let me know if there is anything you would like to see and I'll try to schedule it in.<|||||>Hi @alexgaskell10 , did you use the code as it is ? I think we'll need to train the embeddings for few epochs then we can freeze it.
However without freezing the embeddings I ran into OOM halfway through the epoch even with bart-base with '02' fp16 on 16GB V100.
@sshleifer do you have any ideas why this might be happening ? It went well till 60% of first epoch then OOM. Batch size was 1 and max_seq_len 4096 ?
@alexgaskell10 can you share more details, how many epochs, batch size, fp16 or not ? <|||||>Yes, I used the code as is (minor changes to integrate with hf finetune.py script). I agree that the embeddings and encoder should not be frozen from the beginning but I couldn't fit it on my 12Gb GPU. Once I get setup on the cluster I'll try this.
More details on all my runs so far can be found in my [wandb project](https://app.wandb.ai/alexgaskell/Covid01-scripts_models/overview?workspace=user-alexgaskell). To answer your question, max a couple epochs so far, batch size between 4 and 16 depending on what fits, not fp16 so far (haven't set up yet but will do soon).<|||||>Thanks @alexgaskell10 , I think you'll be able to use bart-base with fp16 and max 2048 seq len without frezzing embdddings on 12GB GPU <|||||>@patil-suraj:
- 4096 is a very large max_seq_len, but I know that doesn't answer your question. I would guess that the answer is that you got a really big batch. The batches are not all the same size. We trim them to save padding computation. If you are on one GPU you can use `--sortish_sampler` which ensures that the first batch is the largest, so you get OOM at the beginning of the epoch at least. You also get hopefully a further reduction in padding computation.
- I would be interested to know how much `--sortish_sampler` reduces the training cost of 1 epoch with other parameters fixed.
@alexgaskell10 :
Thanks for sharing your wandb, it makes understanding what you're doing way easier.
- From [pegasus](https://arxiv.org/pdf/1912.08777.pdf) Table 2, it seems like SOTA for PubMed is around `45.49/19.90/27.69`. (Rouge 1, Rouge 2, Rouge L) So there is still some headroom! (Note we will add pegasus in the library sometime in July).
- From looking at your wandb logs, your models are still getting better when training stops. When you move to a beefier setup you might consider training for longer.
- I think there is general consensus that Rouge2 and Rouge-L are better metrics than Rouge-1.
Some questions I would love to know the answer to (for any dataset):
1. which `--model_name_or_path` is the best starting point for finetuning: bart-base vs. bart-large vs. bart-large-xsum vs distilbart-xsum-12-6, for example.
2. How does `LongBart` compare in performance to `BartForConditionalGeneration`?
3. Does increasing `--adam_eps` improve performance? Jeremy Howard at fastai recommended this once, and the default 1e-8 seems to be a fairly low setting.
4. What is the impact of `--freeze-encoder` and `--freeze_embeds` on time per epoch, max batch size, and performance.
<|||||>@sshleifer thanks for coming back to me. Several of your questions I can answer immediately, the others I will try to take a look at. If you're interested, I have a [separate wandb project](https://app.wandb.ai/alexgaskell/transformers-examples_summarization?workspace=user-alexgaskell) containing a bunch of fine-tuning runs for `BartForConditionalGeneration` on PubMed to act as a baseline for `Longformer`. All of these runs have frozen embs and enc because of size constraints- only batch size 1 or 2 fit on GPU and that didn't perform well. If I get a bigger setup I'll try with these unfrozen and a larger batch size.
Addressing your questions:
1. I have been using facebook/bart-large-cnn so far- will investigate if I get time
2. This can be seen in the two wandb repos I've shared here and above. So far my best `BartForConditionalGeneration` is 0.426/0.177/0.263 and my best `Longformer` is 0.367/0.120/0.222 so BART is much better so far. However, both of these have frozen embs and enc (and presumably PEGASUS was fine-tuned without these frozen) so there are more experiments to run
3. Haven't looked at this. Will give it a go
4. Freezing both has a big impact (haven't looked at freezing each separately).
- Time per epoch I think order of 3-4x quicker (8hrs vs 24+hrs per epoch using 12Gb GPU)
- Batch size >8x improvement (2 vs 16)
- Performance seemed much better when frozen. Probably due to small batch size training was unstable when using a small batch size. The img below shows a comparison between frozen (grey, bsz=16) and unfrozen (blue, bsz=2).
<img width="1093" alt="Screenshot 2020-07-01 at 22 06 57" src="https://user-images.githubusercontent.com/51463426/86291868-96cbce80-bbe7-11ea-8427-52619710d2fa.png">
<|||||>> BartForConditionalGeneration is 0.426/0.177/0.263 and my best Longformer is 0.367/0.120/0.222
There's a bug related to masking that could be the reason for the performance drop. I started working on `LongformerEncoderDecoder` and have a fix [here](https://github.com/allenai/longformer/blob/encoderdecoder/longformer/longformer_encoder_decoder.py#L66).
<|||||>Thanks for flagging, I will take a look.
In any case, I have much better `LongBart` results now (0.433, 0.189, 0.273). I found that fine-tuning `LongBart` without freezing the embs or enc worked much better, whereas `Bart` performed better when embs and enc were frozen. This probably makes sense given that `LongBart` is using weight transfer so needs more comprehensive training to be effective. Hopefully the bug fix will improve these results even more. <|||||>Is this still with seqlen=1024? what is the maximum seqlen your dataset requires?<|||||>The above results are using seqlen=1024 and window size=512 (only using a 12Gb GPU currently so nothing larger fits; I'm trying to get a beefier setup sorted). This is PubMed dataset so max seqlen is well above this, probably in the region of 6k tokens.
I did experiment with using longer inputs with enc and embs weights frozen and it didn't improve performance. My hypothesis is that using longer inputs for PubMed doesn't actually help as the abstract is often extractive from the introduction so using longer inputs doesn't help- I'll test this once I get the bigger setup. Pegasus is SOTA and I believe it only uses the introduction...<|||||>> window size=512
Just an FYI, this is one-sided window size. The actual window size is 1024.
> only using a 12Gb GPU currently so nothing larger fits
I added gradient checkpointing which will help. With 12Gb I think you can run the large model with seqlen=4096.
> This is PubMed dataset so max seqlen is well above this, probably in the region of 6k tokens.
With fp16, gradient checkpointing and 48Gb gpu, I was able to run the large model on seqlen=12k. I have the pretrained model and gradient checkointing instructions in the [readme](https://github.com/allenai/longformer/tree/encoderdecoder). This is still early WIP though. <|||||>> Just an FYI, this is one-sided window size. The actual window size is 1024.
Ah ok, thanks for flagging.
> I added gradient checkpointing which will help. With 12Gb I think you can run the large model with seqlen=4096.
Will take a look, thanks!
<|||||>Hi @ibeltagy , Thank you for the `LongformerEncoderDecoder` .
>I added gradient checkpointing which will help.
is it only in encoder or in both encoder and decoder ? I've been able to add gradient checkpointing in encoder, but I still got OOM with bart-base, fp-16 with '02', attention window 1024 and max seq len 4096.
> I was able to run the large model on seqlen=12k
What is the output length for this ?<|||||>both encoder and decoder. The commit is [here](https://github.com/ibeltagy/transformers/commit/f5cd72a73ab01461fa9db0d1cb0a800bdf01db08).
> What is the output length for this?
It is pretty small. Maybe I should try it again with a longer output.<|||||>> With fp16, gradient checkpointing and 48Gb gpu, I was able to run the large model on seqlen=12k. I have the pretrained model and gradient checkointing instructions in the [readme](https://github.com/allenai/longformer/tree/encoderdecoder). This is still early WIP though.
@ibeltagy How did you create this pre-trained model? Did you use the `create_long_model` function from [notebook](https://colab.research.google.com/github/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) in the readme with the `BartLarge` model? Or did you use [patil-suraj's modified conversion function](https://github.com/patil-suraj/longbart/blob/master/longbart/convert_bart_to_longbart.py)?
Did you change the position embedding matrix for the decoder in addition to the encoder?<|||||>With both `longbart` and the `LongformerEncoderDecoder` I get the below error on the line `query = query.view(bsz, tgt_len, embed_dim)`:
```
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```
Anyone have any ideas as to why `query` is not contiguous?<|||||>@HHousen, the conversion code is [here](https://github.com/allenai/longformer/blob/encoderdecoder/scripts/convert_bart_to_longformerencoderdecoder.py), and yes, it extends the position embeddings of both, the encoder and the decoder.
> Anyone have any ideas as to why query is not contiguous?
Not sure, do you have an example to reproduce this error? and can you share the full stack trace? Also, can you try `input_ids.contiguous()`?<|||||>@ibeltagy Thanks. Yep, calling `input_ids.contiguous()` fixed that problem.<|||||>The issue reappears when using a batch size greater than 1.
Stack Trace:
```
Traceback (most recent call last):
File "main.py", line 342, in <module>
main(args)
File "main.py", line 96, in main
trainer.fit(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1003, in fit
results = self.single_gpu_train(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py", line 186, in single_gpu_train
results = self.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1196, in run_pretrain_routine
False)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 293, in _evaluate
output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 470, in evaluation_forward
output = model.validation_step(*args)
File "/content/abstractive.py", line 703, in validation_step
cross_entropy_loss = self._step(batch)
File "/content/abstractive.py", line 686, in _step
outputs = self.forward(source, target, source_mask, target_mask, labels=labels)
File "/content/abstractive.py", line 233, in forward
labels=None, # `labels` is None here so that huggingface/transformers does not calculate loss
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bart.py", line 1041, in forward
output_hidden_states=output_hidden_states,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bart.py", line 901, in forward
output_hidden_states=output_hidden_states,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bart.py", line 333, in forward
x, attention_mask,
File "/usr/local/lib/python3.6/dist-packages/torch/utils/checkpoint.py", line 155, in checkpoint
return CheckpointFunction.apply(function, preserve, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/checkpoint.py", line 74, in forward
outputs = run_function(*args)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bart.py", line 328, in custom_forward
val, _ = module(*inputs, output_attentions=False)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bart.py", line 229, in forward
query=x, key=x, key_padding_mask=encoder_padding_mask, output_attentions=output_attentions
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/longformer/longformer_encoder_decoder.py", line 63, in forward
query = query.view(bsz, tgt_len, embed_dim)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```
I called `.contiguous()` on all of the inputs to the model as a test. I call `forward()` like so but the problem persists:
```python
outputs = self.model.forward(
input_ids=source.contiguous(),
attention_mask=source_mask.contiguous(),
decoder_input_ids=target.contiguous(),
decoder_attention_mask=target_mask.contiguous(),
use_cache=False,
labels=None
)
```
The problem is not gradient checkpointing since I have tried with it enabled and disabled yet the error persists.<|||||>> With both `longbart` and the `LongformerEncoderDecoder` I get the below error on the line `query = query.view(bsz, tgt_len, embed_dim)`:
>
> ```
> RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
> ```
>
> Anyone have any ideas as to why `query` is not contiguous?
Hi @HHousen , what was your transformers version ? I've used `LongBART` successfully with `v2.11.0`, didn't try it with latest version. I think @alexgaskell10 might be able to help as he's used it extensively.
Anyway, I think you should now use `LongformerEncoderDecoder` now instead of `LongBART`<|||||>Hi @HHousen, I also had this issue with `LongBart` (but not `LongformerEncoderDecoder`). I solved it by replacing the line above with `query = query.contiguous().view(bsz, tgt_len, embed_dim)` and this fixed it for all batch sizes.
@patil-suraj @ibeltagy I was doing some side-by-side of the `LongformerEncoderDecoder` and `LongBart` last week. It seems as though LED is about 30% slower than LongBart- shown by the graph below (the purple line is LongBart, green is LongformerEncoderDecoder on the same machine and y-axis is model steps). I did some quick digging and the two models gave the same outputs for my test cases but I didn't manage to bottom out why the new one is slower than the old before I ran out of time. Just thought I should flag this.
<img width="500" alt="Screenshot 2020-07-14 at 14 05 36" src="https://user-images.githubusercontent.com/51463426/87429058-2788ae00-c5db-11ea-9256-60e03f9c0509.png">
<|||||>@HHousen, thanks for reporting.
@alexgaskell10, which version of transformers are you using? there has been a code refactor in v3.0.1, so can you try v.2.11.0 to see if you still get the same speed? <|||||>@ibeltagy yes I saw there was refactor, I presume that is the cause. The LED code (green line) is using v.3.0.1 and LongBart code (purple line) uses v.2.11.0. I'm not sure it will be straightforward to run the LED code on v.3.0.1, I tried originally with v.2.11.0 but couldn't get it to work so moved to v.3.0.1.<|||||>@patrickvonplaten, is this something you can help with? I know you have tools to benchmark different models, would it be possible to benchmark longformer v2.11.0 and v3.0.1?<|||||>Could it be because of gradient checkpoiting, LongBART uses it only encoder and LED( I like this short form π) uses it in both encoder and decoder.<|||||>Gradient checkpointing was off for both runs above so can't have been that.<|||||>> Hi @HHousen , what was your transformers version ? I've used `LongBART` successfully with `v2.11.0`, didn't try it with latest version. I think @alexgaskell10 might be able to help as he's used it extensively.
> Hi @HHousen, I also had this issue with `LongBart` (but not `LongformerEncoderDecoder`). I solved it by replacing the line above with `query = query.contiguous().view(bsz, tgt_len, embed_dim)` and this fixed it for all batch sizes.
@patil-suraj @alexgaskell10, I have tested `LongformerEncoderDecoder` with huggingface/transformers versions 2.11.0, 3.0.1 (ibeltagy/transformers version for BART gradient checkpointing), and 3.0.2 and got the same error message that `query` is not contiguous, despite calling `.contiguous()` on all inputs.
<|||||>@HHousen did you try using `.reshape()` as the error message suggests? I believe this also worked for me. <|||||>@alexgaskell10 Yes. Changing to `.reshape()` solves the problem. But you were able to use `LongformerEncoderDecoder` without making that change, right?<|||||>Yes thats right. I spent quite a while going through the code and made several changes so maybe I changed something else upstream which helped it to work. Can't remember exactly what though!<|||||>> @patrickvonplaten, is this something you can help with? I know you have tools to benchmark different models, would it be possible to benchmark longformer v2.11.0 and v3.0.1?
Hmm, yeah it would be great to benchmark the models between v2.11.0 and v3.0.1. The easiest would probably be to just switch between `master` and the 2.11 branch: https://github.com/huggingface/transformers/tree/v2.11.0.
Then just running the benchmark script:
```
python examples/benchmarking/run_benchmark.py --models longformer-base-4096
```
should be good enough to compare the performance<|||||>I ran the benchmark scripts for each version: `python examples/benchmarking/run_benchmark.py --models allenai/longformer-base-4096 --training`.
Latest `master` branch:
```
2020-07-14 18:21:34.487221: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
Downloading: 100% 725/725 [00:00<00:00, 583kB/s]
1 / 1
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
allenai/longformer-base-4096 8 8 1.031
allenai/longformer-base-4096 8 32 1.015
allenai/longformer-base-4096 8 128 1.037
allenai/longformer-base-4096 8 512 1.028
--------------------------------------------------------------------------------
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
allenai/longformer-base-4096 8 8 2117
allenai/longformer-base-4096 8 32 2117
allenai/longformer-base-4096 8 128 2117
allenai/longformer-base-4096 8 512 2117
--------------------------------------------------------------------------------
==================== TRAIN - SPEED - RESULTS ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
allenai/longformer-base-4096 8 8 2.0
allenai/longformer-base-4096 8 32 1.999
allenai/longformer-base-4096 8 128 2.103
allenai/longformer-base-4096 8 512 2.366
--------------------------------------------------------------------------------
==================== TRAIN - MEMORY - RESULTS ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
allenai/longformer-base-4096 8 8 10001
allenai/longformer-base-4096 8 32 10155
allenai/longformer-base-4096 8 128 10207
allenai/longformer-base-4096 8 512 12559
--------------------------------------------------------------------------------
==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 3.0.2
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.5.1+cu101
- python_version: 3.6.9
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-07-14
- time: 18:30:48.341403
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 13021
- use_gpu: True
- num_gpus: 1
- gpu: Tesla T4
- gpu_ram_mb: 15079
- gpu_power_watts: 70.0
- gpu_performance_state: 0
- use_tpu: False
```
Version 2.11.0 (`git checkout tags/v2.11.0`):
```
2020-07-14 18:31:04.379166: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
1 / 1
======= INFERENCE - SPEED - RESULT =======
======= MODEL CHECKPOINT: allenai/longformer-base-4096 =======
allenai/longformer-base-4096/8/8: 0.356s
allenai/longformer-base-4096/8/32: 0.359s
allenai/longformer-base-4096/8/128: 0.364s
allenai/longformer-base-4096/8/512: 0.367s
======= INFERENCE - MEMORY - RESULT =======
======= MODEL CHECKPOINT: allenai/longformer-base-4096 =======
allenai/longformer-base-4096/8/8: 8178 MB
allenai/longformer-base-4096/8/32: 8170 MB
allenai/longformer-base-4096/8/128: 8162 MB
allenai/longformer-base-4096/8/512: 8162 MB
======= TRAIN - SPEED - RESULT =======
======= MODEL CHECKPOINT: allenai/longformer-base-4096 =======
allenai/longformer-base-4096/8/8: 0.357s
allenai/longformer-base-4096/8/32: 0.359s
allenai/longformer-base-4096/8/128: 0.363s
allenai/longformer-base-4096/8/512: 0.366s
======= TRAIN - MEMORY - RESULT =======
======= MODEL CHECKPOINT: allenai/longformer-base-4096 =======
allenai/longformer-base-4096/8/8: 9320 MB
allenai/longformer-base-4096/8/32: 9416 MB
allenai/longformer-base-4096/8/128: 9514 MB
allenai/longformer-base-4096/8/512: 11866 MB
======== ENVIRONMENT - INFORMATION ========
- transformers_version: 2.11.0
- framework: PyTorch
- framework_version: 1.5.1+cu101
- python_version: 3.6.9
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-07-14
- time: 18:34:31.155709
- cpu_ram_mb: 13021
- use_gpu: True
- num_gpus: 1
- gpu: Tesla T4
- gpu_ram_mb: 15079
- gpu_power_watts: 70.0
- gpu_performance_state: 0
```<|||||>I also tested the differences before and after d697b6ca751e7727e92d4fa1de35e5e62fd541fa ([Longformer] Major Refactor (#5219)).
Training time changes:
```
Before --> After
1.323 --> 1.995
1.353 --> 2.016
1.416 --> 2.094
1.686 --> 2.378
```
Before d697b6ca751e7727e92d4fa1de35e5e62fd541fa (at commit e0d58ddb65eff1a52572dff75944d8b28ea706d3):
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
allenai/longformer-base-4096 8 8 0.332
allenai/longformer-base-4096 8 32 0.342
allenai/longformer-base-4096 8 128 0.35
allenai/longformer-base-4096 8 512 0.357
--------------------------------------------------------------------------------
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
allenai/longformer-base-4096 8 8 2117
allenai/longformer-base-4096 8 32 2117
allenai/longformer-base-4096 8 128 2117
allenai/longformer-base-4096 8 512 2117
--------------------------------------------------------------------------------
==================== TRAIN - SPEED - RESULTS ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
allenai/longformer-base-4096 8 8 1.323
allenai/longformer-base-4096 8 32 1.353
allenai/longformer-base-4096 8 128 1.416
allenai/longformer-base-4096 8 512 1.686
--------------------------------------------------------------------------------
==================== TRAIN - MEMORY - RESULTS ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
allenai/longformer-base-4096 8 8 9617
allenai/longformer-base-4096 8 32 9771
allenai/longformer-base-4096 8 128 9823
allenai/longformer-base-4096 8 512 12175
--------------------------------------------------------------------------------
==================== ENVIRONMENT INFORMATION ====================
The current process just got forked. Disabling parallelism to avoid deadlocks...
To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)
- transformers_version: 3.0.0
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.5.1+cu101
- python_version: 3.6.9
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-07-14
- time: 18:59:34.297304
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 13021
- use_gpu: True
- num_gpus: 1
- gpu: Tesla T4
- gpu_ram_mb: 15079
- gpu_power_watts: 70.0
- gpu_performance_state: 0
- use_tpu: False
```
After d697b6ca751e7727e92d4fa1de35e5e62fd541fa (at commit d697b6ca751e7727e92d4fa1de35e5e62fd541fa):
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
allenai/longformer-base-4096 8 8 1.028
allenai/longformer-base-4096 8 32 1.01
allenai/longformer-base-4096 8 128 1.013
allenai/longformer-base-4096 8 512 1.061
--------------------------------------------------------------------------------
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
allenai/longformer-base-4096 8 8 2117
allenai/longformer-base-4096 8 32 2117
allenai/longformer-base-4096 8 128 2117
allenai/longformer-base-4096 8 512 2117
--------------------------------------------------------------------------------
==================== TRAIN - SPEED - RESULTS ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
allenai/longformer-base-4096 8 8 1.995
allenai/longformer-base-4096 8 32 2.016
allenai/longformer-base-4096 8 128 2.094
allenai/longformer-base-4096 8 512 2.378
--------------------------------------------------------------------------------
==================== TRAIN - MEMORY - RESULTS ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
allenai/longformer-base-4096 8 8 9617
allenai/longformer-base-4096 8 32 9771
allenai/longformer-base-4096 8 128 9823
allenai/longformer-base-4096 8 512 12175
--------------------------------------------------------------------------------
==================== ENVIRONMENT INFORMATION ====================
The current process just got forked. Disabling parallelism to avoid deadlocks...
To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)
- transformers_version: 3.0.0
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.5.1+cu101
- python_version: 3.6.9
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-07-14
- time: 19:10:37.139177
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 13021
- use_gpu: True
- num_gpus: 1
- gpu: Tesla T4
- gpu_ram_mb: 15079
- gpu_power_watts: 70.0
- gpu_performance_state: 0
- use_tpu: False
```
<|||||>@patrickvonplaten @alexgaskell10 @ibeltagy The training time increased from tags/v2.11.0 (0.361s) to right before d697b6c (at commit e0d58dd) (1.445s) by 1.084s.
The training time increased from right before d697b6c (at commit e0d58dd) (1.445s) to directly after d697b6c (at commit d697b6c) (2.121s) by 0.676s.
I ran the benchmarks twice and got similar results both times.<|||||>nice finding. Thanks, @HHousen.
@patrickvonplaten, we can check the refactoring more carefully to find the reason for the second slowdown. Any thoughts on what could be the reason for the first one? It is a span of 270 commits!!<|||||>Thanks a lot for running the benchmark @HHousen !
Very interesting indeed! I will take a look tomorrow.
The benchmarking tools were changed quite significantly from 2.11 to 3.0.1 => so I will run both Longformer versions (2.11 and master) with the same benchmarking tools tomorrow to make sure that the performance degradation is really due to changes in Longformer. <|||||>@patrickvonplaten You're correct about the first training time increase. I tracked down the time change to commit fa0be6d76187e0639851f6d762b9ffae7fbd9202. At 18a0150bfa1b47065ce8a8ac22fc1791ed0ac2b3 (right before fa0be6d76187e0639851f6d762b9ffae7fbd9202) the training time is about 0.35s. But at fa0be6d76187e0639851f6d762b9ffae7fbd9202 it's about 1.4s. So the first time increase can be safely ignored because it was caused by a change in the benchmark scripts.
The second time increase, caused by d697b6ca751e7727e92d4fa1de35e5e62fd541fa seems to be the main issue.<|||||>@HHousen @ibeltagy,
I just ran the same benchmarking scripts on different versions and I can confirm that there is quite a drastic slow-down at master.
Here is the branch: https://github.com/huggingface/transformers/tree/benchmark_for_2_11 in case it's useful for you.
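For anyone reproducing this locally, a minimal sketch of how such a run can be launched with the `PyTorchBenchmark` utilities of this era (treat the exact argument names as an assumption to verify against the installed version):
```python
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

# Sketch only: mirrors the settings reported in the results below; argument
# names are assumed from the 3.x benchmarking utilities and may differ per version.
args = PyTorchBenchmarkArguments(
    models=["allenai/longformer-base-4096"],
    batch_sizes=[8],
    sequence_lengths=[8, 32, 128, 512],
    training=True,  # also measure the train step, not just inference
)
benchmark = PyTorchBenchmark(args)
results = benchmark.run()
```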
My results for `master`:
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
allenai/longformer-base-4096 8 8 0.644
allenai/longformer-base-4096 8 32 0.64
allenai/longformer-base-4096 8 128 0.64
allenai/longformer-base-4096 8 512 0.637
--------------------------------------------------------------------------------
Saving results to csv.
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
allenai/longformer-base-4096 8 8 2023
allenai/longformer-base-4096 8 32 2023
allenai/longformer-base-4096 8 128 2023
allenai/longformer-base-4096 8 512 2023
--------------------------------------------------------------------------------
Saving results to csv.
==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 3.0.2
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.5.0
- python_version: 3.7.7
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-07-15
- time: 18:30:00.426834
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32089
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 0
- use_tpu: False
```
results for `2.11.0`:
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
allenai/longformer-base-4096 8 8 0.144
allenai/longformer-base-4096 8 32 0.144
allenai/longformer-base-4096 8 128 0.144
allenai/longformer-base-4096 8 512 0.145
--------------------------------------------------------------------------------
Saving results to csv.
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
allenai/longformer-base-4096 8 8 2023
allenai/longformer-base-4096 8 32 2023
allenai/longformer-base-4096 8 128 2023
allenai/longformer-base-4096 8 512 2023
--------------------------------------------------------------------------------
Saving results to csv.
==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 2.11.0
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.5.0
- python_version: 3.7.7
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-07-15
- time: 18:39:00.315564
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32089
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 0
- use_tpu: False
```
It was probably caused by me, when I did the major longformer refactoring... => will investigate more tomorrow!
Thanks a lot for pointing this out @HHousen - this is super useful.
I guess we should have tests that automatically check if the PR causes a significant slow down. (also @sshleifer , @thomwolf, @mfuntowicz )<|||||>Ok fixed it. @ibeltagy @HHousen - it would be great if you can try again on your end with the current version of master to make sure the inference speed is back to normal. <|||||>@patrickvonplaten I ran the benchmark on master and the speeds do look to be normal again.
The training speeds are 1.328s, 1.378s, 1.457s, and 1.776s for sequences of length 8, 32, 128, 512 respectively, which is similar to the speeds before the major refactor at d697b6c.
Inference speeds are 0.326s, 0.343s, 0.348s, and 0.367s, which appear to be back to normal.<|||||>@ibeltagy Should you merge ibeltagy/transformers@longformer_encoder_decoder into huggingface/transformers@master yet to add gradient checkpointing to BART? Or are you waiting for the final LongformerEncoderDecoder implementation to be completed?<|||||>@HHousen, I had to disable certain features of the model [here](https://github.com/ibeltagy/transformers/blob/longformer_encoder_decoder/src/transformers/modeling_bart.py#L541-L542) to implement gradient checkpointing, so merging it will require more work.
@LysandreJik started working on gradient checkpointing in this PR https://github.com/huggingface/transformers/pull/5415 and he might have better ideas.<|||||>> @kevinlu1248
> This [colab](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) shows how to fine-tune T5 with lightening. This is just the self-contained version of official example. You should be able to use the same `Trainer`, just replace the model with BART and use you own dataset.
I modified this example to adapt it to BART, and only used `Positive </s>` as the target, but after training for an epoch the model outputs all 0s, `tensor([[2, 0, 0, 2]])`, decoded as `''`.
What could be the reason the model fails on such a simple task?
Using training_rate = 2*10^-5<|||||>@WangHexie , not sure. One suggestion, with BART you won't need to manually add </s> at the end as the BART tokenizer automatically add the `eos` token at the end of the text<|||||>@patil-suraj Thanks to your prompt. These models' behaviour is quite different, the problem is solved by shifting decoder input to the right manually.<|||||>@alexgaskell10, @HHousen, the `query.reshape()`solution is wrong. The code runs but it is not doing the right thing. It should be `query.transpose(0, 1)`. I just pushed a fix. This bug will affect all your results if you are using a batch size > 1<|||||>@ibeltagy @HHousen Thanks for the update, it still is not working well for me with bsz > 1. I think you also need to change `attn_output = attn_output.contiguous().view(tgt_len, bsz, embed_dim)` to `attn_output = attn_output.transpose(0,1)` in longformer/longformer_encoder_decoder.py, line 75. <|||||>@alexgaskell10, you are right. Just pushed a fix for that one as well. <|||||>@ibeltagy it still isn't working correctly for me (even at bsz=1). On some runs and at random points training the training becomes corrupted as per the image below. Taking a look into this now but not really sure where to start as it only happens sometimes and at random points during training so I haven't got much to work with. Any ideas?
<img width="328" alt="Screenshot 2020-08-12 at 15 45 19" src="https://user-images.githubusercontent.com/51463426/90029942-5f0a7900-dcb3-11ea-95b0-1070e8781ad5.png">
<|||||>- what is the effective batch size? bsz x gradient accumulation x number of gpus? make sure it is not very small, try at least 8 if not 32.
- how does the learning ~~curve~~ rate curve look like? can you draw it next to the loss curve? are you using warmup and decay? try lowering the learning rate?<|||||>Thanks for the suggestions- a couple of good thoughts. I have only been using small bsz so far (< 4) so I think that is somewhere to start alongside playing with the LR. Thanks!
- I am not using warmup and decay. I don't think warmup is the issue as it rarely begins soon into training. Will try with decay though
- What are you referring to as the learning curve in this instance? The validation loss?<|||||>oh, sorry, I meant plotting learning rate curve vs. steps. <|||||>> Gradient checkpointing in the decoder is not working so going to remove it for now. Will update the repo this weekend and will put some instructions in the readme.
Hello @patil-suraj
Do you have any update on this work? I checked the repository but the README is still empty.
Can you help me please @alexgaskell10?
Thank you so much.<|||||>> ```
> RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
> ```
Just adding that I get the same error for `AlbertForMaskedLM` with albert-large-v2, batch size of 8, using version 3.1.0 (pytorch), and training with Trainer
It doesn't appear immediately, but a little way into the warm-up phase of the training.<|||||>I had the same problem with FunnelTransformer. But it seems resolved after I set WANDB_WATCH=false or disable --fp16. You can try if it works for you.
> > ```
> > RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
> > ```
>
> Just adding that I get the same error for `AlbertForMaskedLM` with albert-large-v2, batch size of 8, using version 3.1.0 (pytorch), and training with Trainer
>
> It doesn't appear immediately, but a little way into the warm-up phase of the training.
<|||||>@patil-suraj what is the best way to save the fine tune model in order to reuse it again with `T5ForConditionalGeneration.from_pretrained()`?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,405 | closed | add BERT trained from review corpus. | add BERT trained from review corpus. | 05-16-2020 22:33:40 | 05-16-2020 22:33:40 | Hi @howardhsu, the file path is not correct, it should be something like `model_cards/activebus/BERT_Review/README.md` <|||||>I updated paths as suggested, thanks.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4405?src=pr&el=h1) Report
> Merging [#4405](https://codecov.io/gh/huggingface/transformers/pull/4405?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3e0f06210646a440509efa718b30d18322d6a830&el=desc) will **increase** coverage by `0.02%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4405?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4405 +/- ##
==========================================
+ Coverage 78.16% 78.19% +0.02%
==========================================
Files 120 120
Lines 20058 20058
==========================================
+ Hits 15679 15684 +5
+ Misses 4379 4374 -5
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4405?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (+0.82%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4405?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4405?src=pr&el=footer). Last update [3e0f062...466c62e](https://codecov.io/gh/huggingface/transformers/pull/4405?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Great: [model pages](https://huggingface.co/activebus) |
transformers | 4,404 | closed | feat(wandb): display logger | Logger info when `wandb` not installed was set to `info` which does not display by default.
It has been changed to `warning`. | 05-16-2020 21:16:01 | 05-16-2020 21:16:01 | I don't think it should really be a warning if you don't use wandb.
The root issue here is probably discoverability of wandb for users who don't know it? Then it would probably be better solved in documentation.
We will start some documentation on `Trainer`/`TFTrainer` in the coming weeks (cc @LysandreJik) we'll mention wandb there. (if you want to help with this let us know)<|||||>Makes sense, let me know when it's started and I can help writing the section related to wandb. |
transformers | 4,403 | closed | Map optimizer to correct device after loading from checkpoint. | Loading from `optimizer.pt` causes `optimizer` to be mapped to the same device as the saved `optimizer.pt`. In most cases it's `cuda:0`(saved by local master), which puts all optimizers on
gpu0, causing OOM more easily in multi-gpu training.
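A minimal sketch of the idea (illustrative only — the tiny model, the file name and the `LOCAL_RANK` handling are assumptions, not the actual Trainer code):
```python
import os
import torch

# Stand-ins for the real model/optimizer, just to make the sketch runnable.
model = torch.nn.Linear(4, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
torch.save(optimizer.state_dict(), "optimizer.pt")  # pretend this was saved by rank 0

# On restore, map the saved tensors to *this* process's device instead of the
# device they were saved from (often cuda:0), so every rank does not pile its
# optimizer state onto GPU 0.
local_rank = int(os.environ.get("LOCAL_RANK", 0))  # assumption: set by the launcher
device = torch.device("cuda", local_rank) if torch.cuda.is_available() else torch.device("cpu")
optimizer.load_state_dict(torch.load("optimizer.pt", map_location=device))
```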
Might fix issues like [#3730](https://github.com/huggingface/transformers/issues/3730). | 05-16-2020 18:48:42 | 05-16-2020 18:48:42 | Thank you! |
transformers | 4,402 | closed | Run Language Modeling on 8 TPU cores doesn't seem to terminate | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): DistilGPT2 & GPT2
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I'm trying to test ```run_language_modeling.py``` on DistilGPT2 using all 8 TPU cores. Running on 1 core executes fine, but when I attempt to run on all 8 cores, it finishes finetuning then gets stuck on "Training completed. Do not forget to share your model on huggingface.co/models =)" and doesn't terminate.
When I check the output directory, I only see two files: config.json and pytorch_model.bin. There should be seven files in the output directory.
I'm running this on a Colab TPU Notebook.
## To reproduce
Steps to reproduce the behavior:
```
VERSION = "nightly" #@param ["1.5" , "20200325", "nightly"]
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
!python pytorch-xla-env-setup.py --version $VERSION
!wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip
!unzip wikitext-2-v1.zip && rm wikitext-2-v1.zip
!pip install transformers
!git clone https://github.com/huggingface/transformers.git
!python transformers/examples/xla_spawn.py --num_cores 8 \
transformers/examples/language-modeling/run_language_modeling.py \
--output_dir=output \
--model_type=distilgpt2 \
--model_name_or_path=distilgpt2 \
--train_data_file=wikitext-2/wiki.train.tokens \
--do_train \
--overwrite_output_dir
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Script terminates and 7 files are in the output folder:
* config.json
* pytorch_model.bin
* tokenizer_config.json
* vocab.json
* merges.txt
* special_tokens_map.json
* training_args.bin
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.9.1
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0a0+83df3be (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: yes
| 05-16-2020 12:03:33 | 05-16-2020 12:03:33 | @jcblaisecruz02
Yes, there's a bug in version 2.9.1 which hangs the trainer. It's been fixed on the master branch. Install from the master branch for TPU training.
See this pull request #4339<|||||>Is it possible to run `run_language_modeling.py` on more than 8 cores when using pytorch and `xla_spawn`?
And what about tensorflow? |
transformers | 4,401 | closed | [TF T5] More coherent naming for inputs | In TF we have to name the first argument of the `call` function "inputs", due to some inner keras logic (I think), see: https://github.com/huggingface/transformers/pull/3547 . Having both names `inputs` and `input_ids` can thus lead to confusion, see #3626 .
This PR adopts the consistent naming `inputs` in the whole file. | 05-16-2020 10:51:57 | 05-16-2020 10:51:57 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4401?src=pr&el=h1) Report
> Merging [#4401](https://codecov.io/gh/huggingface/transformers/pull/4401?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2d184cb553ee20943b03b253f44300e466357871&el=desc) will **increase** coverage by `0.85%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4401?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4401 +/- ##
==========================================
+ Coverage 77.30% 78.15% +0.85%
==========================================
Files 120 120
Lines 20027 20027
==========================================
+ Hits 15481 15652 +171
+ Misses 4546 4375 -171
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4401?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `95.16% <100.00%> (ΓΈ)` | |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.25% <0.00%> (+1.10%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <0.00%> (+1.37%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.73% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.81% <0.00%> (+2.62%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4401?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4401?src=pr&el=footer). Last update [2d184cb...fa80cf3](https://codecov.io/gh/huggingface/transformers/pull/4401?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,400 | closed | BertWordPieceTokenizer cannot be pickled | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
**Bert**
Language I am using the model on (English, Chinese ...):
**English**
The problem arises when using:
* [X] my own modified scripts:
The tasks I am working on is:
* [X] my own task or dataset:
## To reproduce
``` python
import torch
import tokenizers
import pandas as pd
from torch.utils import data
class config:
MAX_LEN = 128
TRAIN_BATCH_SIZE = 64
VALID_BATCH_SIZE = 16
EPOCHS = 5
BERT_PATH = "../input/bert-base-uncased/"
MODEL_PATH = "model.bin"
TRAINING_FILE = "../input/tweet-sentiment-extraction/train_folds.csv"
TOKENIZER = tokenizers.BertWordPieceTokenizer(
f"{BERT_PATH}/vocab.txt",
lowercase=True
)
def process_data(tweet, selected_text, sentiment, tokenizer, max_len):
len_st = len(selected_text)
idx0 = -1
idx1 = -1
for ind in (i for i, e in enumerate(tweet) if e == selected_text[0]):
if tweet[ind: ind+len_st] == selected_text:
idx0 = ind
idx1 = ind + len_st - 1
break
char_targets = [0] * len(tweet)
if idx0 != -1 and idx1 != -1 :
for ct in range(idx0, idx1 + 1):
char_targets[ct] = 1
tok_tweet = tokenizer.encode(tweet)
input_ids_orig = tok_tweet.ids[1:-1]
tweet_offsets = tok_tweet.offsets[1:-1]
target_idx = []
for j, (offset1, offset2) in enumerate(tweet_offsets):
if sum(char_targets[offset1: offset2]) > 0:
target_idx.append(j)
targets_start = target_idx[0]
targets_end = target_idx[-1]
sentiment_id = {
'positive': 3893,
'negative': 4997,
'neutral': 8699
}
input_ids = [101] + [sentiment_id[sentiment]] + [102] + input_ids_orig + [102]
token_type_ids = [0, 0, 0] + [1] * (len(input_ids_orig) + 1)
mask = [1] * len(token_type_ids)
tweet_offsets = [(0, 0)] * 3 + tweet_offsets + [(0, 0)]
targets_start += 3
targets_end += 3
padding_length = max_len - len(input_ids)
if padding_length > 0:
input_ids = input_ids + ([0] * padding_length)
mask = mask + ([0] * padding_length)
token_type_ids = token_type_ids + ([0] * padding_length)
tweet_offsets = tweet_offsets + ([(0, 0)] * padding_length)
return {
'ids': input_ids,
'mask': mask,
'token_type_ids': token_type_ids,
'targets_start': targets_start,
'targets_end': targets_end,
'orig_tweet': tweet,
'orig_selected': selected_text,
'sentiment': sentiment,
'offsets': tweet_offsets
}
class TweetDataset(data.Dataset):
def __init__(self, tweet, sentiment, selected_text):
self.tweet = tweet
self.sentiment = sentiment
self.selected_text = selected_text
self.tokenizer = config.TOKENIZER
self.max_len = config.MAX_LEN
def __len__(self):
return len(self.tweet)
def __getitem__(self, item):
data = process_data(
self.tweet[item],
self.selected_text[item],
self.sentiment[item],
self.tokenizer,
self.max_len
)
return {
'ids': torch.tensor(data["ids"], dtype=torch.long),
'mask': torch.tensor(data["mask"], dtype=torch.long),
'token_type_ids': torch.tensor(data["token_type_ids"], dtype=torch.long),
'targets_start': torch.tensor(data["targets_start"], dtype=torch.long),
'targets_end': torch.tensor(data["targets_end"], dtype=torch.long),
'orig_tweet': data["orig_tweet"],
'orig_selected': data["orig_selected"],
'sentiment': data["sentiment"],
'offsets': torch.tensor(data["offsets"], dtype=torch.long)
}
dfx = pd.read_csv(config.TRAINING_FILE)
fold = 4
df_train = dfx[dfx.kfold != fold].reset_index(drop=True)
df_valid = dfx[dfx.kfold == fold].reset_index(drop=True)
train_dataset = TweetDataset(
tweet=dfx.text.values,
sentiment=dfx.sentiment.values,
selected_text=dfx.selected_text.values
)
train_data_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=config.TRAIN_BATCH_SIZE,
num_workers=1
)
if __name__ =='__main__':
a = enumerate(train_data_loader)
```
## Expected behavior
The enumerate should return the iterable.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
Output of `transformers-cli env`
transformers version: 2.9.1
Platform: Windows-10-10.0.18362-SP0
Python version: 3.8.2
PyTorch version (GPU?): 1.5.0 (True)
Tensorflow version (GPU?): not installed (NA)
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No | 05-16-2020 05:32:49 | 05-16-2020 05:32:49 | If needed I can even provide the dataset (Did not want to clutter):
the error stacktrace :
```
Traceback (most recent call last):
File "error.py", line 129, in <module>
a = enumerate(train_data_loader)
File "C:\Users\admin\miniconda3\envs\machine_learning\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\admin\miniconda3\envs\machine_learning\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__
w.start()
File "C:\Users\admin\miniconda3\envs\machine_learning\lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
File "C:\Users\admin\miniconda3\envs\machine_learning\lib\multiprocessing\context.py", line 224, in _Popen
Traceback (most recent call last):
File "<string>", line 1, in <module>
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\admin\miniconda3\envs\machine_learning\lib\multiprocessing\context.py", line 326, in _Popen
File "C:\Users\admin\miniconda3\envs\machine_learning\lib\multiprocessing\spawn.py", line 116, in spawn_main
return Popen(process_obj)
exitcode = _main(fd, parent_sentinel) File "C:\Users\admin\miniconda3\envs\machine_learning\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
File "C:\Users\admin\miniconda3\envs\machine_learning\lib\multiprocessing\spawn.py", line 126, in _main
reduction.dump(process_obj, to_child)
self = reduction.pickle.load(from_parent) File "C:\Users\admin\miniconda3\envs\machine_learning\lib\multiprocessing\reduction.py", line 60, in dump
EOFError: Ran out of input
ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle 'Tokenizer' object
```
But when run in kaggle notebook this works perfectly well. (The same script having same tokenizer and transformers version)
@julien-c @sshleifer any help here?<|||||>~Yes, this is fixed by PR #4389 , so you could `pip install -e .` off of that branch.~<|||||>Hi @sshleifer I did these steps :
1. git fetch origin pull/4389/head:temp_fix
2. git checkout temp_fix
3. pip install -e .
still the above fix doesn't seem to work.
By looking into the PR, I am guessing that it is fixed for `MarianTokenizer` and not for `BertWordPieceTokenizer`, which I am using in the above script.
<|||||>Tested with another environment with python = 3.7.7, same issue is observed.<|||||>Just wanted to mention that providing `num_workers = 0` bypasses the problem. So it only fails when multiprocessing is involved. This issue is not only in `BertWordPieceTokenizer`, It also fails with `ByteLevelBPETokenizer` .<|||||>And this probably should be moved to the tokenizer repo @sshleifer to confirm.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
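A commonly used workaround for this kind of failure, sketched below under the assumption that the vocab file path is available, is to build the Rust tokenizer lazily inside each DataLoader worker so the un-picklable `Tokenizer` object never has to cross the process boundary:
```python
import tokenizers
from torch.utils import data

class LazyTokenizerDataset(data.Dataset):
    """Sketch of a pickle-safe pattern: the tokenizer is created on first use
    inside each worker process instead of being pickled from the parent."""

    def __init__(self, texts, vocab_path):
        self.texts = texts
        self.vocab_path = vocab_path
        self.tokenizer = None  # nothing un-picklable is stored yet

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        if self.tokenizer is None:  # first call inside this worker
            self.tokenizer = tokenizers.BertWordPieceTokenizer(self.vocab_path, lowercase=True)
        return self.tokenizer.encode(self.texts[idx]).ids
```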
|
transformers | 4,399 | closed | Pipeline for question generation | # 🚀 Feature request
I can see there are pipelines for question answering, text summarisation and text generation. In my field I'm researching how question generation can be used in education research. I would love to see this pipeline added. I imagine it's a variation of question answering and text summarisation.
The paper 'Question Generation by Transformers' by Kettip Kriangchaivech and Artit Wangperawong provides a good overview of using the SQuAD dataset with questions as the output sequence, given reference questions and contexts.
Artit also has an implementation called text2text https://github.com/artitw/text2text
## Motivation
It would be useful to have an official pipeline part of the hunggingface library for this use case.
## Your contribution
I'm happy to contribute some funds to pay some developers if need be but I don't have enough Python technical expertise to contribute an PRs myself.
| 05-16-2020 05:01:39 | 05-16-2020 05:01:39 | That would be an interesting project.<|||||>I have worked on question generation using T5. I've trained answer aware question generator on SQuAD 2.2 which achieves 41.4641 BLUE1 score and 41.0823 ROUGE_L on the dev set given gold answers.
I've also trained T5 for extracting answers from the text, and written a simple pipeline where the answer generator generates answers and then the answer-aware que generator generates questions with those answers. You can check the demo [here](https://colab.research.google.com/drive/1_2_mS5l29QHI1pXaqa4YLzAO5xm-HmH9?usp=sharing)
I've also trained T5 for direct question generation on Yahoo questions dataset. It generates single question given a context.
I would be happy to contribute to this project.<|||||>@julien-c
Any update on this ?<|||||>We don't have any immediate plan to work on this ourselves but feel free to take a stab<|||||>@julien-c
Okay, I just need your feedback on one thing. I'm not sure if adding a pipeline here will make sense since there are multiple ways to generate questions
1. ans aware
2. generate 1 question directly
3. generate multiple questions simultaneously
Also ans aware model will need an ans extractor/generator or the user will need to supply answers. And different models could process inputs differently.
So does adding pipeline makes sense here ? If not I can just upload the models and provide a inference script and decide next steps with community feedback.
Thank you!<|||||>@patil-suraj I think it makes sense to generate questions that are answer aware as this has more use cases. I also think that questions should not be so narrow that a single word from the context is the answer. Artit's [Text2Text ](https://github.com/artitw/text2text) did a pretty good job. I used this to generate 1,000 random questions from a random context and plan to have them judged by human raters. One of the main issues I saw in these questions is that sometimes the answer is in the question text.
I'll take a look at your demo.
I also think it should generate multiple questions that can be so that some evaluation metric (BLUE1, ROUGE_L and METEOR) can be produced and have them ranked. Then the user can decide what to do with the highest metric questions. There's some research that shows the METEOR correlates better with human judgement, but this was for evaluating machine translation tasks and might not apply here.<|||||>@danielduckworth
Yes answer aware seems to be the best chose right now. And we can generate multiple question if we have multiple answers. If you find the above demo interesting then I'll share the models so you can play with it and then we can decide how to proceed from there.
Also the METEOR score on dev set for demo model is 26.0676<|||||>@patil-suraj Yes that would be great. How do you want to share the models?<|||||>@danielduckworth
I've setup everything in the same [colab](https://colab.research.google.com/drive/1_2_mS5l29QHI1pXaqa4YLzAO5xm-HmH9?usp=sharing). Please have a look.<|||||>Great, thanks. I'll have a play over the next few days and get back to you.
So just to confirm, where do the models come from? Is the base model T5 and it has been tuned using the SQUAD data reference questions, contexts and answers?<|||||>@patil-suraj I've had a quick look, it's very impressive! I've only tried two passages, but the questions are sensible and the answers are more than one word which is better than the Text2Text pipeline. I definitely want to do some more work with this.
First, I'll generate a large set of questions I can get human raters to score to validate whether the quantitative metrics (METEOR etc) correlate with human judgement.
Then I would like to work on some architecture experiments to investigate the following:
1. Can questions be generated where the corresponding generated answer is not explicitly stated in the text? This would require the reader to make connections between information in the text (implied information) and make inferences. I think this could be achieved with some NLTK work but is not ideal as I think the semantics of these questions are best learned from text data rather than expert-system rules.
2. Can additional tuning of the que_gen model be done with other question/answer/context datasets that are of a different text type. For example, Wikipedia is primary factual information texts. But what about discursive texts? Or narrative texts?
Anyway, I'll continue to explore what you have with a developer I work with and maybe we can form repository we can work in with the goal of creating a pipeline for inclusion in the Transformer package.
What do you think?<|||||>> Great, thanks. I'll have a play over the next few days and get back to you.
>
> So just to confirm, where do the models come from? Is the base model T5 and it has been tuned using the SQUAD data reference questions, contexts and answers?
Yes both of the models are t5-base trained on SQuAD<|||||>> @patil-suraj I've had a quick look, it's very impressive! I've only tried two passages, but the questions are sensible and the answers are more than one word which is better than the Text2Text pipeline. I definitely want to do some more work with this.
>
> First, I'll generate a large set of questions I can get human raters to score to validate whether the quantitative metrics (METEOR etc) correlate with human judgement.
>
> Then I would like to work on some architecture experiments to investigate the following:
>
> 1. Can questions be generated where the corresponding generated answer is not explicitly stated in the text? This would require the reader to make connections between information in the text (implied information) and make inferences. I think this could be achieved with some NLTK work but is not ideal as I think the semantics of these questions are best learned from text data rather than expert-system rules.
> 2. Can additional tuning of the que_gen model be done with other question/answer/context datasets that are of a different text type. For example, Wikipedia is primary factual information texts. But what about discursive texts? Or narrative texts?
>
> Anyway, I'll continue to explore what you have with a developer I work with and maybe we can form repository we can work in with the goal of creating a pipeline for inclusion in the Transformer package.
>
> What do you think?
@danielduckworth
I'm not sure about the first, we will need to run small experiments and see if it can be achieved.
2) Yes, I do think additional fine-tuning on more diverse datasets should improve the results. My goal is to first get factual questions correct and then move to narrative texts.
And sure we can create different repo and take this forward.<|||||>@patil-suraj Thanks for the examples. Would you mind sharing your fine-tuning code used to train the model as well? <|||||>Sure. I'm planning to release model as well as the fine-tuning code. I'll comment here once I do that.<|||||>@danai-antoniou Thanks for the wonderful suggestion.
@patil-suraj, I also played your colab code, and it looks super great. I look forward to the release.<|||||>@danielduckworth, I am looking into recent works in Question Generation especially using Transformer based architecture leveraging fine-tuning for small data sets. This thread is very interesting to me.
> Can questions be generated where the corresponding generated answer is not explicitly stated in the text? This would require the reader to make connections between information in the text (implied information) and make inferences. I think this could be achieved with some NLTK work but is not ideal as I think the semantics of these questions are best learned from text data rather than expert-system rules.
I have some previous experience in generating non-factoid questions from a text, with the goal of having descriptive answers rather than quiz-like. In my project, the data was not sufficient for DL models, even for fine-tuning.
We had done some user studies on generated questions as well, and found out METEOR is better correlated with human judgment on how reasonable or well-formed the questions are rather than BLEU or ROUGE scores.
For extracting relations in text, to capture more complex answers, Semantic Role Labeling (SRL) + Dependency parse tree of the text might be useful for extracting some descriptive answers. I used a tool called ClearNLP to do that.
@patil-suraj, I also checked out your Collab demo and the questions generated are looking good, much better than other models I worked with so far. Great job. Definitely looking forward to the release of the models and knowing more about the fine-tuning process.
For non-factoid questions, there is room for improvement.<|||||>
Hi @emadg, @hunkim Thank you for your interest! :)
@emadg
>For extracting relations in text, to capture more complex answers, Semantic Role Labeling (SRL) + Dependency parse tree of the text might be useful for extracting some descriptive answers. I used a tool called ClearNLP to do that.
This sure sounds like a good idea. My current goal is to have a end-to-end model for generating questions and answers.<|||||>> For non-factoid questions, there is room for improvement.
For those looking for less factual questions, I was actually able to get some reasonable results with a T5 pre-trained for query prediction, but there's definitely room for improvement.
https://github.com/castorini/docTTTTTquery<|||||>Hey people, I've setup few experiments for question generation. Let me know if anyone wants to collaborate on this, I would really appreciate some help and maybe some multi-gpu compute. Everything will be open sourced after the experiments are finished.
Thank you! <|||||>This is very interesting to me. I'm writing a master's thesis over the summer, working on transfer learning for question generation. I don't have much experience with contributing to large pre-existing frameworks like this but would definitely be happy to contribute wherever I can.
@patil-suraj
I had a look at your notebook. This is very impressive! Looking forward to seeing the fine-tuning process to get an idea of how this can be done using the transformer framework. <|||||>@patil-suraj
I would like to help. Let me know if we can collaborate. Although I have somewhat limited time to contribute.<|||||>Hi @emadg and @vegarab, thank you for your interest,
My goal is to do open source study on que generation. Here's what I have planned
For ans aware que generation we usually need 3 models
first which will extract ans like spans
second model will generate question on that answer
and third will be a QA model which will take the question and produce an answer,
then we can compare the two answers to see if the generated question is correct or not.
Having 3 models for single task is lot of complexity, so goal is to create a multi-task model which can do all of these 3 tasks
1. extract ans like spans
2. generate question based on the answer
3. QA
Also I want to see if we can generate multiple questions end-to-end without answers.
Another experiment is generating non-factoid questions. First we need to find a right dataset for this.
I've trained a t5-small model in a multi-task way and it's giving really good results, so now I want to train more models (t5-base, t5-large, bart-base, bart-large, bert-2-bert) and see if they improve the results.
I've also trained t5-small and t5-base for end-2-end QG and that too is giving interesting results.
So regarding help I'm looking for some compute to train large models, multitask t5-small took 10hrs on single V100 GPU. I also want someone to provide rigorous feedback on the work (find out mistakes, asses quality of questions etc) and help with creating a write-up for study.<|||||>Hi all! @patil-suraj really great work! I am using your approach and seems to be working very well.
I have one question though, when fine-tuning the models, did you use a case or an uncased model? Because when giving too much uppercase text as context, it's generating questions partially or totally in uppercase and are much worse. It seems to be a cased model because when lowercasing the text, the questions are much better.
Thanks in advance!<|||||>@patil-suraj,
> and third will be a QA model which will take the question and produce an answer
I think using a QA model to evaluate the generated Question is an interesting approach. Why should do this instead of evaluating with BLEU or METEOR score?
> So regarding help I'm looking for some compute to train large models, multitask t5-small took 10hrs on single V100 GPU.
Can we use GCP for the compute? if it is not going to cost a lot. I don't have GPUs myself.
> I also want someone to provide rigorous feedback on the work (find out mistakes, asses quality of questions etc) and help with creating a write-up for study.
I think I can spend some time and provide feedback about the work. Although, I need to catch up with the details related to T5 model and the fine-tuning method used here.<|||||>Interesting thread @patil-suraj I had the similar thoughts on multi task training
- Finetune the model combining the data for both question generation & answering(one example is **context:c1 answer: a1 ---> question : q1** & another example context:c1 question : q1 ----> answer:a1)
- Way to generate multiple questions is either using topk and topp sampling or using multiple beams.
<|||||>Hey everyone,
here's a sneak peek of whats coming, everything will be available by the end of this week. stay tuned !
<|||||>@santhoshkolloju
> * Way to generate multiple questions is either using topk and topp sampling or using multiple beams.
Yes, this is what I have tried in another model.
Hi @emadg
> Why should do this instead of evaluating with BLEU or METEOR score?
BLEU and METEOR can be used to evaluate the model when you have the original reference questions, but at inference time how can we decide whether the generated question is correct (makes sense, has an answer) without the original question? Which is why we use the QA model.
>I think I can spend some time and provide feedback about the work. Although, I need to catch up with the details related to T5 model and the fine-tuning method used here.
You can start your analysis once I make it available. Human feedbacks will be most valuable. Thanks !
<|||||>Hi all, tagging everyone for notification
@emadg , @vegarab , @gabisurita , @hunkim , @ugmSorcero , @santhoshkolloju .
Happy to finally release the project. You can find everything in [this repo](https://github.com/patil-suraj/question_generation).
[All models](https://huggingface.co/models?filter=question-generation) are available on hub with configured inference API. You can search using question-generation tag.
Hereβs a [colab](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb) if anyone wants to play more with it.<|||||>@patil-suraj Thanks!!<|||||>Any thoughts about sentence-aware question generation techniques?
1) For each context from Squad dataset, extract the sentence where the answer is present and provide the triplet (context, sentence, question) as model inputs for training. This would decouple our necessity during inference to select an answer from the context since we can now randomly select an input sentence (or all the sentence) from the context and generate a question for each sentence from the context.
P(Question | Content, Sentence) would be the objective. <|||||>Hi @kaushalshetty , you can try this with the current model as well, try this [model](https://huggingface.co/valhalla/t5-base-qg-hl) and instead of highlighting a span highlight the entire sentence and see what you get <|||||>It's finally here! Really great work @patil-suraj !!!
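To make that suggestion concrete, here is a minimal sketch of querying that checkpoint; the `generate question:` prefix and the `<hl>` highlight tokens are assumptions based on this discussion, so check the model card for the exact input format:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("valhalla/t5-base-qg-hl")
model = T5ForConditionalGeneration.from_pretrained("valhalla/t5-base-qg-hl")

# Highlight the answer span (or, per the suggestion above, a whole sentence).
text = "generate question: <hl> 42 <hl> is the answer to life, the universe and everything."
input_ids = tokenizer.encode(text, return_tensors="pt")
outputs = model.generate(input_ids, max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```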
I have one question:
When generating questions given answers and context, you try to find the answer within the context in order to highlight it, but there might be two problems with this approach:
- First, with the first model that you released for generating answers, sometimes I found that the generated answer was not corresponding to the original text, thus this approach would not actually find the answer within the context. But I don't know if the new model always generates answers that appear exactly in the same way than in the original text
- Second, what if an answer appears several times in the same sentence?
I don't know if you have thought about this, but maybe in these cases it would be better to take the approach to append the context after the answers.
Tell me what you think
Thanks!!<|||||>Thanks @ugmSorcero !
1. Yes, I have also observed these issues, see issue patil-suraj/question_generation#11 The initial goal of the project was to generate reading comprehension style questions like SQuAD where answers will always be in the text and I wanted to train BERT like model for answer extraction modelled as a span extraction task. But I wanted to keep everything as simple as possible and the text-to-text approach gave good results, decided to do it using T5. Only in few cases it produces answers that are somewhat different from the context text. See the issue for details.
2. This is the exact reason I choose the highlighting approach. If you simply prepend the answer and if that answer occurs multiple times in text then the model might get confused. When extracting the answer, each sentence is highlighted and then answers are extracted only for that sentence and it's less likely that the same answer will occur several times in the same sentence, so this takes care of the issue.
I've added one model which uses the prepend approach. You can train more if you want using the prepend approach, it'll just take two commands!
Feel free to raise an issue [here](https://github.com/patil-suraj/question_generation) if you want to discuss more. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>For future reference, my pipeline using doc2query is published on https://github.com/unicamp-dl/corpus2question.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,398 | closed | Trainer is missing sampler.set_epoch for distributed mode | # 🐛 Bug
## Information
`train_dataloader.sampler.set_epoch(epoch)` is missing before the start of each epoch in [trainer.py](https://github.com/huggingface/transformers/blob/3e0f06210646a440509efa718b30d18322d6a830/src/transformers/trainer.py#L406).
According to [here](https://pytorch.org/docs/stable/data.html#torch.utils.data.distributed.DistributedSampler) :
> In distributed mode, calling the `set_epoch` method is needed to make shuffling work; each process will use the same random seed otherwise.
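For reference, a minimal sketch of the standard PyTorch pattern being requested — it assumes the process group has already been initialized by the usual distributed launcher:
```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

# Toy dataset standing in for the real training set.
dataset = TensorDataset(torch.arange(100).unsqueeze(1))
sampler = DistributedSampler(dataset)  # requires torch.distributed to be initialized
loader = DataLoader(dataset, batch_size=8, sampler=sampler)

for epoch in range(3):
    sampler.set_epoch(epoch)  # without this, every epoch reuses the same shuffle
    for batch in loader:
        pass  # training step goes here
```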
| 05-16-2020 01:14:40 | 05-16-2020 01:14:40 | Good spot! I think @julien-c's latest PR (https://github.com/huggingface/transformers/pull/4243) for distributed eval will also take care of this! |
transformers | 4,397 | closed | Training TFBertForQuestionAnswering on custom SquadV1 data | Hello.
TLDR: Is there any minimal code that trains a TFBertForQuestionAnswering on custom squad-v1 data (not from `nlp.load_dataset`)?
I've tried in several ways and encountered some problems.
This is the minimal code i'm trying to activate:
```python
args = argparse.Namespace(**bert_config)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
processor = SquadV1Processor()
# processor = SquadV2Processor()
examples = processor.get_train_examples(args.data_dir, filename=args.train_file)
train_dataset = squad_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=args.max_seq_length,
doc_stride=args.doc_stride,
max_query_length=args.max_query_length,
is_training=True,
return_dataset="tf"
)
model = TFBertForQuestionAnswering.from_pretrained("bert-base-cased")
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=opt,
loss={'start_position': loss_fn, 'end_position': loss_fn},
loss_weights={'start_position': 1., 'end_position': 1.},
metrics=['accuracy'])
# Now let's train our model
try:
history = model.fit(train_dataset, epochs=1, steps_per_epoch=3)
except Exception as ex:
print(f"Failed using fit, {ex}")
history = model.fit_generator(train_dataset, epochs=1, steps_per_epoch=3)
```
The current errors are:
with fit:
```python
x = standardize_function(x)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 660, in standardize_function
standardize(dataset, extract_tensors_from_dataset=False)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2360, in _standardize_user_data
self._compile_from_inputs(all_inputs, y_input, x, y)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2580, in _compile_from_inputs
target, self.outputs)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 1341, in cast_if_floating_dtype_and_mismatch
if target.dtype != out.dtype:
AttributeError: 'str' object has no attribute 'dtype'
```
with fit_generator:
```python
ValueError: Unknown entries in loss dictionary: ['start_position', 'end_position']. Only expected following keys: ['output_1', 'output_2']
```
The dataset that returns from squad_convert_examples_to_features is of type- `tensorflow.python.data.ops.dataset_ops.FlatMapDataset` and i'm not sure how to change it's columns from start_position to output_1 and end_position to output_2. I've also asked it on stackoverflow: https://stackoverflow.com/questions/61830361/how-the-change-column-name-in-tensorflow-flatmapdataset
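One possible way to do the renaming directly on the `tf.data` dataset is sketched below; it assumes the `(features, labels)` tuple structure and the label key names reported in the error above:
```python
def rename_labels(features, labels):
    # Map the label keys produced by squad_convert_examples_to_features onto the
    # output names Keras expects. The key names are assumptions taken from the
    # error message above.
    return features, {
        "output_1": labels["start_position"],
        "output_2": labels["end_position"],
    }

# train_dataset is the FlatMapDataset returned by squad_convert_examples_to_features above.
renamed_dataset = train_dataset.map(rename_labels)
```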
I've seen the colab tutorial of the nlp package. It has simple code:
```python
train_tf_dataset = nlp.load_dataset('squad', split="train")
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
def convert_to_tf_features(example_batch):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = list(zip(example_batch['context'], example_batch['question']))
encodings = tokenizer.batch_encode_plus(input_pairs, pad_to_max_length=True)
# Compute start and end tokens for labels using Transformers's fast tokenizers alignement methods.
start_positions, end_positions = [], []
for i, (context, answer) in enumerate(zip(example_batch['context'], example_batch['answers'])):
start_idx, end_idx = get_correct_alignement(context, answer)
start_positions.append([encodings.char_to_token(i, start_idx)])
end_positions.append([encodings.char_to_token(i, end_idx-1)])
if start_positions and end_positions:
encodings.update({'start_positions': start_positions,
'end_positions': end_positions})
return encodings
train_tf_dataset = train_tf_dataset.map(convert_to_tf_features, batched=True)
columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']
train_tf_dataset.set_format(type='tensorflow', columns=columns)
features = {x: train_tf_dataset[x] for x in columns[:3]}
labels = {"output_1": train_tf_dataset["start_positions"]}
labels["output_2"] = train_tf_dataset["end_positions"]
tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)
# Let's load a pretrained TF2 Bert model and a simple optimizer
from transformers import TFBertForQuestionAnswering
model = TFBertForQuestionAnswering.from_pretrained("bert-base-cased")
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=opt,
loss={'output_1': loss_fn, 'output_2': loss_fn},
loss_weights={'output_1': 1., 'output_2': 1.},
metrics=['accuracy'])
# Now let's train our model
model.fit(tfdataset, epochs=1, steps_per_epoch=3)
```
I can't do the same as this code, because the dataset here is of type - `nlp.arrow_dataset.Dataset`.
I've tried to convert my `tensorflow.python.data.ops.dataset_ops.FlatMapDataset` to `nlp.arrow_dataset.Dataset` (and then mimic the last code here) but didn't find suitable way.
Edit:
I've succeeded to change the names of the output in the `FlatMapDataset` to output_1 and output_2, and now I receive the following error:
```
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: logits and labels must have the same first dimension, got logits shape [384,1] and labels shape [1]
[[node loss/output_1_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits (defined at /yonatab/ZeroShot/transformers_experiments/src/minimal_example_for_git.py:53) ]]
[[Reshape_820/_546]]
(1) Invalid argument: logits and labels must have the same first dimension, got logits shape [384,1] and labels shape [1]
[[node loss/output_1_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits (defined at /yonatab/ZeroShot/transformers_experiments/src/minimal_example_for_git.py:53) ]]
```
How can I create a tf dataset with `squad_convert_examples_to_features` (and return type `tf`) and train a TF model on it?
Thanks | 05-16-2020 00:59:19 | 05-16-2020 00:59:19 | I succeeded to do it somehow, but i'm sure it's not the way it should work, and it won't scale well for large datasets. I would be happy to know if there is a better way.
What worked:
1. squad_convert_examples_to_features ( return_dataset = False) - getting the features
2. Creating a dictionary of features and labels, where each item is list of tensorflow vectors obtained by `tf.convert_to_tensor`
3. Constructing the dataset with `tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)`
4. Training with `fit_generator` method (`fit` fails)
Full code:
```python
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
processor = SquadV1Processor()
# processor = SquadV2Processor()
examples = processor.get_train_examples(args.data_dir, filename=args.train_file)
train_dataset = squad_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=args.max_seq_length,
doc_stride=args.doc_stride,
max_query_length=args.max_query_length,
is_training=True
)
def create_features_and_labels_tf_tensors_from_dataset(train_dataset):
all_input_ids = []
all_token_type_ids = []
all_attention_mask = []
all_start_pos = []
all_end_pos = []
ex: SquadFeatures
for ex in train_dataset:
all_input_ids.append(ex.input_ids)
all_token_type_ids.append(ex.token_type_ids)
all_attention_mask.append(ex.attention_mask)
all_start_pos.append(ex.start_position)
all_end_pos.append(ex.end_position)
all_input_ids_tensor = tf.convert_to_tensor(all_input_ids)
all_token_type_ids_tensor = tf.convert_to_tensor(all_token_type_ids)
all_attention_mask_tensor = tf.convert_to_tensor(all_attention_mask)
all_start_pos_tensor = tf.convert_to_tensor(all_start_pos)
all_end_pos_tensor = tf.convert_to_tensor(all_end_pos)
features = {'input_ids': all_input_ids_tensor, 'token_type_ids': all_token_type_ids_tensor,
'attention_mask': all_attention_mask_tensor}
labels = {"output_1": all_start_pos_tensor, 'output_2': all_end_pos_tensor}
return features, labels
features, labels = create_features_and_labels_tf_tensors_from_dataset(train_dataset)
tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)
model = TFBertForQuestionAnswering.from_pretrained("bert-base-cased")
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=opt,
loss={'output_1': loss_fn, 'output_2': loss_fn},
loss_weights={'output_1': 1., 'output_2': 1.},
metrics=['accuracy'])
# Now let's train our model
try:
history = model.fit(tfdataset, epochs=1, steps_per_epoch=3)
print(f'Success with fit')
except Exception as ex:
traceback.print_exc()
print(f"Failed using fit, {ex}")
history = model.fit_generator(tfdataset, epochs=1, steps_per_epoch=3)
print(f'Success with fit_generator')
print("Done")
```
Error message for `fit`:
```python
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/minimal_example_for_git.py", line 73, in main
history = model.fit(tfdataset, epochs=1, steps_per_epoch=3)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 819, in fit
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 235, in fit
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 593, in _process_training_inputs
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 706, in _process_inputs
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py", line 702, in __init__
x = standardize_function(x)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 660, in standardize_function
standardize(dataset, extract_tensors_from_dataset=False)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2360, in _standardize_user_data
self._compile_from_inputs(all_inputs, y_input, x, y)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2580, in _compile_from_inputs
target, self.outputs)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 1341, in cast_if_floating_dtype_and_mismatch
if target.dtype != out.dtype:
AttributeError: 'str' object has no attribute 'dtype'
Failed using fit, 'str' object has no attribute 'dtype'
WARNING:tensorflow:From /home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/minimal_example_for_git.py:78: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
Please use Model.fit, which supports generators.
```
It also fails when trying to add `validation_data` to the `fit` function.<|||||>I think it's a bug, so I'm closing this and opening a separate bug issue.
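For readers hitting the same `fit` error: a minimal workaround sketch (an editor's addition, not the resolution of this thread; it assumes the `model`, `tfdataset`, and label keys defined in the code above) is to skip Keras' output-name matching entirely and write a manual training loop:

```python
# Hedged sketch: manual training loop with tf.GradientTape, reusing the objects built above.
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)

for batch_inputs, batch_labels in tfdataset:
    with tf.GradientTape() as tape:
        # TFBertForQuestionAnswering returns (start_logits, end_logits, ...) as a tuple.
        start_logits, end_logits = model(batch_inputs, training=True)[:2]
        loss = loss_fn(batch_labels['output_1'], start_logits) + loss_fn(batch_labels['output_2'], end_logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```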
transformers | 4,396 | closed | Wrong model or tokenizer for MarianMT | While we [can't save](https://github.com/huggingface/transformers/issues/4371) `MarianTokenizer` to a local directory, I found model weights and configs on [this page](https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE), under the `List all files in model` link.
I downloaded these files, but I think these are the wrong configs, because I can't reproduce even the simplest example from [here](https://huggingface.co/transformers/model_doc/marian.html).
This text
```
1) '>>fr<< this is a sentence in english that we want to translate to french',
2) '>>pt<< This should go to portuguese',
3) '>>es<< And this to Spanish'
should become
1) "c'est une phrase en anglais que nous voulons traduire en franΓ§ais",
2) 'Isto deve ir para o portuguΓͺs.',
3) 'Y esto al espaΓ±ol'
```
With the model and configs downloaded from the link above, I get these results:
```
1) "c'est une phrase en anglais que nous voulons traduire en franΓ§ais" (as expected)
2) "Questo deve ir in portughese"
3) "E questo a spagnol"
```
This is definitely wrong; can you help? | 05-16-2020 00:55:21 | 05-16-2020 00:55:21 | Never mind, my mistake.<|||||>What was your mistake?
Is your system Windows?
I am trying to reproduce the colab tutorial from https://blogs.helsinki.fi/language-technology/2020/05/14/helsinkinlp-in-huggingface/ on Windows but I get errors.
<|||||>@R4ZZ3 Hello! I had just kept appending `>>tag<<` to the text in a loop, so my input quickly became a mess of tags. And no Windows, sorry, I can't help there.<|||||>Can anyone help with this issue: #5040? |
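For reference, a short illustration of the fix described above (an editor's sketch, assuming the `MarianTokenizer`/`MarianMTModel` pair for `Helsinki-NLP/opus-mt-en-ROMANCE` from this thread): prepend the language tag once to each fresh source sentence instead of accumulating tags in a loop.

```python
# Hedged sketch: one >>tag<< per source sentence, never appended repeatedly.
sentences = ["This should go to portuguese", "And this to Spanish"]
tags = [">>pt<<", ">>es<<"]
src_text = [f"{tag} {text}" for tag, text in zip(tags, sentences)]

translated = model.generate(**tokenizer.prepare_translation_batch(src_text))
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])
```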
transformers | 4,395 | closed | MarianMT = How to return 5 best candidates for a translation. | This is code for a normal translation, which returns the most probable translation. How can I return, let's say, the 5 best candidates for every single word (beam size would be 1)?
The model returns only the best word, which gives us a better translation, but I want to use it as a language model.
Is this even possible? I was looking at the classes and the code, but I am not sure how I would do it.
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
'>>fr<< this is a sentence in english that we want to translate to french',
]
model_name = 'Helsinki-NLP/opus-mt-en-ROMANCE'
tokenizer = MarianTokenizer.from_pretrained(model_name)
print(tokenizer.supported_language_codes)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer.prepare_translation_batch(src_text))
tgt_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```
| 05-15-2020 23:19:27 | 05-15-2020 23:19:27 | That is available on this page - https://marian-nmt.github.io/faq:
Can I generate n-best lists?
Yes. Just use `--n-best` and set `--beam-size 6` for an n-best list size of 6.
I do not know how to apply it here.<|||||>That feature is not supported in our implementation, unfortunately.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>You can try passing `num_return_sequences=5` to generate, but that behavior is untested, and I have never used it.<|||||>> You can try passing `num_return_sequences=5` to generate, but that behavior is untested, and I have never used it.
However, do not forget to set the beam size accordingly: `num_beams` must be at least as large as the desired number of alternative results. |
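Putting the two suggestions above together, a minimal sketch (an editor's addition, untested, assuming the en-ROMANCE tokenizer/model from the question) would look like:

```python
# Hedged sketch: ask generate() for 5 beams and 5 returned sequences to get an n-best list.
batch = tokenizer.prepare_translation_batch(
    ['>>fr<< this is a sentence in english that we want to translate to french']
)
candidates = model.generate(**batch, num_beams=5, num_return_sequences=5)
for candidate in candidates:
    print(tokenizer.decode(candidate, skip_special_tokens=True))
```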
transformers | 4,394 | closed | the special token of XLNet | Hello,
May I ask whether the special tokens of XLNet are the same as BERT's, i.e. '[CLS]' and '[SEP]'? I found that the special tokens of XLNet are '<cls>' and '<sep>' in the original code, yet many public introductions to XLNet still use the BERT-style tokens '[CLS]' and '[SEP]'. Is that OK? Are they the same, so that it doesn't matter? | 05-15-2020 21:48:59 | 05-15-2020 21:48:59 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
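A quick way to settle this (an editor's sketch; the checkpoint name is just an example) is to ask the tokenizer itself, since XLNet defines its own special tokens and places `<sep>` and `<cls>` at the end of the sequence rather than using BERT's `[CLS]`-first layout:

```python
# Minimal check of XLNet's special tokens via the tokenizer attributes.
from transformers import XLNetTokenizer

tok = XLNetTokenizer.from_pretrained("xlnet-base-cased")
print(tok.cls_token, tok.sep_token)  # prints: <cls> <sep>
```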
|