repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 5,800 | closed | GPT2 weights don't initialize from checkpoint | OS: OSX 10.15.5 (Catalina)
Transformers version 3.0.2
I'm running into this warning when I'm trying to initialize a pre-trained GPT2 model.
This looks a bit worrying to me because it looks like it ignores all the pre-trained attention heads, or am I missing something here?
<img width="1087" alt="Screenshot 2020-07-16 at 11 19 10" src="https://user-images.githubusercontent.com/8694790/87654081-b837c900-c756-11ea-925e-106314ad9942.png">
Any idea of what's gone wrong? I've been running the same code on another computer earlier without encountering this problem, but on my current setup I haven't been able to get around it. I also tried deleting the downloaded files from the cache and re-loading the models with no luck.
Any help is very appreciated! | 07-16-2020 09:29:03 | 07-16-2020 09:29:03 | Hey @almaLindborg,
This is a known warning. We should probably disable this...a couple of other models have it as well. But the model works as intended, so no worries about the message.
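For anyone who wants the message gone while they work, here is a minimal sketch, not from this thread, of silencing the loading logger and sanity-checking that the pretrained weights really did load (the logger name and the test prompt are assumptions of mine, not something the maintainers suggested):
```python
import logging

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# The message comes from the standard-library logger used by the weight-loading code,
# so raising its level hides it without touching anything else.
logging.getLogger("transformers.modeling_utils").setLevel(logging.ERROR)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Rough sanity check: a randomly initialized model would not continue this sensibly.
inputs = tokenizer("The Eiffel Tower is located in the city of", return_tensors="pt")
output_ids = model.generate(inputs["input_ids"], max_length=15)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```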
Also pinging @sshleifer for notification. |
transformers | 5,799 | closed | Issue when loading pretrained weights | I got the following error when running
AutoModelWithLMHead.from_pretrained("bert-base-chinese")
OSError: Can't load weights for 'bert-base-chinese'. Make sure that:
- 'bert-base-chinese' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'bert-base-chinese' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt. | 07-16-2020 07:43:38 | 07-16-2020 07:43:38 | Cannot reproduce. The command:
```python
from transformers import AutoModelWithLMHead
model = AutoModelWithLMHead.from_pretrained("bert-base-chinese")
```
works fine on master. Can you update to v3.0.2 `pip install --upgrade transformers` and check again? :-) <|||||>> Cannot reproduce. The command:
>
> ```python
> from transformers import AutoModelWithLMHead
> model = AutoModelWithLMHead.from_pretrained("bert-base-chinese")
> ```
>
> works fine on master. Can you update to v3.0.2 `pip install --upgrade transformers` and check again? :-)
Still doesn't work for me. I tried to download the weights directly, but another error occurred...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte<|||||>Can you please post your environment info here `python src/transformers/commands/env.py`?<|||||>I'm seeing the same error when trying to load a GPT2 checkpoint model (using both `GPT2Model` and `AutoModel`):
```
model = GPT2Model.from_pretrained('./test_01/test_01.index', from_tf=True) # throws UnicodeDecodeError
model = GPT2Model.from_pretrained('./test_01/test_01.index') # throws UnicodeDecodeError
model = AutoModel.from_pretrained('./test_01/test_01.index', from_tf=True) # throws UnicodeDecodeError
model = AutoModel.from_pretrained('./test_01/test_01.index') # throws UnicodeDecodeError
```
I could probably try every possible variation of loading that model and hit the same error.
I've also used checkpoint models that, in theory, should work.
If I use `GPT2LMHeadModel.from_pretrained('gpt2-medium')` (or anything that allows me to load a model by name) it works fine.
My env:
```
- `transformers` version: 3.0.2
- Platform: macOS-10.15.5-x86_64-i386-64bit
- Python version: 3.8.3
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Not sure
```
<|||||>@elbowdonkey,
can you try just running:
```python
model = GPT2Model.from_pretrained("./test_01/", from_tf=True)
```
where the relevant files can be found in `test_01`?<|||||>I get a different error:
```python
model = GPT2Model.from_pretrained("./test_02/", from_tf=True)
2020-08-10 12:32:10.047629: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-08-10 12:32:10.064526: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f8eb76b8f80 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-08-10 12:32:10.064543: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-08-10 12:32:10.071086: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
2020-08-10 12:32:27.584426: W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open ./test_02/pytorch_model.bin: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/--/.pyenv/versions/3.8.3/lib/python3.8/site-packages/transformers/modeling_utils.py", line 808, in from_pretrained
model = load_tf2_checkpoint_in_pytorch_model(model, resolved_archive_file, allow_missing_keys=True)
File "/Users/--/.pyenv/versions/3.8.3/lib/python3.8/site-packages/transformers/modeling_tf_pytorch_utils.py", line 261, in load_tf2_checkpoint_in_pytorch_model
tf_model.load_weights(tf_checkpoint_path, by_name=True)
File "/Users/--/.pyenv/versions/3.8.3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 2204, in load_weights
with h5py.File(filepath, 'r') as f:
File "/Users/--/.pyenv/versions/3.8.3/lib/python3.8/site-packages/h5py/_hl/files.py", line 406, in __init__
fid = make_fid(name, mode, userblock_size,
File "/Users/--/.pyenv/versions/3.8.3/lib/python3.8/site-packages/h5py/_hl/files.py", line 173, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 88, in h5py.h5f.open
OSError: Unable to open file (file signature not found)
```
The model I'm trying to use is a model that was converted from a checkpoint to a pytorch model. I have no idea what kind of checkpoint model it was (it has several files: `checkpoint` and `vocab.bpe`, `hparams.json`, and `test_02.data-00000-of-00001`, among others.)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
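For readers with the same file layout: a `checkpoint` index plus `hparams.json` and `*.data-*` files looks like an original TF 1.x GPT-2 training checkpoint, which is not what the `tf_model.h5` loading path above expects (hence the h5py "file signature not found" error). A hedged sketch of the conversion route one would usually try first — the paths are placeholders and this assumes the files really are an OpenAI-style GPT-2 checkpoint:
```python
# Sketch only: convert the original TF 1.x GPT-2 checkpoint to a PyTorch checkpoint,
# then load it without from_tf. The paths below are placeholders.
from transformers.convert_gpt2_original_tf_checkpoint_to_pytorch import (
    convert_gpt2_checkpoint_to_pytorch,
)

convert_gpt2_checkpoint_to_pytorch(
    "./test_02",      # gpt2_checkpoint_path: folder holding the `checkpoint` index files
    "",               # gpt2_config_file: empty string falls back to the default GPT-2 config
    "./test_02_pt",   # pytorch_dump_folder_path: pytorch_model.bin + config.json go here
)

from transformers import GPT2Model

model = GPT2Model.from_pretrained("./test_02_pt")
```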
<|||||>Hi,
I would like to request that this ticket be opened back up. I'm having the same issue but with the default pretrained model:
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
~\Anaconda3\lib\site-packages\transformers\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
654 if resolved_archive_file is None:
--> 655 raise EnvironmentError
656 except EnvironmentError:
OSError:
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-81-7e9fe0224671> in <module>
3 #Load AutoModel from huggingface model repository
4 tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens")
----> 5 model = AutoModel.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens", from_tf=True)
~\Anaconda3\lib\site-packages\transformers\modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
500 for config_class, model_class in MODEL_MAPPING.items():
501 if isinstance(config, config_class):
--> 502 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
503 raise ValueError(
504 "Unrecognized configuration class {} for this kind of AutoModel: {}.\n"
~\Anaconda3\lib\site-packages\transformers\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
660 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a file named one of {WEIGHTS_NAME}, {TF2_WEIGHTS_NAME}, {TF_WEIGHTS_NAME}.\n\n"
661 )
--> 662 raise EnvironmentError(msg)
663
664 if resolved_archive_file == archive_file:
OSError: Can't load weights for 'sentence-transformers/bert-base-nli-mean-tokens'. Make sure that:
- 'sentence-transformers/bert-base-nli-mean-tokens' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'sentence-transformers/bert-base-nli-mean-tokens' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.
```
**To reproduce:**
```
from transformers import AutoTokenizer, AutoModel
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens")
model = AutoModel.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens", from_tf=True)
```
My env:
- `transformers` version: 3.0.2
- Platform: Windows 10 Enterprise, version 1909, 16GB RAM, 64 Bit OS, x64-based processor
- Python version: 3.8.3
- Torch version: 1.6.0+cpu<|||||>Hey @Ecanlilar,
This model exists only in PT so either you do:
```python
from transformers import AutoTokenizer, AutoModel
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens")
model = AutoModel.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens")
```
or
```python
from transformers import AutoTokenizer, TFAutoModel
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens")
model = TFAutoModel.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens", from_pt=True)
```<|||||>> Hey @Ecanlilar,
>
> This model exists only in PT so either you do:
>
> ```python
> from transformers import AutoTokenizer, AutoModel
>
> #Load AutoModel from huggingface model repository
> tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens")
> model = AutoModel.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens")
> ```
>
> or
>
> ```python
> from transformers import AutoTokenizer, TFAutoModel
>
> #Load AutoModel from huggingface model repository
> tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens")
> model = TFAutoModel.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens", from_pt=True)
> ```
This isn't working for me. I'm using the latest version of the transformers library (4.10.2). I'm getting the same error as Ecanlilar. |
transformers | 5,798 | closed | Lightning Updates for v0.8.5 | Fixing #5361 ...battling with unittests. | 07-16-2020 06:04:51 | 07-16-2020 06:04:51 | @sshleifer, do you have any guidance on these two errors?
## T5
```python
________________ test_finetune[patrickvonplaten/t5-tiny-random] ________________
[gw3] linux -- Python 3.6.11 /usr/local/bin/python
model = 'patrickvonplaten/t5-tiny-random'
@pytest.mark.parametrize(
["model"], [pytest.param(T5_TINY), pytest.param(BART_TINY), pytest.param(MBART_TINY), pytest.param(MARIAN_TINY)]
)
def test_finetune(model):
args_d: dict = CHEAP_ARGS.copy()
task = "translation" if model in [MBART_TINY, MARIAN_TINY] else "summarization"
tmp_dir = make_test_data_dir()
output_dir = tempfile.mkdtemp(prefix="output_")
args_d.update(
data_dir=tmp_dir,
model_name_or_path=model,
tokenizer_name=None,
train_batch_size=2,
eval_batch_size=2,
output_dir=output_dir,
do_predict=True,
task=task,
src_lang="en_XX",
tgt_lang="ro_RO",
freeze_encoder=True,
freeze_embeds=True,
)
assert "n_train" in args_d
args = argparse.Namespace(**args_d)
> module = main(args)
examples/seq2seq/test_seq2seq_examples.py:233:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
examples/seq2seq/finetune.py:298: in main
model: SummarizationModule = SummarizationModule(args)
examples/seq2seq/finetune.py:95: in __init__
freeze_params(self.model.model.encoder) # TODO: this will break for t5
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = T5ForConditionalGeneration(
(shared): Embedding(32128, 64)
(encoder): T5Stack(
(embed_tokens): Embedding(32128...
(dropout): Dropout(p=0.1, inplace=False)
)
(lm_head): Linear(in_features=64, out_features=32128, bias=False)
)
name = 'model'
def __getattr__(self, name):
if '_parameters' in self.__dict__:
_parameters = self.__dict__['_parameters']
if name in _parameters:
return _parameters[name]
if '_buffers' in self.__dict__:
_buffers = self.__dict__['_buffers']
if name in _buffers:
return _buffers[name]
if '_modules' in self.__dict__:
modules = self.__dict__['_modules']
if name in modules:
return modules[name]
raise AttributeError("'{}' object has no attribute '{}'".format(
> type(self).__name__, name))
E AttributeError: 'T5ForConditionalGeneration' object has no attribute 'model'
/usr/local/lib/python3.6/site-packages/torch/nn/modules/module.py:594: AttributeError
```
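A minimal editorial sketch (the tiny checkpoint name is taken from the test above; the helper and its use are assumptions of mine, not the fix that actually landed) of a model-agnostic way to freeze the encoder, since T5 exposes its encoder directly rather than under a `.model` attribute:
```python
from transformers import T5ForConditionalGeneration

def freeze_params(module):
    for param in module.parameters():
        param.requires_grad = False

model = T5ForConditionalGeneration.from_pretrained("patrickvonplaten/t5-tiny-random")
# get_encoder() is defined on both BART-style and T5-style conditional-generation
# models, so freezing through it avoids assuming a `.model` attribute exists.
freeze_params(model.get_encoder())
assert not any(p.requires_grad for p in model.get_encoder().parameters())
```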
## MBart
```python
_____________________ test_finetune[sshleifer/tiny-mbart] ______________________
[gw3] linux -- Python 3.6.11 /usr/local/bin/python
model = 'sshleifer/tiny-mbart'
@pytest.mark.parametrize(
["model"], [pytest.param(T5_TINY), pytest.param(BART_TINY), pytest.param(MBART_TINY), pytest.param(MARIAN_TINY)]
)
def test_finetune(model):
args_d: dict = CHEAP_ARGS.copy()
task = "translation" if model in [MBART_TINY, MARIAN_TINY] else "summarization"
tmp_dir = make_test_data_dir()
output_dir = tempfile.mkdtemp(prefix="output_")
args_d.update(
data_dir=tmp_dir,
model_name_or_path=model,
tokenizer_name=None,
train_batch_size=2,
eval_batch_size=2,
output_dir=output_dir,
do_predict=True,
task=task,
src_lang="en_XX",
tgt_lang="ro_RO",
freeze_encoder=True,
freeze_embeds=True,
)
assert "n_train" in args_d
args = argparse.Namespace(**args_d)
> module = main(args)
examples/seq2seq/test_seq2seq_examples.py:233:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
examples/seq2seq/finetune.py:324: in main
logger=logger,
examples/lightning_base.py:312: in generic_train
trainer.fit(model)
/usr/local/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py:1038: in fit
model.setup('fit')
examples/lightning_base.py:125: in setup
dataloader = self.get_dataloader("train", train_batch_size)
examples/seq2seq/finetune.py:193: in get_dataloader
dataset = self.get_dataset(type_path)
examples/seq2seq/finetune.py:188: in get_dataset
**self.dataset_kwargs,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <seq2seq.utils.SummarizationDataset object at 0x7ff21a4592e8>
tokenizer = <transformers.tokenization_bart.MBartTokenizer object at 0x7ff21f7c0b00>
data_dir = PosixPath('/tmp/tmpmc70afs6'), type_path = 'train'
max_source_length = 12, max_target_length = 12, n_obs = None
overwrite_cache = False, prefix = '', src_lang = None, tgt_lang = None
def __init__(
self,
tokenizer,
data_dir,
type_path="train",
max_source_length=1024,
max_target_length=56,
n_obs=None,
overwrite_cache=False,
prefix="",
src_lang=None,
tgt_lang=None,
):
super().__init__()
# FIXME: the rstrip logic strips all the chars, it seems.
tok_name = tokenizer.__class__.__name__.lower().rstrip("tokenizer")
if hasattr(tokenizer, "set_lang") and src_lang is not None:
tokenizer.set_lang(src_lang) # HACK: only applies to mbart
self.source = encode_file(
tokenizer,
os.path.join(data_dir, type_path + ".source"),
max_source_length,
overwrite_cache=overwrite_cache,
prefix=prefix,
tok_name=tok_name,
)
tgt_path = os.path.join(data_dir, type_path + ".target")
if hasattr(tokenizer, "set_lang"):
> assert tgt_lang is not None, "--tgt_lang must be passed to build a translation"
E AssertionError: --tgt_lang must be passed to build a translation
examples/seq2seq/utils.py:112: AssertionError
```<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5798?src=pr&el=h1) Report
> Merging [#5798](https://codecov.io/gh/huggingface/transformers/pull/5798?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/615be03f9d961c0c9722fe10e7830e011066772e&el=desc) will **decrease** coverage by `0.18%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5798?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5798 +/- ##
==========================================
- Coverage 78.66% 78.48% -0.19%
==========================================
Files 146 146
Lines 26200 26200
==========================================
- Hits 20611 20563 -48
- Misses 5589 5637 +48
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5798?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5798/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5798/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.51%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5798/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5798?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5798?src=pr&el=footer). Last update [615be03...ee864a0](https://codecov.io/gh/huggingface/transformers/pull/5798?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Merging this now.
cc @moscow25 this bumps us to `pytorch_lightning==0.8.5`, let us know if any issues.
cc @clmnt , @patil-suraj, @williamFalcon
Thanks for the big PR @nateraw and @williamFalcon !<|||||>Thanks @sshleifer -- `0.8.5` has been good for us this week. Much appreciated. |
transformers | 5,797 | closed | Can I use the pretrained BERT-Base model directly for predict isNextSentence task? | # ❓ Can I use the pretrained BERT-Base model directly for predict isNextSentence task?
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
I have a document-level corpus, but it doesn't have document boundaries.
I want to confirm whether I can run the isNextSentence prediction task with the pretrained BERT-Base model. The model is not fine-tuned on any data, and I don't mask any tokens when using it.
Is the prediction reliable this way? | 07-16-2020 06:02:14 | 07-16-2020 06:02:14 | I have found the answer in the [code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L1138). |
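For readers landing here, a minimal sketch, not from the thread, of what the next-sentence-prediction head referenced above makes possible with no fine-tuning and no masking (the example sentences are invented, and how reliable the score is for a particular corpus still has to be checked empirically):
```python
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased").eval()

sent_a = "The storm knocked out power across the city."
sent_b = "Crews worked overnight to restore electricity."

inputs = tokenizer(sent_a, sent_b, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs)[0]   # shape (1, 2): index 0 = "is next", index 1 = "random"
probs = torch.softmax(logits, dim=-1)
print("P(sentence B follows sentence A) =", float(probs[0, 0]))
```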
transformers | 5,796 | closed | Moving transformers package import statements to relative imports in some files | When using the transformers library as a local submodule (eg git submodule) instead of a python package, it's important to have relative import instead of doing `from transformers` directly which would look at the installed version of the python package. Regardless, it seems like the codebase favors relative imports in general, but it seems a few cases were not added that way.
This pull request moves occurrences in some files under the `src` folder to relative imports, except for comments and the `convert_*` files, which likely import the python package intentionally. | 07-16-2020 05:25:19 | 07-16-2020 05:25:19 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5796?src=pr&el=h1) Report
> Merging [#5796](https://codecov.io/gh/huggingface/transformers/pull/5796?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7214954db42ec96603ea596c5f68b16f574fba89&el=desc) will **increase** coverage by `0.42%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5796?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5796 +/- ##
==========================================
+ Coverage 78.38% 78.80% +0.42%
==========================================
Files 146 146
Lines 26318 26318
==========================================
+ Hits 20629 20741 +112
+ Misses 5689 5577 -112
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5796?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <100.00%> (ø)` | |
| [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <100.00%> (ø)` | |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.90% <100.00%> (ø)` | |
| [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <100.00%> (ø)` | |
| [src/transformers/testing\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `72.72% <100.00%> (ø)` | |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (ø)` | |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |
| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5796?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5796?src=pr&el=footer). Last update [7214954...39e85f8](https://codecov.io/gh/huggingface/transformers/pull/5796?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
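An illustrative before/after of the kind of change this PR makes — the module name is a made-up example, not necessarily one of the files actually touched:
```diff
- from transformers.configuration_utils import PretrainedConfig   # resolves to the installed package
+ from .configuration_utils import PretrainedConfig               # resolves inside the local src/transformers checkout
```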
transformers | 5,795 | closed | LongFormerAttention For AutoRegressive Models | Longformer Currently supports only Bidirectional Attention,It would be a great feature to finetune current language models like GPT-2 on longer sequences . | 07-16-2020 04:05:34 | 07-16-2020 04:05:34 | Yes! This will be added when starting the `Longformer` Encoder framework :-) Closing this in favor of https://github.com/huggingface/transformers/issues/5170 and https://github.com/huggingface/transformers/issues/4225 |
transformers | 5,794 | closed | Print all next tokens of a sentence over a certain probability threshold. | How would I do this using GPT-2? | 07-16-2020 02:38:30 | 07-16-2020 02:38:30 | Hey @zanderbush, we are trying to move special feature requests / research questions to https://discuss.huggingface.co/ - would you mind posting it there again? |
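Since the question itself went unanswered in that thread, here is a hedged sketch of one way to do it; the prompt and the threshold are arbitrary examples:
```python
# List every candidate next token whose probability exceeds a threshold,
# using GPT-2's next-token distribution for the given prompt.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

sentence = "The weather today is"   # example prompt
threshold = 0.02                    # arbitrary example threshold

input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"]
with torch.no_grad():
    logits = model(input_ids)[0]                 # (1, seq_len, vocab_size)
probs = torch.softmax(logits[0, -1], dim=-1)     # distribution over the next token

keep = (probs > threshold).nonzero(as_tuple=True)[0]
for token_id in keep[probs[keep].argsort(descending=True)]:
    print(repr(tokenizer.decode([int(token_id)])), round(float(probs[token_id]), 4))
```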
transformers | 5,793 | closed | Adding the LXMERT pretraining model (MultiModal languageXvision) to HuggingFace's suite of models | LXMERT is a dual-stream language-vision model: it uses a transformer encoder to perform self-attention within each modality and a cross-modality transformer encoder for fine-grained cross-attention between the modalities. It has achieved tremendous (SOTA) success across a wide variety of downstream tasks (GQA, VQA2, NLVR2).
Here is the original link to the paper: https://arxiv.org/pdf/1908.07490.pdf
Here is the link to the original implementation: https://github.com/airsplay/lxmert
and here is the link to the original model weights: https://nlp1.cs.unc.edu/data/model_LXRT.pth
Please let me know if there is anything I missed, and I would be very grateful for any help if I end up running into any blockers, but I will do my best to follow the detailed instructions in the templates.
This is also a work in progress! | 07-16-2020 02:01:34 | 07-16-2020 02:01:34 | Thank you so much!!! I really appreciate you offering to help too!
To cover what you mentioned quickly, I can upload the model weights tomorrow! For updating the documentation, that should be no problem either. I think for the model outputs regarding this model, there probably is quite a bit of information to return (pooled output, hidden states, and attentions for the language, vision, and cross-modality encoders). I can see about adding these tomorrow and get your thoughts on that. And lastly, I am glad I can help with adding more model heads!
I have a bit more testing to do especially with the tensorflow model, but i will see what I can get done and let you know if I run into any blockers or questions. Look forward to getting back to you!<|||||>Hi, sorry for the slight delay. I added a new dataclass for Lxmert outputs, added the model card, finished all tests for the torch model, among a couple of other things.
For the tests, I have forgone the ModelMixin parent as I have found that some of the tests are hard to apply to lxmert. I have also temporarily forgone adding example commands for lxmert. Is this a neccesity, or would it be alright to leave these out?
I think I updated the documentation to the new standard, but if there are still some errors, any help would be appreciated!
I am running a donwstream task with the pretrained weights right now to ensure that the results are still the same! I will get back to you with these in about a day!<|||||>Hi thank you so much for a really quick response and review! In my next commit, Ill implement the following changes and suggestions. If it wouldn't be to much to ask, I think I could actually do with much help adding the mixin tester. Given that there are quite a few tests that seem lxmert seems to be incompatable with (for example I think one of the tests required that the LxmertConfig had the 'num_hidden_layers' attribute, which wouldn't be applicable since I suppose instead, we let the user specify the number of hidden layers for each of the visual, language, and cross-modality encoders. You're judgement on deciding what makes sense with regards to Lxmert test compatibility is probably greater than mine.
Also just one last implementation detail that I should probably bring up is that for the output of the cross-modality encoder attentions, I only output the attentions when the language-hidden states are used as the input to the cross attention. Since this encoder is used for bidirectional attention, I do not store the attentions when the visual-hidden states are used as the input. Also For every layer of the cross attention encoder, each of the visual and language states are further processed by a separate self-attention layer, and I do not keep track of the attention outputs for these either. The only reason I keep track of the language attentions when used as input to the cross-modality attention layer is because it is those hidden states that are used for downstream pooling.
I have yet to change the output for the TFLxmertModeling classes, which I will probably add in the commit following the one that addresses your review, but it probably would save me some time if you were able to cover that and would be greatly appreciated aswell!<|||||>Commited the above changes, I need to do some testing with the conversion script again, but besides that. I will wait to change the tensorflow code and the modeling tests for your response. If I do make a change in my next commit to modeling_tf_lxmert, it will just be to the documentation<|||||>I also just added some functionality to edit the number of question answering labels, simialr to the "resize_output_embeddings" utility in the LxmertPretrainedModel class. However, what I added seems a lot less complex than the process for resizing embeddings, so if you could take a look at these new functions, that would be awesome! I added some tests, and they do seem to work.<|||||>Hi! Thanks for the thorough explanation. I'm adding the common tests to the tests, and will report here. May I push directly on your fork?<|||||>Actually, since I'm bound to make changes to some of the files, I'd like you to review the changes I'm making so that we may discuss. I'll open a PR on your branch when it's in a good state.<|||||>Okay sounds good! the output hidden states argument should be added in
about 30 minutes, and then I can see about adding the TF changes in the
next hour.
On Mon, Aug 10, 2020 at 8:12 AM Lysandre Debut <[email protected]>
wrote:
> Actually, since I'm bound to make changes to some of the files, I'd like
> you to review the changes I'm making so that we may discuss. I'll open a PR
> on your branch when it's in a good state.
>
> —
> You are receiving this because you authored the thread.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/pull/5793#issuecomment-671318378>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/ADLBORYS3OXI4RBT4LHTJLLR77P3FANCNFSM4O3KPDOA>
> .
>
<|||||>Great, I'll take a look at doing the Mixin tomorrow once your changes are up!<|||||>the lxmert model in pytorch should be ready to go!<|||||>Just pushed the same changes for TF as the PyTorch ones in https://github.com/eltoto1219/transformers/pull/1, alongside docs changes and a few patches to the PyTorch version.
I'll take care of the merge commit once everything is done, it's due to `isort==5` and `black==20.8b` being released.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5793?src=pr&el=h1) Report
> Merging [#5793](https://codecov.io/gh/huggingface/transformers/pull/5793?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/930153e7d2d658267b7630a047a4bfc85b86042d?el=desc) will **decrease** coverage by `1.74%`.
> The diff coverage is `79.63%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5793?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5793 +/- ##
==========================================
- Coverage 79.36% 77.62% -1.75%
==========================================
Files 157 161 +4
Lines 28569 29816 +1247
==========================================
+ Hits 22675 23144 +469
- Misses 5894 6672 +778
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5793?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/commands/convert.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9jb252ZXJ0LnB5) | `27.58% <0.00%> (-1.51%)` | :arrow_down: |
| [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <70.01%> (ø)` | |
| [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `88.31% <88.31%> (ø)` | |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.30% <100.00%> (+0.01%)` | :arrow_up: |
| [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.33% <100.00%> (+0.15%)` | :arrow_up: |
| [src/transformers/configuration\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2x4bWVydC5weQ==) | `100.00% <100.00%> (ø)` | |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.85% <100.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.82% <100.00%> (+2.27%)` | :arrow_up: |
| [src/transformers/tokenization\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbHhtZXJ0LnB5) | `100.00% <100.00%> (ø)` | |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-79.30%)` | :arrow_down: |
| ... and [23 more](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5793?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5793?src=pr&el=footer). Last update [930153e...4ed21b4](https://codecov.io/gh/huggingface/transformers/pull/5793?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,792 | closed | Seq2SeqDataset uses linecache to save memory by @Pradhy729 (#5792) | 07-16-2020 01:23:45 | 07-16-2020 01:23:45 | The isort check doesn't fail when I run it on my end. Other than that - I would say this is complete.
<|||||>Awesome, I'm testing it now on WMT english-romanian translation.<|||||>OK, I have it running with many modifications and it works much better. Very little cpu ram wasted!
Is it OK if I make a new PR or do you prefer to add me as a contributor to your fork and so I can push to this branch? either way you will get all the credit in the release notes/pr summary :)
<|||||>Thanks - I'll add you as a contributor in mine. :)<|||||>> Thanks - I'll add you as a contributor in mine. :)
You can just click on "Allow edits from maintainers" on your PR. (in case you didn't know this feature)<|||||>Any updates here? Is it good to go?
<|||||>I'm still working on cleaning up my code. Sorry for the delay. <|||||>Biggest change:
- For MBart Tokenizer, we can't use the `encode_line` approach because there are special tokens all over the place, so I made a separate dataset.
Stylistic:
- Address the `linecache` off by 1 error inside of `__getitem__` instead other places.
- `_get_examples` -> `get_char_lens`.
- `MbartTokenizer` cleanup.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5792?src=pr&el=h1) Report
> Merging [#5792](https://codecov.io/gh/huggingface/transformers/pull/5792?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eae6d8d14f1d25d62c3fe9e7e410607bbaf69787&el=desc) will **increase** coverage by `0.91%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5792?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5792 +/- ##
==========================================
+ Coverage 77.54% 78.46% +0.91%
==========================================
Files 146 146
Lines 26200 26200
==========================================
+ Hits 20318 20559 +241
+ Misses 5882 5641 -241
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5792?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.45% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5792?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5792?src=pr&el=footer). Last update [eae6d8d...79d73ee](https://codecov.io/gh/huggingface/transformers/pull/5792?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,791 | closed | Add script to convert tf2.x checkpoint to PyTorch | The script converts the newer TF2.x checkpoints (as published on their [official GitHub](https://github.com/tensorflow/models/tree/master/official/nlp/bert) to Pytorch. The [existing script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py) only works with checkpoints from the [original BERT repository](https://github.com/google-research/bert) which uses TF 1.4.
The script currently only converts the encoder part (but no MLM/NSP heads). The official checkpoints published by the tensorflow team unfortunately also don't contain these heads. I have written a script which takes care of these, but it does add a fair bit of complexity.
I have tested on my side by comparing all model weights with the official Huggingface version:
```python
from transformers import BertModel
import torch
def validate_model(bert_original, bert_converted):
assert bert_original.num_parameters() == bert_converted.num_parameters()
assert len(bert_original.state_dict()) == len(bert_converted.state_dict())
for (layer_original, value_original), (layer_converted, value_converted) in zip(bert_original.state_dict().items(), bert_converted.state_dict().items()):
assert layer_original == layer_converted
if not torch.eq(value_original, value_converted).all():
raise ValueError(f'Incorrect weights for {layer_original}')
print('Success! Both models are identical!')
if __name__ == "__main__":
validate_against = 'bert-base-uncased'
path_to_converted_model = './converted_bert_base_uncased'
bert_converted = BertModel.from_pretrained(path_to_converted_model)
bert_original = BertModel.from_pretrained(validate_against)
validate_model(bert_original, bert_converted)
```
I'm happy to write some tests for this if needed (and if possible) or have any other input. | 07-15-2020 23:13:14 | 07-15-2020 23:13:14 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5791?src=pr&el=h1) Report
> Merging [#5791](https://codecov.io/gh/huggingface/transformers/pull/5791?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3b924fabeef717be8399f1888280c29c69e9ab00&el=desc) will **decrease** coverage by `0.08%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5791?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5791 +/- ##
==========================================
- Coverage 78.13% 78.05% -0.09%
==========================================
Files 146 146
Lines 26047 26047
==========================================
- Hits 20352 20330 -22
- Misses 5695 5717 +22
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5791?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5791?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5791?src=pr&el=footer). Last update [3b924fa...378b034](https://codecov.io/gh/huggingface/transformers/pull/5791?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Other than that, LGTM! Thanks for your work!<|||||>Great, just renamed it - let me know if anything else should be changed! |
transformers | 5,790 | closed | github issue template suggests who to tag | Fewer issues get lost if people tag the relevant developer. It also saves @LysandreJik time.
While he is out, I figured we could experiment with trying to nudge issue raisers to tag.
Here is the comment at the beginning of the "Bug Report" template:
Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of who to tag.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @julien-c
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
blenderbot: @mariamabarham
Bart: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
tensorflow: @jplu
I wrote it in 3 minutes in case people hate this idea, so I am sure it is missing people!
Suggestions very much appreciated. | 07-15-2020 22:44:41 | 07-15-2020 22:44:41 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5790?src=pr&el=h1) Report
> Merging [#5790](https://codecov.io/gh/huggingface/transformers/pull/5790?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d088d744adb4e5aa45262a34acab3ae9e81de169&el=desc) will **decrease** coverage by `0.84%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5790?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5790 +/- ##
==========================================
- Coverage 78.10% 77.26% -0.85%
==========================================
Files 146 146
Lines 26047 26047
==========================================
- Hits 20344 20125 -219
- Misses 5703 5922 +219
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5790?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5790/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5790/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.01%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5790/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5790/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5790?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5790?src=pr&el=footer). Last update [d088d74...46f7d0f](https://codecov.io/gh/huggingface/transformers/pull/5790?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I like it! Maybe we can also put in **big** that people should add their env information and not insert screenshots - this happens still quite often from what I see<|||||>@sshleifer I think I've committed instead of suggesting an edit as I wasn't a reviewer, sorry, lemme know if it broke anything!<|||||>ok to merge @julien-c ?<|||||>Like this too! Added myself for issues linked to the documentation, feel free to add more my way.<|||||>@sshleifer could you tag me for `examples/token-classification` 🤔 |
transformers | 5,789 | closed | Update README.md | Created PyTorch version of model. Minor update on README. | 07-15-2020 22:28:06 | 07-15-2020 22:28:06 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5789?src=pr&el=h1) Report
> Merging [#5789](https://codecov.io/gh/huggingface/transformers/pull/5789?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3b924fabeef717be8399f1888280c29c69e9ab00&el=desc) will **increase** coverage by `0.11%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5789?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5789 +/- ##
==========================================
+ Coverage 78.13% 78.25% +0.11%
==========================================
Files 146 146
Lines 26047 26047
==========================================
+ Hits 20352 20383 +31
+ Misses 5695 5664 -31
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5789?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5789/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5789/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.01%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5789/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5789/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5789?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5789?src=pr&el=footer). Last update [3b924fa...f284d8c](https://codecov.io/gh/huggingface/transformers/pull/5789?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,788 | closed | add attention_dropout, relu_dropout command line args to lightning_base.py | then pass them to config in `__init__`.
| 07-15-2020 21:35:58 | 07-15-2020 21:35:58 | This is a duplicate. |
transformers | 5,787 | closed | Can't load weights for GPT2 error | ## System Info
Pop!_OS 20.04
Pytorch: 1.5.1
Transformers: 2.11.0
Python: 3.7.6
## Details
I am working behind a proxy. If I run the following:
```python
from transformers import GPT2Tokenizer
proxies = {'http':'http://my.proxy.com:port', 'https':'https://my.proxy.com:port'}
tokenizer = GPT2Tokenizer.from_pretrained("gpt2", proxies=proxies)
```
The tokenizer gets downloaded. However, if I run:
```python
from transformers import GPT2LMHeadModel
proxies = {'http':'http://my.proxy.com:port', 'https':'https://my.proxy.com:port'}
model = GPT2LMHeadModel.from_pretrained("gpt2", proxies=proxies)
```
I get the following error:
```python
Traceback (most recent call last):
File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 638, in from_pretrained
raise EnvironmentError
OSError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/path/to/text_gen_w_transformers/finetune_test.py", line 28, in <module>
model = GPT2LMHeadModel.from_pretrained("gpt2", proxies=proxies)
File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 645, in from_pretrained
raise EnvironmentError(msg)
OSError: Can't load weights for 'gpt2'. Make sure that:
- 'gpt2' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'gpt2' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.
```
Any thoughts about what might be the issue? Thanks in advance for your help! | 07-15-2020 20:20:44 | 07-15-2020 20:20:44 | Hi! This is probably because the error response for a request is silenced. You can place a breakpoint [here](https://github.com/huggingface/transformers/blob/0533cf470659b97c6279bd04f65536a1ec88404a/src/transformers/file_utils.py#L681) and check. Mine was an SSL error, so I set the `REQUESTS_CA_BUNDLE` env var to `/etc/ssl/certs/ca-certificates.crt`<|||||>@festeh Which variable should I be looking at in the breakpoint? If it's `response`, which attribute?<|||||>You can just type `requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)` in the debugger console and check the exception traceback.<|||||>I gave that a shot and got the following:
```python
In[7]: requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)
Out[7]: <Response [200]>
```<|||||>Interesting, probably you have an error around this request then, when you're actually downloading weights
https://github.com/huggingface/transformers/blob/7fad617dc1fc681a7f5da5e0172c8b83f4bf0024/src/transformers/file_utils.py#L678
if you place a breakpoint here, would the program hit it? and if not could you send this request manually?<|||||>I think the program hits it:
```
requests.get(url, stream=True, proxies=proxies, headers=headers)
Out[2]: <Response [200]>
```<|||||>Well, if you get the message from the first post it means that some line of code has raised an `EnvironmentError` or `TimeoutError`. I think you need to advance over all lines in `get_from_cache` and find out which line is responsible for that. After you find this line, you can re-run it in console and see the actual exception.<|||||>Ok here is what I found out. If I place a breakpoint at `etag = None` in `get_from_cache()` in `file_utils.py` and try to run things, it stops by that breakpoint twice. The first time, I step through to `response = requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)` and get the following response (with the inputs shown from the debugger):
```python
cache_dir = {str} '/path/to/.cache/torch/transformers'
etag = {NoneType} None
etag_timeout = {int} 10
force_download = {bool} False
local_files_only = {bool} False
proxies = {dict: 3} {'http': 'http://myproxy.com:port', 'https': 'https://myproxy.com:port', 'no': ',127.0.0.1,127.0.0.111,127.0.0.2'}
response = {Response} <Response [200]>
resume_download = {bool} False
url = {str} 'https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json'
user_agent = {NoneType} None
requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)
Out[1]: <Response [200]>
```
on the second time, I get:
```python
cache_dir = {str} '/path/to/.cache/torch/transformers'
etag = {NoneType} None
etag_timeout = {int} 10
force_download = {bool} False
local_files_only = {bool} False
proxies = {dict: 3} {'http': 'http://myproxy.com:port', 'https': 'https://myproxy.com:port', 'no': ',127.0.0.1,127.0.0.111,127.0.0.2'}
resume_download = {bool} False
url = {str} 'https://cdn.huggingface.co/gpt2-pytorch_model.bin'
user_agent = {NoneType} None
requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)
Traceback (most recent call last):
File "/path/to/anaconda3/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 485, in wrap_socket
cnx.do_handshake()
File "/path/to/anaconda3/lib/python3.7/site-packages/OpenSSL/SSL.py", line 1934, in do_handshake
self._raise_ssl_error(self._ssl, result)
File "/path/to/anaconda3/lib/python3.7/site-packages/OpenSSL/SSL.py", line 1671, in _raise_ssl_error
_raise_current_error()
File "/path/to/anaconda3/lib/python3.7/site-packages/OpenSSL/_util.py", line 54, in exception_from_error_queue
raise exception_type(errors)
OpenSSL.SSL.Error: [('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/path/to/anaconda3/lib/python3.7/site-packages/urllib3/connectionpool.py", line 662, in urlopen
self._prepare_proxy(conn)
File "/path/to/anaconda3/lib/python3.7/site-packages/urllib3/connectionpool.py", line 948, in _prepare_proxy
conn.connect()
File "/path/to/anaconda3/lib/python3.7/site-packages/urllib3/connection.py", line 360, in connect
ssl_context=context,
File "/path/to/anaconda3/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 370, in ssl_wrap_socket
return context.wrap_socket(sock, server_hostname=server_hostname)
File "/path/to/anaconda3/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 491, in wrap_socket
raise ssl.SSLError("bad handshake: %r" % e)
ssl.SSLError: ("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])",)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/path/to/anaconda3/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/path/to/anaconda3/lib/python3.7/site-packages/urllib3/connectionpool.py", line 720, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/path/to/anaconda3/lib/python3.7/site-packages/urllib3/util/retry.py", line 436, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='cdn.huggingface.co', port=443): Max retries exceeded with url: /gpt2-pytorch_model.bin (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/path/to/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3331, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-3-5b27aae00c67>", line 1, in <module>
requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)
File "/path/to/anaconda3/lib/python3.7/site-packages/requests/api.py", line 101, in head
return request('head', url, **kwargs)
File "/path/to/anaconda3/lib/python3.7/site-packages/requests/api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "/path/to/anaconda3/lib/python3.7/site-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/path/to/anaconda3/lib/python3.7/site-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/path/to/anaconda3/lib/python3.7/site-packages/requests/adapters.py", line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='cdn.huggingface.co', port=443): Max retries exceeded with url: /gpt2-pytorch_model.bin (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))
```
Also on the second time, it skips right over the `if response.status_code == 200:` check and goes straight to `except (EnvironmentError, requests.exceptions.Timeout):` when I advance a step from `response = requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)`
```python
etag = None
if not local_files_only:
try:
response = requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)
if response.status_code == 200:
etag = response.headers.get("ETag")
except (EnvironmentError, requests.exceptions.Timeout):
# etag is already None
pass
```<|||||>Any further thoughts on this?<|||||>I'm suffering from the same problem. I cannot use the library from behind a proxy server (private network).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
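For anyone hitting the same `certificate verify failed` error behind a corporate proxy, a minimal workaround sketch (the proxy URL, CA-bundle path, and model name below are placeholders, not values taken from this thread) is to point `requests` at the corporate root-certificate bundle and pass the proxies explicitly to `from_pretrained`:
```python
import os

from transformers import AutoModel

# Placeholders -- substitute your own proxy and the PEM file containing your company's root CA.
proxies = {"http": "http://proxy.example.com:8080", "https": "http://proxy.example.com:8080"}
os.environ["REQUESTS_CA_BUNDLE"] = "/path/to/corporate-root-ca.pem"  # picked up by requests for TLS verification

# from_pretrained forwards `proxies` to the underlying requests calls used for the download.
model = AutoModel.from_pretrained("gpt2", proxies=proxies)
```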
<|||||>I have the same problem. `AutoModelForSeq2SeqLM.from_pretrained("google/pegasus-xsum")` just idles forever.
Latest versions of transformers and torch.
I'm not using a proxy, but if I use a VPN, `from_pretrained` actually downloads the model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 5,786 | closed | Faster mBART finetuning | Goal: Get BLEU 20 in 1 epoch on wmt-en-ro.
Can't run even a bs=1 job without `--freeze_embeds`.
1 epoch takes 6 hours on a 16GB GPU with fp16, `--freeze_embeds`, and `--freeze_encoder`. Max bs=4.
Ideas:
- [ ] Dataset that fits as many sentences as possible into an example, to increase gpu utilization.
- [ ] Only store embeddings once
- [ ] prune embeddings: https://github.com/pytorch/fairseq/issues/2120
- [ ] `label_smoothing=0.1`
- [ ] TPU?
Fairseq finetune command:
https://github.com/pytorch/fairseq/issues/2179
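For reference, a minimal sketch of what the `--freeze_embeds` and `--freeze_encoder` options amount to (plain PyTorch, written against the BART/mBART module layout of the library at the time of this issue; illustrative only, not the exact `finetune.py` code):
```python
import torch
from transformers import BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")

def freeze(module: torch.nn.Module):
    # Parameters with requires_grad=False are skipped by the optimizer, saving optimizer state and gradient memory.
    for p in module.parameters():
        p.requires_grad = False

# --freeze_embeds: freeze the shared token embeddings and both positional embeddings
freeze(model.model.shared)
freeze(model.model.encoder.embed_positions)
freeze(model.model.decoder.embed_positions)

# --freeze_encoder: freeze the entire encoder stack
freeze(model.model.encoder)
```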
| 07-15-2020 19:05:18 | 07-15-2020 19:05:18 | Hi @sshleifer , can embed pruning affect accuracy(bleu) ? TPU option seems good as it won't result in less accuracy <|||||>https://github.com/pytorch/fairseq/issues/2120#issuecomment-647915216 suggests it costs 2 BLEU points.
One of our goals with examples is to make train loops that people can run on their hardware+data. If we can get 36 on a 16GB gpu that is super useful, just like `--freeze_encoder` and `--freeze_embeds` are useful, even if they probably hurt final performance a bit.
<|||||>Broke into smaller issues, but leaving this open in case people have other ideas!<|||||>note: fairseq wmt_en_de batch wps=14721.8 on V100 with https://github.com/pytorch/fairseq/issues/2506#issuecomment-678630596
check tpb/seconds for seq2seq/finetune.py
<|||||>I am trying to fine-tune on wmt-en-ro (facebook/mbart-large-cc25) using Colab and a Kaggle 16GB GPU. On Colab, I get CUDA OOM, and on Kaggle I run out of disk space (20 GB limit). Is there a way I can skip checkpoints to fit the process in 20GB of disk space?
I even tried 1 bs, 8 max len, fp16 and freeze_encoder.<|||||>High level, I would use an opus-mt model instead of mbart for most tasks. They are smaller and tend to be nearly as good at translation, if not better. I have run ~50~ experiments finetuning mbart on wmt-en-ro and it was not a particularly pleasant experience.
Disk space: You can see what happens if you remove `checkpoint_callback` from
https://github.com/huggingface/transformers/blob/9336086ab5d232cccd9512333518cf4299528882/examples/seq2seq/finetune.py#L362
and just call `model.model.save_pretrained` / `model.model.half().save_pretrained` afterwards.
<|||||>Thanks for the advice @sshleifer - just wanted to try it out and test other language pairs.
On Kaggle, as I just found out, the trick is to create the output_dir outside the working directory (where total disk space is just 5GB). Kaggle won't save it with kernel commit though.
<|||||>Cool! LMK if you have good results!<|||||>I got it working using a small subset of data, then went on to train an English -> Arabic translator (just a proof of concept). Model uploaded to [HF](https://huggingface.co/akhooli/mbart-large-cc25-en-ar).<|||||>Hi, I had written the code to do vocab pruning, and because of the number of people wanting help with it I converted the code into a standalone library. I hope it's okay if I link it here for other people to find.
[Link to repo.](https://github.com/IamAdiSri/hf-trim)
Referencing issues #5896 #6132 |
transformers | 5,785 | closed | Sentence-transformers model outputs different than when loaded in HuggingFace | # 🐛 Bug
## Information
The outputs I'm seeing from generating sentence embeddings with the steps in the model zoo for getting sentence-transformers embeddings deviate from those generated from the SentenceTransformers package.
I'm assuming this is due to the lack of pooling. Is there a way to convert a ST model to HF with pooling?
English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load a pre-trained model from SentenceTransformers
2. Generate sentence embeddings from tokenized inputs
3. Print sentence embeddings
HuggingFace:
```python
#Sentences we want sentence embeddings for
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
tokenizer = AutoTokenizer.from_pretrained("roberta-large-nli-stsb-mean-tokens/0_RoBERTa/")
>>> model = AutoModel.from_pretrained("roberta-large-nli-stsb-mean-tokens/0_RoBERTa/")
>>> encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
>>> with torch.no_grad():
... model_output = model(**encoded_input)
... sentence_embeddings = model_output[0][:,0]
...
>>> print("Sentence embeddings:")
Sentence embeddings:
>>> print(sentence_embeddings)
tensor([[ 0.0057, -0.7690, 0.0702, ..., 0.0734, -1.4343, 0.3418],
[ 0.2066, -0.8213, 0.1272, ..., 0.2649, -1.2799, -0.1636],
[-0.4860, -0.5176, -0.5924, ..., -0.4880, -0.1880, -0.0554]])
```
SentenceTransformers:
```python
model = SentenceTransformer('roberta-large-nli-stsb-mean-tokens')
>>> sentence_embeddings = model.encode(sentences)
>>> print(sentence_embeddings)
[array([ 0.6306487 , -0.2879937 , 0.05334993, ..., 0.26865923,
-2.2382815 , 0.22505784], dtype=float32), array([ 0.22068763, -0.8045991 , 0.18439776, ..., 0.6993382 ,
-1.7670776 , 0.11258417], dtype=float32), array([-0.17819108, 0.08762542, -0.7614953 , ..., -0.6983883 ,
-0.13175072, -0.11123852], dtype=float32)]
```
## Expected behavior
SentenceTransformer and HF outputs should be the same
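For completeness, the mismatch comes from the pooling step: `model_output[0][:,0]` above takes only the first token, while the `*-mean-tokens` checkpoints are trained with mean pooling. A minimal mean-pooling sketch (reusing `model` and `encoded_input` from the HuggingFace snippet above; the helper name is illustrative) that should bring the two outputs in line:
```python
import torch

def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # (batch, seq_len, hidden)
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, dim=1) / torch.clamp(mask.sum(dim=1), min=1e-9)

with torch.no_grad():
    model_output = model(**encoded_input)
sentence_embeddings = mean_pooling(model_output, encoded_input["attention_mask"])
```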
## Environment info
```
- `transformers` version: 3.0.2
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.5
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
| 07-15-2020 18:35:52 | 07-15-2020 18:35:52 | This is just one of those days.. There's a[ mean pooling function](https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens) here that can be adapted. |
transformers | 5,784 | closed | [fix] Style. Trying again | 07-15-2020 18:08:15 | 07-15-2020 18:08:15 | ||
transformers | 5,783 | closed | Marian Conversion Script | I want to utilize and train the machine translation models posted on Github: https://github.com/Helsinki-NLP/OPUS-MT-train/tree/master/models on my own corpus. These models are C++ based.
Are these models exactly the same as the ones posted by Hugging Face’s website: https://huggingface.co/Helsinki-NLP ?
If they are, what is the transformers conversion script that loads these Github models and transforms them to models that are loadable via transformers (for example utilizing: transformers.AutoTokenizer.from_pretrained(‘path’), transformers.AutoModelWithLMHead.from_pretrained(‘path’))?
@sshleifer @jackalhan
| 07-15-2020 18:07:22 | 07-15-2020 18:07:22 | The script you want is at `src/transformers/convert_marian_to_pytorch.py`
It requires you to download the marian model you wish to convert and also to clone
```bash
git clone [email protected]:Helsinki-NLP/Opus-MT-train.git
```
you may have to adjust some paths in the script (like `repo_path`) based on where things are.
https://github.com/huggingface/transformers/blob/448c467256332e4be8c122a159b482c1ef039b98/src/transformers/convert_marian_to_pytorch.py#L189
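Once a checkpoint has been converted (or when using one of the already-ported `Helsinki-NLP/opus-mt-*` models), loading and translating follows the standard Marian flow. A minimal sketch (`Helsinki-NLP/opus-mt-en-de` is just an example; a local output directory of the converter works the same way):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"  # or the local directory written by convert_marian_to_pytorch.py
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Today is Sunday"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```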
<|||||>Also note that we have ported 1000+ of them, and some were renamed. Which one are you looking for?<|||||>I have trained a Transformer model to translate from Italian to Dutch with Marian based on https://github.com/marian-nmt/marian-examples/tree/master/transformer and data from OPUS. The model is using BPE for tokenization. I have the itnl.bpe, vocab file, model.npz file, etc. on my computer.
The example with convert_marian_to_pytorch.py is using the model files from the Helsinki-NLP/Opus-MT-train repo instead of local model files. So how can I use the local model files (that I trained with Marian) to convert them to a PyTorch model that can be used with Hugging Face?
@sshleifer <|||||>I don't think a BPE tokenizer will work. To answer the local model question, you need to (roughly) (a) run the converter on a model from the repo and see what files get downloaded, (b) make your filesystem look like that, and (c) update the code to not download things and not make model cards.<|||||>I am probably missing something here. I am trying to build a grammar corrector using a Marian NMT-generated model with Hugging Face's transformers. The source language is the text with errors and the target language is the text without. I have trained the model following this example (https://github.com/marian-nmt/marian-examples/tree/master/transformer). As it does not generate source and target spm files, I created both of them using "build/spm_train" provided with the Marian implementation and, of course, using for each one their respective training files, the same ones used for training the model.
The commands to generate spm files are:
../../build/spm_train --input data/src_sentences_dev.txt --model_prefix=source --vocab_size=16000 --character_coverage=1.0
../../build/spm_train --input data/ref_sentences_dev.txt --model_prefix=target --vocab_size=16000 --character_coverage=1.0
After that I proceeded with the conversion to PyTorch using https://github.com/huggingface/transformers/blob/master/src/transformers/models/marian/convert_marian_to_pytorch.py. This conversion went fine. The problem is that when I use this model in Hugging Face's transformers.MarianMTModel and MarianTokenizer (from_pretrained) I get weird results.
So to mitigate that I tried to perform another conversion, this time using en-de files downloaded from https://github.com/Helsinki-NLP/OPUS-MT-train/tree/master/models/en-de. The conversion went fine, unfortunately with the same weird results (example at the bottom) as with the model I created for grammar correction.
I further downloaded the required files from https://huggingface.co/Helsinki-NLP/opus-mt-en-de/tree/main and substituted the ones generated by convert_marian_to_pytorch.py above with these and, as expected, the translation went fine.
The converter, to my knowledge, requires 4 files: source.spm, target.spm, vocab.yml and model.npz. As these weird results arise after I use the converter, both with my model and the en-de files downloaded from https://github.com/Helsinki-NLP/OPUS-MT-train/tree/master/models/en-de, I guess I am missing a piece of info that I cannot identify.
It is worth noting that my grammar corrector model works correctly with marian_decoder.
Any help will be very much appreciated! @sshleifer ?
Cheers
Example of English -> German translation:
Source: "Today is Sunday"
Translation:
▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Nachricht▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Nachricht▁Rumänien▁Rumänien▁Nachricht▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Nachricht▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁v
erdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige
<|||||>I hope it's not bad practice to necro an issue, but since I can't seem to link it in a new one, I'd like to add that I'm encountering the same behaviour using the script.
The conversion went fine, but the generation output is, for me:
??????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
or, sometimes
linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage (ad lib).
I should note that I have used OPUS-CAT to fine tune an OPUS-MT model, and that it performs relatively well inside the app. Also conversion with the script went on without issues.
Like the OP, if I replace the model files with the original ones (in my case, Helsinki-NLP/opus-mt-zh-en), everything is fixed, so I don't think this is a pure script/params issue.
Thanks in advance ! |
transformers | 5,782 | closed | Create README.md | 07-15-2020 17:52:31 | 07-15-2020 17:52:31 | ||
transformers | 5,781 | closed | Create README.md | 07-15-2020 17:44:30 | 07-15-2020 17:44:30 | ||
transformers | 5,780 | closed | Error in conversion to tensorflow | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): DistilBERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```
from transformers import TFAutoModel, AutoTokenizer, AutoModel
import os
model = AutoModel.from_pretrained('distilbert-base-uncased')
os.system('mkdir distilbert')
model.save_pretrained('distilbert')
model = TFAutoModel.from_pretrained('distilbert', from_pt=True) # crashes
```
## Expected behavior
Model is converted from pytorch to tensorflow
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-5.3.0-62-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): 2.0.0-beta1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
## Actual Behaviour
```
Traceback (most recent call last):
File "pt2tf.py", line 8, in <module>
model = TFAutoModel.from_pretrained('distilbert', from_pt=True)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_auto.py", line 423, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py", line 482, in from_pretrained
return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_pytorch_utils.py", line 93, in load_pytorch_checkpoint_in_tf2_model
tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_pytorch_utils.py", line 125, in load_pytorch_weights_in_tf2_model
tf_model(tf_inputs, training=False) # Make sure model is built
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 712, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_distilbert.py", line 603, in call
outputs = self.distilbert(inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 712, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_distilbert.py", line 493, in call
embedding_output = self.embeddings(input_ids, inputs_embeds=inputs_embeds) # (bs, seq_length, dim)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 709, in __call__
self._maybe_build(inputs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 1966, in _maybe_build
self.build(input_shapes)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_distilbert.py", line 112, in build
"weight", shape=[self.vocab_size, self.dim], initializer=get_initializer(self.initializer_range)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 389, in add_weight
aggregation=aggregation)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/base.py", line 713, in _add_variable_with_custom_getter
**kwargs_for_getter)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 154, in make_variable
shape=variable_shape if variable_shape else None)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py", line 260, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py", line 221, in _variable_v1_call
shape=shape)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py", line 199, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variable_scope.py", line 2502, in default_variable_creator
shape=shape)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py", line 264, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 464, in __init__
shape=shape)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 608, in _init_from_args
initial_value() if init_from_fn else initial_value,
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 134, in <lambda>
init_val = lambda: initializer(shape, dtype=dtype)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops_v2.py", line 341, in __call__
dtype = _assert_float_dtype(dtype)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops_v2.py", line 769, in _assert_float_dtype
raise ValueError("Expected floating point type, got %s." % dtype)
ValueError: Expected floating point type, got <dtype: 'int32'>.
```
| 07-15-2020 17:20:28 | 07-15-2020 17:20:28 | Hey @Alshutin,
I am not able to reproduce the error. It might be because PyTorch uses a GPU and Tensorflow does not.
Could you try to run your code when disabling GPU (`export CUDA_VISIBLE_DEVICES=""`) and see whether the
error persists?<|||||>Hi! I just tried it with another version of TensorFlow. With 2.2.0 it just works.<|||||>With 2.0.0-beta1 and CUDA_VISIBLE_DEVICES="" the error persists.<|||||>Interesting - thanks for checking!
Does it crash as well for `bert-base-uncased` and TF 2.0.0?
Could you run these lines to verify?
```python
from transformers import TFAutoModel, AutoTokenizer, AutoModel
import os
model = AutoModel.from_pretrained('bert-base-uncased')
os.system('mkdir bert')
model.save_pretrained('bert')
model = TFAutoModel.from_pretrained('bert', from_pt=True) # crashes
```
@thomwolf @jplu - are we gonna force TF 2.2 in `transformers` ? <|||||>Can you try with the 2.0.0 release and not the beta? The beta was known to have a lot of issues, and a lot of fixes have been applied since.
@patrickvonplaten I proposed indeed to fix the TensorFlow version to 2.2, because of some welcomed features from it. But nothing has been decided yet.<|||||>It works with 2.0.0 stable TensorFlow release. |
transformers | 5,779 | closed | [bart] decoder.last_hidden_state shape changes when passing labels | ```
config = BartConfig(
vocab_size=99,
d_model=24,
encoder_layers=2,
decoder_layers=2,
encoder_attention_heads=2,
decoder_attention_heads=2,
encoder_ffn_dim=32,
decoder_ffn_dim=32,
max_position_embeddings=48,
add_final_layer_norm=True,
)
lm_model = BartForConditionalGeneration(config).to(torch_device)
context = torch.Tensor([[71, 82, 18, 33, 46, 91, 2], [68, 34, 26, 58, 30, 2, 1]]).long().to(torch_device)
summary = torch.Tensor([[82, 71, 82, 18, 2], [58, 68, 2, 1, 1]]).long().to(torch_device)
loss, logits, enc_features = lm_model(input_ids=context, decoder_input_ids=summary, labels=summary)
expected_shape = (*summary.shape, config.vocab_size)
self.assertEqual(logits.shape, expected_shape)
outputs2 = lm_model(input_ids=context, decoder_input_ids=summary)
self.assertEqual(outputs2.logits.shape, expected_shape)
# Fails torch.Size([2, 1, 99]) != (2, 5, 99)
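    # Per the resolution in the comments below: when calling the model outside of generation,
    # pass use_cache=False explicitly so the decoder returns logits for every target position, e.g.
    #   outputs3 = lm_model(input_ids=context, decoder_input_ids=summary, use_cache=False)
    # which keeps the expected (batch, tgt_len, vocab_size) logits shape.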
```
Is this expected @sgugger ? | 07-15-2020 16:25:25 | 07-15-2020 16:25:25 | NVM, it's a me problem.<|||||>FFR, You have to manually pass `use_cache=False` to bart forward now if you're not generating |
transformers | 5,778 | closed | Error using DataParallel with reformer model: There were no tensor arguments to this function | # 🐛 Bug
## Information
I'm having some issues using DataParallel with the reformer model with 4 GPUs. I am trying to feed the ReformerModel input embeddings, and output the last hidden state. I am using apex amp, however I get the same error when I don't use amp. I also get the same error when I use input IDs, rather than embeddings. And I've tested the same script using other HuggingFace models with no issues (Bert, and Roberta).
## To reproduce
Simple code:
```
import torch
from apex import amp
import transformers
from transformers import ReformerModel
from torch.utils.data import TensorDataset, DataLoader
import torch.nn as nn
print(transformers.__version__)
print(torch.__version__)
device = torch.device("cuda:0")
batch_size = 4
model_rf = ReformerModel.from_pretrained('google/reformer-crime-and-punishment')
model_rf.to(device)
opt_rf = torch.optim.AdamW(model_rf.parameters(), lr=0.0002)
model_rf, opt_rf = amp.initialize(model_rf, opt_rf)
model_rf = nn.DataParallel(model_rf)
embeds = torch.randn(80, 64, 256)
training_set = TensorDataset(embeds, embeds)
training_generator = DataLoader(training_set, batch_size=batch_size, shuffle=True)
for i, batch in enumerate(training_generator):
embeds, _ = batch
h_final = model_rf(inputs_embeds=embeds.to(device))
```
And the error:
```
Traceback (most recent call last):
File "rf_4.py", line 35, in <module>
h_final = model_rf(inputs_embeds=embeds)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 155, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 165, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
output.reraise()
File "/usr/local/lib/python3.6/dist-packages/torch/_utils.py", line 395, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_reformer.py", line 1621, in forward
embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids, inputs_embeds=inputs_embeds)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_reformer.py", line 234, in forward
position_embeddings = self.position_embeddings(position_ids)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_reformer.py", line 170, in forward
[weight[:, :required_pos_encodings_columns] for weight in broadcasted_weights], dim=-1
File "/usr/local/lib/python3.6/dist-packages/apex/amp/wrap.py", line 81, in wrapper
return orig_fn(seq, *args, **kwargs)
RuntimeError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat. This usually means that this function requires a non-empty list of Tensors. Available functions are [CPUTensorId, CUDATensorId, QuantizedCPUTensorId, VariableTensorId]
```
## Expected behavior
This code throws an error at the `h_final` line.
## Environment info
- `transformers` version: 3.0.2
- Platform: Ubuntu 18.04
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1
- Tensorflow version (GPU?): no
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: Yes, 4 GPUs
| 07-15-2020 15:44:16 | 07-15-2020 15:44:16 | Update: This seems relevant https://github.com/pytorch/pytorch/issues/36035<|||||>I face the same error when using multi GPUs on Reformer model:
```
Traceback (most recent call last):
File "src/run_language_modeling.py", line 305, in <module>
main()
File "src/run_language_modeling.py", line 269, in main
trainer.train(model_path=model_path)
File "/project/6006286/qiwu/from_git/transformers/src/transformers/trainer.py", line 499, in train
tr_loss += self._training_step(model, inputs, optimizer)
File "/project/6006286/qiwu/from_git/transformers/src/transformers/trainer.py", line 632, in _training_step
outputs = model(**inputs)
File "/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 155, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 165, in parallel_apply
return parallel_apply(replicas,wandb: Waiting for W&B process to finish, PID 20542
inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
output.reraise()
File "/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/_utils.py", line 395, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/project/6006286/qiwu/from_git/transformers/src/transformers/modeling_reformer.py", line 1746, in forward
return_tuple=return_tuple,
File "/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/project/6006286/qiwu/from_git/transformers/src/transformers/modeling_reformer.py", line 1610, in forward
embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids, inputs_embeds=inputs_embeds)
File "/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/project/6006286/qiwu/from_git/transformers/src/transformers/modeling_reformer.py", line 236, in forward
position_embeddings = self.position_embeddings(position_ids)
File "/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/project/6006286/qiwu/from_git/transformers/src/transformers/modeling_reformer.py", line 143, in forward
weights = torch.cat(broadcasted_weights, dim=-1)
RuntimeError: There were no tensor arguments to this function (e.g., wandb: Program failed with code 1. Press ctrl-c to abort syncing.
you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat. This usually means that this function requires a non-empty list of Tensors. Available functions are [CPUTensorId, CUDATensorId, QuantizedCPUTensorId, VariableTensorId]
```<|||||>Out of curiosity, do you have the same error on PyTorch 1.4?<|||||>> Out of curiosity, do you have the same error on PyTorch 1.4?
I stopped my GC instance - now there are none available. Maybe someone else can check?<|||||>In my case there's no error using torch-1.4.0, but got a warning:
```
07/16/2020 11:58:13 - INFO - transformers.trainer - ***** Running training *****
07/16/2020 11:58:13 - INFO - transformers.trainer - Num examples = 5444
07/16/2020 11:58:13 - INFO - transformers.trainer - Num Epochs = 12
07/16/2020 11:58:13 - INFO - transformers.trainer - Instantaneous batch size per device = 32
07/16/2020 11:58:13 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 64
07/16/2020 11:58:13 - INFO - transformers.trainer - Gradient Accumulation steps = 1
07/16/2020 11:58:13 - INFO - transformers.trainer - Total optimization steps = 1000
Epoch: 0%| | 0/12 [00:00<?, ?it/s
home/qiwu/torch-1.4/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:61:
UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
```
Found a relevant issue : https://github.com/huggingface/transformers/issues/852
https://discuss.pytorch.org/t/how-to-fix-gathering-dim-0-warning-in-multi-gpu-dataparallel-setting/41733/2<|||||>To be honest, I didn't check Reformer on multi-GPU yet - will note this issue down
<|||||>@qwu01, what version of transformers are you using, and do you also have tokenizers installed? I get a vague segmentation fault error when I attempt to run about the same training script as above using `torch==1.4.0`, `transformers==2.9.0`, and `tokenizers==0.7.0`.
<|||||>@jstremme I have these installed:
Package Version
--------------- ---------
argh 0.26.2
certifi 2020.6.20
chardet 3.0.4
click 7.1.2
configparser 5.0.0
docker-pycreds 0.4.0
filelock 3.0.12
gitdb 4.0.5
GitPython 3.1.7
gql 0.2.0
graphql-core 1.1
idna 2.10
joblib 0.16.0
numpy 1.18.4
nvidia-ml-py3 7.352.0
packaging 20.4
pathtools 0.1.2
pip 19.1.1
promise 2.3
psutil 5.7.0
pyparsing 2.4.7
python-dateutil 2.8.1
PyYAML 5.3.1
regex 2019.11.1
requests 2.24.0
sacremoses 0.0.43
sentencepiece 0.1.90
sentry-sdk 0.16.1
setuptools 41.0.1
shortuuid 1.0.1
six 1.15.0
smmap 3.0.4
subprocess32 3.5.3
**tokenizers 0.8.1rc1**
**torch 1.4.0**
tqdm 4.47.0
**transformers 3.0.2**
urllib3 1.25.9
wandb 0.9.3
watchdog 0.9.0
wheel 0.33.4
<|||||>Thanks very much @qwu01. Just to confirm, downgrading torch to `1.4.0` allowed you to train Reformer with multiple GPUs? Did this impact anything else?
The environment I'm using does not allow me to install `tokenizers 0.8.1rc1` and `transformers 3.0.2` currently, but I will test your environment config as soon as I'm able :)
If you are actively working on training a large Reformer model, I would be interested in discussing your parameters. I'm dealing with sequences of max length around 300k SentencePiece tokens and am limited to batch size = 1. Multi-GPU should get me to batch size = 4.<|||||>@jstremme Yes, I'm sure that torch 1.4.0 with multiple GPUs worked for Reformer training. AFAICT it's not impacting anything else.
<|||||>@qwu01, @anthonyfuller7, downgrading to `torch==1.4.0` worked for me as well :D<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Was this issue ever solved? I have managed to use multiple GPUs in Reformer training by downgrading to PyTorch 1.4.0 and transformers 3.0.2. However, I would like to not be constrained to this version setup, because it is leading to some inefficiencies (functions for which arguments have changed in the new version, etc.) and also because I'd like the version to be up to date.<|||||>@anthonyfuller7, perhaps you could reopen this? I'm in a similar position to @JellePiepenbrock where having to use `torch==1.4.0` is a suboptimal workaround.<|||||>Sorry guys that this was never really solved -> could you try to post the problem on the forum: https://discuss.huggingface.co/ instead? It's usually more active for problems with multi-gpu training<|||||>Sure, @patrickvonplaten. I created a [post here](https://discuss.huggingface.co/t/reformer-for-multi-gpu-not-possible-for-torch-1-4-0/9422?u=jstremme). Is there someone from Hugging Face who would be able to help resolve this? As mentioned in my post, I'd be happy to help, but I don't think I understand the code well enough to spearhead the fix. Thanks for your reply to my comment!<|||||>>
Hello @jstremme
I tried to downgrade to Pytorch 1.4.0 to remove the warning (and possibly increase training speed, as mentioned [here](https://github.com/huggingface/transformers/issues/852)) but I got this error:
```
Traceback (most recent call last):
File "run_mlm_arrow_dataset.py", line 552, in <module>
main()
File "run_mlm_arrow_dataset.py", line 501, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/claudio/python-environments/test_venv/lib/python3.6/site-packages/transformers/trainer.py", line 1214, in train
self.create_optimizer_and_scheduler(num_training_steps=max_steps)
File "/home/claudio/python-environments/test_venv/lib/python3.6/site-packages/transformers/trainer.py", line 803, in create_optimizer_and_scheduler
self.create_optimizer()
File "/home/claudio/python-environments/test_venv/lib/python3.6/site-packages/transformers/trainer.py", line 836, in create_optimizer
self.optimizer = optimizer_cls(optimizer_grouped_parameters, **optimizer_kwargs)
File "/home/claudio/python-environments/test_venv/lib/python3.6/site-packages/transformers/optimization.py", line 311, in __init__
require_version("torch>=1.5.0") # add_ with alpha
File "/home/claudio/python-environments/test_venv/lib/python3.6/site-packages/transformers/utils/versions.py", line 114, in require_version
_compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
File "/home/claudio/python-environments/test_venv/lib/python3.6/site-packages/transformers/utils/versions.py", line 50, in _compare_versions
f"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}"
ImportError: torch>=1.5.0 is required for a normal functioning of this module, but found torch==1.4.0.
```
Did you find a solution to the issue? |
transformers | 5,777 | closed | Bug in MiniLM-L12-H384-uncased modelhub model files | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): microsoft/MiniLM-L12-H384-uncased
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQuAD ver.2
* [ ] my own task or dataset: (give details below)
Problem: The vocab for microsoft/MiniLM-L12-H384-uncased is missing a token => wrong tokenization => bad performance for SQuAD finetuning
Potential fix: Upload the original vocab that was published in the original Microsoft Repository (https://github.com/microsoft/unilm/tree/master/minilm)
## To reproduce
Steps to reproduce the behavior:
1. While tokenizing a sample English sentence with the MiniLM model downloaded from the model hub
2. Comparing the modelhub tokenizer vocab size vs. the modelhub model vocab size
```
from transformers import AutoTokenizer, AutoModel
tokenizer_modelhub = AutoTokenizer.from_pretrained("microsoft/MiniLM-L12-H384-uncased")
model_modelhub = AutoModel.from_pretrained("microsoft/MiniLM-L12-H384-uncased")
assert tokenizer_modelhub.vocab_size == model_modelhub.embeddings.word_embeddings.num_embeddings, "tokenizer vocab_size {} doesn't match embedding vocab size {} ".format(tokenizer.vocab_size, model.embeddings.word_embeddings.num_embeddings)
```
Output
```
AssertionError: tokenizer vocab_size 30521 doesn't match embedding vocab size 30522
```
3. Download "original" MiniLM model from Microsoft's MiniLM GitHub Repo (https://1drv.ms/u/s!AjHn0yEmKG8qixAYyu2Fvq5ulnU7?e=DFApTA)
4. Comparing the modelhub MiniLM tokenizer and "original" MiniLM tokenizer token ids
```
import torch
input_ids_modelhub = torch.tensor([tokenizer_modelhub.encode("Let's see all hidden-states and attentions on this text")])
config_github = AutoConfig.from_pretrained("<github_minilm_model_directory>")
tokenizer_github = AutoTokenizer.from_pretrained(
... "<github_minilm_model_directory>", config=config_github)
model_github_finetuned = AutoModelForQuestionAnswering.from_pretrained(
... "<github_minilm_model_directory>", config=config_github)
assert tokenizer_github.vocab_size == model_github.embeddings.word_embeddings.num_embeddings, "tokenizer vocab_size {} doesn't match embedding vocab size {} ".format(tokenizer.vocab_size, model.embeddings.word_embeddings.num_embeddings)
input_ids_github = torch.tensor([tokenizer_github.encode("Let's see all hidden-states and attentions on this text")])
```
```
print(input_ids_github)
tensor([[ 101, 2292, 1005, 1055, 2156, 2035, 5023, 1011, 2163, 1998, 3086, 2015,
2006, 2023, 3793, 102]])
```
```
print(input_ids_modelhub)
tensor([[ 100, 2291, 1004, 1054, 2155, 2034, 5022, 1010, 2162, 1997, 3085, 2014,
2005, 2022, 3792, 101]])
```
5. Fine-tune modelhub MiniLM model for SQuAD ver 2
```
python examples/question-answering/run_squad.py --model_type bert \
--model_name_or_path microsoft/Multilingual-MiniLM-L12-H384 \
--output_dir finetuned_modelhub_minilm \
--data_dir data/squad20 \
--train_file train-v2.0.json \
--predict_file dev-v2.0.json \
--learning_rate 4e-5 \
--num_train_epochs 4 \
--max_seq_length 384 \
--doc_stride 128 \
--per_gpu_train_batch_size 12 \
--per_gpu_eval_batch_size 12 \
--gradient_accumulation_steps 4 \
--version_2_with_negative \
--do_lower_case \
--verbose_logging \
--do_train \
--do_eval \
--seed 42 \
--save_steps 5000 \
--overwrite_output_dir \
--overwrite_cache
```
Results:
```
{'exact': 59.681630590415224, 'f1': 63.78250778488946, 'total': 11873, 'HasAns_exact': 49.73009446693657, 'HasAns_f1': 57.94360913123985, 'HasAns_total': 5928, 'NoAns_exact': 69.60470984020185, 'NoAns_f1': 69.60470984020185, 'NoAns_total': 5945, 'best_exact': 59.690053061568264, 'best_exact_thresh': 0.0, 'best_f1': 63.79093025604285, 'best_f1_thresh': 0.0}
```
6. Fine-tune original MiniLM model for SQuAD ver 2
```
python examples/question-answering/run_squad.py --model_type bert \
--model_name_or_path <saved_githubModel_local_path> \
--output_dir finetuned_github_minilm \
--data_dir data/squad20 \
--train_file train-v2.0.json \
--predict_file dev-v2.0.json \
--learning_rate 4e-5 \
--num_train_epochs 4 \
--max_seq_length 384 \
--doc_stride 128 \
--per_gpu_train_batch_size 12 \
--per_gpu_eval_batch_size 12 \
--gradient_accumulation_steps 4 \
--version_2_with_negative \
--do_lower_case \
--verbose_logging \
--do_train \
--do_eval \
--seed 42 \
--save_steps 5000 \
--overwrite_output_dir \
--overwrite_cache
```
Results:
```
{'exact': 76.23178640613156, 'f1': 79.57013365427773, 'total': 11873, 'HasAns_exact': 78.50877192982456, 'HasAns_f1': 85.1950399590485, 'HasAns_total': 5928, 'NoAns_exact': 73.96131202691338, 'NoAns_f1': 73.96131202691338, 'NoAns_total': 5945, 'best_exact': 76.23178640613156, 'best_exact_thresh': 0.0, 'best_f1': 79.57013365427775, 'best_f1_thresh': 0.0}
```
## Expected behavior
1. Assertions should pass.
2. `input_ids_modelhub` and `input_ids_github` should produce same results
3. Reproduce the Downstream results on MiniLM modelhub files as mentioned in MiniLM model card
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Ubuntu 18.04.4 LTS
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1 (Yes)
- Tensorflow version (GPU?): Not using
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 07-15-2020 15:41:55 | 07-15-2020 15:41:55 | I will be off for the next two weeks - maybe @sshleifer @sgugger @julien-c can take a look? <|||||>@JetRunner Do you know who from @microsoft uploaded the MiniLM model?<|||||>@patrickvonplaten did it if I remember it right. He's on vocation so I'll take a look.
<|||||>Here's the diff

@patrickvonplaten when you are back at work, please check why this happened.
I'll re-upload `vocab.txt` to resolve the problem for now.<|||||>@julien-c I've re-uploaded it. However, CDN seems to have cached the incorrect version (https://cdn.huggingface.co/microsoft/MiniLM-L12-H384-uncased/vocab.txt).<|||||>Yes, the CDN caches files for up to 24 hours on each POP. However AFAIK the library doesn't load tokenizer files from the CDN anyways.<|||||>The model is working now |
transformers | 5,776 | closed | Update README.md | Add cherry picked example for the widget | 07-15-2020 15:40:57 | 07-15-2020 15:40:57 | |
transformers | 5,775 | closed | [squad] make examples and dataset accessible from SquadDataset object | In order to do evaluation on the SQuAD dataset using `squad_evaluate`, the user needs access to both the examples loaded in the dataset and the `TensorDataset` that contains values like `unique_id` and the like that are used in constructing the list of `SquadResult` objects. This PR surfaces the examples and dataset to the user so that they can access it directly.
For example of why access to those is needed, see how evaluation is currently done in `examples/run_squad.py`. The `SquadDataset` object attempts to wrap up some of this functionality, but without access to examples and dataset the evaluation is not possible. | 07-15-2020 15:13:59 | 07-15-2020 15:13:59 | There seem to be some issues with CircleCI right now causing all the integration tests to fail. Please let me know if there is an issue on my end. |
transformers | 5,774 | closed | [fix] check_code_quality | 07-15-2020 15:08:09 | 07-15-2020 15:08:09 | ||
transformers | 5,773 | closed | Ensure OpenAI GPT position_ids is correctly initialized and registered at init. | This will make it compatible with TorchScript export and avoid hardcoded `position_ids` tensor's device in the generated graph.
Solves the following issue: #5664
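For context, a minimal sketch of the pattern described above (illustrative only, not the exact diff): `position_ids` is registered as a buffer at construction time instead of being created inside `forward`, so TorchScript export does not bake a device into the graph.
```python
import torch
from torch import nn

class PositionEmbeddingSketch(nn.Module):
    def __init__(self, max_position_embeddings: int, hidden_size: int):
        super().__init__()
        self.position_embeddings = nn.Embedding(max_position_embeddings, hidden_size)
        # Buffer created once at init; it moves with the module (.to(device)) instead of being
        # allocated with a hardcoded device on every forward call.
        self.register_buffer("position_ids", torch.arange(max_position_embeddings).expand((1, -1)))

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        position_ids = self.position_ids[:, : input_ids.size(1)]
        return self.position_embeddings(position_ids)
```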
Signed-off-by: Morgan Funtowicz <[email protected]> | 07-15-2020 14:45:53 | 07-15-2020 14:45:53 | Fixes for the failing tests are not detected by Github while being present on the branch ... Lets see if it comes back to life in a while ...<|||||>CI failure has been fixed on master.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5773?src=pr&el=h1) Report
> Merging [#5773](https://codecov.io/gh/huggingface/transformers/pull/5773?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/223bad242d0d64e20a39b956b73ab300231a9c70&el=desc) will **decrease** coverage by `0.04%`.
> The diff coverage is `90.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5773?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5773 +/- ##
==========================================
- Coverage 78.67% 78.63% -0.05%
==========================================
Files 146 146
Lines 26210 26206 -4
==========================================
- Hits 20621 20606 -15
- Misses 5589 5600 +11
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5773?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.44% <66.66%> (-0.02%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.37% <100.00%> (-0.02%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <100.00%> (-0.06%)` | :arrow_down: |
| [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `90.00% <100.00%> (-0.03%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.51%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5773?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5773?src=pr&el=footer). Last update [b01a884...722824f](https://codecov.io/gh/huggingface/transformers/pull/5773?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@sshleifer I've updated all the models with `position_ids` created in the forward pass as it seems to be the right design to allow people to export the bunch of models impacted.
Also, it avoids allocating tensors at every forward call, so might reduce the pressure on PyTorch memory manager. <|||||>Do we get the "Uninitialized parameters" warning because the position_ids are not in the `pytorch_model.bin` on S3?
<|||||>I'll check the above, didn't catch at first 👌 <|||||>Could this be merged? Would love to test these changes after merging :)<|||||>@vdantu Now merged, sorry for the delay, many people are off these days, it slows down a little bit the merging process 😃.
Let us know if you have any follow up issue(s) / question(s).<|||||>@mfuntowicz : Thanks for the push. Would we have to wait for a new release of `transformers` package to get these changes? <|||||>@vdantu: I'll let @LysandreJik handle this one |
transformers | 5,772 | closed | [fix] check code quality | 07-15-2020 14:37:15 | 07-15-2020 14:37:15 | Merging without circleci. |
|
transformers | 5,771 | closed | Fine-tune BERT for regression problem | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
I was wondering, in order to fine-tune BERT for a regression problem, do I just need to set num_labels = 1 in the BertForSequenceClassification function? Does anything else need to be modified? I'm a newbie to transformers and still confused by the basics. Thanks in advance.
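For readers landing here with the same question, a minimal sketch (not from the original thread): with `num_labels=1`, `BertForSequenceClassification` switches to a regression head and computes an MSE loss on float labels.
```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)

enc = tokenizer(["a sentence to score"], return_tensors="pt", padding=True)
labels = torch.tensor([[0.73]])  # float targets, shape (batch_size, 1)

outputs = model(**enc, labels=labels)  # with num_labels=1 the head uses MSELoss internally
loss, logits = outputs[:2]
loss.backward()
```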
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 07-15-2020 14:36:51 | 07-15-2020 14:36:51 | Hey @rxlian,
We are trying to move more general and "researchy" questions to our discussion forum here: https://discuss.huggingface.co/ and use github issues mainly for bugs.
Would you mind posting your questions at the forum - other people might be interested as well :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,770 | closed | XLNet `use_cache` refactor | As discussed with @joeddav and @sgugger, this PR lightly refactors the `use_cache` argument in XLNet.
- In line with #5438, in the model methods `use_cache` defaults to None, which redirects to the model config value if no value is passed.
- `use_cache` is independent of `mem_len`: if `use_cache` is True and `mem_len` is 0 or None (which is the case in the base model config), the model behaves like GPT-2 and returns `mems` to be used as `past` in generation.
- This changes functionality and will enable the default model to use caching; for instance, it should speed up the [inference widget](https://huggingface.co/xlnet-base-cased?text=My+name+is+Clara+and+I+am+) significantly (x3 speed-up on my CPU) | 07-15-2020 13:59:06 | 07-15-2020 13:59:06 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5770?src=pr&el=h1) Report
> Merging [#5770](https://codecov.io/gh/huggingface/transformers/pull/5770?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3653d01f2af0389207f2239875a8ceae41bf0598&el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5770?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5770 +/- ##
==========================================
- Coverage 77.26% 77.25% -0.02%
==========================================
Files 146 146
Lines 25948 25958 +10
==========================================
+ Hits 20048 20053 +5
- Misses 5900 5905 +5
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5770?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5770/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.52% <ø> (ø)` | |
| [src/transformers/configuration\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5770/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbmV0LnB5) | `94.33% <ø> (ø)` | |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5770/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `81.75% <100.00%> (+0.26%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5770/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.26%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5770?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5770?src=pr&el=footer). Last update [3653d01...cfd7c66](https://codecov.io/gh/huggingface/transformers/pull/5770?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Not sure what is going on with Circle CI here - might be related to the problems with github yesterday...<|||||>Yeah it doesn't want to let me re-run, I think I'll just add a small change to refresh it<|||||>After our discussion yesterday @TevenLeScao, I think I am more or less fine with this PR. A couple of things we should maybe think about:
1) ***Consistency with `use_cache` of other models***
As it is implemented at the moment (correct me if I'm wrong), if `use_cache=False`, `mem_len` has no effect as `mems` will always be `None`. This is not ideal IMO because this means that one has to enable `use_cache=True` to correctly train the model => this is not really consistent with the way `use_cache` is used in the library in general. In all other models GPT2, CTRL, T5, BART, Reformer `use_cache` should only be used for inference and disabled for training. So as a solution I think, there are two options:
A.) All `use_cache` statements should be updated with `use_cache or self.training` => this way I think it would be consistent with the logic / design of other models (let me know if this is not clear; a small illustrative sketch of the gating follows option B below)
B.) Add an explicit warning that `use_cache` should be `True` for training to enable the `mems` => I don't like this very much because this is confusing for people that are used to GPT2 where `use_cache` is only for inference.
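To make option A concrete, here is a tiny self-contained sketch of the gating (purely illustrative; the class and attribute names are made up, this is not the PR's actual code):
```python
import torch.nn as nn

class CacheToggleExample(nn.Module):
    """Illustration only: None falls back to the config value, training mode forces caching of mems."""

    def __init__(self, config_use_cache=True):
        super().__init__()
        self.config_use_cache = config_use_cache

    def forward(self, use_cache=None):
        # None -> fall back to the config value (as this PR already does)
        use_cache = self.config_use_cache if use_cache is None else use_cache
        # option A: keep mems during training so mem_len stays effective
        return use_cache or self.training

module = CacheToggleExample()
print(module(use_cache=False))  # True: modules are in training mode by default
module.eval()
print(module(use_cache=False))  # False: at inference time the flag is respected
```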
2) ***Equality `use_cache=True/False`***
After our discussion yesterday, it seems that since `XLNet` uses a mixture of CLM and bi-directional self-attention (defined in the `perm_mask` input), it is **not** possible to have a (mathematically) identical output between `use_cache=True` and `use_cache=False`. I guess that's just how it is in `XLNet` and we can't really do anything about it.
- A.) I guess we should change (...could be done in a 2nd PR though), is to add the parameter `offset` to `XLNet` config and add it to all `forward(...)` functions and maybe give it a better name... As I understood it yesterday, `offset` defines the trade-off between how many tokens should be used from `mems` (and are therefore not in the *query projection*) and how many are in the query projection and have a bi-directional mask on them. At the moment `offset` is hard-coded to `2` in the `prepare_generate` function which does not allow for a lot of flexibility and is somewhat arbitrary IMO. I think, this parameter should be handled in a similar way as `num_hashes` is handled in `Reformer`
- B.) This is one is more important to me. We should add a test to `XLNet` that verifies that `use_cache` does give the **same** outputs if the `perm_mask` is set to a causal mask. We have tests for this in `GPT2, T5, Bart` I think. Here is the one for GPT2 e.g.: https://github.com/huggingface/transformers/blob/1f75f9317e381425ee56f7108e5ec8d3f3d6b6ad/tests/test_modeling_gpt2.py#L165 . The test here can be very similar - only that we need to define the `perm_mask` correctly. This test would be extremely useful a) to make sure we fully understand what's going on in `XLNet` and b) make sure that there is no bug.
3) ***Benchmark performance improvement***
Soon we will have benchmarking for the `generate` function, see PR here: https://github.com/huggingface/transformers/pull/5802 . So I took this case here as a good trial to test the benchmark scripts for generation. I started two benchmarks, one on CPU and one on GPU, and will post the results here as soon as the benchmarking is done. It'd be great to add benchmark results in general to PRs like this.
I think this is quite an important model and an important PR, so I'd love to loop in @sshleifer and @thomwolf here as well to hear their opinions.
@TevenLeScao - sorry for the long message! It's mostly because I want to better understand `XLNet` in detail. Let me know if some of my points are not clear or not 100% correct.<|||||>I used the following code for benchmarking:
```python
#!/usr/bin/env python3
from transformers import XLNetConfig, PyTorchBenchmark, PyTorchBenchmarkArguments
config_with_cache = XLNetConfig.from_pretrained("xlnet-base-cased", use_cache=True, mem_len=1024)
config_without_cache = XLNetConfig.from_pretrained("xlnet-base-cased", use_cache=False, mem_len=1024)
config_without_cache_no_mems = XLNetConfig.from_pretrained("xlnet-base-cased", use_cache=False)
assert config_without_cache_no_mems.mem_len is None, "Configs are wrong"
assert config_with_cache.mem_len == config_without_cache.mem_len == 1024, "Configs are wrong"
assert config_with_cache.use_cache is (not config_without_cache.use_cache) is True and (not config_without_cache_no_mems.use_cache) is True, "Configs are wrong"
args = PyTorchBenchmarkArguments(models=["xlnet-cache", "xlnet-no-cache", "xlnet-no-mems"], sequence_lengths=[300], batch_sizes=[1], generate=True, no_inference=True)
benchmark = PyTorchBenchmark(args=args, configs=[config_with_cache, config_without_cache, config_without_cache_no_mems])
benchmark.run()
```
and the benchmark code from #5802
=> I could not see a real speed-up on GPU for the following case:
Start with 1 token and generate up to `Seq Length` tokens:
INFERENCE - SPEED - RESULT
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
xlnet-cache 1 500 11.8
xlnet-no-cache 1 500 11.925
xlnet-no-mems 1 500 11.911
--------------------------------------------------------------------------------
INFERENCE - MEMORY - RESULT
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
xlnet-cache 1 500 1351
xlnet-no-cache 1 500 1397
xlnet-no-mems 1 500 1397
--------------------------------------------------------------------------------
ENVIRONMENT INFORMATION
- transformers_version: 3.0.2
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.5.0
- python_version: 3.7.7
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-07-16
- time: 11:28:43.644495
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32089
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 0
- use_tpu: False
=> For CPU, there is a small speed up and the speed up is probably more significant for longer sequence length (so I run it again for longer sequences):
INFERENCE - SPEED - RESULT
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
xlnet-cache 1 50 5.032
xlnet-no-cache 1 50 6.149
xlnet-no-mems 1 50 6.175
--------------------------------------------------------------------------------
INFERENCE - MEMORY - RESULT
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
xlnet-cache 1 50 691
xlnet-no-cache 1 50 695
xlnet-no-mems 1 50 696
--------------------------------------------------------------------------------
ENVIRONMENT INFORMATION
- transformers_version: 3.0.2
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.5.0
- python_version: 3.7.7
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-07-16
- time: 11:28:43.644495
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32089
- use_gpu: False
- use_tpu: False
But overall, the speed-up is not really significant... @TevenLeScao can you post the code you used to benchmark the CPU speed-up here? Maybe I am doing something wrong.
I think if we manage to cache the key, value projections instead of recalculating them for every token we would see a bigger speed-up, but this has to be checked.<|||||>- on GPU I also haven't observed a difference, and doing inference/text generation on 1 sentence isn't going to be very affected by the caching trick since the GPU is going to parallelize everything anyway
- on CPU I've run from the text generation pipeline:
```
#!/usr/bin/env python3
from transformers import pipeline, XLNetLMHeadModel
import time
# xlnet = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")
xlnet = XLNetLMHeadModel.from_pretrained("xlnet-base-cased", use_cache=False)
generator = pipeline("text-generation", model=xlnet, tokenizer="xlnet-base-cased")
start = time.time()
number_gen = 100
for i in range(number_gen):
output_to_check = generator("Today is a beautiful day and I, ")
print(output_to_check)
print((time.time() - start) / (i+1))
print()
```
which I think makes a big (33.2 vs 8.6 seconds on my CPU) difference since XLNet has the 170-token-long padding text at the start! We should probably also take a look at that since it heavily influences the generated text...<|||||>I can confirm a much bigger speed-up on CPU for longer sequences. Here are the results:
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
xlnet-cache 1 300 37.733
xlnet-no-cache 1 300 98.142
xlnet-no-mems 1 300 94.338
--------------------------------------------------------------------------------
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
xlnet-cache 1 300 728
xlnet-no-cache 1 300 752
xlnet-no-mems 1 300 750
--------------------------------------------------------------------------------
==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 3.0.2
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.5.0
- python_version: 3.7.7
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-07-16
- time: 13:49:43.409581
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32089
- use_gpu: False
- use_tpu: False
<|||||>@patrickvonplaten I added the `self.training` check and the test for `use_cache` in autoregressive mode.<|||||>So weird, it passed the tests earlier but the CI just told me the push had been refused, I'll revert it and look at it again tomorrow |
transformers | 5,769 | closed | [fix] T5 ONNX test: model.to(torch_device) | Should fix #5724
Signed-off-by: Morgan Funtowicz <[email protected]> | 07-15-2020 12:37:44 | 07-15-2020 12:37:44 | Code Quality is failing due to files that were not modified in this PR, not addressing to avoid rebasing somewhere else.<|||||>ill fix code quality. |
transformers | 5,768 | closed | Confidence score prediction of pretrained models in extractive QA - similar to pipeline | How can we calculate the confidence of a single sentence predicted from extractive question answering using autotokenizer. we will get a score using the pipeline method for a sentence, but what we get from the extractive qa are answer_Start_scores and answer_end_scores. How can we get one single score ? | 07-15-2020 12:35:45 | 07-15-2020 12:35:45 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Have you got your answer? I got the same question.. |
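For reference, one common way to collapse the start/end scores discussed above into a single span confidence, similar to what the pipeline reports (a sketch only; the SQuAD checkpoint is a placeholder, and the pipeline's own normalization over valid spans can give slightly different numbers):
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad")

question = "Who wrote Hamlet?"
context = "Hamlet is a tragedy written by William Shakespeare."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    start_logits, end_logits = model(**inputs)[:2]

start_probs = torch.softmax(start_logits, dim=-1)
end_probs = torch.softmax(end_logits, dim=-1)
start_idx = start_probs.argmax(dim=-1)
end_idx = end_probs.argmax(dim=-1)

# single confidence for the predicted span: product of the two probabilities
score = (start_probs[0, start_idx] * end_probs[0, end_idx]).item()
print(score)
```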
transformers | 5,767 | closed | How to get parameters from a Query. | # ❓ Questions & Help
Hi, maybe someone can help me with a problem: I need to extract parameters from a query.
I already use the question answering pipeline to identify a report, and now I want to extract some parameters to use as filters. Example:
Query: How many electric cars sold in 2019?
Output: (Item: electric cars / Year: 2019)
| 07-15-2020 11:41:14 | 07-15-2020 11:41:14 | It seems you're trying to identify entities in your query? You could use an NER model to do so, but I'm not sure the entities the NER models on the hub are trained on would work in your use-case. You can still check them out, and check the [NER script](https://github.com/huggingface/transformers/tree/master/examples/token-classification) if you want to train a model with your data.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
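A quick sketch of the NER route mentioned above (the default checkpoint is trained on CoNLL-2003-style entities such as persons, organisations and locations, so it will not tag things like "2019" or "electric cars" unless you fine-tune on your own labels):
```python
from transformers import pipeline

ner = pipeline("ner")
print(ner("How many electric cars were sold in 2019?"))
```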
|
transformers | 5,766 | closed | Hello,I have this problem in running 'run_glue.py'! | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
Hello, I have this problem when running 'run_glue.py'. Can anyone help me? Thanks!
from transformers import glue_compute_metrics
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name 'glue_compute_metrics' from 'transformers' (/home/wangbingchen/wangbingchen/anaconda3/lib/python3.7/site-packages/transformers/__init__.py)
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 07-15-2020 09:30:57 | 07-15-2020 09:30:57 | Hi @BCWang93, I had the same issue, you need to have scikit-learn installed to run 'run_glue.py'.
`pip install scikit-learn` solved the issue for me.<|||||>Indeed, this issue happens when `scikit-learn` is not installed. Thanks @nassim-yagoub! |
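For completeness, a quick check that the import works once scikit-learn is installed (the predictions and labels below are just dummy values):
```python
import numpy as np
from transformers import glue_compute_metrics

preds = np.array([1, 0, 1])
labels = np.array([1, 1, 1])
print(glue_compute_metrics("mrpc", preds, labels))  # {'acc': ..., 'f1': ..., 'acc_and_f1': ...}
```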
transformers | 5,765 | closed | When using "transformers.WarmUp" with tensorflow 2.0.0, warming up restart in each epoch! | When using "transformers.WarmUp" with tensorflow 2.0.0, warming up restart in each epoch!
That is because, in keras "Callbacks.on_batch_begin(self, batch, logs)", batch start from zero in each epoch when using "fit" method. | 07-15-2020 08:40:37 | 07-15-2020 08:40:37 | Hello!
Can you give some code that better explains your issue and a way to reproduce it, please? Thanks :) <|||||>Sorry! I checked again and it works well!
Close it!
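For anyone who lands here later: `transformers.WarmUp` is a `tf.keras` learning-rate schedule driven by the optimizer's global step, not by the per-epoch `batch` index that Keras callbacks see. A minimal sketch of wiring it up (all values are placeholders):
```python
import tensorflow as tf
from transformers import WarmUp

decay = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=3e-5, decay_steps=1000, end_learning_rate=0.0
)
schedule = WarmUp(initial_learning_rate=3e-5, decay_schedule_fn=decay, warmup_steps=100)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
```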
|
transformers | 5,764 | closed | TF Longformer | This PR adds Longformer
In a first step, it is made sure that code is clean and that all tests pass. Todo:
### ToDo List:
- [x] same output for local attention only
- [x] same output for local + global attention only
- [x] same output for aggressive test
- [x] add all other tests
- [x] add longformer QA
- [x] refactor code and run benchmark
- [x] adds weights to all QA models and check performance via notebook
### ToDo after PR is merged:
- [ ] Add Longformer for SeqClass, MC, ... ("good first issue")
- [ ] Speed up performance and make GPU XLA work -> use Benchmark tools and possible TF Profiler
### For Review
This PR adds `TFLongformer` and the two most important parent classes `TFLongformerForMaskedLM` and `TFLongformerForQuestionAnswering`. Many tests are added to verify that TFLongformer gives identical results to PT Longformer and a colab notebook (see below) is attached to show performance on a real task.
Below you can find a benchmark showing that TFLongformer is about 1.5x slower than PT on GPU. For now this is acceptable IMO, but in a future PR I want to take a deeper look at how the TF code can be optimized and also solve a problem that currently exists with TF XLA.
I spent a lot of time trying to solve this issue: https://github.com/huggingface/transformers/issues/5815 for TFLongformer and didn't manage to find a good solution. The corresponding tests are in `SLOW` mode so they won't fail on this PR. Since we are currently thinking about a better solution than using `cast_bool_to_primitive` to solve the known TF graph boolean error, I think I will leave this small bug in TFLongformer for now (it's quite an edge case IMO anyways).
Docs are added and checked, comments are added, performance on TriviaQA is verified in TF colab: https://colab.research.google.com/drive/1UmU3T1nPmJ2LgXQtPcaEVtXUnoBJ1onF?usp=sharing and TF weights were added to all longformer models here: https://huggingface.co/models?search=longformer.
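A rough usage sketch for the new TF question-answering head (the checkpoint name is taken from the hub search linked above; the inputs are toy examples, not from the notebook):
```python
from transformers import LongformerTokenizer, TFLongformerForQuestionAnswering

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")
model = TFLongformerForQuestionAnswering.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")

question = "Who wrote Hamlet?"
context = "Hamlet is a tragedy written by William Shakespeare."
inputs = tokenizer(question, context, return_tensors="tf")
start_logits, end_logits = model(inputs)[:2]
```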
Would be happy about a review @jplu @ibeltagy @LysandreJik @sshleifer @sgugger @julien-c @thomwolf | 07-15-2020 08:11:21 | 07-15-2020 08:11:21 | In `RUN_SLOW=1` the new tests: `test_saved_model_with_attentions_outputs` and `test_saved_model_with_hidden_states_output` fail @jplu from https://github.com/huggingface/transformers/pull/5468. The problem is that I have to use `tf.cond(...)` and it seems like this forces me to also use `cast_bool_...`. Not sure if you have any ideas on how to fix this @jplu .<|||||>Yes, It is still an issue with the AutoGraph thing :cry: I suggest to comment them for now.<|||||>I'm currently thinking on how to properly rework all the booleans handling in TF. As this is the main issue.<|||||>Ok...will leave it for now since the test are only in `RUN_SLOW` mode, so they won't show up in Circle CI<|||||>Confirmed that this PR will not slow down the Longformer PyTorch version.
Running the following benchmark on master:
```
python examples/benchmarking/run_benchmark.py --models allenai/longformer-base-4096 --no_memory --sequence_length 512 1024
```
gives the same performance as in https://github.com/huggingface/transformers/pull/5811.<|||||>Benchmarking the model in TF leads to a slow-down vs. PyTorch of ca. 1.5x, which is reasonable IMO:
Running:
```
python examples/benchmarking/run_benchmark_tf.py --models allenai/longformer-base-4096 --no_memory --sequence_length 512 1024
```
gives:
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
allenai/longformer-base-4096 8 512 0.226
allenai/longformer-base-4096 8 1024 0.446
--------------------------------------------------------------------------------
==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 3.0.2
- framework: TensorFlow
- eager_mode: False
- use_xla: False
- framework_version: 2.3.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-08-06
- time: 15:55:55.754100
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 0
- use_tpu: False
```
At the moment running the model in XLA on GPU fails...=> should take a closer look in a next PR.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5764?src=pr&el=h1) Report
> Merging [#5764](https://codecov.io/gh/huggingface/transformers/pull/5764?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1b8a7ffcfdfe37f5440ac0eafb58089ff5aef00a&el=desc) will **increase** coverage by `0.05%`.
> The diff coverage is `24.52%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5764?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5764 +/- ##
==========================================
+ Coverage 79.33% 79.38% +0.05%
==========================================
Files 148 149 +1
Lines 27196 27670 +474
==========================================
+ Hits 21577 21967 +390
- Misses 5619 5703 +84
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5764?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `15.76% <15.76%> (ø)` | |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `89.76% <92.98%> (+0.54%)` | :arrow_up: |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.25% <100.00%> (+<0.01%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `66.86% <100.00%> (+0.20%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `92.41% <0.00%> (+1.33%)` | :arrow_up: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.60% <0.00%> (+1.59%)` | :arrow_up: |
| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/5764/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5764?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5764?src=pr&el=footer). Last update [1b8a7ff...41cb64f](https://codecov.io/gh/huggingface/transformers/pull/5764?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Where do I send the pizza & beer to get this merged?<|||||>I'm curious if you think the longformer should be added to the language_generation_model? <|||||>@sgugger - I added the `return_dict` functionality and adapted the doc strings -> the new docstring functions are awesome!<|||||>> I'm curious if you think the longformer should be added to the language_generation_model?
We will add it to the EncoderDecoderModel framework, where it can be used with `generate()` |
transformers | 5,763 | closed | ADD ERNIE model | ERNIE is a series of models released by Baidu, including:
ERNIE1.0: [Ernie: Enhanced representation through knowledge integration](https://arxiv.org/abs/1904.09223)
ERNIE2.0: [ERNIE 2.0: A Continual Pre-training Framework for Language Understanding](https://arxiv.org/abs/1907.12412)
ERNIE-tiny: Use data distillation from ERNIE 2.0
These models have state-of-the-art performances on NLU tasks, especially on Chinese tasks.
So, ERNIE is a very important model in the transformer model family.
Actually, the ERNIE model has the same structure as BERT for all three ERNIE versions above, so we don't need to add a new model; we just need to convert the weights.
> Note that: ERNIE2.0 introduces a task embedding, but the official released version doesn't have this embedding weight, and all released weights are the same with BERT.
I have successfully converted the ERNIE models into PyTorch and ran a series of experiments to check that the conversion matches the original PaddlePaddle implementation.
More detail: https://github.com/nghuyong/ERNIE-Pytorch
In this PR, I directly add ERNIE model to BERT model related files.
This PR is linked to issues: [#issue5117](https://github.com/huggingface/transformers/issues/5117), [#issue928](https://github.com/huggingface/transformers/issues/928) and [#issue514](https://github.com/huggingface/transformers/issues/514)
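For completeness, a minimal usage sketch of the converted weights (using the `nghuyong/ernie-1.0` checkpoint mentioned later in this thread; the example sentence is arbitrary). Since ERNIE shares BERT's architecture, the standard BERT classes load it directly:
```python
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("nghuyong/ernie-1.0")
model = BertModel.from_pretrained("nghuyong/ernie-1.0")

inputs = tokenizer("百度是一家高科技公司", return_tensors="pt")
outputs = model(**inputs)
```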
| 07-15-2020 07:17:58 | 07-15-2020 07:17:58 | We shouldn't have to add those because the library should work out of the box with the models at https://huggingface.co/nghuyong
The discoverability of those models on huggingface.co is a different thing and we're open to suggestions to improve it
For instance, you should upload model cards describing the models including metadata etc. See https://huggingface.co/docs for instance
Pinging @JetRunner and @sshleifer for feedback<|||||>Thanks for your contribution!
Yep, I think this PR is not very necessary here since ERNIE serves as weights for BERT. We have shifted into a "model hub" model so I can't think of a reason to keep hard-coding urls here (and we think we should even remove the existing ones at some point) @julien-c <|||||>Also, what do you mean by "especially on Chinese"? Aren't these checkpoints in English I think?<|||||>We should add some docs and tweet!<|||||>You could copy https://github.com/nghuyong/ERNIE-Pytorch/blob/master/Readme.md to each model card.
<|||||>@JetRunner
ERNIE1.0 is for Chinese, more performance detail: https://arxiv.org/abs/1904.09223
ERNIE2.0 and ERNIE-tiny are for English.<|||||>> @JetRunner
>
> ERNIE1.0 is for Chinese, more performance detail: https://arxiv.org/abs/1904.09223
>
> ERNIE2.0 and ERNIE-tiny are for English.
Yeah, this raises some confusion here. Thanks for your clarification!<|||||>@julien-c @JetRunner @sshleifer
I have withdrawn my previous submission and added model cards in this new commit.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5763?src=pr&el=h1) Report
> Merging [#5763](https://codecov.io/gh/huggingface/transformers/pull/5763?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8ab565a4be5a7fd96b19ef88d474037ef31f27e5&el=desc) will **decrease** coverage by `0.08%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5763?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5763 +/- ##
==========================================
- Coverage 77.32% 77.24% -0.09%
==========================================
Files 146 146
Lines 26047 26047
==========================================
- Hits 20141 20120 -21
- Misses 5906 5927 +21
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5763?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5763/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5763/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5763/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5763?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5763?src=pr&el=footer). Last update [8ab565a...0c8a23f](https://codecov.io/gh/huggingface/transformers/pull/5763?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks @nghuyong! The model cards look awesome.
We'll take it from here and add some metadata and then merge the model cards. <|||||>@JetRunner Thanks<|||||>Hi @nghuyong ,
I just tried [your script](https://github.com/nghuyong/ERNIE-Pytorch#reproduce-ernie-papers-case), and found that
```
Some weights of BertForMaskedLM were not initialized from the model checkpoint at nghuyong/ernie-1.0 and are newly initialized: ['cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
This error is very common if you upload a model that was loaded by `BertModel`. Could you re-upload these checkpoints by loading them in `BertForMaskedLM` first? That would be something like
```python
tokenizer = BertTokenizer.from_pretrained('./convert')
model = BertForMaskedLM.from_pretrained('./convert')
# instead of `model = BertModel.from_pretrained('./convert')`
model.save_pretrained('./saved')
```
This will allow users to directly use the checkpoints to do mask filling without fine-tuning them. Also, could you help convert the checkpoints to TensorFlow as well? That would be super easy. We have a tutorial here: https://huggingface.co/transformers/model_sharing.html
Thank you!<|||||>@JetRunner OK,I will update soon<|||||>@JetRunner have updated now~ |
transformers | 5,762 | closed | Feature request of Sparsely Gated Mixture-of-Experts and PowerNorm | # 🚀 Feature request
- The newer variant of MoE for Transformer as in [GShard: Scaling Giant Models with Conditional
Computation and Automatic Sharding](https://arxiv.org/abs/2006.16668) ([Relevant codes](https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/models/research/moe.py))
- PowerNorm of [PowerNorm: Rethinking Batch Normalization in Transformers](https://arxiv.org/abs/2003.07845) ([Relevant codes](https://github.com/sIncerass/powernorm/blob/master/fairseq/modules/norms/mask_powernorm.py))
## Motivation
MoE as in GShard should be a crucial addition to this library, since the performance gain from it is rather significant, as demonstrated in the paper. For example, Transformer+MoE achieved 44.3 avg BLEU on various NMT tasks with 10x less compute than the baseline Transformer, which gets 36.9 avg BLEU. The code for GShard is not available yet, and if it is published, it will most likely be in TensorFlow (or JAX) rather than PyTorch due to their use of TPUs. However, the code for MoE does not seem to be complicated, so it should be easy to implement.
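To make the request concrete, here is a minimal, purely illustrative top-2 gating sketch (no load-balancing loss, no capacity limits, no model parallelism, and in no way GShard's sharded implementation):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTop2MoE(nn.Module):
    """Toy sparsely-gated MoE layer: route each token to its top-2 experts."""

    def __init__(self, d_model=16, d_ff=32, num_experts=4):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
             for _ in range(num_experts)]
        )

    def forward(self, x):                      # x: (num_tokens, d_model)
        scores = self.gate(x)                  # (num_tokens, num_experts)
        top_vals, top_idx = scores.topk(2, dim=-1)
        weights = F.softmax(top_vals, dim=-1)  # renormalise over the chosen two experts
        out = torch.zeros_like(x)
        for slot in range(2):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e   # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

moe = TinyTop2MoE()
print(moe(torch.randn(5, 16)).shape)  # torch.Size([5, 16])
```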
PowerNorm improved WikiText-103 perplexity from 20.9 to 17.9 without additional compute, simply by replacing LayerNorm with PowerNorm. Given that PyTorch code is already available and easy to transplant into this library, I think it's reasonable to suggest it as a feature request.
## Your contribution
As for MoE, I'm currently writing my implementation of non-hierarchical MoE in PyTorch (no model parallelism as in GShard for numerous GPUs), and I'm going to compare its performance against the baseline. It should serve as a reference. My collaborator (@lucidrains) may write the same one with more flexibilities later, but no guarantee. You may want hierarchical MoE and GShard components as well. If you want them, my codes may not be as useful.
As for PowerNorm, I don't think you need much help from me, as all you need is to replace LayerNorm with the above implemenation. I'll try to verify its performance gain, since replication of the results hasn't been done yet. | 07-15-2020 05:02:29 | 07-15-2020 05:02:29 | Is there any update on this? Thanks.<|||||>@lucidrains made this [repo](https://github.com/lucidrains/mixture-of-experts) with me.<|||||>Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,761 | closed | [cleanup] T5 test, warnings | - simplify shape checking, task specific params usage
- add task specific params logger statement to examples
- Let model.config.max_length determine document length for evaluation | 07-15-2020 02:37:22 | 07-15-2020 02:37:22 | |
transformers | 5,760 | closed | Zero shot classification pipeline | This PR adds a pipeline for zero-shot classification using pre-trained NLI models as demonstrated in our [zero-shot topic classification demo](https://huggingface.co/zero-shot/) and [blog post](https://joeddav.github.io/blog/2020/05/29/ZSL.html).
Addresses #5756, where @clmnt requested zero-shot classification in the inference API. However, it should be noted that this model has a max sequence size of `1024`, so long documents would be truncated to this length when classifying.
The pipeline takes in collection of sequences and labels. The labels are converted into a hypothesis, e.g. `vulgar` ➡ `this example is vulgar.` Each sequence and each candidate label must be paired and passed through the model, so the total number of forward passes is `num_labels * num_sequences`.
#### Usage
The pipeline can be initialized using the `pipeline` factory:
```python
from transformers import pipeline
nlp = pipeline("zero-shot-classification")
```
Then any combination of sequences and candidate labels can be passed.
```python
sequence_to_classify = "Who are you voting for in 2020?"
candidate_labels = ["Europe", "public health", "politics"]
nlp(sequence_to_classify, candidate_labels)
>>> {'sequence': 'Who are you voting for in 2020?',
'labels': ['politics', 'Europe', 'public health'],
'scores': [0.9676316380500793, 0.019536184147000313, 0.012832209467887878]}
```
When more than one label is passed, we assume that there is only one true label and that the others are false so that the output probabilities add up to 1. This can be changed by passing `multi_class=True`:
```python
sequence_to_classify = "Who are you voting for in 2020?"
candidate_labels = ["Europe", "public health", "politics", "elections"]
nlp(sequence_to_classify, candidate_labels, multi_class=True)
>>> {'sequence': 'Who are you voting for in 2020?',
'labels': ['politics', 'elections', 'Europe', 'public health'],
'scores': [0.9720695614814758,
0.967610776424408,
0.060417089611291885,
0.03248738870024681]}
```
The single-label case is likely to be more reliable, however, since the guarantee of only one true label provides strong signal to the model which is very useful in the zero-shot setting.
By default, labels are turned into NLI-format hypotheses with the template `This example is {label}.`. You can change this by with the `hypothesis_template` argument, but the default template seems to work well in most settings I've experimented with.
A couple more examples:
```python
reviews = [
"I didn't care for this film, but the ending was o.k.",
"There were some weak moments, but the movie was pretty good overall"
]
nlp(reviews, ["positive", "negative"])
>>> [{'sequence': "I didn't care for this film, but the ending was o.k.",
'labels': ['negative', 'positive'],
'scores': [0.9887893199920654, 0.011210653930902481]},
{'sequence': 'There were some weak moments, but the movie was pretty good overall',
'labels': ['positive', 'negative'],
'scores': [0.6071907877922058, 0.3928091824054718]}]
```
```python
reviews = [
"I didn't care for this film, but the ending was o.k.",
"There were some weak moments, but the movie was pretty good overall"
]
hypothesis_template = 'The sentiment of this review is {}.'
nlp(reviews, ["positive", "negative"], hypothesis_template=hypothesis_template)
>>> [{'sequence': "I didn't care for this film, but the ending was o.k.",
'labels': ['negative', 'positive'],
'scores': [0.9774571061134338, 0.022542938590049744]},
{'sequence': 'There were some weak moments, but the movie was pretty good overall',
'labels': ['positive', 'negative'],
'scores': [0.9787198305130005, 0.021280216053128242]}]
```
```python
nlp("I am a bit discouraged by my grades.", ["sad", "happy", "angry"])
>>> {'sequence': 'I am a bit discouraged by my grades.',
'labels': ['sad', 'angry', 'happy'],
'scores': [0.9630885124206543, 0.0311590563505888, 0.005752446595579386]}
``` | 07-14-2020 23:46:14 | 07-14-2020 23:46:14 | > The UI for inputting a list of potential labels + possibly a hypothesis template is non trivial
I think for purposes of the early inference API, customizing the hypothesis template is not all that important. I added support for providing labels as a list of comma-delimited strings instead of a list.
> one model is currently linked to one (and one only) Pipeline type in the inference API.
Would it be possible to have one model, e.g. `bart-large-mnli`, linked to the zero shot pipeline and have the others remain linked to text classification?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5760?src=pr&el=h1) Report
> Merging [#5760](https://codecov.io/gh/huggingface/transformers/pull/5760?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8ab565a4be5a7fd96b19ef88d474037ef31f27e5&el=desc) will **increase** coverage by `0.72%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5760?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5760 +/- ##
==========================================
+ Coverage 77.32% 78.05% +0.72%
==========================================
Files 146 146
Lines 26047 26089 +42
==========================================
+ Hits 20141 20363 +222
+ Misses 5906 5726 -180
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5760?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `78.47% <100.00%> (+1.50%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.44% <0.00%> (-1.17%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5760?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5760?src=pr&el=footer). Last update [8ab565a...afcf86a](https://codecov.io/gh/huggingface/transformers/pull/5760?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>> Would it be possible to have one model, e.g. `bart-large-mnli`, linked to the zero shot pipeline and have the others remain linked to text classification?
Yes we can override manually<|||||>So I think I found a possible bug in the calculation of the probabilities. Consider this example:
```python
sequences = ["Who are you voting for in 2020?"]
candidate_labels = ["politics", "public health", "economics", "elections"]
classifier(sequences[0], candidate_labels)
```
which gives the output:
```
{'labels': ['politics', 'elections', 'economics', 'public health'],
'scores': [0.5225354433059692,
0.4626988470554352,
0.007836099714040756,
0.006929598283022642],
'sequence': 'Who are you voting for in 2020?'}
```
I was able to replicate the probability calculations by doing the following:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")
sequences = ["Who are you voting for in 2020?"]
candidate_labels = ["politics", "public health", "economics", "elections"]
template = "This example is {}."
x = tokenizer(sequences*len(candidate_labels), [template.format(label) for label in candidate_labels], return_tensors="pt", padding=True)
with torch.no_grad():
y = model(**x)[0]
probs = F.softmax(y[:,-1], dim=0) # Think this is wrong
print(probs)
# tensor([0.5225, 0.0069, 0.0078, 0.4627])
```
So I think the probability calculation is wrong. `y` is of shape `(4, 3)` and the softmax should be over the 3 not the the 4 dimensions, since it is over contradiction, neutral, entailment. So what I am suggesting is that the probability calculation should be the following instead:
```python
probs2 = F.softmax(y, dim=-1)
probs2 = probs2[:,-1] / sum(probs2[:,-1])
print(probs2)
# tensor([0.4977, 0.0016, 0.0049, 0.4958])
```
It's especially important to do it this way since there is no guarantee that the exponential of `y` will sum to one since they are logits.
<|||||>Hey @sachinruk, thanks for the comment. This isn't a bug though, it's just the way we've chosen to use the outputs. When `multi_class=False`, we ignore the contradiction and neutral logits and just do a softmax over the entailment logits. This does guarantee you will sum to 1. Your snippet is an alternative way of interpreting the model outputs to get probabilities that sum to 1, but I don't see a reason to think it is more correct.<|||||>I have included this in my local Jupyter notebook:
!pip install git+https://github.com/huggingface/transformers.git --user
nlp = pipeline("zero-shot-classification") gives the following error.
KeyError: "Unknown task zero-shot-classification, available tasks are ['feature-extraction', 'sentiment-analysis', 'ner', 'question-answering', 'fill-mask', 'summarization', 'translation_en_to_fr', 'translation_en_to_de', 'translation_en_to_ro', 'text-generation']"
What could be the missing steps? Thanks.<|||||>@SophiaLeigh you probably just had another version of transformers installed already so the pip install didn’t do anything. Try just passing`--upgrade` with the pip install.<|||||>EDIT:
**pip install transformers** delivers a version of pipelines.py that is not the one found in the current master
**pip install pip install git+https://github.com/huggingface/transformers** delivers the correct version obviously.
I dont know anything about the inner workings of pip but I have the current version _pip 20.2.2_ installed.
Same issue as @SophiaLeigh with
_transformers 3.0.2_
---------------------------------------------------------------------------
```
KeyError Traceback (most recent call last)
<ipython-input-6-1f0825594ce1> in <module>
----> 1 classifier = pipeline("zero-shot-classification")
~/.conda/envs/zero-shot/lib/python3.7/site-packages/transformers/pipelines.py in pipeline(task, model, config, tokenizer, framework, **kwargs)
1819 # Retrieve the task
1820 if task not in SUPPORTED_TASKS:
-> 1821 raise KeyError("Unknown task {}, available tasks are {}".format(task, list(SUPPORTED_TASKS.keys())))
1822
1823 framework = framework or get_framework(model)
KeyError: "Unknown task zero-shot-classification, available tasks are ['feature-extraction', 'sentiment-analysis', 'ner', 'question-answering', 'fill-mask', 'summarization', 'translation_en_to_fr', 'translation_en_to_de', 'translation_en_to_ro', 'text-generation']"
```
Also huggingface shows either outdated or wrong info here:
https://huggingface.co/transformers/main_classes/pipelines.html?highlight=pipelines
and here:
https://huggingface.co/transformers/_modules/transformers/pipelines.html#pipeline
`Args:
task (:obj:`str`):
The task defining which pipeline will be returned. Currently accepted tasks are:
- "feature-extraction": will return a :class:`~transformers.FeatureExtractionPipeline`
- "sentiment-analysis": will return a :class:`~transformers.TextClassificationPipeline`
- "ner": will return a :class:`~transformers.TokenClassificationPipeline`
- "question-answering": will return a :class:`~transformers.QuestionAnsweringPipeline`
- "fill-mask": will return a :class:`~transformers.FillMaskPipeline`
- "summarization": will return a :class:`~transformers.SummarizationPipeline`
- "translation_xx_to_yy": will return a :class:`~transformers.TranslationPipeline`
- "text-generation": will return a :class:`~transformers.TextGenerationPipeline``
While I can clearly see it's here :)
https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py
I installed via pip install transformers and also via pip install git+https/thisrepository and ~~both versions~~ have the correct pipelines.py file that has all the implementations for zero-shot- Now I am really confused. <|||||>the error persists though I have included --upgrade
`pip install git+https://github.com/huggingface/transformers --upgrade
!pip install git+https://github.com/huggingface/transformers --upgrade
Collecting git+https://github.com/huggingface/transformers
Cloning https://github.com/huggingface/transformers to c:\users\sophia\appdata\local\temp\pip-req-build-6_7vqcz3
Requirement already satisfied, skipping upgrade: numpy in c:\users\sophia\anaconda3\lib\site-packages (from transformers==3.0.2) (1.19.1)
Requirement already satisfied, skipping upgrade: tokenizers==0.8.1.rc2 in c:\users\sophia\anaconda3\lib\site-packages (from transformers==3.0.2) (0.8.1rc2)
Requirement already satisfied, skipping upgrade: packaging in c:\users\sophia\anaconda3\lib\site-packages (from transformers==3.0.2) (20.4)
Requirement already satisfied, skipping upgrade: filelock in c:\users\sophia\anaconda3\lib\site-packages (from transformers==3.0.2) (3.0.12)
Requirement already satisfied, skipping upgrade: requests in c:\users\sophia\anaconda3\lib\site-packages (from transformers==3.0.2) (2.24.0)
Requirement already satisfied, skipping upgrade: tqdm>=4.27 in c:\users\sophia\anaconda3\lib\site-packages (from transformers==3.0.2) (4.48.2)
Requirement already satisfied, skipping upgrade: regex!=2019.12.17 in c:\users\sophia\anaconda3\lib\site-packages (from transformers==3.0.2) (2020.7.14)
Requirement already satisfied, skipping upgrade: sentencepiece!=0.1.92 in c:\users\sophia\anaconda3\lib\site-packages (from transformers==3.0.2) (0.1.91)
Requirement already satisfied, skipping upgrade: sacremoses in c:\users\sophia\anaconda3\lib\site-packages (from transformers==3.0.2) (0.0.43)
Requirement already satisfied, skipping upgrade: six in c:\users\sophia\anaconda3\lib\site-packages (from packaging->transformers==3.0.2) (1.15.0)
Requirement already satisfied, skipping upgrade: pyparsing>=2.0.2 in c:\users\sophia\anaconda3\lib\site-packages (from packaging->transformers==3.0.2) (2.4.7)
Requirement already satisfied, skipping upgrade: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in c:\users\sophia\anaconda3\lib\site-packages (from requests->transformers==3.0.2) (1.25.9)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in c:\users\sophia\anaconda3\lib\site-packages (from requests->transformers==3.0.2) (2020.6.20)
Requirement already satisfied, skipping upgrade: chardet<4,>=3.0.2 in c:\users\sophia\anaconda3\lib\site-packages (from requests->transformers==3.0.2) (3.0.4)
Requirement already satisfied, skipping upgrade: idna<3,>=2.5 in c:\users\sophia\anaconda3\lib\site-packages (from requests->transformers==3.0.2) (2.10)
Requirement already satisfied, skipping upgrade: joblib in c:\users\sophia\anaconda3\lib\site-packages (from sacremoses->transformers==3.0.2) (0.16.0)
Requirement already satisfied, skipping upgrade: click in c:\users\sophia\anaconda3\lib\site-packages (from sacremoses->transformers==3.0.2) (7.1.2)
Building wheels for collected packages: transformers
Building wheel for transformers (setup.py): started
Building wheel for transformers (setup.py): finished with status 'done'
Created wheel for transformers: filename=transformers-3.0.2-py3-none-any.whl size=873793 sha256=c363c4ea6e4ec438b88c6fafb162915d24a2c149dd4210111713e78496940af2
Stored in directory: C:\Users\sophia\AppData\Local\Temp\pip-ephem-wheel-cache-ck6yr140\wheels\35\2e\a7\d819e3310040329f0f47e57c9e3e7a7338aa5e74c49acfe522
Successfully built transformers
Installing collected packages: transformers
Attempting uninstall: transformers
Found existing installation: transformers 3.0.2
Uninstalling transformers-3.0.2:
Successfully uninstalled transformers-3.0.2
Successfully installed transformers-3.0.2`<|||||>@SophiaLei did you restart your kernel after upgrading?
@Tabernakel In the top left corner of the docs, click on `v3.0.2` and switch to `master`.
Feel free to open a topic on https://discuss.huggingface.co with any other questions.<|||||>It works now. Thank you.<|||||>So the online demo has two different models MNLI and MLNI + Yahoo Answers. I know the second one is Bart with a classification head trained on MNLI and then further fine-tuned on Yahoo Answers topic classification. Is there a specific scenario where MNLI + yahoo answers outperform just MNLI in the zero-shot classification task? <|||||>@avinregmi For practical usage the base MNLI model is typically going to do better. The Yahoo Answers model will do better on Yahoo Answers, which seems unhelpful unless you recognize that was only fine-tuned on 5 labels out of the 10 in the corpus. So if you have a big labeled dataset but only covering a subset of the labels you want to be able classify into, fine-tuning an MNLI model as I did with Yahoo Answers will likely boost your performance. Otherwise, stick with the base MNLI model.<|||||>Is there a way to persist Zero shot classification pipeline and use it for deploying in production?
Thanks!<|||||>> Is there a way to persist Zero shot classification pipeline and use it for deploying in production?
@mariyamiteva You have two options:
1. Use the pipeline directly via our inference API and let us (Hugging Face engineers) take care of all the production serving for you. Check out [the documentation for our inference API](https://api-inference.huggingface.co/docs/python/html/index.html) and reach out to [email protected] to discuss further if you're interested. cc @jeffboudier
2. Use [this distillation script](https://github.com/huggingface/transformers/tree/master/examples/research_projects/zero-shot-distillation) I wrote with some unlabeled data to distill the zero-shot pipeline to a more efficient sequence classification model, which can then be shared and deployed like any model.<|||||>Thanks @joeddav - and @mariyamiteva you can do both: 2) Distill the model to efficient sentence classification and then 1) upload it as a private model on Hugging Face to serve it via our hosted inference API.<|||||>Thanks @joeddav and @jeffboudier for the prompt feedback!
I am currently trying to run `distill_classifier.py`. `roberta-large-mnli` has been used as a teacher:
python distill_classifier.py
--data_file ..\model\output\unlabeled.txt
--class_names_file ..\model\output\class_names.txt
--teacher_name_or_path roberta-large-mnli
--multi_label 1
--output_dir ..\model\output\distilled
My `unlabeled.txt` has a single line text, e.g.

My `class_names.txt` takes the following form:

Here is the snippet of the error I get:
File "distill_classifier.py", line 338, in <module>
main()
File "distill_classifier.py", line 328, in main
trainer.train()
File "C:\Users\Maria\anaconda3\lib\site-packages\transformers\trainer.py", line 1222, in train
tr_loss += self.training_step(model, inputs)
File "C:\Users\Maria\anaconda3\lib\site-packages\transformers\trainer.py", line 1617, in training_step
loss = self.compute_loss(model, inputs)
File "distill_classifier.py", line 119, in compute_loss
target_p = inputs["labels"]
File "C:\Users\Maria\anaconda3\lib\site-packages\transformers\tokenization_utils_base.py", line 230, in __getitem__
return self.data[item]
KeyError: 'labels'
Unfortunately, I was not able to deal with the error above. Could you please help? Thanks!<|||||>@mariyamiteva I'm not positive this is the issue, but you might need to end your `unlabeled.txt` with a newline. Also, you don't need the `'` single quotes around your text or class names.<|||||>Yes, it was not the issue. I performed the proposed changes and upgraded the version to 4.7.0.dev0.
The error that occurred is not the same, but it is similar:
Traceback (most recent call last):
File "distill_classifier.py", line 338, in <module>
main()
File "distill_classifier.py", line 328, in main
trainer.train()
File "C:\Users\Maria\anaconda3\lib\site-packages\transformers\trainer.py", line 1261, in train
tr_loss += self.training_step(model, inputs)
File "C:\Users\Maria\anaconda3\lib\site-packages\transformers\trainer.py", line 1734, in training_step
loss = self.compute_loss(model, inputs)
File "distill_classifier.py", line 119, in compute_loss
target_p = inputs["labels"]
KeyError: 'labels'<|||||>@mariyamiteva I'm not sure this is the issue, but do you have tokenizers installed? Try `pip install tokenizers` and let me know what that gives you.<|||||>@joeddav Could the following transformer model used for zero-shot classification be optimized in terms of model inference through ONNX - ‘joeddav/xlm-roberta-large-xnli’?
Thanks!
M.<|||||>@joeddav What if anyone wants to fine-tune the zero-shot classifier for a specific domain dataset. Is there any code or GitHub repo that may help us to train the zero-shot classifier? |
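For reference on the persistence question above, a minimal sketch (assuming the stock `facebook/bart-large-mnli` checkpoint and illustrative local paths) might look like this:
```python
from transformers import pipeline

# Build the zero-shot pipeline once and save its components to disk.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
classifier.model.save_pretrained("./zero_shot_model")
classifier.tokenizer.save_pretrained("./zero_shot_model")

# Later, e.g. inside a production service, reload from the saved directory.
classifier = pipeline(
    "zero-shot-classification",
    model="./zero_shot_model",
    tokenizer="./zero_shot_model",
)
print(classifier("I love this phone", candidate_labels=["electronics", "cooking", "sports"]))
```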
transformers | 5,759 | closed | T5 Model Cards | All are identical.
Happy to update w more info!
I didn't have task tags because they seem to already work. | 07-14-2020 22:53:22 | 07-14-2020 22:53:22 | The file paths aren't correct. (we have a "legacy" format for non-namespaced models – model pages link to the correct paths)
In fact if you look at the model pages, e.g. https://huggingface.co/t5-base – they already have (non-textual) model cards. Can you update those? Thanks=)<|||||>Up, @sshleifer <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5759?src=pr&el=h1) Report
> Merging [#5759](https://codecov.io/gh/huggingface/transformers/pull/5759?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ae67b2439fb15954bfd8f0fdf521cf1a650bafb9&el=desc) will **increase** coverage by `0.17%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5759?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5759 +/- ##
==========================================
+ Coverage 78.51% 78.69% +0.17%
==========================================
Files 146 146
Lines 26214 26214
==========================================
+ Hits 20581 20628 +47
+ Misses 5633 5586 -47
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5759?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (ø)` | |
| [src/transformers/modeling\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kcHIucHk=) | `97.83% <0.00%> (ø)` | |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.88% <0.00%> (ø)` | |
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `82.04% <0.00%> (ø)` | |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (ø)` | |
| [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `81.55% <0.00%> (ø)` | |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `89.21% <0.00%> (ø)` | |
| [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <0.00%> (ø)` | |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <0.00%> (ø)` | |
| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5759?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5759?src=pr&el=footer). Last update [ae67b24...583295f](https://codecov.io/gh/huggingface/transformers/pull/5759?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>CircleCI jobs say they succeeded if you click through. Unclear why they are yellow on this page. |
transformers | 5,758 | closed | metadata | 07-14-2020 22:11:42 | 07-14-2020 22:11:42 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5758?src=pr&el=h1) Report
> Merging [#5758](https://codecov.io/gh/huggingface/transformers/pull/5758?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8ab565a4be5a7fd96b19ef88d474037ef31f27e5&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5758?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5758 +/- ##
=======================================
Coverage 77.32% 77.32%
=======================================
Files 146 146
Lines 26047 26047
=======================================
Hits 20141 20141
Misses 5906 5906
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5758?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5758?src=pr&el=footer). Last update [8ab565a...a67f412](https://codecov.io/gh/huggingface/transformers/pull/5758?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,757 | closed | BART/T5 eli5 in model hub | # 🚀 Feature request
Would love to have https://huggingface.co/qa in the model hub mostly for the inference widget/api (the demo is down quite regularly)
## Motivation
A lot of companies might want to test it/use it!
## Your contribution
🔥 emojis on slack | 07-14-2020 21:58:45 | 07-14-2020 21:58:45 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,756 | closed | BART MNLI + yahoo answer in the model hub for inference API | # 🚀 Feature request
would love to be able to use https://huggingface.co/zero-shot/ with the inference API
## Motivation
it would help many companies run zero-shot classification of long documents
## Your contribution
I can provide 🔥 emojis on slack
| 07-14-2020 21:13:02 | 07-14-2020 21:13:02 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,755 | closed | Problems with generating text using mbart-large-cc25 | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): MBART
Language I am using the model on (English, Chinese ...): English, Romanian
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I'm examining 'facebook/mbart-large-en-ro' and 'facebook/mbart-large-cc25' checkpoints of MBART.
Here is my first script translating an English sentence to Romanian:
```
from transformers import MBartTokenizer, BartForConditionalGeneration
tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-en-ro')
model = BartForConditionalGeneration.from_pretrained('facebook/mbart-large-en-ro')
src_sent = "UN Chief Says There Is No Military Solution in Syria"
src_ids = tokenizer.prepare_translation_batch([src_sent])
output_ids = model.generate(src_ids["input_ids"], decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"])
output = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
print('src_sent: ', src_sent)
print('src_ids: ', src_ids)
print('output_ids: ', output_ids)
print('output: ', output)
```
stdout:
```
src_sent: UN Chief Says There Is No Military Solution in Syria
src_ids: {'input_ids': tensor([[ 8274, 127873, 25916, 7, 8622, 2071, 438, 67485, 53,
187895, 23, 51712, 2, 250004]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
output_ids: tensor([[250020, 0, 47711, 7844, 127666, 8, 18347, 18147, 1362,
315, 42071, 36, 31563, 8454, 33796, 451, 346, 125577]])
output: Şeful ONU declară că nu există o soluţie militară în Siria
```
As seen in output_ids the model always generates 0 (corresponding to bos_token) at the first decoding step. However, this seems to not be a problem with this checkpoint as the output is still the correct translation.
Now I run the same script but using pretrained "facebook/mbart-large-cc25" and trying to denoise an English input. Since the input does not have mask tokens, the output should be identical to the input, given the pretraining objective of MBART.
However, the output always misses the first token from the input. I have observed this with different examples (even when you have masked tokens in the input).
```
from transformers import MBartTokenizer, BartForConditionalGeneration
tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-cc25')
model = BartForConditionalGeneration.from_pretrained('facebook/mbart-large-cc25')
src_sent = "UN Chief Says There Is No Military Solution in Syria"
src_ids = tokenizer.prepare_translation_batch([src_sent])
output_ids = model.generate(src_ids["input_ids"], decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"])
output = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
print('src_sent: ', src_sent)
print('src_ids: ', src_ids)
print('output_ids: ', output_ids)
print('output: ', output)
```
the stdout:
```
src_sent: UN Chief Says There Is No Military Solution in Syria
src_ids: {'input_ids': tensor([[ 8274, 127873, 25916, 7, 8622, 2071, 438, 67485, 53,
187895, 23, 51712, 2, 250004]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
output_ids: tensor([[250004, 0, 127873, 25916, 7, 8622, 2071, 438, 67485,
53, 187895, 23, 51712]])
output: Chief Says There Is No Military Solution in Syria
```
I have tried various approaches but haven't found any clear solutions to this. Appreciate any help on this.
## Environment info
- `transformers` version: 3.0.2
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4.0 (False)
| 07-14-2020 20:52:28 | 07-14-2020 20:52:28 | Thanks for flagging. We are still trying to get to the bottom of which special tokens to use for mbart-large-cc25. See https://github.com/pytorch/fairseq/issues/2258 .<|||||>This might not be the best solution but after experimenting with the tokenizer special tokens a bit, it seems like the model is insensitive to the first input_id and lang_code used on the encoder side.
So after these modifications:
```
def set_src_lang_special_tokens(self, src_lang) -> None:
"""Reset the special tokens to the source lang setting. No prefix and suffix=[eos, cur_lang_code]."""
self.cur_lang_code = self.lang_code_to_id[src_lang]
self.prefix_tokens = [self.bos_token_id]
self.suffix_tokens = [self.eos_token_id]
def set_tgt_lang_special_tokens(self, lang: str) -> None:
"""Reset the special tokens to the target language setting. Prefix [tgt_lang_code], suffix =[eos]."""
self.cur_lang_code = self.lang_code_to_id[lang]
self.prefix_tokens = [self.cur_lang_code]
self.suffix_tokens = [self.eos_token_id]
```
in tokenization_bart.py, the model seems to be doing the right thing and generating correct English output:
```
src_sent: UN Chief Says There Is No Military Solution in Syria
src_ids: {'input_ids': tensor([[ 0, 8274, 127873, 25916, 7, 8622, 2071, 438, 67485,
53, 187895, 23, 51712, 2]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
output_ids: tensor([[250004, 0, 8274, 127873, 25916, 7, 8622, 2071, 438,
67485, 53, 187895, 23, 51712]])
output: UN Chief Says There Is No Military Solution in Syria
```
Although this is not how things are described in MBART paper so the core issue remains.
Also, this code segment in modeling_bart.py :
```
def adjust_logits_during_generation(self, logits, cur_len, max_length, **kwargs):
if cur_len == 1:
self._force_token_ids_generation(logits, self.config.bos_token_id)
```
is the culprit for generating 0 (corresponding to bos_token) at every first decoding step.
It might need to be changed for MBART model.
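For illustration only, a minimal sketch of that kind of experiment (hypothetical, not an official fix) would be to override the hook quoted above so that nothing is forced at the first step:
```python
from transformers import BartForConditionalGeneration

class BartNoForcedBos(BartForConditionalGeneration):
    # Same signature as the snippet quoted above from modeling_bart.py.
    def adjust_logits_during_generation(self, logits, cur_len, max_length, **kwargs):
        # Skip forcing config.bos_token_id at cur_len == 1; return the logits unchanged.
        return logits

model = BartNoForcedBos.from_pretrained("facebook/mbart-large-cc25")
```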
<|||||>I find that the input does need to start with `<bos>`, and the decoder should be seeded with `<lang_code> <bos>`. With this setup, I am able to recover the input sequence during decoding. Like @Mehrad0711 I find that the input `lang_code` does not make a significant difference. <|||||>@tomhosking I replicated what you wrote. Definitely needs `<s>`(bos) at the beginning of the input string to fix the off by one error. You can see the tests in this PR #6524 .
I'm still having trouble squaring this/trying to find a unified fix to accommodate the behavior that mbart-large-en-ro seems to want, as shown in #6156.
Maybe the simplest change is just to add `<s>` to the start of the encoder side string?<|||||>Interestingly, `prepend_bos` is set to false by default in the fairseq mbart finetuning docs.
I set a breakpoint during finetuning and there is no BOS to be found: here is [how batches look](https://gist.github.com/sshleifer/cba08bc2109361a74ac3760a7e30e4f4)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hi,
Reviving this issue as it still persists in the recent version of transformers. The solution I proposed in the Aug 2 comment seems to work after a modification of the source prefix token, borrowed from `tokenization_mbart50.py`.
Proposed solution:
```
def set_src_lang_special_tokens(self, src_lang) -> None:
"""Reset the special tokens to the source lang setting. Prefix=[src_lang_code] and suffix=[eos]."""
self.cur_lang_code = self.lang_code_to_id[src_lang]
self.prefix_tokens = [self.cur_lang_code]
self.suffix_tokens = [self.eos_token_id]
def set_tgt_lang_special_tokens(self, lang: str) -> None:
"""Reset the special tokens to the target language setting. Prefix=[tgt_lang_code] and suffix=[eos]."""
self.cur_lang_code = self.lang_code_to_id[lang]
self.prefix_tokens = [self.cur_lang_code]
self.suffix_tokens = [self.eos_token_id]
```
This change will fix `mbart-large-cc25`'s output text while leaving `mbart-large-en-ro`'s untouched.
Code to reproduce:
```
from transformers import MBartTokenizer, MBartForConditionalGeneration
tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-cc25')
model = MBartForConditionalGeneration.from_pretrained('facebook/mbart-large-cc25')
src_sent = "UN Chief Says There Is No Military Solution in Syria"
batch = tokenizer.prepare_seq2seq_batch(src_texts=[src_sent], src_lang="en_XX", return_tensors="pt")
output_ids = model.generate(**batch, decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"])
output = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
print('src_sent: ', src_sent)
print('src_ids: ', batch["input_ids"])
print('output_ids: ', output_ids)
print('output: ', output)
```
stdout (before change):
```
src_sent: UN Chief Says There Is No Military Solution in Syria
src_ids: tensor([[ 8274, 127873, 25916, 7, 8622, 2071, 438, 67485, 53,
187895, 23, 51712, 2, 250004]])
output_ids: tensor([[250004, 0, 127873, 25916, 7, 8622, 2071, 438, 67485,
53, 187895, 23, 51712, 2]])
output: Chief Says There Is No Military Solution in Syria
```
stdout (after change):
```
src_sent: UN Chief Says There Is No Military Solution in Syria
src_ids: tensor([[250004, 8274, 127873, 25916, 7, 8622, 2071, 438, 67485,
53, 187895, 23, 51712, 2]])
output_ids: tensor([[250004, 0, 8274, 127873, 25916, 7, 8622, 2071, 438,
67485, 53, 187895, 23, 51712, 2]])
output: UN Chief Says There Is No Military Solution in Syria
```
Potential reviewers: @patrickvonplaten, @patil-suraj, @sgugger
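For comparison, the MBart-50 tokenizers this proposal borrows from already emit the `[src_lang_code] ... [eos]` source format out of the box; a small sketch (assuming a transformers version that ships MBart-50):
```python
from transformers import MBart50TokenizerFast

tok = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="en_XX")
print(tok("UN Chief Says There Is No Military Solution in Syria").input_ids)
# The sequence starts with the en_XX language code and ends with </s>.
```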
<|||||>Hi @sgugger, @patrickvonplaten, @patil-suraj,
it would be great if you could provide your feedback on this. I would be happy to provide more context if needed.<|||||>Hi @Mehrad0711
Thank you for reporting this. I'll go through the original model code in fairseq to see how they are handling the prefix tokens and get back here.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @patil-suraj,
Was wondering if you had the chance to take a look at this. Thanks.<|||||>Hi @Mehrad0711
You are right, all mBART models actually use the language code as the prefix token and `<eos>` as the suffix token.
But unfortunately, we can't really change it now, because this will be backward incompatible with the other models trained using the existing format.
Also, this doesn't really make that much difference if you want to fine-tune the model. As long as a consistent format is used for fine-tuning and then for inference, it should work. However, it would change the output for the pre-trained models (as you reported). But as `mbart-large-cc25` is just a pre-trained model and should be fine-tuned to use it for the downstream tasks, this doesn't seem like a big issue.
<|||||>Hi @patil-suraj!
Thanks for your reply. I understand the concern regarding backward incompatibility of this change.
I was using `mbart-large-cc25` without fine-tuning for text denoising; that's how the problem popped up. Given newer mbart models are now available on huggingface, I'll switch to them. |
transformers | 5,754 | closed | T5 fine-tuned model doesn't appear in the model hub | Hi guys,
I have fine-tuned T5-base on wikiSQL dataset and uploaded it to HF model hub. The problem is that the model doesn't appear there. AFAIK it is because the config file is not right. Should I change anything?
Thanks. | 07-14-2020 19:54:13 | 07-14-2020 19:54:13 | I can see it now: https://huggingface.co/mrm8488/t5-base-finetuned-wikiSQL
Let us know if something looks wrong (otherwise, feel free to close)<|||||>Everything looks right. Maybe I searched too soon and it was not indexed yet. |
transformers | 5,753 | closed | Can't load `facebook/mbart-large-cc25` tokenizer | # 🐛 Bug
Trying to use the [example](https://huggingface.co/facebook/mbart-large-cc25) fails to load the tokenizer
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
```
## Information
Model I am using `facebook/mbart-large-cc25`
The problem arises when using:
* [ ] the official example scripts: (give details below)
[Example](https://huggingface.co/facebook/mbart-large-cc25) fails to load the tokenizer
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
```
## To reproduce
Steps to reproduce the behavior:
`AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")`
we get the following error indicating a missing model.
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-48-29412cfd6509> in <module>
----> 1 tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
/project/WMT20/opt/miniconda3/envs/HuggingFace-3.0.2_cu101/lib/python3.7/site-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
215 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
216 else:
--> 217 return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
218
219 raise ValueError(
/project/WMT20/opt/miniconda3/envs/HuggingFace-3.0.2_cu101/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, *inputs, **kwargs)
1138
1139 """
-> 1140 return cls._from_pretrained(*inputs, **kwargs)
1141
1142 @classmethod
/project/WMT20/opt/miniconda3/envs/HuggingFace-3.0.2_cu101/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1244 ", ".join(s3_models),
1245 pretrained_model_name_or_path,
-> 1246 list(cls.vocab_files_names.values()),
1247 )
1248 )
OSError: Model name 'facebook/mbart-large-cc25' was not found in tokenizers model name list (facebook/mbart-large-en-ro, sshleifer/mbart-large-cc25). We assumed 'facebook/mbart-large-cc25' was a path, a model identifier, or url to a directory containing vocabulary files named ['sentencepiece.bpe.model'] but couldn't find such vocabulary files at this path or url.
```
## Expected behavior
Loading a tokenizer for `facebook/mbart-large-cc25` without failure.
## Environment info
```
transformers-cli env
WARNING:tensorflow:From /project/WMT20/opt/miniconda3/envs/HuggingFace-3.0.2_cu101/lib/python3.7/site-packages/transformers/commands/env.py:36: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2020-07-14 15:14:13.475823: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
2020-07-14 15:14:13.495994: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2100000000 Hz
2020-07-14 15:14:13.507620: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55555a4d87f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-07-14 15:14:13.507676: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 3.0.2
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-centos-7.8.2003-Core
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
``` | 07-14-2020 19:15:11 | 07-14-2020 19:15:11 | Should we be using `sshleifer/mbart-large-cc25` as it has a name very similar to `facebook/mbart-large-cc25`? @sshleifer is the name of a HuggingFace contributor and there could be a rename mismatch.
After downloading both `sshleifer/mbart-large-cc25` and `facebook/mbart-large-cc25`
```
from transformers import AutoTokenizer
model_a = AutoModelWithLMHead.from_pretrained("sshleifer/mbart-large-cc25")
model_b = AutoModelWithLMHead.from_pretrained("facebook/mbart-large-cc25")
```
If we poke at the cache we find that both models have the same sha1sum
```
sha1sum 2593e721f6f9000d1c1a19f144236237dbad3b77d2380982baf030667eaf6f7c.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14{,.json} 31f54e8b3a7628593ed122d67a426dbf6ba0687b3e406d753b61fc3e2d9e5014.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14{,.json}
040e8d684abb1ca97e9aabd8f5a61e1a42c5653b 2593e721f6f9000d1c1a19f144236237dbad3b77d2380982baf030667eaf6f7c.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14
1aded004ab07f675042c03fe556744e47811831b 2593e721f6f9000d1c1a19f144236237dbad3b77d2380982baf030667eaf6f7c.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14.json
040e8d684abb1ca97e9aabd8f5a61e1a42c5653b 31f54e8b3a7628593ed122d67a426dbf6ba0687b3e406d753b61fc3e2d9e5014.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14
b3590a41726b003e3d15d997d52b1929f7608e02 31f54e8b3a7628593ed122d67a426dbf6ba0687b3e406d753b61fc3e2d9e5014.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14.json
```
We can see that the model files are identical but not the JSON files, as the JSON contains two different names: one is `facebook` and the other is `sshleifer`.
```
head \
2593e721f6f9000d1c1a19f144236237dbad3b77d2380982baf030667eaf6f7c.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14.json \
31f54e8b3a7628593ed122d67a426dbf6ba0687b3e406d753b61fc3e2d9e5014.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14.json
==> 2593e721f6f9000d1c1a19f144236237dbad3b77d2380982baf030667eaf6f7c.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14.json <==
{"url": "https://cdn.huggingface.co/facebook/mbart-large-cc25/pytorch_model.bin", "etag": "\"77a6a0d3b317fe29dc30de34840c519c-292\""}
==> 31f54e8b3a7628593ed122d67a426dbf6ba0687b3e406d753b61fc3e2d9e5014.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14.json <==
{"url": "https://cdn.huggingface.co/sshleifer/mbart-large-cc25/pytorch_model.bin", "etag": "\"77a6a0d3b317fe29dc30de34840c519c-292\""}
```<|||||>Yes they're identical. Use facebook/ . It seems you fixed your issue?<|||||>I see the same error: Model name 'facebook/mbart-large-cc25' was not found in tokenizers model name list (facebook/mbart-large-en-ro, sshleifer/mbart-large-cc25)
`tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")`
It works with mbart-large-en-ro<|||||>```python
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")
```
works for me on master<|||||>>
>
> ```python
> tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")
>
> ```
> works for me on master
> ```
Thanks. So the code is not released yet. I was using transformers 3.0.2, but installing from source works. |
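A short sketch of that workaround (assuming a source install of transformers, i.e. newer than the 3.0.2 release discussed here):
```python
# pip install git+https://github.com/huggingface/transformers
from transformers import MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")
print(tokenizer.tokenize("UN Chief Says There Is No Military Solution in Syria"))
```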
transformers | 5,752 | closed | Update README.md | 07-14-2020 19:02:12 | 07-14-2020 19:02:12 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5752?src=pr&el=h1) Report
> Merging [#5752](https://codecov.io/gh/huggingface/transformers/pull/5752?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5d178954c909141363df4513eb5f0cc80e5e829c&el=desc) will **increase** coverage by `0.75%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5752?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5752 +/- ##
==========================================
+ Coverage 77.24% 78.00% +0.75%
==========================================
Files 146 146
Lines 26047 26047
==========================================
+ Hits 20121 20318 +197
+ Misses 5926 5729 -197
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5752?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `63.55% <0.00%> (-31.78%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.94% <0.00%> (-6.52%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `83.98% <0.00%> (-4.91%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5752?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5752?src=pr&el=footer). Last update [baf93b0...9e4df6b](https://codecov.io/gh/huggingface/transformers/pull/5752?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,751 | closed | tiny ppl typo fix | 07-14-2020 16:34:43 | 07-14-2020 16:34:43 | ||
transformers | 5,750 | closed | fail to run trainer.train() with huggingface transformer | # ❓ Questions & Help
## Details
I am trying to set up a TensorFlow fine-tune framework for a question-answering project. Using hugging-face/transformer as the prototype, but cannot run through the trainer.
The experiment is conducted at Databricks, the pre-trained model loaded is base-bert, train and dev sets are downloaded from hugging-face examples SQUAD 2.0 https://github.com/huggingface/transformers/tree/master/examples/question-answering
The error log complains about the unexpected keyword argument 'is_impossible', which is a SQUAD 2 data format feature.
Here is the link to my question on Stack Overflow:
**https://stackoverflow.com/questions/62879960/fail-to-run-trainer-train-with-huggingface-transformer**: | 07-14-2020 15:56:47 | 07-14-2020 15:56:47 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,749 | closed | Reintroduce clean_text on BertTokenizer call which was removed by mistake in #4723 | Signed-off-by: Morgan Funtowicz <[email protected]>
closes #7665 | 07-14-2020 14:22:48 | 07-14-2020 14:22:48 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5749?src=pr&el=h1) Report
> Merging [#5749](https://codecov.io/gh/huggingface/transformers/pull/5749?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5668fdb09e1bcd888930c1ff242bf200649da39c?el=desc) will **increase** coverage by `2.02%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5749?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5749 +/- ##
==========================================
+ Coverage 75.33% 77.36% +2.02%
==========================================
Files 195 146 -49
Lines 39826 26048 -13778
==========================================
- Hits 30003 20151 -9852
+ Misses 9823 5897 -3926
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5749?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `95.00% <100.00%> (+2.96%)` | :arrow_up: |
| [src/transformers/commands/env.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9lbnYucHk=) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
| [src/transformers/commands/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9fX2luaXRfXy5weQ==) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
| [src/transformers/commands/transformers\_cli.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy90cmFuc2Zvcm1lcnNfY2xpLnB5) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.56%)` | :arrow_down: |
| [src/transformers/commands/download.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9kb3dubG9hZC5weQ==) | `0.00% <0.00%> (-65.00%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `25.82% <0.00%> (-63.85%)` | :arrow_down: |
| [src/transformers/commands/serving.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9zZXJ2aW5nLnB5) | `0.00% <0.00%> (-55.89%)` | :arrow_down: |
| [src/transformers/commands/run.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9ydW4ucHk=) | `0.00% <0.00%> (-53.34%)` | :arrow_down: |
| [src/transformers/tokenization\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY2FtZW1iZXJ0LnB5) | `37.03% <0.00%> (-53.13%)` | :arrow_down: |
| ... and [185 more](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5749?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5749?src=pr&el=footer). Last update [5668fdb...25ff60c](https://codecov.io/gh/huggingface/transformers/pull/5749?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Do we have unit tests for that `clean_text` functionality 🤔<|||||>@stefan-it I'll add one, had to switch to something else in-between <|||||>Do you mind making the code quality and the tests pass before we merge?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,748 | closed | Long BERT TypeError: forward() takes from 2 to 4 positional arguments but 7 were given | I'm having an issue on the pretraining of a BERT-like model. I used the following function twice: the first time with [bert-base-multilingual-cased](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) and the second time with a similar version, but more efficient for **long documents**, exploiting the class [LongformerSelfAttention](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_longformer.py#L100) to make the normal BERT into a **LongBERT**.
```python
def pretrain_and_evaluate(args, model, tokenizer, eval_only, model_path):
val_dataset = TextDataset(tokenizer=tokenizer,
file_path=args.val_datapath,
block_size=tokenizer.max_len)
if eval_only:
train_dataset = val_dataset
else:
logger.info(f'Loading and tokenizing training data is usually slow: {args.train_datapath}')
train_dataset = TextDataset(tokenizer=tokenizer,
file_path=args.train_datapath,
block_size=tokenizer.max_len)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
trainer = Trainer(model=model, args=args, data_collator=data_collator,
train_dataset=train_dataset, eval_dataset=val_dataset, prediction_loss_only=True,)
eval_loss = trainer.evaluate()
eval_loss = eval_loss['eval_loss']
logger.info(f'Initial eval bpc: {eval_loss/math.log(2)}')
if not eval_only:
trainer.train(model_path=model_path)
trainer.save_model()
eval_loss = trainer.evaluate()
eval_loss = eval_loss['eval_loss']
logger.info(f'Eval bpc after pretraining: {eval_loss/math.log(2)}')
```
With the [bert-base-multilingual-cased](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) it works well: `model` and `tokenizer` passed as arguments to the function are respectively:
```python
model = BertForMaskedLM.from_pretrained('bert-base-multilingual-cased')
tokenizer = BertTokenizerFast.from_pretrained('bert-base-multilingual-cased')
```
But with the modified version of BERT this error occurs:
```
Traceback (most recent call last):
File "convert_bert_to_long_bert.py", line 172, in <module>
pretrain_and_evaluate(training_args, model, tokenizer, eval_only=False, model_path=training_args.output_dir)
File "convert_bert_to_long_bert.py", line 86, in pretrain_and_evaluate
eval_loss = trainer.evaluate()
File "/Users/user/Library/Python/3.7/lib/python/site-packages/transformers/trainer.py", line 748, in evaluate
output = self._prediction_loop(eval_dataloader, description="Evaluation")
File "/Users/user/Library/Python/3.7/lib/python/site-packages/transformers/trainer.py", line 829, in _prediction_loop
outputs = model(**inputs)
File "/Users/user/Library/Python/3.7/lib/python/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Users/user/Library/Python/3.7/lib/python/site-packages/transformers/modeling_bert.py", line 1098, in forward
return_tuple=return_tuple,
File "/Users/user/Library/Python/3.7/lib/python/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Users/user/Library/Python/3.7/lib/python/site-packages/transformers/modeling_bert.py", line 799, in forward
return_tuple=return_tuple,
File "/Users/user/Library/Python/3.7/lib/python/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Users/user/Library/Python/3.7/lib/python/site-packages/transformers/modeling_bert.py", line 460, in forward
output_attentions,
File "/Users/user/Library/Python/3.7/lib/python/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Users/user/Library/Python/3.7/lib/python/site-packages/transformers/modeling_bert.py", line 391, in forward
hidden_states, attention_mask, head_mask, output_attentions=output_attentions,
File "/Users/user/Library/Python/3.7/lib/python/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Users/user/Library/Python/3.7/lib/python/site-packages/transformers/modeling_bert.py", line 335, in forward
hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, output_attentions,
File "/Users/user/Library/Python/3.7/lib/python/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() takes from 2 to 4 positional arguments but 7 were given
```
I made a few modifications to a working script to obtain a Long version of RoBERTa given the RoBERTa base model. What could be the mistake? | 07-14-2020 14:15:00 | 07-14-2020 14:15:00 | **Update:** I have downgraded transformers to the version `transformers==2.11.0` and it seems to work, even if for now I have only used small test datasets. I will update this issue if someone is interested<|||||>The code in Longformer has changed quite a bit. I think a simple remedy to make your code work with the current version of `Longformer` is to add `**kwargs` to every forward function in `modeling_longformer.py` that you copied into your notebook. This way it can handle an arbitrary number of input arguments and the above error should not occur.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>> The code in Longformer has changed quite a bit. I think a simple remedy to make your code work with the current version of `Longformer` is to add `**kwargs` to every forward function in `modeling_longformer.py` that you copied into your notebook. This way it can handle an arbitrary number of input arguments and the above error should not occur.
EDIT: To begin pre-training, make sure you LOAD the saved model exactly the way the notebook does BEFORE pre-training! Don't try and use the model straightaway!<|||||>I have the same issue and the problem remains. It looks like the problem comes from a newer transformers version. |
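For illustration, the `**kwargs` suggestion above might look roughly like this; the wrapper class name and wiring are hypothetical, and the real edits go into the copied `modeling_longformer.py`:
```python
import torch
from transformers.modeling_longformer import LongformerSelfAttention

class LongBertSelfAttention(torch.nn.Module):
    """Hypothetical wrapper used inside a converted LongBERT layer."""

    def __init__(self, config, layer_id):
        super().__init__()
        self.attention = LongformerSelfAttention(config, layer_id)

    def forward(self, hidden_states, attention_mask=None, head_mask=None, **kwargs):
        # **kwargs absorbs the extra arguments newer BertLayer versions pass along
        # (encoder_hidden_states, encoder_attention_mask, output_attentions, ...).
        return self.attention(hidden_states, attention_mask, head_mask)
```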
transformers | 5,747 | closed | Unrecognized configuration class <class 'transformers.configuration_electra.ElectraConfig'> | # 🐛 Bug
## Information
Model I am using : ahotrod/electra_large_discriminator_squad2_512
This problem happened again when I used ELECTRA in the question-answering pipeline. My Transformers version is 2.11.0.
Code:
> from transformers import pipeline, AutoTokenizer, AutoModelForQuestionAnswering
>
> tokenizer = AutoTokenizer.from_pretrained("ahotrod/electra_large_discriminator_squad2_512")
> model = AutoModelForQuestionAnswering.from_pretrained("ahotrod/electra_large_discriminator_squad2_512")
Error:
> Traceback (most recent call last):
> File "albert_qa.py", line 5, in <module>
> model = AutoModelForQuestionAnswering.from_pretrained("ahotrod/electra_large_discriminator_squad2_512")
> File "/home/aim/ANDY-project/sentence-transformers-env/lib/python3.6/site-packages/transformers/modeling_auto.py", line 1004, in from_pretrained
> ", ".join(c.__name__ for c in MODEL_FOR_QUESTION_ANSWERING_MAPPING.keys()),
> ValueError: Unrecognized configuration class <class 'transformers.configuration_electra.ElectraConfig'> for this kind of AutoModel: AutoModelForQuestionAnswering.
- `transformers` version: 2.11.0
- Platform: Ubuntu
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1
 | 07-14-2020 14:11:40 | 07-14-2020 14:11:40 | Hi! Even though the ELECTRA model was added in version v2.8.0, the `ElectraForQuestionAnswering` head was only added in v3.0.0. You would need to upgrade your `transformers` version to at least v3.0.0 for your code to work!<|||||>I am trying to use pretrained Distilbert in EncoderDecoderModel but I am getting this error. How can I leverage pretrained distilbert in EncoderDecoderModel?
`ValueError: Unrecognized configuration class <class 'transformers.configuration_distilbert.DistilBertConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of CamembertConfig, XLMRobertaConfig, RobertaConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, XLMConfig, CTRLConfig, ReformerConfig, BertGenerationConfig.` |
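A quick sketch of the upgrade fix mentioned earlier in this thread (assuming `transformers>=3.0.0`, where the `ElectraForQuestionAnswering` head exists):
```python
# pip install --upgrade "transformers>=3.0.0"
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

model_name = "ahotrod/electra_large_discriminator_squad2_512"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(qa(question="Which head was added in v3.0.0?",
         context="ElectraForQuestionAnswering was added in transformers v3.0.0."))
```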
transformers | 5,746 | closed | Where can I find raw code for char_to_token function. | I understand the function has been defined in tokenization_utils_base.py but in the return statement there is a recursive call to the same function. I am unable to understand where does the actual offset calculation takes place.
Tapan
| 07-14-2020 13:56:52 | 07-14-2020 13:56:52 | I figured it out. Thanks. |
transformers | 5,745 | closed | google/reformer-enwik8 tokenizer was not found in tokenizers model name list | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Reformer
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Enter https://huggingface.co/google/reformer-enwik8
2. Look at "Hosted inference API"
The model's tokenizer cannot be found; I'm getting the same error in scripts as the one displayed on your webpage:
```⚠️ This model could not be loaded by the inference API. ⚠️
Error loading tokenizer Model name 'google/reformer-enwik8' was not found in tokenizers model name list (google/reformer-crime-and-punishment). We assumed 'google/reformer-enwik8' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url. OSError("Model name 'google/reformer-enwik8' was not found in tokenizers model name list (google/reformer-crime-and-punishment). We assumed 'google/reformer-enwik8' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.")
```
## Expected behavior
Tokenizer loaded without issues
## Environment info
- `transformers` version: latest
- Platform: your own
- Python version: ?
- PyTorch version (GPU?): ?
- Tensorflow version (GPU?): ?
- Using GPU in script?: ?
- Using distributed or parallel set-up in script?: ?
 | 07-14-2020 13:52:38 | 07-14-2020 13:52:38 | That's because only the crime and punishment model has an uploaded tokenizer.
<|||||>`google/reformer-enwik8` is the only model that is a char language model and does not need a tokenizer. If you take a look here: https://huggingface.co/google/reformer-enwik8#reformer-language-model-on-character-level-and-trained-on-enwik8 , you can see that the model does not need a tokenizer but a simple python encode and decode function.
@julien-c @mfuntowicz - how do you think we can include char lms to `pipelines`? Should we maybe introduce a `is_char_lm` config variable? Or just wrap a dummy tokenizer around the python encode and decode functions?<|||||>Add a `tokenizer_class` optional attribute to config.json which overrides the type of Tokenizer that's instantiated when calling `.from_pretrained()`?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
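For reference, a condensed sketch of those encode/decode helpers, adapted from the `google/reformer-enwik8` model card linked above (character ids are byte values shifted by 2 so that 0 can remain the padding id):
```python
import torch

def encode(list_of_strings, pad_token_id=0):
    max_length = max(len(s) for s in list_of_strings)
    attention_masks = torch.zeros((len(list_of_strings), max_length), dtype=torch.long)
    input_ids = torch.full((len(list_of_strings), max_length), pad_token_id, dtype=torch.long)
    for idx, string in enumerate(list_of_strings):
        if not isinstance(string, bytes):
            string = str.encode(string)
        input_ids[idx, : len(string)] = torch.tensor([b + 2 for b in string])
        attention_masks[idx, : len(string)] = 1
    return input_ids, attention_masks

def decode(output_ids):
    # ids < 2 (padding / special) are dropped, everything else is shifted back to a byte.
    return ["".join(chr(x - 2) if x > 1 else "" for x in ids) for ids in output_ids.tolist()]
```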
|
transformers | 5,744 | closed | Create README.md for the model card of GPorTuguese-2 model (Portuguese GPT-2 small) | 07-14-2020 13:18:27 | 07-14-2020 13:18:27 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5744?src=pr&el=h1) Report
> Merging [#5744](https://codecov.io/gh/huggingface/transformers/pull/5744?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/838950ee44360ca427f345441502d4e7ab2772b8&el=desc) will **increase** coverage by `1.12%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5744?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5744 +/- ##
==========================================
+ Coverage 77.33% 78.45% +1.12%
==========================================
Files 146 146
Lines 26055 26047 -8
==========================================
+ Hits 20149 20436 +287
+ Misses 5906 5611 -295
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5744?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.74% <100.00%> (-0.06%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.50%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5744?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5744?src=pr&el=footer). Last update [b2505f7...ae466d5](https://codecov.io/gh/huggingface/transformers/pull/5744?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,743 | closed | Customize inference widget input | 07-14-2020 11:07:11 | 07-14-2020 11:07:11 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5743?src=pr&el=h1) Report
> Merging [#5743](https://codecov.io/gh/huggingface/transformers/pull/5743?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/838950ee44360ca427f345441502d4e7ab2772b8&el=desc) will **decrease** coverage by `0.09%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5743?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5743 +/- ##
==========================================
- Coverage 77.33% 77.24% -0.10%
==========================================
Files 146 146
Lines 26055 26047 -8
==========================================
- Hits 20149 20119 -30
- Misses 5906 5928 +22
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5743?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5743/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.74% <100.00%> (-0.06%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5743/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5743/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5743?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5743?src=pr&el=footer). Last update [b2505f7...f8d259d](https://codecov.io/gh/huggingface/transformers/pull/5743?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@JetRunner There's still the possibility to override the default inputs if it makes more sense for the model: https://twitter.com/mrm8488/status/1282778743598194688<|||||>(up to @mrm8488 if it does here)<|||||>In this case it makes more sense because in last phase the model was fine tuned on Spanish Wikipedia data |
|
transformers | 5,742 | closed | How to use pytorch_model.bin to classify a single sentence? | Hi!
I fine-tuned BERT on my own datasets by using run_glue.py. And I got pytorch_model.bin as output. I want to use pytorch_model.bin in another system to classify a single sentence from a web browser.
I would be grateful if you could teach me about usage. | 07-14-2020 10:17:17 | 07-14-2020 10:17:17 | Which task did you fine-tune the model on? For a single sentence it is probably the CoLA or SST-2 task, right?
You must use the predict function by specifying do_predict in the input parameters.<|||||>Thank you for answering my question!
I fine-tuned my model on an original task for binary classification of Japanese sentences. The processor for the original task is below.
in transformers/data/processors/glue.py
```
class OriginalProcessor(DataProcessor):
"""Processor for the original data set."""
def get_example_from_tensor_dict(self, tensor_dict):
"""See base class."""
return InputExample(
tensor_dict["idx"].numpy(),
tensor_dict["sentence"].numpy().decode("utf-8"),
None,
str(tensor_dict["label"].numpy()),
)
def get_train_examples(self, data_dir):
"""See base class."""
return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
def get_dev_examples(self, data_dir):
"""See base class."""
return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")
def get_labels(self):
"""See base class."""
return ["0", "1"]
def _create_examples(self, lines, set_type):
"""Creates examples for the training and dev sets."""
examples = []
for (i, line) in enumerate(lines):
# if tsv files have a header, remove #
# if i == 0:
# continue
guid = "%s-%s" % (set_type, i)
text_a = line[0]
label = line[1]
examples.append(InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
return examples
```
And please let me ask two questions.
1. Which is better: to execute run_glue.py as an external process, or to use my own script which mimics run_glue.py's predict function?
2. Can I load my fine-tuned model by specifying the directory which has 'pytorch_model.bin', either in the parameters or in my script as written below? The directory includes the output of fine-tuning.
```
#parameter
--model_name_or_path="the path for the directory"
#script
model = BertForSequenceClassification.from_pretrained('the path for the directory')
```<|||||>I made it work with the first one. Thank you!<|||||>Both are doable. I would turn off the training option and just use the prediction option.
Good to know you did it. |
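For completeness, a minimal single-sentence inference sketch for this thread (assuming the directory written by run_glue.py contains `pytorch_model.bin`, `config.json` and the saved tokenizer files; the path is illustrative):
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

model_dir = "path/to/fine_tuned_output"  # hypothetical output_dir of run_glue.py
tokenizer = BertTokenizer.from_pretrained(model_dir)
model = BertForSequenceClassification.from_pretrained(model_dir)
model.eval()

inputs = tokenizer("この文を分類してください。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs)[0]
print(logits.softmax(dim=-1))  # probabilities for labels "0" and "1"
```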
transformers | 5,741 | closed | FileNotFoundError: File not found when running run_squad.py to fine-tune the BERT on SQuAD v1.1. | Hi, I am just following this tutorial https://github.com/huggingface/transformers/tree/master/examples/question-answering and created a folder named SQUAD_DIR under transformers. The train and test file were downloaded and put into the SQUAD_DIR folder. But the error is that such a file can not be found. | 07-14-2020 10:08:44 | 07-14-2020 10:08:44 | This is a mismatch between the file location and what you're indicating to the script. Are you sure you're pointing to the correct directory? If so, can you try using absolute paths?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,740 | closed | [ModelOutput] Proposal to fix compatibility issue with torch.DataParallel | This is a proposal to fix #5693 by making `ModelOutput` inherit from a dictionary and behave like a dictionary on iteration (i.e. iterate over the keys rather than the values).
This could break backward compatibility when users iterate over the output tuple rather than indexing it.
On the other hand, we regain backward compatibility with `torch.DataParallel` and from a more general design point of view the `ModelOutput` class should probably be closer to a dictionary than a tuple in the future. | 07-14-2020 10:04:59 | 07-14-2020 10:04:59 | Update: this is actually not working because it breaks the possibility to unpack the output of the model's forward pass which is obviously a common pattern (cf failure in the tests) :-/<|||||>This is superseded by #6138 now. |
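A toy illustration of the trade-off described above, using plain Python containers rather than the real ModelOutput class:

```python
# Tuple-like output: iteration yields the values, so the common unpacking pattern works.
as_tuple = (0.5, [1, 2, 3])                  # (loss, logits)
loss, logits = as_tuple
print(loss, logits)                          # 0.5 [1, 2, 3]

# Dict-like output: iteration yields the keys, which is the behaviour torch.nn.DataParallel
# can handle when gathering outputs, but the same unpacking now silently binds the key
# strings instead of the tensors.
as_dict = {"loss": 0.5, "logits": [1, 2, 3]}
print(list(as_dict))                         # ['loss', 'logits']
loss, logits = as_dict
print(loss, logits)                          # loss logits
```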
transformers | 5,739 | closed | TypeError: join() argument must be str or bytes, not 'NoneType' | Trying to run seq2seq example scripts with multiple gpus and wandb logging and I get
```
Traceback (most recent call last):
File "/home/martongyorgy/projects/tmp/transformers/examples/seq2seq/finetune.py", line 364, in <module>
xsum_rouge.json
File "/home/martongyorgy/projects/tmp/transformers/examples/seq2seq/finetune.py", line 342, in main
logger=logger,
File "/home/martongyorgy/projects/tmp/transformers/examples/lightning_base.py", line 330, in generic_train
trainer.fit(model)
File "/home/martongyorgy/miniconda3/envs/transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 891, in fit
self.ddp_train(task, model)
File "/home/martongyorgy/miniconda3/envs/transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 530, in ddp_train
self.run_pretrain_routine(model)
File "/home/martongyorgy/miniconda3/envs/transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1046, in run_pretrain_routine
self.configure_checkpoint_callback()
File "/home/martongyorgy/miniconda3/envs/transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/callback_config.py", line 60, in configure_checkpoint_callback
"checkpoints"
File "/home/martongyorgy/miniconda3/envs/transformers/lib/python3.7/posixpath.py", line 94, in join
genericpath._check_arg_types('join', a, *p)
File "/home/martongyorgy/miniconda3/envs/transformers/lib/python3.7/genericpath.py", line 153, in _check_arg_types
(funcname, s.__class__.__name__)) from None
TypeError: join() argument must be str or bytes, not 'NoneType'
```
To reproduce follow the instructions in the seq2seq example Readme to download XSUM then run
```
export PYTHONPATH="../":"${PYTHONPATH}"
python finetune.py \
--learning_rate=3e-5 \
--fp16 \
--gpus 2 \
--do_train \
--n_val 1000 \
--val_check_interval 0.1 \
--data_dir xsum \
--output_dir xsum_frozen_embs \
--model_name_or_path t5-small \
--train_batch_size 16 --eval_batch_size 16 --freeze_embeds --freeze_encoder \
--num_train_epochs 6 \
--max_target_length=60 --val_max_target_length=60 --test_max_target_length=100 \
--logger wandb
```
Happens in pytorch-lightning 0.8.1, fixed in 0.8.4. But there is another problem in 0.8.4, see #5584
| 07-14-2020 09:25:32 | 07-14-2020 09:25:32 | Hopefully fixed by https://github.com/huggingface/transformers/pull/5361<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,738 | closed | Unicode normalization for bert-cased models | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Tokenize text containing combining marks with `BertTokenizer`. E.g.
```
In [1]: from transformers.tokenization_bert import *
In [2]: tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
In [3]: tokenizer.tokenize("\u00E1")
Out[3]: ['á']
In [4]: tokenizer.tokenize("a\u0301")
Out[4]: ['a', '##́']
In [5]: tokenizer.tokenize("a\u0300")
Out[5]: ['[UNK]']
In [6]: tokenizer.tokenize("\u00E0")
Out[6]: ['à']
```
Results for `BertTokenizerFast` are the same.
## Expected behavior
`"\u00E1"` and `"a\u0301"` should ideally be tokenized the same way; so should `"\u00E0"` and `"a\u0300"`. If existing behavior should be preserved, maybe add an optional argument.
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.4.0-18362-Microsoft-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 07-14-2020 09:16:26 | 07-14-2020 09:16:26 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,737 | closed | Update model_summary.rst | Add '-' to make the reference of Transformer-XL more accurate and formal. | 07-14-2020 09:00:02 | 07-14-2020 09:00:02 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5737?src=pr&el=h1) Report
> Merging [#5737](https://codecov.io/gh/huggingface/transformers/pull/5737?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cd30f98fd24837f285cfc221b91cfa66b1b38c32&el=desc) will **decrease** coverage by `0.77%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5737?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5737 +/- ##
==========================================
- Coverage 78.02% 77.25% -0.78%
==========================================
Files 146 146
Lines 26055 26055
==========================================
- Hits 20329 20128 -201
- Misses 5726 5927 +201
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5737?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+10.71%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5737?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5737?src=pr&el=footer). Last update [cd30f98...5347731](https://codecov.io/gh/huggingface/transformers/pull/5737?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,736 | closed | TypeError: an integer is required (got type NoneType) while using run_language_modeling.py | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below) Yes
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below) Own task
## To reproduce
Steps to reproduce the behavior:
Train the tokenizer:
```python
from tokenizers import BertWordPieceTokenizer
paths = 'Clean_merged.txt'
# Initialize a tokenizer
tokenizer = BertWordPieceTokenizer()
# Customize training
tokenizer.train(files=paths, vocab_size=32_000, min_frequency=4)
tokenizer.save_model('./')
```
```
%env TRAIN_FILE= Clean_merged.txt
!python transformers/examples/language-modeling/run_language_modeling.py \
--output_dir=output_from_scratch \
--model_type=bert \
--do_train \
--tokenizer_name save_tokenizer \
--save_steps 2000 \
--per_gpu_train_batch_size 8 \
--train_data_file=$TRAIN_FILE \
--mlm \
--block_size 510
```
## Expected behavior
To train a BERT model from scratch on an MLM (masked language modeling) task.
### Stack trace
```
2020-07-14 07:52:19.364573: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
07/14/2020 07:52:21 - INFO - transformers.training_args - PyTorch: setting up devices
07/14/2020 07:52:21 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False
07/14/2020 07:52:21 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='output_from_scratch', overwrite_output_dir=False, do_train=True, do_eval=False, do_predict=False, evaluate_during_training=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, per_gpu_train_batch_size=8, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Jul14_07-52-21_cf8085fe2205', logging_first_step=False, logging_steps=500, save_steps=2000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, past_index=-1)
07/14/2020 07:52:21 - WARNING - __main__ - You are instantiating a new config instance from scratch.
07/14/2020 07:52:21 - INFO - transformers.configuration_utils - loading configuration file save_tokenizer/config.json
07/14/2020 07:52:21 - INFO - transformers.configuration_utils - Model config BertConfig {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 514,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"type_vocab_size": 2,
"vocab_size": 32000
}
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - Model name 'save_tokenizer' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). Assuming 'save_tokenizer' is a path, a model identifier, or url to a directory containing tokenizer files.
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - Didn't find file save_tokenizer/added_tokens.json. We won't load it.
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - Didn't find file save_tokenizer/special_tokens_map.json. We won't load it.
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - Didn't find file save_tokenizer/tokenizer_config.json. We won't load it.
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - Didn't find file save_tokenizer/tokenizer.json. We won't load it.
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - loading file save_tokenizer/vocab.txt
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - loading file None
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - loading file None
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - loading file None
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - loading file None
07/14/2020 07:52:21 - INFO - __main__ - Training new model from scratch
/usr/local/lib/python3.6/dist-packages/transformers/modeling_auto.py:709: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.
FutureWarning,
07/14/2020 07:52:26 - INFO - filelock - Lock 140270211965616 acquired on cached_lm_BertTokenizer_508_Clean_merged.txt.lock
07/14/2020 07:52:31 - INFO - transformers.data.datasets.language_modeling - Loading features from cached file cached_lm_BertTokenizer_508_Clean_merged.txt [took 3.803 s]
07/14/2020 07:52:31 - INFO - filelock - Lock 140270211965616 released on cached_lm_BertTokenizer_508_Clean_merged.txt.lock
07/14/2020 07:52:33 - INFO - transformers.trainer - You are instantiating a Trainer but W&B is not installed. To use wandb logging, run `pip install wandb; wandb login` see https://docs.wandb.com/huggingface.
07/14/2020 07:52:33 - WARNING - transformers.training_args - Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred.
07/14/2020 07:52:33 - WARNING - transformers.training_args - Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred.
07/14/2020 07:52:33 - INFO - transformers.trainer - ***** Running training *****
07/14/2020 07:52:33 - INFO - transformers.trainer - Num examples = 101259
07/14/2020 07:52:33 - INFO - transformers.trainer - Num Epochs = 3
07/14/2020 07:52:33 - INFO - transformers.trainer - Instantaneous batch size per device = 8
07/14/2020 07:52:33 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 8
07/14/2020 07:52:33 - INFO - transformers.trainer - Gradient Accumulation steps = 1
07/14/2020 07:52:33 - INFO - transformers.trainer - Total optimization steps = 37974
Epoch: 0% 0/3 [00:00<?, ?it/s]
Iteration: 0% 0/12658 [00:00<?, ?it/s]Traceback (most recent call last):
File "transformers/examples/language-modeling/run_language_modeling.py", line 296, in <module>
main()
File "transformers/examples/language-modeling/run_language_modeling.py", line 260, in main
trainer.train(model_path=model_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 492, in train
for step, inputs in enumerate(epoch_iterator):
File "/usr/local/lib/python3.6/dist-packages/tqdm/std.py", line 1104, in __iter__
for obj in iterable:
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 345, in __next__
data = self._next_data()
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 385, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.6/dist-packages/transformers/data/datasets/language_modeling.py", line 75, in __getitem__
return torch.tensor(self.examples[i], dtype=torch.long)
TypeError: an integer is required (got type NoneType)
Epoch: 0% 0/3 [00:00<?, ?it/s]
Iteration: 0% 0/12658 [00:00<?, ?it/s]
```
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No | 07-14-2020 08:41:43 | 07-14-2020 08:41:43 | The error was caused because I did not specify the type of dataset (```LinebyLine``` in my case). After doing that, it worked. <|||||>I'm facing the same error and below is my run command. Any pointers, @cabhijith ?
`cmd = "python run_language_modeling.py \
--output_dir ./bertout \
--model_type bert \
--do_train \
--do_eval \
--train_data_file ./test.txt \
--eval_data_file ./test.txt \
--mlm \
--line_by_line \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--save_total_limit 2 \
--save_steps 2000 \
--per_gpu_train_batch_size 16 \
--evaluate_during_training \
--warmup_steps=10000 \
--logging_steps=100 \
--gradient_accumulation_steps=4 \
--seed 666 \
--block_size=512 \
--tokenizer_name ./bert2 \
--config_name ./bert2"`<|||||>I still got this error after using the `LinebyLine` dataset, can anyone help me?<|||||>> I still got this error after using the `LinebyLine` dataset, can anyone help me?
It works after I closed a file that was still open...
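For reference, a sketch of the original command with the missing dataset flag added; --line_by_line selects the line-by-line dataset mentioned in the fix above, and the other values are simply carried over from the report:

```bash
python transformers/examples/language-modeling/run_language_modeling.py \
    --output_dir=output_from_scratch \
    --model_type=bert \
    --do_train \
    --tokenizer_name save_tokenizer \
    --save_steps 2000 \
    --per_gpu_train_batch_size 8 \
    --train_data_file=$TRAIN_FILE \
    --mlm \
    --line_by_line \
    --block_size 510
```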
transformers | 5,735 | closed | Create README.md (Model card for Norod78/hewiki-articles-distilGPT2py-il) | Model card for Norod78/hewiki-articles-distilGPT2py-il
A tiny GPT2 model for generating Hebrew text | 07-14-2020 08:36:10 | 07-14-2020 08:36:10 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5735?src=pr&el=h1) Report
> Merging [#5735](https://codecov.io/gh/huggingface/transformers/pull/5735?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cd30f98fd24837f285cfc221b91cfa66b1b38c32&el=desc) will **increase** coverage by `0.48%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5735?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5735 +/- ##
==========================================
+ Coverage 78.02% 78.51% +0.48%
==========================================
Files 146 146
Lines 26055 26055
==========================================
+ Hits 20329 20456 +127
+ Misses 5726 5599 -127
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5735?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `63.55% <0.00%> (-31.78%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.94% <0.00%> (-6.27%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+10.71%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5735?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5735?src=pr&el=footer). Last update [cd30f98...148adfb](https://codecov.io/gh/huggingface/transformers/pull/5735?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Hi @Norod, this is our first model for Hebrew, that's awesome – thanks for sharing.
Do you think you'd be up for adding default example inputs to all models for Hebrew? If you are, just open a PR against [this file](https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts).<|||||>(also cc'ing @JetRunner)<|||||>Hello @julien-c
Thank you for having such awesome models and community.
While my contribution is the first GPT2 model in Hebrew, there are several models contributed by "Helsinki-NLP" for translating to and from Hebrew. For example: https://huggingface.co/Helsinki-NLP/opus-mt-he-de?text=%D7%A9%D7%9C%D7%95%D7%9D
There is also one BERT model contributed by "TurkuNLP" [TurkuNLP/wikibert-base-he-cased](https://huggingface.co/TurkuNLP/wikibert-base-he-cased) that can generate masked predictions.
Thank you for pointing out the 'default example inputs' file, I will have a look.<|||||>@Norod Good point! We now have a clearer webpage that lists (mono-lingual) models by language: see e.g. this link for Hebrew https://huggingface.co/languages#he |
transformers | 5,734 | closed | Fix typo (model saving TF) | 07-14-2020 05:44:23 | 07-14-2020 05:44:23 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5734?src=pr&el=h1) Report
> Merging [#5734](https://codecov.io/gh/huggingface/transformers/pull/5734?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f0bda06f43a0d5e4ef80ad0f1812027b658b724d&el=desc) will **increase** coverage by `0.26%`.
> The diff coverage is `0.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5734?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5734 +/- ##
==========================================
+ Coverage 77.05% 77.31% +0.26%
==========================================
Files 146 146
Lines 26012 26012
==========================================
+ Hits 20043 20111 +68
+ Misses 5969 5901 -68
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5734?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.53% <0.00%> (ø)` | |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5734?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5734?src=pr&el=footer). Last update [f0bda06...74755d7](https://codecov.io/gh/huggingface/transformers/pull/5734?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,733 | closed | DataParallel fixes | 1. switched to a more precise check as suggested by @thomwolf
```
- if self.args.n_gpu > 1:
+ if isinstance(model, nn.DataParallel):
```
discussion: https://github.com/huggingface/transformers/issues/5693#issuecomment-657937349
2. fix tests - require the same fixup under DataParallel as the training module fix merged earlier today:
https://github.com/huggingface/transformers/pull/5685
discussion: https://github.com/huggingface/transformers/issues/5693#issuecomment-657938856
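A minimal sketch of what the more precise check in point 1 changes in practice (illustrative only, not the Trainer's actual code):

```python
import torch.nn as nn

def reduce_loss(model, outputs):
    loss = outputs[0]
    # DataParallel returns one loss per replica, so only then is the mean() needed;
    # checking the wrapper type is more reliable than counting visible GPUs.
    if isinstance(model, nn.DataParallel):
        loss = loss.mean()
    return loss
```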
| 07-14-2020 04:51:21 | 07-14-2020 04:51:21 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5733?src=pr&el=h1) Report
> Merging [#5733](https://codecov.io/gh/huggingface/transformers/pull/5733?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c3c61ea01733403210a1d159114e8c3d042dabb7&el=desc) will **increase** coverage by `1.34%`.
> The diff coverage is `50.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5733?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5733 +/- ##
==========================================
+ Coverage 77.22% 78.57% +1.34%
==========================================
Files 146 146
Lines 26012 26012
==========================================
+ Hits 20088 20439 +351
+ Misses 5924 5573 -351
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5733?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5733/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <50.00%> (ø)` | |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5733/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5733/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `63.55% <0.00%> (-31.78%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5733/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.71% <0.00%> (-0.51%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5733/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5733/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5733?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5733?src=pr&el=footer). Last update [c3c61ea...4d61a1e](https://codecov.io/gh/huggingface/transformers/pull/5733?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Way better fix, LGTM
Did the multigpu test pass?<|||||>> Way better fix, LGTM
Well, I followed @sgugger and @thomwolf's breadcrumbs, so that was easy.
> Did the multigpu test pass?
Yes. And I retested the glue_run.py on multi-gpu machine.
<|||||>Pushed another fix for https://github.com/huggingface/transformers/issues/5693#issuecomment-659564678 |
transformers | 5,732 | closed | Add `power` argument for TF PolynomialDecay | 07-14-2020 04:24:15 | 07-14-2020 04:24:15 | Thanks for the PR!!
Can you just go a bit forward and create a parameter in `training_args_tf.py` for this?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5732?src=pr&el=h1) Report
> Merging [#5732](https://codecov.io/gh/huggingface/transformers/pull/5732?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f0bda06f43a0d5e4ef80ad0f1812027b658b724d&el=desc) will **increase** coverage by `1.06%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5732?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5732 +/- ##
==========================================
+ Coverage 77.05% 78.12% +1.06%
==========================================
Files 146 146
Lines 26012 26012
==========================================
+ Hits 20043 20321 +278
+ Misses 5969 5691 -278
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5732?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.65% <ø> (ø)` | |
| [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.53% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5732?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5732?src=pr&el=footer). Last update [f0bda06...bdb7613](https://codecov.io/gh/huggingface/transformers/pull/5732?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>How can I see where my format is wrong ?<|||||>Can you run in that order:
```
isort --recursive examples templates tests src utils
black --line-length 119 --target-version py35 examples templates tests src utils
```
And then push.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hello @Colanim!
Can you rebase on master, and I will merge once done!<|||||>Nice! Can you run a `make style` in order to fix the code quality test. |
|
transformers | 5,731 | closed | [fix] mbart_en_ro_generate test now identical to fairseq | violentele -> violenţa
Slow test was previously failing. | 07-14-2020 03:57:35 | 07-14-2020 03:57:35 | I think CI is spurious.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5731?src=pr&el=h1) Report
> Merging [#5731](https://codecov.io/gh/huggingface/transformers/pull/5731?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c3c61ea01733403210a1d159114e8c3d042dabb7&el=desc) will **increase** coverage by `1.21%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5731?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5731 +/- ##
==========================================
+ Coverage 77.22% 78.43% +1.21%
==========================================
Files 146 146
Lines 26012 26012
==========================================
+ Hits 20088 20403 +315
+ Misses 5924 5609 -315
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5731?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5731/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5731/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5731/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5731?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5731?src=pr&el=footer). Last update [c3c61ea...608369d](https://codecov.io/gh/huggingface/transformers/pull/5731?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,730 | closed | Using pipeline('ner'), partial tokens returned when grouped_entities=True | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): pipeline('ner', grouped_entities=True)
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import pipeline
ner = pipeline('ner', grouped_entities=True)
ner("the sapodilla tree is native to Central America")
```
## Expected behavior
The output says that "##di" is one of the named entities. It doesn't seem like partial tokens should possibly be returned as predicted named entities. Instead, I imagine that the desired result is that either the entire word "sapodilla" is determined to be an entity group or nothing at all. Is this a bug or was this quirk consciously chosen to be allowed?
As a side note, another similar quirk here is that something like "U.S." occasionally gives just "U" or "S" as individual named entities, where "U.S." is desired. I consider this related to the above issue.
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-5.3.0-1032-azure-x86_64-with-glibc2.10
- Python version: 3.8.1
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no | 07-14-2020 02:34:55 | 07-14-2020 02:34:55 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,729 | closed | [Feature request] Pass any Iterable to tokenizer.__call__() | # 🚀 Feature request
Currently `tokenizer.__call__` accepts `List[List[str]]`, for pre-tokenized inputs. It should also accept `List[np.ndarray[str]]`.
## Motivation
The `nlp` library stores a batch of pre-tokenized strings as `List[np.ndarray[str]]` after creating them with `dset.map(batched=True)`. Currently it requires a memory-intensive `tokenizer([list(ex) for ex in batch])` to pass these pre-tokenized strings into `batch_encode_plus`.
## Your contribution
I could contribute this.
| 07-14-2020 02:33:17 | 07-14-2020 02:33:17 | This should be handle in `nlp` in my opinion.
It's related to https://github.com/huggingface/nlp/issues/387
cc @lhoestq <|||||>Nice find on the `nlp` side! There are also other use cases when users might want to pass in a NumPy array, or other type of Iterable. Any reason we shouldn't extend to all Iterables like https://github.com/huggingface/nlp/pull/370?<|||||>The fast tokenizers only support python inputs at the moment. We could change it to allow numpy arrays as well but I would expect this to be potentially a significant work to update.<|||||>Ah yes, Rust's static typing makes this more complex. And my own benchmarks show significant memory usage when casting from a numpy array to python list. Fixing the `nlp` output format should work well then.<|||||>I just did the change. Let me know if it's good for you :)<|||||>Thanks for the fix @lhoestq! That does indeed resolve the need for a list comprehension, but I'm still hitting the same speed bottleneck. It seems that the list comprehension is just moved inside of the `to_pylist()` function. Here's a reproducible benchmark:
```python
import nlp
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
dset_size = 10000
max_seq_length = 512
dset = nlp.Dataset.from_dict(
{"examples": [[str(i) for i in range(max_seq_length)] for _ in range(dset_size)]}
)
dset = dset.map(
lambda batch: tokenizer(
batch["examples"], is_pretokenized=True, # rather than [ex for ex in batch["examples"]]
),
batched=True,
remove_columns=["examples"],
)
```
This takes 37 seconds to run, processing around 270 examples/second, or 3.7 seconds/batched iteration. At this pace it takes around 1 hour to encode 10GB of text, such as Wikipedia, and even longer for a larger dataset like C4.
I would love to take full use of the tokenizers functionality of "they can encode 1GB of text in ~20sec on a standard server's CPU". That would allow encoding Wikipedia in 2 minutes rather than an hour. Are there any further improvements that would un-bottleneck the batched map function?<|||||>The bottleneck is indeed the conversion of arrow types to python lists.
I've been looking for a faster way to do it but I couldn't find a satisfactory solution.
If we manage to be full rust/c++ on this we could achieve full speed:
- We could add support for numpy arrays as input for tokenizers (arrow to numpy is very fast)
- Or we could leverage arrow's rust API to do the processing in rust
Both options are not trivial though.
If we do so, we can expect to have the tokenization as the bottleneck, instead of the conversion from arrow to python lists.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,728 | closed | Return tokens from tokenizer.__call__() | Currently, pre-tokenizing (not encode tokens to ids, just generating the tokens) a batch of strings requires a manual for-loop. I'm adding a method to call the underlying Rust batch_encode implementation, which runs about 2x faster than a Python for-loop. I was expecting an even greater speedup, so if there's any way this could be made more efficient I would love to hear it.
Before:
```python
batch = ["Sentence 1", "Sentence 2"]
tokenized_batch = [tokenizer.tokenize(ex) for ex in batch]
# [["Sen", "##tence", "1"], ["Sen", "##tence", "2"]
```
Now:
```python
tokenized_batch = tokenizer.tokenize_batch(batch)
# [["Sen", "##tence", "1"], ["Sen", "##tence", "2"]
``` | 07-14-2020 01:33:32 | 07-14-2020 01:33:32 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5728?src=pr&el=h1) Report
> Merging [#5728](https://codecov.io/gh/huggingface/transformers/pull/5728?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c3c61ea01733403210a1d159114e8c3d042dabb7&el=desc) will **increase** coverage by `1.18%`.
> The diff coverage is `50.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5728?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5728 +/- ##
==========================================
+ Coverage 77.22% 78.41% +1.18%
==========================================
Files 146 146
Lines 26012 26014 +2
==========================================
+ Hits 20088 20399 +311
+ Misses 5924 5615 -309
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5728?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <ø> (ø)` | |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `93.57% <50.00%> (-0.64%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.96% <0.00%> (-1.26%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.36% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5728?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5728?src=pr&el=footer). Last update [c3c61ea...d91c382](https://codecov.io/gh/huggingface/transformers/pull/5728?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Hi Jared,
We are currently trying to reduce the number of user-facing methods in the tokenizer so I would be in favor of extending `tokenize` to accept batches as well. This can be done by having `tokenize` call `__call__` which accept both single examples and batches and extracting the tokens from the results.
It's already what happening in `encode` if you dive in the code (`encode` is calling the full tokenization pipeline in `encode_plus` and filtering the output to keep only the tokens).<|||||>Thanks for explaining your thinking! What about adding a `return_tokens` argument to `tokenizer.__call__`? `encoding.tokens` is the only attribute in an `Encoding` that can't be accessed directly from that method.<|||||>Oh yes, actually you can already do `.tokens(index_in_the_batch)` on the output of `__call__` but I see the docstring was missed and it is thus not in the doc currently, we will add it (and the similar method `.words(index_in_the_batch)`).
It's here in the `BatchEncoding` class (which is the output of the encoding methods): https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L221
So you can do:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased', use_fast=True)
batch = ['hello how are you?', 'good morning everyone']
encodings = tokenizer(batch)
>>> encodings.tokens(1)
['[CLS]', 'good', 'morning', 'everyone', '[SEP]']
>>> encodings.tokens(0)
['[CLS]', 'hello', 'how', 'are', 'you', '?', '[SEP]']
```
Is it what you were looking for?
It currently only work for "fast" tokenizers (i.e. most of the tokenizers except the sentencepiece ones but they should be added not too far in the future)<|||||>Somewhat. I still have to do `[encodings.tokens(i) for i in range(len(encodings))]`, but that's fine. I also got direct access to the underlying Rust tokenizer via `tokenizer._tokenizer.encode_batch(batch["sentences"])`, so that will do. Thanks! |
transformers | 5,727 | closed | t5 model card | Add Model Card for all t5 checkpoints.
cc @clmnt | 07-14-2020 01:30:48 | 07-14-2020 01:30:48 | |
transformers | 5,726 | closed | Finetuning GPT2 with Custom Loss | ## System Info
- Ubuntu 20.04
- Pytorch: 1.5.1+cpu
- Transformers: 3.0.2
- Python: 3.7.6
## Details
Ultimately, I would like to finetune GPT2 on my dataset using a custom loss from an `NGrams` model I have created. Here is what I have for the model:
```python
from transformers import GPT2LMHeadModel
from FeatureExtraction.NGrams import *
class GPT2FinetunedWithNgrams(GPT2LMHeadModel):
    def __init__(self, ngrams_model_path):
        super().from_pretrained('gpt2')
        self.ngrams_model = NGrams(ngrams_model_path)

    def forward(
        self,
        input_ids=None,
        past=None,
        attention_mask=None,
        token_type_ids=None,
        position_ids=None,
        head_mask=None,
        inputs_embeds=None,
        labels=None,
        use_cache=None,
        output_attentions=None,
        output_hidden_states=None,
        return_tuple=None,
    ):
        return_tuple = return_tuple if return_tuple is not None else self.config.use_return_tuple

        transformer_outputs = self.transformer(
            input_ids,
            past=past,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_tuple=return_tuple,
        )
        hidden_states = transformer_outputs[0]
        lm_logits = self.lm_head(hidden_states)

        # use gpt2 to generate a span of text based off input_ids?
        # gpt2_sent = ???
        loss = self.ngrams_model.sentence_loss(gpt2_sent)

        return (loss, lm_logits)
```
and here is my training script using Transformers `Trainer`:
```python
from text_gen_w_transformers.finetune_gpt2 import GPT2FinetunedWithNgrams
from transformers import Trainer, TrainingArguments
model = GPT2FinetunedWithNgrams('/path/to/ngrams/model.pkl')
training_args = TrainingArguments(
    output_dir='/path/to/finetuned_gpt2',
    do_train=True,
    per_device_train_batch_size=16,
    learning_rate=1e-3,
    num_train_epochs=1,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=?????
)
trainer.train()
```
My questions are:
1. You can see from the `#gpt2_sent = ???` comment in the model code that I presume this is the place where I would generate a gpt2 sequence based off this version of gpt2 that is currently being finetuned. However, I am not sure what the best way to go about doing this is. Any recommendations?
2. In the training script, I am using the `Trainer` module. However, I don't understand what the `train_dataset` parameter is in `Trainer`. I have a csv file that contains one sequence per line, but I have a feeling I need to construct a `Dataset` object or something.
3. I haven't tried to run this code because I need to fill in the above 2 parts, but I also think I'm not setting any of the parameters for `transformer_outputs`. It looks like they are set to `None` and I don't know if that will be problematic. Any thoughts on this?
I've been reading through the documentation and really like the library. I'm also new to it and pytorch so I apologize if my questions are pretty basic. Thanks in advance for your help!
**EDIT**
When I run `model = GPT2FinetunedWithNgrams('/path/to/ngrams/model.pkl')`, I just get repeated printouts of the GPT2Config object, so I don't think `super().from_pretrained('gpt2')` is the right approach for loading a pretrained model when you have inherited another class.
| 07-14-2020 00:08:17 | 07-14-2020 00:08:17 | Saw your question on `discussion.huggingface.co` => thanks for posting it there. We are trying to handle these kinds of questions (longer questions / very researchy bugs/problems in the forum) - so let's move it there :-) <|||||>https://discuss.huggingface.co/t/finetuning-gpt2-with-user-defined-loss/163/12?u=patrickvonplaten |
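For later readers, a rough sketch of the subclassing pattern the EDIT is pointing at: from_pretrained is a classmethod that builds the subclass from a config and then loads the weights, so it is called on the subclass rather than inside __init__. The n-grams attribute and file paths below are placeholders from this thread, not a tested implementation, and the last part shows one possible train_dataset for the Trainer.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer, TextDataset

class GPT2FinetunedWithNgrams(GPT2LMHeadModel):
    def __init__(self, config):
        # from_pretrained() calls __init__(config) and then loads the checkpoint,
        # so the constructor should not try to load weights itself.
        super().__init__(config)
        self.ngrams_model = None  # e.g. attach the NGrams scorer after construction

model = GPT2FinetunedWithNgrams.from_pretrained("gpt2")  # weights are loaded here, on the subclass

# One possible train_dataset for Trainer: a block-wise dataset built from a plain text file.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
train_dataset = TextDataset(
    tokenizer=tokenizer,
    file_path="/path/to/train.txt",  # placeholder path
    block_size=128,
)
```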
transformers | 5,725 | closed | TPU CI testing | Run TPU CI testing using CircleCI.
I sent a guide in Slack with steps needed on the owner side for CircleCI and Google Cloud to make this work (setting env vars, creating GKE cluster, and populating the dataset).
| 07-13-2020 22:02:45 | 07-13-2020 22:02:45 | @LysandreJik I'm wondering if CircleCI does not run the pending changes to `.circleci/config.yml` if the changes came from a PR from a forked repo.
When testing on my private repo, I used a branch on the main repo and CircleCI did include the pending changes when running. I've seen lots of differences between branches on main repo and forked repo PRs when it comes to CircleCI and Github Actions.
The action item might be to remake this PR as a branch on the repo rather than a forked PR if you'd like to see the `job` run before submit.<|||||>
> @LysandreJik I'm wondering if CircleCI does not run the pending changes to `.circleci/config.yml` if the changes came from a PR from a forked repo.
>
> When testing on my private repo, I used a branch on the main repo and CircleCI did include the pending changes when running. I've seen lots of differences between branches on main repo and forked repo PRs when it comes to CircleCI and Github Actions.
>
> The action item might be to remake this PR as a branch on the repo rather than a forked PR if you'd like to see the `job` run before submit.
I think the forked repo is irrelevant. I was able to run new CircleCI changes in a similar PR for PyTorch Lightning: https://github.com/PyTorchLightning/pytorch-lightning/pull/2486
<|||||>Looks like this rebase went wrong - recreated the change in https://github.com/huggingface/transformers/pull/6158 |
transformers | 5,724 | closed | T5 ONNX Export Test Failing on GPU | https://github.com/huggingface/transformers/runs/863647649?check_suite_focus=true

| 07-13-2020 21:15:56 | 07-13-2020 21:15:56 | |
transformers | 5,723 | closed | Fix slow test_enro_generate | see https://user-images.githubusercontent.com/6045025/87226810-6103b400-c364-11ea-875a-dcbd7c1e49ca.png | 07-13-2020 21:13:38 | 07-13-2020 21:13:38 | |
transformers | 5,722 | closed | Cannot preprocess WNUT'17 dataset for token-classification | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):BERT
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [x] the official example scripts: I am trying to run the run_ner.py script for the WNUT’17 dataset. I followed the preprocessing steps mentioned in the README. I just downloaded the dev dataset. The command
`python3 scripts/preprocess.py data_wnut_17/dev.txt.tmp $BERT_MODEL $MAX_LENGTH > data_wnut_17/dev.txt`
does not work and my terminal hangs.
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: Token Classification
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Move into the `token-classification` folder in the `examples` directory.
2. Run the commands for the WNUT’17 dataset as mentioned in the README.
```
mkdir -p data_wnut_17
curl -L 'https://github.com/leondz/emerging_entities_17/raw/master/emerging.dev.conll' | tr '\t' ' ' > data_wnut_17/dev.txt.tmp
export MAX_LENGTH=128
export BERT_MODEL=bert-large-cased
```
3. The terminal hangs on the next command
`python3 scripts/preprocess.py data_wnut_17/dev.txt.tmp $BERT_MODEL $MAX_LENGTH > data_wnut_17/dev.txt`
## Expected behavior
I would expect the `preprocess.py` script to execute normally and return the control of the terminal back to me. Once hung, I fail to regain control of my command-line even by `ctrl-d/z/c`.
## Environment info
- `transformers` version:
- Platform: Ubuntu 18.04
- Python version: 3.7.8
- PyTorch version (GPU?): 1.5 (Yes)
- Tensorflow version (GPU?): -
- Using GPU in script?: The error does not involve GPU usage in the script.
- Using distributed or parallel set-up in script?: No
 | 07-13-2020 20:31:45 | 07-13-2020 20:31:45 | @kushalj001 It may take a while to preprocess the whole dataset. Not sure if you waited long enough. Check if you have enough RAM when you are doing this, since if the dataset is too large you might run out of memory.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,721 | closed | Unable to finetune BERT on own dataset | # ❓ Questions & Help
## Details
Hello, I am trying to use my own data to finetune transformers for summarization tasks. I have followed the instructions on the README.md by generating a text file with each article to be summarized on its own line. The text file is located in ...\workspace\hug
Below is the command that I ran and its resulting error. I couldn't find further instructions in the README, on Stack Overflow, or in the current or closed issues. Can you please provide more guidance, with examples, on how to train on your own dataset? I would greatly appreciate this. @sshleifer
Command ran + error:

Finetune.sh

Once again, I would really appreciate your help! | 07-13-2020 19:06:49 | 07-13-2020 19:06:49 | Seems you are using Powershell. Since this is a bash shell file, you will need to run it outside of Powershell in bash shell.
Here's an example using Google Colab
https://colab.research.google.com/github/interactive-fiction-class/interactive-fiction-class.github.io/blob/master/homeworks/language-model/hw4_transformer.ipynb
Some additional examples
https://github.com/huggingface/transformers/blob/223084e42b57cd0d8e78de38e15a42d5d6b04391/notebooks/README.md<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
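For example, from a bash shell (WSL, Git Bash, or a Colab `!` cell) rather than PowerShell; the paths and arguments below are placeholders, so use the ones from the README for your setup:
```bash
# run the wrapper script with bash instead of PowerShell
cd transformers/examples/seq2seq          # assumed location of finetune.sh
bash finetune.sh \
    --data_dir /path/to/workspace/hug \
    --output_dir ./summarization_runs     # placeholder arguments; see the README
```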
|
transformers | 5,720 | closed | TypeError: To be compatible with tf.contrib.eager.defun, Python functions must return zero or more Tensors; in compilation of <function tf_if_stmt.<locals>.error_checking_body at 0x7f55400e3c80>, found return value of type <class 'tensorflow.python.keras.losses.MeanSquaredError'>, which is not a Tensor. | INFO:tensorflow:Error reported to Coordinator: in converted code:
```
/data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/transformers/trainer_tf.py:511 _forward  *
per_example_loss, _ = self._run_model(features, labels, True)
/data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/transformers/trainer_tf.py:534 _run_model *
outputs = self.model(features, labels=labels, training=training)[:2]
/data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py:778 __call__
outputs = call_fn(cast_inputs, *args, **kwargs)
/data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/transformers/modeling_tf_roberta.py:530 call *
loss = self.compute_loss(labels, reshaped_logits)
/data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:135 compute_loss *
if shape_list(logits)[1] == 1:
/data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/autograph/operators/control_flow.py:918 if_stmt
basic_symbol_names, composite_symbol_names)
/data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/autograph/operators/control_flow.py:956 tf_if_stmt
error_checking_orelse)
/data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py:507 new_func
return func(*args, **kwargs)
/data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py:1174 cond
return cond_v2.cond_v2(pred, true_fn, false_fn, name)
/data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/ops/cond_v2.py:83 cond_v2
op_return_value=pred)
/data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py:983 func_graph_from_py_func
expand_composites=True)
/data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/util/nest.py:568 map_structure
structure[0], [func(*x) for x in entries],
/data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/util/nest.py:568 <listcomp>
structure[0], [func(*x) for x in entries],
/data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py:943 convert
(str(python_func), type(x)))
TypeError: To be compatible with tf.contrib.eager.defun, Python functions must return zero or more Tensors; in compilation of <function tf_if_stmt.<locals>.error_checking_body at 0x7f55400e3c80>, found return value of type <class 'tensorflow.python.keras.losses.MeanSquaredError'>, which is not a Tensor.
```
 | 07-13-2020 19:01:11 | 07-13-2020 19:01:11 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I encountered one case: `if object: do something` will trigger this error, while if you use the condition `if object is not None` you will get through it.<|||||>Did you solve it?<|||||>> I encountered one case: `if object: do something` will trigger this error, while if you use the condition `if object is not None` you will get through it.
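For illustration, a minimal sketch of the pattern described above; every name here is invented for the example, and this is not code from the library:
```python
import tensorflow as tf

class ToyRegressionHead(tf.keras.layers.Layer):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(1)

    def call(self, hidden_states, labels=None):
        logits = self.dense(hidden_states)
        # Problematic pattern: inside a tf.function, AutoGraph can stage `if labels:`
        # as a graph-level tf.cond, and then both branches must yield tensors
        # (returning a loss *object* such as MeanSquaredError is rejected).
        # if labels:
        #     ...
        # Safer pattern: `is not None` is resolved in plain Python while tracing.
        if labels is not None:
            loss = tf.keras.losses.MeanSquaredError()(labels, logits)
            return loss, logits
        return logits

@tf.function
def train_step(layer, features, labels):
    return layer(features, labels=labels)

head = ToyRegressionHead()
loss, logits = train_step(head, tf.random.uniform((2, 4)), tf.ones((2, 1)))
```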
Thanks, you are the hero. |
transformers | 5,719 | closed | generator` yielded an element that could not be converted to the expected type. The expected type was int32, but the yielded element was None. | # 🐛 Bug
## Information
Model I am using (RoBerta):
Language I am using the model on (English):
The problem arises when using:
* [ ] the official example scripts: (give details below)
```python
inputs = tokenizer(
text_a,
text_b,
add_special_tokens=True,
max_length=max_length,
padding="max_length",
truncation=True,
return_overflowing_tokens=True,
)
if "num_truncated_tokens" in inputs and inputs["num_truncated_tokens"] > 0:
logger.info(
"Attention! you are cropping tokens (swag task is ok). "
"If you are training ARC and RACE and you are poping question + options,"
"you need to try to use a bigger max seq length!"
)
choices_inputs.append(inputs)
label = label_map[example.label]
input_ids = [x["input_ids"] for x in choices_inputs]
attention_mask = (
[x["attention_mask"] for x in choices_inputs] if "attention_mask" in choices_inputs[0] else None
)
token_type_ids = (
[x["token_type_ids"] for x in choices_inputs] if "token_type_ids" in choices_inputs[0] else None
)
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
self.dataset = tf.data.Dataset.from_generator(
gen,
(
{
"example_id": tf.int32,
"input_ids": tf.int32,
"attention_mask": tf.int32,
"token_type_ids": tf.int32,
},
tf.int64,
),
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
The `inputs` dict does not include the key `"token_type_ids"`, so `None` is returned, but `None` cannot be converted to `tf.int32` in TensorFlow 2, so the code does not work.
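One possible workaround, sketched here only as an illustration (the `features` objects and the `has_token_type_ids` flag are assumptions, not the library's actual fix): build the generator output spec conditionally, so `token_type_ids` is only declared when the tokenizer actually produces it.
```python
import tensorflow as tf

def build_dataset(features, has_token_type_ids: bool):
    """features: the converted multiple-choice feature objects used above."""
    def gen():
        for f in features:
            inputs = {
                "example_id": f.example_id,
                "input_ids": f.input_ids,
                "attention_mask": f.attention_mask,
            }
            if has_token_type_ids:
                inputs["token_type_ids"] = f.token_type_ids
            yield inputs, f.label

    output_types = {"example_id": tf.int32, "input_ids": tf.int32, "attention_mask": tf.int32}
    if has_token_type_ids:
        output_types["token_type_ids"] = tf.int32
    return tf.data.Dataset.from_generator(gen, (output_types, tf.int64))
```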
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
## Expected behavior
## Environment info
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| 07-13-2020 18:38:01 | 07-13-2020 18:38:01 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,718 | closed | [Don't merge - Bert2Bert] Add training scripts and slight changes to Trainer | Just a draft to keep track of Bert2Bert summary training. | 07-13-2020 18:32:59 | 07-13-2020 18:32:59 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5718?src=pr&el=h1) Report
> Merging [#5718](https://codecov.io/gh/huggingface/transformers/pull/5718?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa5423b1695cd24856bcff47214172e0f540d924&el=desc) will **decrease** coverage by `0.46%`.
> The diff coverage is `10.76%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5718?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5718 +/- ##
==========================================
- Coverage 77.79% 77.33% -0.47%
==========================================
Files 145 146 +1
Lines 25355 25413 +58
==========================================
- Hits 19726 19652 -74
- Misses 5629 5761 +132
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5718?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/bert\_encoder\_decoder\_summary.py](https://codecov.io/gh/huggingface/transformers/pull/5718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZXJ0X2VuY29kZXJfZGVjb2Rlcl9zdW1tYXJ5LnB5) | `0.00% <0.00%> (ø)` | |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.41% <37.50%> (-0.55%)` | :arrow_down: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.77% <100.00%> (+0.22%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5718?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5718?src=pr&el=footer). Last update [fa5423b...d9f6d07](https://codecov.io/gh/huggingface/transformers/pull/5718?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,717 | closed | Update tokenization_t5.py | Minor doc fix. | 07-13-2020 18:21:18 | 07-13-2020 18:21:18 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5717?src=pr&el=h1) Report
> Merging [#5717](https://codecov.io/gh/huggingface/transformers/pull/5717?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7096e47513127d4f072111a7f58f109842a2b6b0&el=desc) will **increase** coverage by `0.76%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5717?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5717 +/- ##
==========================================
+ Coverage 77.22% 77.99% +0.76%
==========================================
Files 146 146
Lines 26005 26005
==========================================
+ Hits 20083 20283 +200
+ Misses 5922 5722 -200
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5717?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.77% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5717?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5717?src=pr&el=footer). Last update [7096e47...0538a54](https://codecov.io/gh/huggingface/transformers/pull/5717?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks! |
transformers | 5,716 | closed | Add generic text classification example in TF | This PR adds a new example script for text classification in TensorFlow with the :hugs:nlp lib. The script allows users to run a text classification task on their own CSV files. | 07-13-2020 15:56:55 | 07-13-2020 15:56:55 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5716?src=pr&el=h1) Report
> Merging [#5716](https://codecov.io/gh/huggingface/transformers/pull/5716?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7cbf0f722d23440f3342aafc27697b50ead5996b?el=desc) will **increase** coverage by `0.10%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5716?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5716 +/- ##
==========================================
+ Coverage 80.32% 80.43% +0.10%
==========================================
Files 174 174
Lines 33446 33446
==========================================
+ Hits 26867 26903 +36
+ Misses 6579 6543 -36
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5716?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.32% <0.00%> (-73.63%)` | :arrow_down: |
| [src/transformers/activations\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `79.03% <0.00%> (-7.80%)` | :arrow_down: |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.87% <0.00%> (-0.36%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.01% <0.00%> (-0.33%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.13%)` | :arrow_up: |
| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5716?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5716?src=pr&el=footer). Last update [7cbf0f7...c97f433](https://codecov.io/gh/huggingface/transformers/pull/5716?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@LysandreJik does it look ok for you?
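For readers looking for the general idea before trying the script, a rough illustration (not the PR's actual code) of loading a user CSV with the `nlp` library and tokenizing it; the file name, the column name `sentence`, and the model checkpoint are assumptions:
```python
import nlp
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
dataset = nlp.load_dataset("csv", data_files={"train": "train.csv"})["train"]

def encode(batch):
    # batch["sentence"] is a list of strings when batched=True
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(encode, batched=True)
```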
transformers | 5,715 | closed | Extending vocabulary by a large size crashes RobertaTokenizerFast | # 🐛 Bug
## Information
Model I am using: RoBERTa
Language I am using the model on: English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import RobertaTokenizerFast
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base')
tokenizer.add_tokens([str(i) for i in range(60000)])
```
Here's the stack trace:
```
thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: CompiledTooBig(10485760)', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/added_vocabulary.rs:299:13
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.46/src/backtrace/libunwind.rs:86
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.46/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print_fmt
at src/libstd/sys_common/backtrace.rs:78
3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
at src/libstd/sys_common/backtrace.rs:59
4: core::fmt::write
at src/libcore/fmt/mod.rs:1069
5: std::io::Write::write_fmt
at src/libstd/io/mod.rs:1537
6: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:62
7: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:49
8: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:198
9: std::panicking::default_hook
at src/libstd/panicking.rs:218
10: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:477
11: rust_begin_unwind
at src/libstd/panicking.rs:385
12: core::panicking::panic_fmt
at src/libcore/panicking.rs:89
13: core::option::expect_none_failed
at src/libcore/option.rs:1272
14: tokenizers::tokenizer::added_vocabulary::AddedVocabulary::add_tokens
15: tokenizers::tokenizer::Tokenizer::add_tokens
16: tokenizers::tokenizer::__init11742626496714830824::__init11742626496714830824::__wrap
17: method_vectorcall_VARARGS_KEYWORDS
at /tmp/build/80754af9/python_1593706424329/work/Objects/descrobject.c:332
18: _PyObject_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127
19: call_function
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963
20: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486
21: function_code_fastcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:283
22: _PyFunction_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:410
23: _PyObject_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127
24: call_function
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963
25: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486
26: _PyEval_EvalCodeWithName
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4298
27: _PyFunction_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:435
28: _PyObject_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127
29: method_vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/classobject.c:60
30: _PyObject_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127
31: call_function
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963
32: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3515
33: _PyEval_EvalCodeWithName
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4298
34: _PyFunction_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:435
35: _PyObject_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127
36: call_function
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963
37: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486
38: gen_send_ex
at /tmp/build/80754af9/python_1593706424329/work/Objects/genobject.c:222
39: _PyGen_Send
at /tmp/build/80754af9/python_1593706424329/work/Objects/genobject.c:292
40: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:2053
41: gen_send_ex
at /tmp/build/80754af9/python_1593706424329/work/Objects/genobject.c:222
42: _PyGen_Send
at /tmp/build/80754af9/python_1593706424329/work/Objects/genobject.c:292
43: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:2053
44: gen_send_ex
at /tmp/build/80754af9/python_1593706424329/work/Objects/genobject.c:222
45: _PyGen_Send
at /tmp/build/80754af9/python_1593706424329/work/Objects/genobject.c:292
46: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:2053
47: gen_send_ex
at /tmp/build/80754af9/python_1593706424329/work/Objects/genobject.c:222
48: _PyGen_Send
at /tmp/build/80754af9/python_1593706424329/work/Objects/genobject.c:292
49: task_step_impl
at /usr/local/src/conda/python-3.8.3/Modules/_asynciomodule.c:2638
50: task_step
at /usr/local/src/conda/python-3.8.3/Modules/_asynciomodule.c:2931
51: _PyObject_MakeTpCall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:159
52: context_run
at /tmp/build/80754af9/python_1593706424329/work/Python/context.c:634
53: cfunction_vectorcall_FASTCALL_KEYWORDS
at /tmp/build/80754af9/python_1593706424329/work/Objects/methodobject.c:437
54: PyVectorcall_Call
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:199
55: do_call_core
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4983
56: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3559
57: function_code_fastcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:283
58: _PyFunction_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:410
59: _PyObject_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127
60: call_function
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963
61: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486
62: function_code_fastcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:283
63: _PyFunction_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:410
64: _PyObject_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127
65: call_function
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963
66: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486
67: function_code_fastcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:283
68: _PyFunction_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:410
69: _PyObject_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127
70: call_function
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963
71: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486
72: function_code_fastcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:283
73: _PyFunction_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:410
74: _PyObject_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127
75: call_function
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963
76: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486
77: function_code_fastcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:283
78: _PyFunction_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:410
79: _PyObject_FastCallDict
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:96
80: _PyObject_Call_Prepend
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:887
81: slot_tp_call
at /tmp/build/80754af9/python_1593706424329/work/Objects/typeobject.c:6521
82: _PyObject_MakeTpCall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:159
83: _PyObject_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:125
84: call_function
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963
85: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3500
86: function_code_fastcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:283
87: _PyFunction_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:410
88: _PyObject_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127
89: call_function
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963
90: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486
91: _PyEval_EvalCodeWithName
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4298
92: _PyFunction_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:435
93: _PyObject_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127
94: method_vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/classobject.c:60
95: _PyObject_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127
96: call_function
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963
97: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3515
98: _PyEval_EvalCodeWithName
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4298
99: _PyFunction_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:435
100: _PyObject_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127
101: call_function
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963
102: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486
103: _PyEval_EvalCodeWithName
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4298
104: _PyFunction_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:435
105: _PyObject_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127
106: call_function
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963
107: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486
108: function_code_fastcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:283
109: _PyFunction_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:410
110: _PyObject_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127
111: call_function
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963
112: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486
113: _PyEval_EvalCodeWithName
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4298
114: _PyFunction_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:435
115: _PyObject_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127
116: method_vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/classobject.c:89
117: PyVectorcall_Call
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:199
118: do_call_core
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:5010
119: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3559
120: _PyEval_EvalCodeWithName
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4298
121: _PyFunction_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:435
122: _PyObject_Vectorcall
at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127
123: call_function
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963
124: _PyEval_EvalFrameDefault
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3500
125: _PyEval_EvalCodeWithName
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4298
126: PyEval_EvalCodeEx
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4327
127: PyEval_EvalCode
at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:718
128: run_eval_code_obj
at /tmp/build/80754af9/python_1593706424329/work/Python/pythonrun.c:1125
129: run_mod
at /tmp/build/80754af9/python_1593706424329/work/Python/pythonrun.c:1147
130: PyRun_FileExFlags
at /tmp/build/80754af9/python_1593706424329/work/Python/pythonrun.c:1063
131: PyRun_SimpleFileExFlags
at /tmp/build/80754af9/python_1593706424329/work/Python/pythonrun.c:428
132: pymain_run_file
at /tmp/build/80754af9/python_1593706424329/work/Modules/main.c:387
133: pymain_run_python
at /tmp/build/80754af9/python_1593706424329/work/Modules/main.c:571
134: Py_RunMain
at /tmp/build/80754af9/python_1593706424329/work/Modules/main.c:650
135: Py_BytesMain
at /tmp/build/80754af9/python_1593706424329/work/Modules/main.c:1096
136: __libc_start_main
137: <unknown>
at ../sysdeps/x86_64/elf/start.S:103
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
fatal runtime error: failed to initiate panic, error 5
Aborted (core dumped
```
## Expected behavior
Tokens should get added normally
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-5.3.0-1032-azure-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 07-13-2020 15:34:15 | 07-13-2020 15:34:15 | I tried using BertTokenizerFast but the problem persists.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Is there any update on the issue?
I have the same problem...<|||||>Also having the same problem while using `tokenizer.add_tokens` with a `unique_list` that holds the words I am trying to add to my tokenizer.<|||||>I also have this issue. <|||||>same issue |
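For anyone blocked by this, one workaround people sometimes use, shown here only as a hedged sketch (it does not fix the crash itself, and the exact `tokenizers` API may differ slightly between versions): instead of pushing tens of thousands of strings through `add_tokens`, which compiles a matcher over every added token, train a tokenizer whose base vocabulary already contains the domain terms. Note that a new vocabulary means the model's embeddings no longer line up and must be resized or retrained. File paths and sizes below are placeholders.
```python
import os
from tokenizers import ByteLevelBPETokenizer
from transformers import RobertaTokenizerFast

os.makedirs("domain_tokenizer", exist_ok=True)

bpe = ByteLevelBPETokenizer()
bpe.train(
    files=["corpus_with_domain_terms.txt"],
    vocab_size=52000,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
bpe.save_model("domain_tokenizer")  # writes vocab.json and merges.txt

tokenizer = RobertaTokenizerFast("domain_tokenizer/vocab.json", "domain_tokenizer/merges.txt")
```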
transformers | 5,714 | closed | facebook/bart-large-mnli input format | Hi folks,
First off, I've been using you guys since the early days and think the effort and time that you put in is just phenomenal. Thank you. All the postgrads I know at the Uni of Edinburgh love HuggingFace.
My question concerns the usage of the ```facebook/bart-large-mnli``` checkpoint - specifically the input formatting. The paper mentions that inputs are concatenated and appended with an EOS token, which is then passed to the classification head.
Something like below perhaps? If this is the case, the probabilities do not seem right, seeing as the first two sentences are the exact same.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoModel
import torch
t = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
mc = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")
s1 = torch.tensor(t("i am good. [EOS] i am good.", padding="max_length")["input_ids"])
s2 = torch.tensor(t("i am good. [EOS] i am NOT good.", padding="max_length")["input_ids"])
s3 = torch.tensor(t("i am good. [EOS] i am bad.", padding="max_length")["input_ids"])
with torch.no_grad():
logits = mc(torch.stack((s1,s2,s3)), output_hidden_states=True)[0]
sm = torch.nn.Softmax()
print(sm(logits))
# tensor([[0.2071, 0.3143, 0.4786], # these sentences are the exact same, so why just 0.47?
# [0.6478, 0.1443, 0.2080], # slightly better, but this checkpoint gets ~80% acc on MNLI
# [0.3937, 0.2987, 0.3076]]) # This distribution is almost random, but the sentences are the exact opposite
```
I note that ```[EOS]``` is not registered as a special token with the tokenizer. When I use the registered ```<s>``` or ```</s>``` tokens instead, I get similar results.
| 07-13-2020 15:20:17 | 07-13-2020 15:20:17 | Thanks for the kind words!
1) definitely needs to be `<s>` instead of `[EOS]`, but the tokenizer should do this for you.
2) I suspect that the tokenizer takes pairs of sentences. I know that @VictorSanh and I have used that model inside the run_glue.py script and gotten reasonable accuracy.
The logic seems to be here: https://github.com/huggingface/transformers/blob/fcf0652460753f8a81f7576e8abdaa6b3742f00e/src/transformers/data/processors/glue.py#L132
Also note that there is a different class to label mapping issue for Roberta, XLM and Bart that datasets/glue.py takes care of:
Before the fix, the classes are `dict(entailment=0, contradiction=1, neutral=2)`.
@VictorSanh please confirm that I am not spewing lies. I have not worked with this dataset very much.
<|||||>Thanks for the quick response, as always!
Firstly, using ```<s>``` or ```</s>``` with the initial code (above) seems to make no noticeable difference to the distribution.
I have taken a look at the [permalinked](https://github.com/huggingface/transformers/blob/fcf0652460753f8a81f7576e8abdaa6b3742f00e/src/transformers/data/processors/glue.py#L132) code and have attempted to replicate it below, to no avail.
Whether the tokenizer is passed a list of two sentences, a tuple of two sentences, or a list containing a tuple, it returns a list of two tokenizations - one for each sentence - rather than one overall tokenization with automatic insertion of the `sep_token`.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoModel
import torch
t = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
s1 = torch.tensor(t(["here we are passing ...","... a list to the tokenizer"], padding="max_length")["input_ids"]).to("cuda:0") # typical use of tokenizer
s2 = torch.tensor(t(("here we are passing ...","... a tuple to the tokenizer"), padding="max_length")["input_ids"]).to("cuda:0") # atypical use
s3 = torch.tensor(t([("here we are passing ...","... a list containing a tuple to the tokenizer")], padding="max_length")["input_ids"]).to("cuda:0") # as glue.py (I think!)
s1.size() # torch.Size([2, 1024])
s2.size() # torch.Size([2, 1024])
s3.size() # torch.Size([2, 1024]) >> none are the expected torch.Size([1, 1024])
```<|||||>I know how to solve the tokenizer problem
```python
tok = AutoTokenizer.from_pretrained('facebook/bart-large')
pair1 = ("this is a sent", "another sent")
pair2 = ("this is a sent about same", "another sent")
assert tok(*pair1, return_tensors='pt').input_ids.shape == (1,10)
assert tok([pair1, pair2], return_tensors='pt',padding=True).input_ids.shape == (2,12)
```<|||||>Great this is good progress, thank you!
A remaining issue is that the distributions still look off:
1. One pair is entailed, the other contradicting - and yet their classification class remains the same.
2. The certainty of the model is possibly very low for identical sentences.
```python
t = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
mc = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli").to("cuda:0")
s1 = ("this is good", "this is good")
s2 = ("this is good", "this is bad")
inputs = torch.tensor([t(*s1, padding="max_length")["input_ids"],
t(*s2, padding="max_length")["input_ids"]]
).to("cuda:0")
with torch.no_grad():
logits = mc(inputs, output_hidden_states=True)[0]
sm = torch.nn.Softmax()
print(sm(logits)) # tensor([[0.0991, 0.2503, 0.6507],
# [0.1670, 0.2707, 0.5623]], device='cuda:0')
```<|||||>Interesting. Happy to look into it if there's a bug, but otherwise I think this is just a model issue. (Bug = the prediction is very different from the fairseq model for the same input).<|||||>> Great this is good progress, thank you!
>
> A remaining issue is that the distributions still look off:
>
> 1. One pair is entailed, the other contradicting - and yet their classification class remains the same.
> 2. The certainty of the model is possibly very low for identical sentences.
>
> ```python
> t = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
> mc = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli").to("cuda:0")
>
> s1 = ("this is good", "this is good")
> s2 = ("this is good", "this is bad")
>
> inputs = torch.tensor([t(*s1, padding="max_length")["input_ids"],
> t(*s2, padding="max_length")["input_ids"]]
> ).to("cuda:0")
>
> with torch.no_grad():
> logits = mc(inputs, output_hidden_states=True)[0]
> sm = torch.nn.Softmax()
> print(sm(logits)) # tensor([[0.0991, 0.2503, 0.6507],
> # [0.1670, 0.2707, 0.5623]], device='cuda:0')
> ```
Have you tried with longer sentences? MNLI has inputs that are on average longer.
While I agree that the second distribution is a bit off, the first one seems fairly spiked to me.
Agree with Sam, might just be the model's behavior (as opposed to a bug)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
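One extra sanity check that avoids assuming any particular label order, sketched here for illustration: read the index-to-class mapping from the checkpoint's config instead of hard-coding it. Everything below uses standard calls; only the example sentence pairs are made up.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")

batch = tok([("this is good", "this is good"), ("this is good", "this is bad")],
            return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**batch)[0].softmax(dim=-1)

print(model.config.id2label)  # the checkpoint's own index -> class-name mapping
for row in probs:
    print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(row)})
```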
|
transformers | 5,713 | closed | ONNX export broken for QA models | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): bert-base-uncased
Language I am using the model on (English, Chinese ...): -
The problem arises when using:
* [x] the official example scripts: convert_graph_to_onnx.py
* [x] my own modified scripts: any script attempting to do a regular torch.onnx.export on a `PreTrainedModel`
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQuAD
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run:
`src/transformers/convert_graph_to_onnx.py --model bert-base-uncased --framework pt --pipeline question-answering /tmp/test_hf_onnx/test_hf.onnx`
2. Observe in console:
```
Error while converting the model: Only tuples, lists and Variables supported as JIT inputs/outputs. Dictionaries and strings are also accepted but their usage is not recommended. But got unsupported type QuestionAnsweringModelOutput
```
## Expected behavior
Successful export of the model to `/tmp/test_hf_onnx/test_hf.onnx`
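For anyone needing an interim path, a workaround sketch (not the library's fix): export through a thin wrapper module that returns plain tensors, so `torch.onnx.export` sees an ordinary tuple instead of a model-output object. The question/context strings and output file name are placeholders, and the indexing assumes the first two outputs are the start/end logits, which holds for the QA head used here.
```python
import torch
from transformers import BertForQuestionAnswering, BertTokenizer

model = BertForQuestionAnswering.from_pretrained("bert-base-uncased").eval()
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

class ExportableQA(torch.nn.Module):
    def __init__(self, qa_model):
        super().__init__()
        self.qa_model = qa_model

    def forward(self, input_ids, attention_mask):
        outputs = self.qa_model(input_ids, attention_mask=attention_mask)
        # return plain tensors so the exporter sees a tuple
        return outputs[0], outputs[1]

encoded = tokenizer("Who lives in Berlin?", "Carla lives in Berlin.", return_tensors="pt")
torch.onnx.export(
    ExportableQA(model),
    (encoded["input_ids"], encoded["attention_mask"]),
    "qa.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["start_logits", "end_logits"],
    opset_version=11,
)
```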
## Environment info
- `transformers` version: current master ce374ba87767d551f720242d5e64bfa976531079
- Platform: Ubuntu 18.04
- Python version: 3.7
- PyTorch version (GPU?): 1.5.0
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
| 07-13-2020 15:16:57 | 07-13-2020 15:16:57 | Might I also add that breaking regular PyTorch ONNX export like that with custom model output wrappers is IMO a Really Bad Idea (TM). Your `PreTrainedModel`'s are still `torch.nn.Module`'s, and as such should be exportable using standard `torch.onnx.export` APIs.
To make matters worse, the proposed `convert_pytorch` API that I discovered in `src/transformers/convert_graph_to_onnx.py` does not work with general argument-forwarding `forward(*args, **kwargs)` wrapper over the specific HF Transformers models. The `convert_pytorch` path's `__code__` shenanigans in their current way fail when exporting the following kinds of models:
```python
class BertWithWrappedForward(BertForQuestionAnswering):
    def forward(self, *args, **kwargs):
        # pre-forward actions here
        return super().forward(*args, **kwargs)
```
The default `torch.onnx.export` path might have worked here, and it would be good to see some kind of support for such models in the `convert_pytorch` API, or at least a fallback scenario.<|||||>@mfuntowicz @julien-c @LysandreJik <|||||>Just wanted to add that `convert_graph_to_onnx.py` is also broken for text-generation with GPT2 for me.
Running `python src/transformers/convert_graph_to_onnx.py --pipeline text-generation --model gpt2 --framework pt output/model.onnx` returns
```
Error while converting the model: Only tuples, lists and Variables supported as JIT inputs/outputs.
Dictionaries and strings are also accepted but their usage is not recommended. But got unsupported type CausalLMOutputWithPast
```
Although the type is different (now `CausalLMOutputWithPast`), this seems to be the same error happening.<|||||>To help with resolving the issue, this is the merge that seems to be causing the problems here: [5438](https://github.com/huggingface/transformers/pull/5438)
Also tried running the previous version before this release (`pip install -Iv transformers=3.0.1`), and now my model is properly converted to onnx.<|||||>Hi! This was an unseen error that appeared when we made the switch from tuples to namedtuples. The fix was to specify to pipelines to continue using tuples instead of namedtuples!
#6061 should have fixed it, thanks for letting us know! |
transformers | 5,712 | closed | How to download Pre-trained T5 model? | Hi,
I tried looking for ways to download & use the pre-trained T5-small model, but didn't find any API mentioned in the documentation for downloading it. I did find download links, but I don't know whether it will work if I pass the path of the model.
Thanks in advance.. | 07-13-2020 14:02:27 | 07-13-2020 14:02:27 | ```python
from transformers import T5ForConditionalGeneration
t5 = T5ForConditionalGeneration.from_pretrained("t5-small")
```
does not work? <|||||>@patrickvonplaten Thanks for the pointer, I tried & getting this:
```
>>> from transformers import T5ForConditionalGeneration
>>> t5 = T5ForConditionalGeneration.from_pretrained("t5-small")
Traceback (most recent call last):
File "/root/anaconda3/envs/docsearch/lib/python3.7/site-packages/transformers/configuration_utils.py", line 242, in get_config_dict
raise EnvironmentError
OSError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/anaconda3/envs/docsearch/lib/python3.7/site-packages/transformers/modeling_utils.py", line 604, in from_pretrained
**kwargs,
File "/root/anaconda3/envs/docsearch/lib/python3.7/site-packages/transformers/configuration_utils.py", line 200, in from_pretrained
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/root/anaconda3/envs/docsearch/lib/python3.7/site-packages/transformers/configuration_utils.py", line 251, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load config for 't5-small'. Make sure that:
- 't5-small' is a correct model identifier listed on 'https://huggingface.co/models'
- or 't5-small' is the correct path to a directory containing a config.json file
```
I am new on huggingface API. I am going to download it & give path of the model. Let's see how it works... But let me know if you can give me some suggestions.<|||||>Hi @deepankar27, do you mind specifying which `transformers` version you're using?<|||||>@LysandreJik Sorry for the late reply, I figured it out, issue was with my config files. Thanks... :)<|||||>Where in the computer memory is the T5-small model being downloaded by using @patrickvonplaten 's code?<|||||>The model is downloaded from AWS where it is saved and then usually saved in a cache folder (usually `~/.cache/torch/transformers` as far as I know) |
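For later readers, a small illustration of where the files end up and how to control it; the paths are examples only:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Downloads (or reuses) the files in a cache; by default something like ~/.cache/torch/transformers
model = T5ForConditionalGeneration.from_pretrained("t5-small", cache_dir="/tmp/hf_cache")
tokenizer = T5Tokenizer.from_pretrained("t5-small", cache_dir="/tmp/hf_cache")

# Keep an explicit local copy that can be loaded by path later, without a network connection
model.save_pretrained("./t5-small-local")
tokenizer.save_pretrained("./t5-small-local")
model = T5ForConditionalGeneration.from_pretrained("./t5-small-local")
```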
transformers | 5,711 | closed | QA Pipeline: Key Error due to predicting a token outside of allowed context | # 🐛 Bug
## Information
Model: distilbert
Language: English
The problem arises when using: QA inference via `pipeline`
The pipeline throws an exception when the model predicts a token that is not part of the document (e.g. final special token).
In the example below, the model predicts token 13 to be the end of the answer span.
The context, however, ends at token 12 and token 13 is the final [SEP] token. Therefore, we get a key error when trying to access
`feature.token_to_orig_map[13]` here:
https://github.com/huggingface/transformers/blob/ce374ba87767d551f720242d5e64bfa976531079/src/transformers/pipelines.py#L1370-L1380
## To reproduce
```
nlp = pipeline("question-answering",model="distilbert-base-uncased-distilled-squad",
tokenizer="distilbert-base-uncased",
device=-1)
nlp(question="test finding", context="My name is Carla and I live in Berlin")
```
results in
```
Traceback (most recent call last):
File "/home/mp/deepset/dev/haystack/debug.py", line 16, in <module>
nlp(question="test finding", context="My name is Carla and I live in Berlin")
File "/home/mp/miniconda3/envs/py37/lib/python3.7/site-packages/transformers/pipelines.py", line 1316, in __call__
for s, e, score in zip(starts, ends, scores)
File "/home/mp/miniconda3/envs/py37/lib/python3.7/site-packages/transformers/pipelines.py", line 1316, in <listcomp>
for s, e, score in zip(starts, ends, scores)
KeyError: 13
```
## Expected behavior
Predictions that are pointing to tokens that are not part of the "context" (here: the last [SEP] token) should be filtered out from possible answers.
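A minimal sketch of that idea (illustrative only, not the pipeline's actual fix): mask out positions that are not real context tokens before picking the best span. The toy scores and mask below are made up.
```python
import numpy as np

def mask_non_context(start_scores, end_scores, context_mask):
    """context_mask is 1 for passage tokens, 0 for question and special tokens."""
    return start_scores * context_mask, end_scores * context_mask

start = np.array([0.05, 0.10, 0.60, 0.05, 0.20])
end = np.array([0.02, 0.08, 0.10, 0.10, 0.70])  # the best end score sits on a special token
mask = np.array([0, 1, 1, 1, 0])
print(mask_non_context(start, end, mask))
```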
## Environment info
- `transformers` version: 3.0.2
- Platform: Ubuntu 18.04
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.1, CPU
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 07-13-2020 13:38:04 | 07-13-2020 13:38:04 | Hi @tholor,
Thanks for reporting the issue.
We did have an issue where predictions were going out of bounds on QA pipeline and it has been fixed on master:
```python
>>> nlp = pipeline("question-answering",model="distilbert-base-uncased-distilled-squad",
tokenizer="distilbert-base-uncased",
device=-1)
>>> nlp(question="test finding", context="My name is Carla and I live in Berlin")
>>> {'score': 0.41493675112724304, 'start': 11, 'end': 16, 'answer': 'Carla'}
```
If you are able to checkout from master branch I would be happy to hear back from you to make sure it's working as expected on your side as well.
Let us know 😃
Morgan<|||||>Hi @mfuntowicz ,
Works like a charm now. Thanks for the fix! |
transformers | 5,710 | closed | Attention heads attend equally after conversion from tensorflow checkpoint | # 🐛 Bug
## Information
Hi.
I'm using the notebook https://github.com/jessevig/bertviz/blob/master/head_view_bert.ipynb from https://github.com/jessevig/bertviz for visualizing attention in the BERT model. The given example works fine with the default PyTorch models. The problem arises when I convert a BERT-Base Multilingual Uncased model that was pre-trained on a custom dataset (with https://github.com/google-research/bert#pre-training-with-bert).
Pytorch default model visualization:

Example of attention values for default model:
[[8.3652e-03, 6.9530e-02, 6.6828e-02, ..., 9.5115e-03, 2.7546e-02, 4.5171e-01],
[3.3669e-03, 1.2537e-02, 9.2709e-03, ..., 1.5638e-03, 1.0154e-03, 9.1897e-01],
[6.5795e-03, 3.6612e-03, 5.4454e-02, ..., 1.1923e-03, 4.1071e-03, 8.4899e-01],
...,
[7.3705e-03, 2.5430e-03, 7.6645e-03, ..., 1.7184e-02, 4.6256e-02, 8.2301e-01],
[2.2311e-02, 1.8006e-03, 4.3833e-02, ..., 9.1167e-03, 1.3746e-01, 6.2295e-01],
[7.5967e-02, 4.2936e-02, 4.6500e-02, ..., 4.9925e-02, 6.6538e-02, 5.0721e-02]],
[[3.5124e-02, 2.2295e-02, 9.2680e-03, ..., 1.1409e-02, 1.7234e-02, 5.5768e-01],
[7.0571e-03, 3.7321e-01, 1.7890e-02, ..., 7.6114e-03, 8.8965e-03, 3.6259e-01],
[8.8010e-03, 4.9023e-03, 1.4315e-01, ..., 2.2279e-03, 7.9276e-02, 4.3233e-01],
...,
Visualization after model conversion from tf to pytorch:

Example of attention values after conversion:
[[0.0716, 0.0686, 0.0556, ..., 0.0776, 0.0783, 0.0648],
[0.0893, 0.0513, 0.0641, ..., 0.0606, 0.0908, 0.0554],
[0.0868, 0.0663, 0.0621, ..., 0.0822, 0.0777, 0.0471],
...,
[0.0906, 0.0750, 0.0649, ..., 0.0807, 0.1011, 0.0444],
[0.0670, 0.0667, 0.0620, ..., 0.0877, 0.0739, 0.0515],
[0.0773, 0.0738, 0.0652, ..., 0.0787, 0.0856, 0.0518]],
[[0.0553, 0.0622, 0.0665, ..., 0.0585, 0.0845, 0.0670],
[0.0631, 0.0829, 0.0592, ..., 0.0608, 0.0968, 0.0532],
[0.0561, 0.0720, 0.0617, ..., 0.0628, 0.1010, 0.0802],
...,
Other experiments:
1. Loading the TensorFlow checkpoint directly without conversion - works fine, the attentions are not all equal;
2. Loading the PyTorch model after saving it from the loaded TensorFlow checkpoint also works fine;
3. Tested with the BERT-Base Multilingual Uncased model without pre-training (to be sure that the pre-training doesn't cause the problem) - got the same results.
So I suspect that either the conversion from the TF checkpoint works incorrectly or I'm converting the model the wrong way.
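To narrow this down, one hedged check (a sketch that reuses the paths from the snippets in this issue): load the checkpoint both ways and compare the weights directly; if the state dicts match, the problem is on the loading/usage side rather than in the conversion itself.
```python
import torch
from transformers import BertForPreTraining

config_file = 'model/multilingual_L-12_H-768_A-12/bert_config.json'
tf_model = BertForPreTraining.from_pretrained(
    'model/multilingual_L-12_H-768_A-12/bert_model.ckpt.index', from_tf=True, config=config_file)
pt_model = BertForPreTraining.from_pretrained('model/multilingual_L-12_H-768_A-12_pytorch/')

pt_state = pt_model.state_dict()
for name, tf_param in tf_model.state_dict().items():
    pt_param = pt_state[name]
    if tf_param.dtype.is_floating_point and not torch.allclose(tf_param, pt_param, atol=1e-5):
        print("mismatch:", name)
```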
Any explanation of described behavior would be appreciated, thank you.
## To reproduce
Steps to reproduce the behavior:
1. Example from https://github.com/jessevig/bertviz/blob/master/head_view_bert.ipynb - works fine.
2. Converting the TensorFlow checkpoint to PyTorch gives incorrect attentions - does not work.
```python
from transformers import convert_bert_original_tf_checkpoint_to_pytorch
convert_bert_original_tf_checkpoint_to_pytorch.convert_tf_checkpoint_to_pytorch(
'model/multilingual_L-12_H-768_A-12/bert_model.ckpt.index',
'model/multilingual_L-12_H-768_A-12/bert_config.json',
'model/multilingual_L-12_H-768_A-12/pytorch_model.bin')
```
Then I copy converted pytorch_model.bin, config.json, vocab.txt to model/multilingual_L-12_H-768_A-12_pytorch/
```python
from transformers import BertTokenizer, BertForPreTraining
pytorch_bert_model = 'model/multilingual_L-12_H-768_A-12_pytorch/'
model = BertForPreTraining.from_pretrained(pytorch_bert_model)
tokenizer = BertTokenizer.from_pretrained(pytorch_bert_model, do_lower_case=True)
```
No errors, but the attention values seem wrong, as shown above.
```
INFO:transformers.configuration_utils:loading configuration file model/multilingual_L-12_H-768_A-12_pytorch/config.json
INFO:transformers.configuration_utils:Model config BertConfig {
"attention_probs_dropout_prob": 0.1,
"directionality": "bidi",
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"output_attentions": true,
"pad_token_id": 0,
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform",
"type_vocab_size": 2,
"vocab_size": 105879
}
INFO:transformers.modeling_utils:loading weights file model/multilingual_L-12_H-768_A-12_pytorch/pytorch_model.bin
INFO:transformers.modeling_utils:All model checkpoint weights were used when initializing BertForPreTraining.
INFO:transformers.modeling_utils:All the weights of BertForPreTraining were initialized from the model checkpoint at model/multilingual_L-12_H-768_A-12_pytorch/.
If your task is similar to the task the model of the ckeckpoint was trained on, you can already use BertForPreTraining for predictions without further training.
INFO:transformers.tokenization_utils_base:loading file model/multilingual_L-12_H-768_A-12_pytorch/vocab.txt
```
Using ```BertModel``` for loading converted model gives the same result.
3. Loading tensorflow checkpoint directly without conversion - works fine
```python
from transformers import BertTokenizer, BertForPreTraining
bert_config_file = 'model/multilingual_L-12_H-768_A-12_pytorch/config.json'
bert_vocab_file = 'model/multilingual_L-12_H-768_A-12_pytorch/vocab.txt'
tf_bert_checkpoint ='model/multilingual_L-12_H-768_A-12/bert_model.ckpt.index'
model = BertForPreTraining.from_pretrained(tf_bert_checkpoint, from_tf=True, config=bert_config_file)
tokenizer = BertTokenizer.from_pretrained(bert_vocab_file, do_lower_case=True)
```
4. Loading pytorch model after saving it from loaded tensorflow checkpoint - works fine
```python
from transformers import BertTokenizer, BertForPreTraining
bert_config_file = 'model/multilingual_L-12_H-768_A-12_pytorch/config.json'
bert_vocab_file = 'model/multilingual_L-12_H-768_A-12_pytorch/vocab.txt'
tf_bert_checkpoint ='model/multilingual_L-12_H-768_A-12/bert_model.ckpt.index'
model = BertForPreTraining.from_pretrained(tf_bert_checkpoint, from_tf=True, config=bert_config_file)
model.save_pretrained('model/test/')
pytorch_bert_checkpoint_path = 'model/test/pytorch_model.bin'
model = BertForPreTraining.from_pretrained(pytorch_bert_checkpoint_path, config=bert_config_file)
tokenizer = BertTokenizer.from_pretrained(bert_vocab_file, do_lower_case=True)
```
## Expected behavior
Conversion from the TensorFlow checkpoint should not affect the model's attention values (they should not all be nearly equal), and the attentions can be visualized correctly.
## Environment info
- `transformers` version: 3.0.2
- Platform: Ubuntu 18.04.3 LTS
- Python version: 3.7.7
- PyTorch version (GPU): 1.4.0
- Tensorflow version (GPU): 1.15.0
- Using GPU in script: No
- Using distributed or parallel set-up in script: No
| 07-13-2020 13:04:43 | 07-13-2020 13:04:43 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,709 | closed | Run Language Modeling on Colab TPU cores terminates | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT2
Language I am using the model on (English, Chinese ...): English (wikitext-2)
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I'm trying to test `run_language_modeling.py` on GPT2 using all 8 TPU cores.
Running on 1 core gives the following error:
```bash
Epoch: 0% 0/3 [00:00<?, ?it/s]
Iteration: 0it [00:00, ?it/s]Exception in device=TPU:0: 'NoneType' object cannot be interpreted as an integer
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 119, in _start_fn
fn(gindex, *args)
File "/content/transformers/examples/language-modeling/run_language_modeling.py", line 292, in _mp_fn
main()
File "/content/transformers/examples/language-modeling/run_language_modeling.py", line 260, in main
trainer.train(model_path=model_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 519, in train
self.epoch = epoch + (step + 1) / len(epoch_iterator)
TypeError: 'NoneType' object cannot be interpreted as an integer
```
Running with all 8 cores gives this one:
```bash
/usr/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
Traceback (most recent call last):
File "transformers/examples/xla_spawn.py", line 72, in <module>
main()
File "transformers/examples/xla_spawn.py", line 68, in main
xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 182, in spawn
start_method=start_method)
File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
while not context.join():
File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py", line 108, in join
(error_index, name)
Exception: process 0 terminated with signal SIGKILL
```
I'm running this on a Colab TPU Notebook.
## To reproduce
Steps to reproduce the behavior:
```python
VERSION = "20200325" #@param ["1.5" , "20200325", "nightly"]
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
!python pytorch-xla-env-setup.py --version $VERSION
import torch_xla
import torch_xla.core.xla_model as xm
!pip install git+https://github.com/huggingface/transformers.git
!git clone https://github.com/huggingface/transformers.git
!curl https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip --output wikitext-2-v1.zip
!unzip wikitext-2-v1.zip
!rm wikitext-2-v1.zip
!python transformers/examples/xla_spawn.py --num_cores 1 \
transformers/examples/language-modeling/run_language_modeling.py \
--output_dir=output \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--do_train \
--train_data_file=wikitext-2/wiki.train.tokens \
--do_eval \
--eval_data_file=wikitext-2/wiki.test.tokens \
--per_device_train_batch_size 1
```
## Expected behavior
Fine-tune the model and save it.
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0a0+d6149a7 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes and No
| 07-13-2020 09:31:01 | 07-13-2020 09:31:01 | For the single-core problem, I think it is related to how the trainer prepares the `epoch_loader` object [here](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L480-L484).<|||||>Update: for the single-core problem, removing the ` / len(epoch_iterator)` part from this [line](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L519) solves it, so I suggest precomputing the value with `len(train_loader)` before this [if statement](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L480) and using it later. The multi-core problem still exists; could it be related to RAM limits in Google Colab?<|||||>This indeed seems to be a problem. I encountered the same issue.
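For anyone skimming the thread, here is roughly what that suggestion amounts to; this is a toy sketch with made-up names and data, not the actual `trainer.py` code:
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins for the real training objects.
train_dataloader = DataLoader(TensorDataset(torch.randn(64, 8)), batch_size=4)
steps_per_epoch = len(train_dataloader)  # precomputed once, before any TPU wrapping

for epoch in range(3):
    epoch_iterator = train_dataloader  # on TPU this would be a ParallelLoader without __len__
    for step, batch in enumerate(epoch_iterator):
        # ... forward / backward / optimizer step would go here ...
        epoch_progress = epoch + (step + 1) / steps_per_epoch  # no call to len(epoch_iterator)
```
The point is simply that the length is measured on the plain `DataLoader`, which has a `__len__`, rather than on the TPU wrapper, which does not.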
The hack suggested by @AliOsm seems to work for now.<|||||>@julien-c could you please take a look?<|||||>The memory was the problem! At the beginning of the notebook, run the following cell to get the 35 GB RAM runtime instead of the 12 GB one:
```python
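# This cell intentionally tries to allocate far more memory than is available; Colab then crashes the session and offers the high-RAM runtime.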
import torch
torch.tensor([10.]*10000000000)
```
Then, use this snippet of code to finetune GPT-2 on wikitext-2:
```bash
VERSION = "nightly" #@param ["1.5" , "20200325", "nightly"]
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
!python pytorch-xla-env-setup.py --version $VERSION
!pip install git+https://github.com/huggingface/transformers.git
!git clone https://github.com/huggingface/transformers.git
!curl https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip --output wikitext-2-v1.zip
!unzip wikitext-2-v1.zip
!rm wikitext-2-v1.zip
!python transformers/examples/xla_spawn.py --num_cores 8 \
transformers/examples/language-modeling/run_language_modeling.py \
--output_dir=output \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--do_train \
--train_data_file=wikitext-2/wiki.train.tokens \
--do_eval \
--eval_data_file=wikitext-2/wiki.test.tokens \
--per_device_train_batch_size 2 \
--overwrite_output_dir
```
It will be helpful to put this in the documentation :3 |
transformers | 5,708 | closed | For Roberta pretraining, how to enable large batch training using gradient accumulation? | In the example code, where can I enable gradient accumulation for large batch size training. Thanks. https://github.com/huggingface/transformers/tree/master/examples/language-modeling | 07-13-2020 08:50:27 | 07-13-2020 08:50:27 | Hi! You can use the `--gradient_accumulation_steps=N_STEPS` argument to the `run_language_modeling.py` script for that.
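If you are driving the `Trainer` from your own script instead of the example script, the same knob is exposed on `TrainingArguments`; a minimal sketch (all values here are placeholders) would be:
```python
from transformers import TrainingArguments

# Effective batch size = per_device_train_batch_size * gradient_accumulation_steps
# (times the number of devices), so 8 * 16 = 128 samples per optimizer step here.
args = TrainingArguments(
    output_dir="output",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=16,
    num_train_epochs=3,
)
```
The example script just parses its command-line flags into this same `TrainingArguments` object, so either route ends up in the same place.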
You can see all available flags by doing `python run_language_modeling.py --help` |
transformers | 5,707 | closed | Span Mask Fill | I see that Transformers does not support Ernie, but am in search of a way to [MASK] phrases. Can somebody guide me to an alternative to Ernie, code, or a way I could do this myself? | 07-13-2020 06:33:20 | 07-13-2020 06:33:20 | Maybe you can get inspiration from here: https://github.com/facebookresearch/SpanBERT<|||||>> Maybe you can get inspiration from here: https://github.com/facebookresearch/SpanBERT
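If the goal is simply to replace a multi-token phrase with `[MASK]` tokens and let a stock BERT checkpoint fill the span in, a rough sketch could look like the following; this is not ERNIE or SpanBERT (which are trained for span masking and will do better on long phrases), and the sentence and span positions are made up:
```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer("The capital of France is the city of Paris.", return_tensors="pt")
input_ids = inputs["input_ids"].clone()

# Mask a contiguous span of wordpieces (the positions below are illustrative).
span_start, span_end = 5, 8
input_ids[0, span_start:span_end] = tokenizer.mask_token_id

with torch.no_grad():
    logits = model(input_ids=input_ids, attention_mask=inputs["attention_mask"])[0]

predicted_ids = logits[0, span_start:span_end].argmax(dim=-1)
print(tokenizer.convert_ids_to_tokens(predicted_ids.tolist()))
```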
@RudrakshTuwani
Thank you for your response. I've actually tried to implement SpanBERT previously, but my status as a beginner must have barred me from doing it properly.
While SpanBERT has "the same format as the HuggingFace BERT models", the outputs were only special tokens or letters. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,706 | closed | can't resume training from a saved checkpoint in run_glue | # 🐛 Bug
Hi,
I'm using run_glue.py to train a RoBERTa model. I ran the training for a few hours, but after 2 epochs it crashed due to low disk space. I now want to resume the training, so I changed --model_name_or_path from roberta-base to [my checkpoint dir]. But then I get the following error:
"OSError: Model name 'models/tmp/roberta512/checkpoint-27000' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'models/tmp/roberta512/checkpoint-27000' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url."
in the checkpoint dir there are only two files:
pytorch_model.bin and config.json
## Information
This is the command line I used to run initially:
python run_glue.py --model_name_or_path roberta-base --task_name MNLI --do_train --do_eval --data_dir /home/nlp/ohadr/PycharmProjects/BERT_classification/glue_data/MNLI --max_seq_length 512 --per_device_train_batch_size 8 --learning_rate 5e-5 --num_train_epochs 3.0 --output_dir /tmp/MNLI/roberta512
This is the command line I tried to resume training with:
python run_glue.py --model_name_or_path /tmp/MNLI/roberta512/checkpoint-27000 --task_name MNLI --do_train --do_eval --data_dir /home/nlp/ohadr/PycharmProjects/BERT_classification/glue_data/MNLI --max_seq_length 512 --per_device_train_batch_size 8 --learning_rate 5e-5 --num_train_epochs 3.0 --output_dir models/tmp/roberta512_cont
Model I am using (Bert, XLNet ...): Roberta
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [v] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [v] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. train the model using the following:
python run_glue.py --model_name_or_path roberta-base --task_name MNLI --do_train --do_eval --data_dir /home/nlp/ohadr/PycharmProjects/BERT_classification/glue_data/MNLI --max_seq_length 512 --per_device_train_batch_size 8 --learning_rate 5e-5 --num_train_epochs 3.0 --output_dir /tmp/MNLI/roberta512
2. Use the following line to resume training from the last saved checkpoint; replace the marked directory below with your own directory:
python run_glue.py --model_name_or_path [/tmp/MNLI/roberta512/checkpoint-27000<<change it!>>] --task_name MNLI --do_train --do_eval --data_dir /home/nlp/ohadr/PycharmProjects/BERT_classification/glue_data/MNLI --max_seq_length 512 --per_device_train_batch_size 8 --learning_rate 5e-5 --num_train_epochs 3.0 --output_dir models/tmp/roberta512_cont
Error message:
"OSError: Model name 'models/tmp/roberta512/checkpoint-27000' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'models/tmp/roberta512/checkpoint-27000' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url."
## Expected behavior
I would expect the model to continue training from that checkpoint
## Environment info
- `transformers` version: 3.0.1
- Platform: Linux-3.10.0-1127.13.1.el7.x86_64-x86_64-with-centos-7.8.2003-Core
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: PARALLEL
| 07-13-2020 06:17:35 | 07-13-2020 06:17:35 | Thanks!<|||||>sorry, in the checkpoint directory I have 5 files (and not 2 as I wrote above):
pytorch_model.bin
training_args.bin
config.json
optimizer.pt
scheduler.pt
<|||||>The vocabulary file is missing from the checkpoint folder. You can fall back to the base model vocabulary by adding one more parameter, `--tokenizer_name roberta-base`.
Try this:
```
python run_glue.py --model_name_or_path [/tmp/MNLI/roberta512/checkpoint-27000<<change it!>>] --tokenizer_name roberta-base --task_name MNLI --do_train --do_eval --data_dir /home/nlp/ohadr/PycharmProjects/BERT_classification/glue_data/MNLI --max_seq_length 512 --per_device_train_batch_size 8 --learning_rate 5e-5 --num_train_epochs 3.0 --output_dir models/tmp/roberta512_cont
```<|||||>I am having a similar issue when trying to evaluate checkpoints on a test set. If I copy the `vocab.txt` from the final model to the checkpoint folder and evaluate it, the accuracy is significantly lower. The final model had 0.54 and the checkpoints are all in the range from 0.31 to 0.38. That confuses me. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
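As a side note on the missing-vocabulary point above, one way to make a checkpoint directory self-contained for resuming is to write the tokenizer files into it yourself. A minimal sketch, reusing the checkpoint path from this thread (adjust it to your own checkpoint):
```python
from transformers import RobertaTokenizer

# Writes vocab.json, merges.txt and the tokenizer config into the checkpoint
# directory so it can be reloaded without passing --tokenizer_name separately.
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
tokenizer.save_pretrained("/tmp/MNLI/roberta512/checkpoint-27000")
```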
<|||||>How can I train on my own custom data?
|
transformers | 5,705 | closed | Any insight to this mystery issue? Using the Keras functional API results in whole deleted weights/layers for transformer layers. | # ❓ Questions & Help
This is probably a Keras/TensorFlow bug, but I just wanted to check in here in case I overlooked anything.
I've discovered a bug: if transformer layers are copied from a transformer model and used as individual layers, and a Keras model is then built with the functional API, weights/layers end up missing from the list of trainable layers. The issue goes away if I build the Keras model with model subclassing instead.
I raised the issue here
https://github.com/tensorflow/tensorflow/issues/40638#event-3468314954
but you can directly checkout the issue in this colab notebook
https://colab.research.google.com/gist/Santosh-Gupta/273361f873e4daf572fddea691b1f325/missingtrainablevars.ipynb
Which copies the layers from one of your transformer models. I also made one where I implemented the transformer layers from (near) scratch, and got the same result
https://colab.research.google.com/gist/ravikyram/0191e12b7c6d9afeb80ccc009870b255/untitled52.ipynb
This is likely a bug in Keras, but it may take a while to pinpoint exactly what is causing it, since a transformer layer has many parts. I just wanted to post it here in case there is some insight into the issue, or a way to narrow down the cause.
|
transformers | 5,704 | closed | Make the order of additional special tokens deterministic | In `SpecialTokensMixin.all_special_tokens_extended`, deduplication is performed by `all_toks = list(set(all_toks))`. However, this changes the ordering of additional special tokens, and the resulting order depends on the hash seed of the set data structure. This leads to non-deterministic ids for additional special tokens added through the `AutoTokenizer.from_pretrained` method. Therefore, I changed this problematic line to `all_toks = list(OrderedDict.fromkeys(all_toks))`, which deduplicates `all_toks` while keeping the original ordering.
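A toy illustration of the difference between the two forms of deduplication (the token strings are made up):
```python
from collections import OrderedDict

# Toy token list with a duplicate.
all_toks = ["<extra_0>", "<extra_1>", "<extra_0>", "<extra_2>"]

print(list(set(all_toks)))                   # order depends on hash randomization
print(list(OrderedDict.fromkeys(all_toks)))  # always ['<extra_0>', '<extra_1>', '<extra_2>']
```
With hash randomization enabled, the first form can change between interpreter runs, while the second is stable.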
| 07-13-2020 05:40:38 | 07-13-2020 05:40:38 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5704?src=pr&el=h1) Report
> Merging [#5704](https://codecov.io/gh/huggingface/transformers/pull/5704?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0befb513278f6e42b722be340dbc667e0ba2718e&el=desc) will **decrease** coverage by `0.94%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5704?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5704 +/- ##
==========================================
- Coverage 78.26% 77.32% -0.95%
==========================================
Files 146 146
Lines 25998 25998
==========================================
- Hits 20348 20102 -246
- Misses 5650 5896 +246
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5704?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5704?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5704?src=pr&el=footer). Last update [0befb51...c706cc9](https://codecov.io/gh/huggingface/transformers/pull/5704?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,703 | closed | Make the order of additional special tokens deterministic | In `SpecialTokensMixin.all_special_tokens_extended`, deduplication is performed by `all_toks = list(set(all_toks))`. However, this changes the ordering of additional special tokens, and the resulting order depends on the hash seed of the set data structure. This leads to non-deterministic ids for additional special tokens added through the `AutoTokenizer.from_pretrained` method. Therefore, I changed this problematic line to `all_toks = sorted(list(set(all_toks)))`. | 07-13-2020 05:23:20 | 07-13-2020 05:23:20 | |
transformers | 5,702 | closed | help:OSError: Model name 'ctrl' was not found in tokenizers model name list (ctrl). We assumed 'ctrl' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url. | # ❓ Questions & Help

| 07-13-2020 03:34:57 | 07-13-2020 03:34:57 | As for me, I'm getting
```
OSError: Can't load weights for 'mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es'. Make sure that:
- 'mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
```
the line of code is
```
nlp = pipeline(
'question-answering',
model='mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es',
tokenizer=(
'mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es',
{"use_fast": False}
)
)
```
But my question is: why does pipeline download the model but then fail to load/find the weights? I'm using the python:3.7 Docker image.
I'm using TF version 2.2.0.<|||||>@Heiheiyo, is it possible you have a `ctrl` folder that does not contain the vocab and merges files?
When running your command on master I have no issues with CTRL.<|||||>@Kreijstal, which version of transformers are you running? I copy-pasted your command and it works fine on the `master` branch.<|||||>@LysandreJik Thank you for your answer. I have solved this problem. I downloaded the CTRL model and modified the model file path.
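For reference, a minimal sketch of that download-then-load-from-a-local-folder workflow (the folder name is arbitrary, and note that the CTRL checkpoint is several gigabytes):
```python
from transformers import CTRLTokenizer, CTRLLMHeadModel

tokenizer = CTRLTokenizer.from_pretrained("ctrl")
model = CTRLLMHeadModel.from_pretrained("ctrl")

# Writes vocab.json / merges.txt and config.json / pytorch_model.bin locally.
tokenizer.save_pretrained("./ctrl-local")
model.save_pretrained("./ctrl-local")

# Later, load everything from the local folder instead of the model id.
tokenizer = CTRLTokenizer.from_pretrained("./ctrl-local")
model = CTRLLMHeadModel.from_pretrained("./ctrl-local")
```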
<|||||>@LysandreJik I solved this problem too, I used the dockerfiles I found on this repo to figure out the right libraries that might not have been installed |
transformers | 5,701 | closed | How to generate sentences from Transformer's sentence embeddings? | Is it possible to use the pre-trained transformer models to generate sentences from sentence embeddings?
I know I can get the continuous representations of a sentence with, for example, BertModel or GPT2Model.
But can I reconstruct the sentence directly from the sentence representations generated by BertModel, using Hugging Face Transformers, especially with the pre-trained models?
I mean sentence embeddings as input, the readable sentence as output. | 07-13-2020 00:30:06 | 07-13-2020 00:30:06 | Sorry I'm not 100% sure, I get the question. Could you specify what you mean by `sentence embeddings` exactly and the output you would like with some code? :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|