repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 3,092 | closed | ċ in gpt2 | There's a 'ċ' in gpt2.
```
vocab = list(tokenizer_gpt2.encoder.keys())
vocab[198]
```
output: ċ
Based on some examples, I guess it means "with_break". But I can't find this parameter in gpt2 tokenizer document. Can anyone tell me the meaning? thank you. | 03-02-2020 23:43:19 | 03-02-2020 23:43:19 | The proper way to decode a value is using the `decode` method:
```py
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.decode([198]) # '\n'
```
Some byte indices are shifted in the GPT-2 vocabulary, especially the control characters and characters that resemble whitespace. This is an example, and you can see the method that does it in `~transformers.tokenization_gpt2.bytes_to_unicode`. |
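For reference, the shift described above can be reproduced with a small self-contained sketch (an approximation of `bytes_to_unicode`, not the library code itself): printable bytes keep their own code point, while control and whitespace bytes are remapped into a higher Unicode range, which is why `'\n'` shows up as a dotted letter in the raw vocabulary.
```py
# Rough sketch of the byte-to-unicode shift (assumption: this mirrors the idea of
# transformers.tokenization_gpt2.bytes_to_unicode, but it is not that exact code).
def bytes_to_unicode_sketch():
    keep = (
        list(range(ord("!"), ord("~") + 1))
        + list(range(ord("¡"), ord("¬") + 1))
        + list(range(ord("®"), ord("ÿ") + 1))
    )
    mapping, shift = {}, 0
    for b in range(256):
        if b in keep:
            mapping[b] = chr(b)            # printable bytes are kept as-is
        else:
            mapping[b] = chr(256 + shift)  # control/whitespace bytes are shifted upwards
            shift += 1
    return mapping

print(bytes_to_unicode_sketch()[ord("\n")])  # a dotted-letter placeholder for '\n'
```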
transformers | 3,091 | closed | Fast tokenizers fail when the input is just spaces | Slow tokenizers:
```
>>> import transformers
>>> t = transformers.AutoTokenizer.from_pretrained("bert-base-cased", use_fast=False)
>>> t.encode_plus(" ", add_special_tokens=False)
{'input_ids': [], 'token_type_ids': []}
```
Fast tokenizers:
```
>>> import transformers
>>> t = transformers.AutoTokenizer.from_pretrained("bert-base-cased", use_fast=True, add_special_tokens=False)
>>> t.encode_plus(" ")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/dirkg/anaconda3/envs/allennlp/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 1894, in encode_plus
return {key: value[0] if isinstance(value[0], list) else value for key, value in batched_output.items()}
File "/Users/dirkg/anaconda3/envs/allennlp/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 1894, in <dictcomp>
return {key: value[0] if isinstance(value[0], list) else value for key, value in batched_output.items()}
IndexError: list index out of range
```
The `add_special_tokens=False` bit is critical. Otherwise, there is no failure because the results aren't empty. | 03-02-2020 22:24:21 | 03-02-2020 22:24:21 | This is now fixed on `master`:
```
>>> t = transformers.AutoTokenizer.from_pretrained("bert-base-cased", use_fast=True)
>>> t.encode_plus(" ", add_special_tokens=False)
{'input_ids': [], 'token_type_ids': [], 'attention_mask': []}
```
Also, `add_special_tokens` now works exactly the same for both slow and fast tokenizers: it is passed to `encode`, `tokenize`, etc., and is no longer set at initialization. |
transformers | 3,090 | closed | Cuda error during evaluation - CUBLAS_STATUS_NOT_INITIALIZED | # 🐛 Bug
## Information
Overview:
I am using the Bert pre-trained model and trying to finetune it using a customized dataset which requires me to add new tokens so that the tokenizer doesn't wordpiece them (these tokens are of the form <1234> and </1234> where 1234 can be any int converted to string).
I was able to go through the train step but when it comes to evaluating the perplexity I get :
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] my own modified scripts: (give details below)
The only bit of tweak I made was to use tokenizer.add_tokens("<my_new_token>")
before tokenizing using tokenizer.batch_encode_plus
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
facebook messages dataset
## To reproduce
Steps to reproduce the behavior:
1. In LineByLineTextDataset - add new tokens by using tokenizer.add_tokens("<new_token>") for each line that is added in lines list.
(The only other change I made was to fetch the text directly from DB instead of using the text files)
2. I limited the run to use only 3 instances of text line to debug
3. Run the regular examples script to train and evaluate
Error:
```
Exception has occurred: RuntimeError
Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/transformers/modeling_bert.py", line 987, in forward
encoder_attention_mask=encoder_attention_mask,
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/transformers/modeling_bert.py", line 790, in forward
encoder_attention_mask=encoder_extended_attention_mask,
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/transformers/modeling_bert.py", line 407, in forward
hidden_states, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/transformers/modeling_bert.py", line 368, in forward
self_attention_outputs = self.attention(hidden_states, attention_mask, head_mask)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/transformers/modeling_bert.py", line 314, in forward
hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/transformers/modeling_bert.py", line 216, in forward
mixed_query_layer = self.query(hidden_states)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 87, in forward
return F.linear(input, self.weight, self.bias)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/functional.py", line 1372, in linear
output = input.matmul(weight.t())
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
File "/data/nisoni/transformers/transformers/examples/run_language_modeling.py", line 550, in evaluate
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
File "/data/nisoni/transformers/transformers/examples/run_language_modeling.py", line 910, in main
result = evaluate(args, model, tokenizer, prefix=prefix)
File "/data/nisoni/transformers/transformers/examples/run_language_modeling.py", line 918, in <module>
main()
```
## Expected behavior
A regular examples run giving a perplexity score as it gives without adding new tokens
## Environment info
- `transformers` version: 2.5.1
- Platform: Linux-4.4.0-171-generic-x86_64-with-debian-stretch-sid
- Python version: 3.6.10
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: not explicitly
- Using distributed or parallel set-up in script?: not explicitly
| 03-02-2020 22:20:34 | 03-02-2020 22:20:34 | Tried debugging with CPU (an aside - this has an issue in itself apparently when --no_cuda flag is used --> run_language_modeling.py needs to set args.n_gpu to 0)
Found the fix -> Needed to call model.resize_token_embeddings(len(tokenizer)) after adding tokens in the eval mode as well. |
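A minimal sketch of the fix described above (the checkpoint name is illustrative; the token strings are the ones from the issue):
```python
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# add the custom markers so the tokenizer does not wordpiece them
tokenizer.add_tokens(["<1234>", "</1234>"])

# the embedding matrix must be resized to the new vocabulary size before
# running the model, in evaluation as well as in training
model.resize_token_embeddings(len(tokenizer))
```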
transformers | 3,089 | closed | add model cards for camembert-base-fquad camembert-base-squad | Following #2893
Model links:
- [`fmikaelian/camembert-base-fquad`](https://huggingface.co/fmikaelian/camembert-base-fquad)
- [`fmikaelian/camembert-base-squad`](https://huggingface.co/fmikaelian/camembert-base-squad) | 03-02-2020 21:47:31 | 03-02-2020 21:47:31 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3089?src=pr&el=h1) Report
> Merging [#3089](https://codecov.io/gh/huggingface/transformers/pull/3089?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f169957d0cf17b110f27cacc1b1fb43efaa01218?src=pr&el=desc) will **increase** coverage by `<.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3089?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3089 +/- ##
==========================================
+ Coverage 77.59% 77.59% +<.01%
==========================================
Files 98 98
Lines 16250 16250
==========================================
+ Hits 12609 12610 +1
+ Misses 3641 3640 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3089?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3089/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.02% <0%> (+0.15%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3089?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3089?src=pr&el=footer). Last update [f169957...ac7be7c](https://codecov.io/gh/huggingface/transformers/pull/3089?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks for sharing! |
transformers | 3,088 | closed | Fast tokenizers can't `encode_plus` a list of ids; slow tokenizers can | With the slow tokenizers:
```
>>> import transformers
>>> t = transformers.AutoTokenizer.from_pretrained("bert-base-cased", use_fast=False)
>>> t.encode_plus([1000])
{'input_ids': [101, 1000, 102],
'token_type_ids': [0, 0, 0],
'attention_mask': [1, 1, 1]}
```
With the fast tokenizers:
```
>>> import transformers
>>> t = transformers.AutoTokenizer.from_pretrained("bert-base-cased", use_fast=True)
>>> t.encode_plus([1000])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/dirkg/anaconda3/envs/allennlp/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 1889, in encode_plus
**kwargs,
File "/Users/dirkg/anaconda3/envs/allennlp/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 1815, in batch_encode_plus
tokens = self._tokenizer.encode(*batch_text_or_text_pairs[0])
File "/Users/dirkg/anaconda3/envs/allennlp/lib/python3.7/site-packages/tokenizers/implementations/base_tokenizer.py", line 141, in encode
return self._tokenizer.encode(sequence, pair)
TypeError
```
| 03-02-2020 21:32:23 | 03-02-2020 21:32:23 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,087 | closed | Error when running run_tf_ner.py | # 🐛 Bug
## Information
```python
!python3 /content/transformers/examples/ner/run_tf_ner.py --data_dir /content/ner_dataset \
--model_type bert \
--labels /content/labels.txt \
--model_name_or_path dccuchile/bert-base-spanish-wwm-cased \
--output_dir model_output \
--max_seq_length 256 \
--num_train_epochs 5\
--per_gpu_train_batch_size 42 \
--save_steps 2000\
--do_train \
--do_eval
Traceback (most recent call last):
File "/content/transformers/examples/ner/run_tf_ner.py", line 14, in <module>
from transformers import (
ImportError: cannot import name 'GradientAccumulator'
```
## Transformer version ```transformers==2.5.1``` | 03-02-2020 19:35:30 | 03-02-2020 19:35:30 | You should not have any issues with this as `GradientAccumulator` is correctly imported in the __init__.py file:
https://github.com/huggingface/transformers/blob/298bed16a841fae3608d334441ccae4d9043611f/src/transformers/__init__.py#L426-L427
Is `transformers` correctly installed in your pip environment, or did you simply clone the repository?<|||||>Ok. Thank you. I cannot check it now because Colab PRO accounts are having problems. I will let you know ASAP.<|||||>I met an analogous problem, but it was solved after rebooting the computer. Another issue is that the installed module could not be found; this can be solved by reinstalling the module that cannot be found.<|||||>In my case I was not using tf 2.x
|
transformers | 3,086 | closed | Disabling Eager Mode Prevents Loading Pre-Trained BERT | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
**BERT**
Language I am using the model on (English, Chinese ...):
**English**
## To reproduce
Steps to reproduce the behavior:
1. Disable Tensorflow eager mode `tf.compat.v1.disable_v2_behavior()`
2. Create a pretrained BERT instance `model = transformers.TFBertModel.from_pretrained("bert-base-uncased")`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
`model` contains a pre-trained BERT model
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.6
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.0.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 03-02-2020 19:18:22 | 03-02-2020 19:18:22 | Can TFBertModel automatically use the GPU? I find that model(np.array(input_ids)) is slow, costing 100+ ms:
In [14]: %timeit model_outputs = model(np.array(input_ids))
133 ms ± 3.4 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [15]: %time model_outputs = model(np.array(input_ids))
CPU times: user 330 ms, sys: 14.4 ms, total: 344 ms
Wall time: 158 ms
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I have a similar issue. Did you manage to find a fix for this @msandrewhoang03 ? |
transformers | 3,085 | closed | TF GPU CI | 03-02-2020 18:32:28 | 03-02-2020 18:32:28 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3085?src=pr&el=h1) Report
> Merging [#3085](https://codecov.io/gh/huggingface/transformers/pull/3085?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0e56b37e805279ecb61670159fa8c71487214e0a?src=pr&el=desc) will **decrease** coverage by `1.02%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3085?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3085 +/- ##
=========================================
- Coverage 77.62% 76.6% -1.03%
=========================================
Files 98 98
Lines 16230 16230
=========================================
- Hits 12599 12433 -166
- Misses 3631 3797 +166
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3085?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.02% <0%> (-0.16%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3085?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3085?src=pr&el=footer). Last update [0e56b37...83f65ff](https://codecov.io/gh/huggingface/transformers/pull/3085?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 3,084 | closed | Update README.md | - Add example of usage
- Update metrics | 03-02-2020 18:32:08 | 03-02-2020 18:32:08 | |
transformers | 3,083 | closed | Memory error : load 200GB file in run_language_model.py |
line 107 in run_language_model.py
with open(file_path, encoding="utf-8") as f:
text = f.read()
Any idea how to use generators to load large files? | 03-02-2020 17:36:03 | 03-02-2020 17:36:03 | I agree that an on-the-fly tokenisation would be neat as an alternative to pre-processing the whole input file and saving the tensors in memory. <|||||>Hi @BramVanroy , this is mentioned in the blog post about training models from scratch, as something that could be done (https://huggingface.co/blog/how-to-train). Is it possible?
Thanks!<|||||>I am fairly new to how contributing to HuggingFace works but gave this a little thought today.
At first I thought we could maybe solve it like this:
If we consider this code:
`tokenized_text = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text))`
My first intuition was that the space needed to save the result of this code is significantly less than the space needed to store f.read().
So if we could get the result of this code by just reading a line (or reading parts of the texts with overlaps to the previous part as big as max_len of tokens), we might solve it...
However I ran a little experiment and it turns out that `tokenized_text` would still take up around 133GB with an input txt file of 200GB.
So not a solution.
Do you guys have any idea how to approach this differently? Because of the RandomSampler we also can't store parts of the file trivially.<|||||>The solution would be to work with a dataset that on every call, fetches the lines from the file that are required, rather than reading the whole file in memory. This is bound to be slower, but it is very lenient on memory. A good candidate is linecache, which does smart caching in the process.
```python
import linecache
from pathlib import Path
from torch.utils.data import Dataset
class LazyTextDataset(Dataset):
def __init__(self, fin):
self.fin = fin
self.num_entries = self._get_n_lines(self.fin)
@staticmethod
def _get_n_lines(fin):
with Path(fin).resolve().open(encoding='utf-8') as fhin:
for line_idx, _ in enumerate(fhin, 1):
pass
return line_idx
def __getitem__(self, idx):
# linecache starts counting from one, not zero, +1 the given index
idx += 1
return linecache.getline(self.fin, idx)
def __len__(self):
return self.num_entries
```
Then you can add a custom `collate_fn` to your data loader that will automatically tokenize the whole batch.<|||||>I thought about this but figured it would definitely be too slow for such big files. I didn't know about linecache though, cool!<|||||>I've been using a similar solution to @BramVanroy for a couple of weeks, though I too was not aware of `linecache`, so assume that my solution can be improved by using that tool.
I implemented this because the up-front loading was taking hours and hours. I did some rough comparisons on smaller data files and found that I was getting the same iters/second using this method as the existing methods.
```python
class LazyUnSupervisedTextExamples:
"""
Deals with file i/o for lazy retrieval of specific lines of text in file.
"""
def __init__(self, path):
"""
:args:
path: str
: The path to the data file to be loaded.
"""
self.data_path = path
self.data_stream = open(self.data_path, 'r')
self.offsets = [0]
for line in self.data_stream:
self.offsets.append(self.offsets[-1] + len(line.encode('utf-8')))
self.offsets = self.offsets[1:-1]
self.data_stream.seek(0)
self.current_offset = 0
def __len__(self):
return len(self.offsets)
def __getitem__(self, _id):
"""
:returns:
str; the line of text given by _id if no errors.
None if errors occur.
PEP8 note: we really do want a bare exception here because an uncaught exception in here has the potential
to bring down a large training run with an error in a single line of the data file.
"""
offset = self.offsets[_id]
try:
self.data_stream.seek(offset)
line = self.data_stream.readline()
example = line.strip()
self.data_stream.seek(self.current_offset)
except:
example = None
return example
def __next__(self):
line = self.data_stream.readline()
self.current_offset = self.data_stream.tell()
return line.strip()
def close(self):
self.data_stream.close()
class LazyUnSupervisedTextDataset(Dataset):
"""
Works with datasets of simple lines of text. Lines are loaded and tokenized
lazily rather than being pulled into memory up-front. This reduces the memory
footprint when using large datasets, and also remedies a problem seen when using
the other Datasets (above) whereby they take too long to load all
of the data and tokenize it before doing any training.
The file i/o work is handled within self.examples. This class just indexes
into that object and applies the tokenization.
"""
def __init__(self, tokenizer, file_path, block_size=512):
"""
:args:
tokenizer: tokenizer.implementations.BaseTokenizer object (instantiated)
: This tokenizer will be directly applied to the text data
to prepare the data for passing through the model.
file_path: str
: Path to the data file to be used.
block_size: int
: The maximum length of a sequence (truancated beyond this length).
:returns: None.
"""
self.examples = LazyUnSupervisedTextExamples(file_path)
self.tokenizer = tokenizer
self.max_len = block_size
def __len__(self):
return len(self.examples)
def _text_to_tensor(self, item):
"""
Defines the logic for transforming a single raw text item to a tokenized
tensor ready to be passed into a model.
:args:
item: str
: The text item as a string to be passed to the tokenizer.
"""
return torch.tensor(self.tokenizer.encode(item, max_length=self.max_len))
def _text_to_item(self, text):
"""
Convenience functino to encapsulate re-used logic for converting raw
text to the output of __getitem__ of __next__.
:returns:
torch.Tensor of tokenized text if no errors.
None if any errors encountered.
"""
try:
if (text is not None):
return self._text_to_tensor(text)
else:
return None
except:
return None
def __getitem__(self, _id):
"""
:returns:
torch.Tensor of tokenized text if no errors.
None if any errors encountered.
"""
text = self.examples[_id]
return self._text_to_item(text)
def __next__(self):
text = next(self.examples)
return self._text_to_item(text)
def close(self):
"""
Since the LazyUnSupervisedTextExamples object self.examples contains a
file handle, this method provides access to its close function to safely
close the open data file when finished. This should be run when the
dataset object is finished with.
"""
self.examples.close()
```
The only change I found necessary to make to the `collate_fn` was a line to filter out lines that failed to load. I'm currently tokenising one item at a time, but prefer @BramVanroy's suggestion of batch tokenisation in the `collate_fn`.
```python
def collate(examples: List[torch.Tensor]):
examples = list(filter(lambda ex: ex is not None, examples))
if tokenizer._pad_token is None:
return pad_sequence(examples, batch_first=True)
return pad_sequence(examples, batch_first=True, padding_value=tokenizer.pad_token_id)
```
Happy to make above-mentioned sensible changes and contribute.
Does anyone have any advice about more sophisticated performance testing to shore-up my above claim that lazy loading isn't any slower per iteration?<|||||>I would recommend to, indeed, run the tokenisation in collate_fn. You can use `batch_encode_plus` there. Concerning your collate function: the filter function can be simplified to `filter(None, examples)` but in fact I'd go with a list comprehension right away: `[ex for ex in examples if ex is not None]`.
For timing you can use the timeit module.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,082 | closed | Summarization Examples: add Bart CNN Evaluation | 03-02-2020 16:13:33 | 03-02-2020 16:13:33 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3082?src=pr&el=h1) Report
> Merging [#3082](https://codecov.io/gh/huggingface/transformers/pull/3082?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b74c9b106b97b0722ff8f98e77e2e2210c688b23?src=pr&el=desc) will **increase** coverage by `0.4%`.
> The diff coverage is `87.27%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3082?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3082 +/- ##
========================================
+ Coverage 77.19% 77.6% +0.4%
========================================
Files 98 98
Lines 16063 16219 +156
========================================
+ Hits 12400 12586 +186
+ Misses 3663 3633 -30
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3082?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3082/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100% <100%> (ø)` | :arrow_up: |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3082/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `100% <100%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3082/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.76% <76.66%> (+0.38%)` | :arrow_up: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3082/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.05% <88.82%> (+8.47%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3082?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3082?src=pr&el=footer). Last update [b74c9b1...5656b5e](https://codecov.io/gh/huggingface/transformers/pull/3082?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>
New organization @LysandreJik <|||||>I like that organisation |
|
transformers | 3,081 | closed | Why there is no TransfoXLForSequenceClassification class? | Hello, huggingface! Thank you for the great work with doing neat interfaces for transformers family!
I'm analyzing the performance of transformers on a text classification task (like SST-2 in GLUE). I found that many architectures have a `<ModelName>ForSequenceClassification` class, but `TransfoXL` has a restricted set of auxiliary classes (https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_transfo_xl.py).
Are there any reasons for this useful class being missing? Is it planned to be implemented, or are there restrictions that prevent such an implementation?
| 03-02-2020 16:02:50 | 03-02-2020 16:02:50 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,080 | closed | Docker Hub automatic tests failing | The automated docker tests are failing on HEAD and when you click details you get 404, so tough to debug | 03-02-2020 15:58:43 | 03-02-2020 15:58:43 | Should be fixed, this tests were running because I asked to build on every master's commit.
Now it should only build on new tags. |
transformers | 3,079 | closed | Bart CUDA not working | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BART - bart-large
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load model
2. Tokenize text
3. Send model and tensor to cuda
4. Forward the model
```python
from transformers import BartConfig, BartTokenizer, BartForMaskedLM, BartModel
configuration = BartConfig()
tokenizer_class = BartTokenizer
model_class = BartForMaskedLM(configuration)
tokenizer = tokenizer_class.from_pretrained('bart-large')
model = model_class.from_pretrained('bart-large')
model.eval()
model.to('cuda')
tokens = tokenizer.encode("Text example to test natural language generation with bart.")
input_ids = torch.tensor([tokens])
input_ids = input_ids.to('cuda')
with torch.no_grad():
last_hidden_states = model(input_ids)[0]
print("Len tokens:", len(tokens))
print("Shape last hidden states:", last_hidden_states.shape)
```
This code raises the following error:
```python
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-24-3bba66ed6aeb> in <module>
3
4 with torch.no_grad():
----> 5 last_hidden_states = model(input_ids)[0]
6
7 print("Len tokens:", len(tokens))
~/projects/p130-nlg/.conda/envs/p130-nlg-env-kgpubart/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/projects/p130-nlg/.conda/envs/p130-nlg-env-kgpubart/lib/python3.6/site-packages/transformers/modeling_bart.py in forward(self, input_ids, attention_mask, encoder_outputs, decoder_input_ids, decoder_attention_mask, decoder_cached_states, lm_labels, **unused)
923 encoder_outputs=encoder_outputs,
924 decoder_attention_mask=decoder_attention_mask,
--> 925 decoder_cached_states=decoder_cached_states,
926 )
927 lm_logits = self.lm_head.forward(outputs[0])
~/projects/p130-nlg/.conda/envs/p130-nlg-env-kgpubart/lib/python3.6/site-packages/transformers/modeling_bart.py in forward(self, input_ids, attention_mask, decoder_input_ids, encoder_outputs, decoder_attention_mask, decoder_cached_states)
842 attention_mask,
843 decoder_attn_mask,
--> 844 decoder_cached_states=decoder_cached_states,
845 )
846 # Attention and hidden_states will be [] or None if they aren't needed
~/projects/p130-nlg/.conda/envs/p130-nlg-env-kgpubart/lib/python3.6/site-packages/transformers/modeling_bart.py in forward(self, input_ids, encoder_hidden_states, encoder_padding_mask, combined_mask, decoder_cached_states, **unused)
497 decoder_cached_states=layer_state,
498 attention_mask=combined_mask,
--> 499 need_attn_weights=self.output_attentions,
500 )
501 if self.output_past:
~/projects/p130-nlg/.conda/envs/p130-nlg-env-kgpubart/lib/python3.6/site-packages/transformers/modeling_bart.py in forward(self, x, encoder_hidden_states, encoder_attn_mask, decoder_cached_states, attention_mask, need_attn_weights)
370 decoder_cached_states=decoder_cached_states,
371 need_weights=need_attn_weights,
--> 372 attn_mask=attention_mask,
373 )
374 x = F.dropout(x, p=self.dropout, training=self.training)
~/projects/p130-nlg/.conda/envs/p130-nlg-env-kgpubart/lib/python3.6/site-packages/transformers/modeling_bart.py in forward(self, query, key, value, key_padding_mask, decoder_cached_states, need_weights, static_kv, attn_mask)
627
628 if attn_mask is not None:
--> 629 attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attn_mask
630 attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
631
RuntimeError: expected device cuda:0 but got device cpu
```
But I have tested this code with another examples (like GPT-2) and it works.
## Expected behavior
I would expect to get the tensor size, as with another models I have tested this code.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: Linux
- Python version: 3.6.7
- PyTorch version (GPU?): 1.4.0. GPU: Yes
- Tensorflow version (GPU?): No used
- Using GPU in script?: Trying to
- Using distributed or parallel set-up in script?: No
| 03-02-2020 15:12:24 | 03-02-2020 15:12:24 | I just merged a PR and your example works for me. Would you mind seeing if it is still broken in your system? Thanks for posting!
(note that the fix won't be in v 2.5.1 if you pip installed).
<|||||>Reopen if still broken! |
transformers | 3,078 | closed | correct greedy generation when doing beam search | This PR changes the behavior of greedy beam search generation as discussed and wished in #2415 .
Also two assertion statements are added:
1. It is not allowed to generate multiple sequences from the same input_ids when greedy generation (`num_return_sequences > 1`, `do_sample=False`, `num_beams` == 1 => `AssertionError`) because it would always lead to the same output sequence for all `num_return_sequences`.
2. It is not allowed to generate more sequences when doing greedy beam search generation than the number of beams that are used (`num_return_sequences` <= `num_beams`, `do_sample=False` => `AssertionError`) because this is not possible or would also lead to the same output sequences.
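For illustration, a hedged sketch of calls that satisfy the two constraints above (`model` and `input_ids` are assumed to be set up beforehand):
```python
# greedy generation: num_beams == 1 with do_sample=False can only ever produce
# one distinct sequence per input, so num_return_sequences must stay at 1
output = model.generate(input_ids, do_sample=False, num_beams=1)

# greedy beam search: several return sequences are fine as long as
# num_return_sequences <= num_beams
outputs = model.generate(
    input_ids,
    do_sample=False,
    num_beams=5,
    num_return_sequences=3,  # OK, 3 <= 5
)
```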
Discussion:
- [x] the generation function becomes bigger and bigger handling more and more exceptions - might need a big refactoring at some point which modularizes it for more flexibility and more readability. Also when thinking about including the encoder-decoder models in the model.generate() function.
Also maybe the `no_beam_search_generation` fn could simply be handled by `beam_search_generation(num_beams=1)` ?
- [x] beam search when do_sample=True still does not work really (see PR #2317 ). Should discuss how exactly it should be implemented.
@thomwolf, @LysandreJik, @sshleifer | 03-02-2020 14:04:13 | 03-02-2020 14:04:13 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3078?src=pr&el=h1) Report
> Merging [#3078](https://codecov.io/gh/huggingface/transformers/pull/3078?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c0135194ebc5de4b1bbef98b31f9c457a0bf746a?src=pr&el=desc) will **decrease** coverage by `0.99%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3078?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3078 +/- ##
=======================================
- Coverage 77.6% 76.6% -1%
=======================================
Files 98 98
Lines 16221 16230 +9
=======================================
- Hits 12588 12433 -155
- Misses 3633 3797 +164
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3078?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.02% <100%> (+0.26%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3078?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3078?src=pr&el=footer). Last update [c013519...5b9164a](https://codecov.io/gh/huggingface/transformers/pull/3078?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>The self-hosted runner tests that fail are:
FAILED tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_feature_extraction
FAILED tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_fill_mask
FAILED tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_ner
FAILED tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_sentiment_analysis
and are all related to memory exhaustion (`Resource exhausted: OOM when allocating tensor ... `).
The tests are not related to the PR. Not sure what to do @julien-c
<|||||>LGTM for merging<|||||>Discussed with @thomwolf as well and also agreed that generate() function is not too complex and good as it is now. I will take a closer look at the issue with beam search decoding when `do_sample=True` (PR #2317 ) in a separate PR. Good to merge for me! |
transformers | 3,077 | closed | fix n_gpu count when no_cuda flag is activated | As I understand it, `no_cuda` should prevent the use of the GPU in the `run_*` example scripts. However, `n_gpu` doesn't take it into account and counts the number of GPUs available on the machine. It sends the model to the GPUs while the tensors are still on the CPU... | 03-01-2020 20:30:20 | 03-01-2020 20:30:20 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3077?src=pr&el=h1) Report
> Merging [#3077](https://codecov.io/gh/huggingface/transformers/pull/3077?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/298bed16a841fae3608d334441ccae4d9043611f?src=pr&el=desc) will **increase** coverage by `<.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3077?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3077 +/- ##
==========================================
+ Coverage 77.18% 77.19% +<.01%
==========================================
Files 98 98
Lines 16063 16063
==========================================
+ Hits 12399 12400 +1
+ Misses 3664 3663 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3077?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3077/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.38% <0%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3077?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3077?src=pr&el=footer). Last update [298bed1...69041c1](https://codecov.io/gh/huggingface/transformers/pull/3077?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,076 | closed | XLNet multiple sentence modeling token type ids | XLNet was designed to handle multiple segment modeling (i.e. > 2 sentences) by using relative segment encodings. For sentence-level classification tasks with arbitrary sentence counts, what is the structure of the segment (token type) ids? I’ve found from the documentation that 2-sequence classification is supported by using `create_token_type_ids` but what about more than two segments?
If more than two segments are supported, would it be correct to infer (from examples in the documentation) that a 3-sentence input with `<cls>` after each sentence (`<sep>` token) should have the form:
0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 3, 4, 4, 4, 4, 5.
where 1, 3, 5 are classification token segment ids? Would the transformers XLNet implementation support segment ids of this form? | 03-01-2020 18:34:48 | 03-01-2020 18:34:48 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,075 | closed | Training TFBertForSequenceClassification with custom X and Y data | I am working on a text classification problem, for which I am trying to train my model on TFBertForSequenceClassification given in the huggingface-transformers library.
I followed the example given on their GitHub page; I am able to run the sample code with the given sample data using tensorflow_datasets.load('glue/mrpc'). However, I am unable to find an example of how to load my own custom data and pass it to model.fit(train_dataset, epochs=2, steps_per_epoch=115, validation_data=valid_dataset, validation_steps=7).
How can I define my own X, tokenize it, and prepare train_dataset with my X and Y, where X represents my input text and Y represents the classification category of a given X?
Sample Training dataframe :
```
text category_index
0 Assorted Print Joggers - Pack of 2 ,/ Gray Pri... 0
1 "Buckle" ( Matt ) for 35 mm Width Belt 0
2 (Gagam 07) Barcelona Football Jersey Home 17 1... 2
3 (Pack of 3 Pair) Flocklined Reusable Rubber Ha... 1
4 (Summer special Offer)Firststep new born baby ... 0
```
```
Question already asked on SO :
https://stackoverflow.com/questions/60463829/training-tfbertforsequenceclassification-with-custom-x-and-y-data
``` | 03-01-2020 17:30:45 | 03-01-2020 17:30:45 | Maybe this is a little late but you could take a look in both `examples/run_tf_glue.py` and [this function](https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/glue.py#L31-L168) from`src/transformers/data/processors/glue.py` and write a custom training script based from those.<|||||>To make things a little more concrete, I've written and annotated [an end-to-end example](https://gist.github.com/papapabi/124c6ac406e6bbd1f28df732e953ac6d) of how to fine-tune a `bert-base-cased` model from your `DataFrame`'s spec. Do comment if it helps you out!<|||||>@papapabi Thank you for your inputs. I will check this out.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,074 | closed | Only enable some labels in BERT fine-tuned NER | # ❓ Questions & Help
Only enable some labels in BERT fine-tuned NER
## Details
I would be interested to enable only some labels in BERT fine-tuned NER prediction.
For example, I know what are the entities and I would like to train a model that will classify entities - but I am not interested to train/predict O tag.
What would be the best way to do it?
Thanks | 03-01-2020 14:45:37 | 03-01-2020 14:45:37 | I'm not sure I understand your question. The O tag is here in order to identify a token which is not an entity in the IOB1 tagging scheme. If you don't have such a tag, every token will have to be classified as an entity, which does not make sense?
If you would like to define a custom tagging scheme and train a model to predict on that tagging scheme, you would have to create a dataset for that and train your model on that dataset.<|||||>Thanks @LysandreJik
Let me provide some additional details. Given a sentence, I use some external resources to find what are the candidates for tagging. For the candidates, I need to classify between 2 different labels (binary classification). That's the reason why I wrote that I am not interested in predicting the O tag, since I use external resources for it.
I have data for train/test. <|||||>Hey @jmamou , we might be able to help. Mind sending me an email at clement [at] huggingface [dot] co?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,073 | closed | Finetuned BERT model does not seem to predict right labels/work properly? | # ❓ Questions & Help
I am trying out a finetuned BERT model for token classification (--> https://huggingface.co/bert-base-cased-finetuned-conll03-english), but when I observe the model output (i.e. the logits after applying the softmax) and compare it with the true label_ids, they are totally uncorrelated (see pictures).
https://i.stack.imgur.com/gVyMn.png
https://i.stack.imgur.com/qS62L.png
## Details
I assume that the finetuned model (bert-base-cased-finetuned-conll03-english) is correctly pretrained, but I don't seem to understand why its predictions are off. I think one issue is that the pretrained model has another labelling scheme than I made myself during data prep (so that the tag2name dict is different), but I don't know how I can find out what label-index map the model uses for its predictions. Even then it is not the case that the model consistently makes the same mistakes, it is outputting things quite randomly.
Any idea what the issue could be?
`` | 03-01-2020 14:41:53 | 03-01-2020 14:41:53 | Please post your code using [code blocks](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks). Don't post screenshots.<|||||>FYI:
photo of **format input data**; https://i.stack.imgur.com/t472b.png
photo of **tag2name** ; https://i.stack.imgur.com/RO7dp.png
Assuming the data goes in in the right format, here is the model initialization and evaluation loop.
```
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-conll03-english")
model = BertForTokenClassification.from_pretrained('bert-base-cased-finetuned-conll03-english')
#eval LOOP
model.eval();
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
y_true = []
y_pred = []
valdataset = []
print("***** Running evaluation *****")
print(" Num examples ={}".format(len(val_inputs)))
print(" Batch size = {}".format(batch_num))
for step, batch in enumerate(valid_dataloader):
batch = tuple(t.to(device) for t in batch) # set every example of batch to device
input_ids, input_mask, label_ids = batch #same as we did in training loop but only 1 epoch now
with torch.no_grad(): #means we don't care about gradients and updating tensors
outputs = model(input_ids, token_type_ids=None,
attention_mask=input_mask)
# For eval mode, the first result of outputs is logits (for training mode this was loss)
logits = outputs[0] # In context of deep learning the logits layer means the layer that feeds in to softmax (or other such normalization).
# Get NER predict result
logits = torch.argmax(F.log_softmax(logits,dim=2),dim=2)#feed logits into softmax and take the prediction that is maximal
logits = logits.detach().cpu().numpy()
if step==1:
print(logits[0][0:15])
print(logits[1][0:15])
print(logits[3][0:15])
print(logits[4][0:15])
print(logits[5][0:15])
print(label_ids[0][0:15])
print(label_ids[1][0:15])
print(label_ids[2][0:15])
print(label_ids[3][0:15])
# Get NER true result
label_ids = label_ids.to('cpu').numpy()
# Only predict the real word, mark=0, will not calculate
input_mask = input_mask.to('cpu').numpy()
# Compare the valuable predict result
for i,mask in enumerate(input_mask):
# Real one
temp_1 = []
# Predicted one
temp_2 = []
valtemp = []
for j, m in enumerate(mask):
# Mark=0, meaning its a pad word, dont compare
if m:
if tag2name[label_ids[i][j]] != "X" and tag2name[label_ids[i][j]] != "[CLS]" and tag2name[label_ids[i][j]] != "[SEP]" : # Exclude the X label
temp_1.append(tag2name[label_ids[i][j]])
temp_2.append(tag2name[logits[i][j]])
if tag2name[label_ids[i][j]] != "[CLS]" and tag2name[label_ids[i][j]] != "[SEP]" :
valtemp.append(input_ids[i][j].item())
else:
break
#here are the two lists that contain true and pred labels.
y_true.append(temp_1)
y_pred.append(temp_2)
valdataset.append(valtemp)
tokenized_text_con = [tokenizer.decode(val) for val in valdataset]
```
print output: https://i.stack.imgur.com/qS62L.png
<|||||>Hi! From my experience using the community-contributed `dbmdz/bert-large-cased-finetuned-conll03-english` (which is the same checkpoint as) `bert-large-cased-finetuned-conll03-english`, using the `bert-base-cased` tokenizer instead of the tokenizer loaded from that checkpoint works better.
You can see an example of this in the [usage](https://huggingface.co/transformers/usage.html#named-entity-recognition), let me know if it helps.
I suspect the difference between the tokenizers is due to a lowercasing of all inputs. I'm looking into it now.
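A minimal sketch of that suggestion (model identifiers as mentioned in this thread; pairing the fine-tuned NER weights with the plain cased tokenizer):
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

# use the plain bert-base-cased tokenizer instead of the one shipped with the checkpoint
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "dbmdz/bert-large-cased-finetuned-conll03-english"
)
```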
PS: the file `bert-large-cased-finetuned-conll03-english` is deprecated in favor of the aforementioned `dbmdz/bert-large-cased-finetuned-conll03-english` as they are duplicates. @julien-c is currently deleting it from the S3, please use the `dbmdz` file/folder.<|||||>Also cc'ing @stefan-it for information :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,072 | closed | Chinese BERT model can be used represented by words instead of character | # ❓ Questions & Help
I want to ask about the Chinese BERT model can be used represented by words instead of character?
Because when I do BERT visual for Chinese can only see its attention from character to character.
I want to see attention from words to words. Can I change this?
Thanks a lot for your help | 03-01-2020 11:45:30 | 03-01-2020 11:45:30 | I am not sure which Chinese BERT you are referring to, but the original multilingual BERT has trained Chinese on the character-level. From their [README](https://github.com/google-research/bert/blob/master/multilingual.md):
> Because Chinese (and Japanese Kanji and Korean Hanja) does not have whitespace characters, we add spaces around every character in the CJK Unicode range before applying WordPiece. This means that Chinese is effectively character-tokenized. Note that the CJK Unicode block only includes Chinese-origin characters and does not include Hangul Korean or Katakana/Hiragana Japanese, which are tokenized with whitespace+WordPiece like all other languages.<|||||>OK, Thanks for your help
But I mean I want Chinese BERT for word-level not for character-level.<|||||>Yes, I understand that, and as I said the default multilingual BERT does **not** support that. You'll have to find another implementation, perhaps https://arxiv.org/abs/1906.08101 |
transformers | 3,071 | closed | Predict the next word in sentence context from the list of possible words in Russian | Hello from Russia. I have a task of predicting the next word from the list of possible words in Russian. How can I do this?
| 03-01-2020 10:15:54 | 03-01-2020 10:15:54 | Example
My sentence: Мой кот ... (My cat ...)
3 word is ест (eat)
List of possible words: ест (eat), поглощает (absorb), глотает (swallow), кушают (eats) etc.
I need to determine the probabilities of each word from the given list in the context of the phrase and make the most correct sentence.
Output: Мой кот ест (My cat eat).<|||||>This is a very general question. Please use [Stack Overflow](https://stackoverflow.com/) for this.
Note that you'll need to use a model that is pretrained on Russian. |
transformers | 3,070 | closed | load_tf_weights_in_bert : 'BertModel' object has no attribute 'bias' | ```
AttributeError Traceback (most recent call last)
<ipython-input-14-0d66155b396d> in <module>
12
13 K.clear_session()
---> 14 model = create_model()
15 optimizer = tf.keras.optimizers.Adam(learning_rate=2e-5)
16 model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['acc', 'mae'])
<ipython-input-13-4f429fe61419> in create_model()
5 config = BertConfig.from_pretrained(BERT_PATH + 'bert_config.json')
6 config.output_hidden_states = False
----> 7 bert_model = BertModel.from_pretrained(BERT_PATH + 'bert_model.ckpt.index', from_tf=True, config=config)
8 # if config.output_hidden_states = True, obtain hidden states via bert_model(...)[-1]
9 embedding = bert_model(input_id, attention_mask=input_mask, token_type_ids=input_atn)[0]
~/anaconda3/envs/fasterai/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
482 if resolved_archive_file.endswith(".index"):
483 # Load from a TensorFlow 1.X checkpoint - provided by original authors
--> 484 model = cls.load_tf_weights(model, config, resolved_archive_file[:-6]) # Remove the '.index'
485 else:
486 # Load from our TensorFlow 2.0 checkpoints
~/anaconda3/envs/fasterai/lib/python3.7/site-packages/transformers/modeling_bert.py in load_tf_weights_in_bert(model, config, tf_checkpoint_path)
103 pointer = getattr(pointer, "weight")
104 elif scope_names[0] == "output_bias" or scope_names[0] == "beta":
--> 105 pointer = getattr(pointer, "bias")
106 elif scope_names[0] == "output_weights":
107 pointer = getattr(pointer, "weight")
~/anaconda3/envs/fasterai/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
574 return modules[name]
575 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 576 type(self).__name__, name))
577
578 def __setattr__(self, name, value):
AttributeError: 'BertModel' object has no attribute 'bias'
```
related libs&version:
transformers 2.5.1
tensorflow 2.1.0
environment:
NVIDIA-SMI 440.59 Driver Version: 440.59 CUDA Version: 10.2 | 03-01-2020 05:50:22 | 03-01-2020 05:50:22 | change
`BertModel.from_pretrained`
to
`BertForPreTraining.from_pretrained`
it seems to work<|||||>Glad you could get it to work! Indeed, `BertForPreTraining` should be used to convert from official BERT models. |
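A hedged sketch of the working variant described above (paths follow the original snippet; `BERT_PATH` is assumed to point at the directory holding the TF 1.x checkpoint):
```python
from transformers import BertConfig, BertForPreTraining

config = BertConfig.from_pretrained(BERT_PATH + 'bert_config.json')
# load the original TF 1.x checkpoint through BertForPreTraining instead of BertModel
bert_model = BertForPreTraining.from_pretrained(
    BERT_PATH + 'bert_model.ckpt.index', from_tf=True, config=config
)
```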
transformers | 3,069 | closed | No Causal Attention Masking in GPT-2 LM Finetuning Script | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT-2
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name)
running run_language_modeling.py on WikiText-2 dataset
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. attention_mask is None at each forward step of the GPT-2 model (GPT2LMHeadModel)
## Expected behavior
attention_mask should reflect causal attention masking for the LM objective in finetuning GPT-2 so that outputs (t) only attend to inputs at previous time steps (1,..,t-1) instead of relying on input at the same time-step of output (t) where GPT-2 can simply learn to copy the input as output to optimize the LM objective.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: Linux-4.15.0-76-generic-x86_64-with-glibc2.10
- Python version: 3.8.1
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
| 03-01-2020 04:29:04 | 03-01-2020 04:29:04 | Hi @alvinchangw,
GPT2 always uses causal masking no matter what kind of attention_mask you give it.
It's easy to see when you print out the computed attentions for each layer (by setting `output_attentions=True`) => see for this also #2975.
In the code this is done in this line:
https://github.com/huggingface/transformers/blob/298bed16a841fae3608d334441ccae4d9043611f/src/transformers/modeling_gpt2.py#L146
I admit it is very cryptic and probably should have better naming. Essentially what happens here is the following:
`self.bias` is defined as a lower triangular mask (see the torch function [here](https://pytorch.org/docs/stable/torch.html?highlight=tril#torch.tril)). According to the sequence length (params `nd` and `ns`), we derive `b`. `b` is then a lower triangular mask of shape sequence length x sequence length. Using this mask, we subtract 10^4 from all values in `w` which should be masked, which drives their attention weights to ~0 after the softmax.
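As a rough standalone illustration of that trick (simplified shapes, not the exact code from `modeling_gpt2.py`):
```python
import torch

seq_len = 4
w = torch.randn(seq_len, seq_len)               # raw attention scores
b = torch.tril(torch.ones(seq_len, seq_len))    # lower triangular causal mask
w = w * b - 1e4 * (1 - b)                       # masked (future) positions get a large negative score
attn = torch.softmax(w, dim=-1)
print(attn)                                     # upper triangle is ~0, i.e. no attending to future tokens
```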
<|||||>Hi @patrickvonplaten,
Thank you for pointing this out and for the detailed explanation! |
transformers | 3,068 | closed | Problem with using pretrained BertTokenizer for Korean | I have a corpus that contains Korean sentences. Here is the output of a berttokenizer for a token:
`tok = BertTokenizer.from_pretrained('bert-base-multilingual-uncased')`
`tok.convert_tokens_to_ids(tok.tokenize('잘해놨습니다'))`
`[44628, 14840, 97071, 97089, 97104, 13212, 79427]`
`tok = BertTokenizer.from_pretrained('bert-base-multilingual-cased')`
`tok.convert_tokens_to_ids(tok.tokenize('잘해놨습니다'))`
`[100]`
transformers version: 2.4.1
Overall, the 'cased' tokenizer produces more 'unknown' tokens than the 'uncased' one for Korean.
Is it a bug? | 02-28-2020 22:42:22 | 02-28-2020 22:42:22 | Intuitively, I would say that this might not be a bug but a limitation of the size of the vocabulary. In the cased version, all data that the tokenizer is 'trained' on is cased, meaning that there are tokens in the vocabulary that only differs by case (e.g. `Be` and `be`). As a consequence, this may cause the vocabulary to be a lot bigger, leaving less room for other tokens.
That is just my intuition and not based on any research.
You can try out KoBERT, though: https://github.com/SKTBrain/KoBERT<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,067 | closed | No speed difference when doing prediction between BERT and ALBERT | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I'm comparing two trained models, one using BERT and the other using ALBERT. I'm using the following code to do a prediction in both models: it tokenizes a list of phrases, padding them according to biggest max length per tokenized phrase, and then it applies a prediction:
```
def padding(phrase_list):
"""
Add padding to phrases in phrase list
"""
max_size = 0
for phrase in phrase_list:
max_size = max(max_size, len(tokenizer_phrase.encode(phrase)))
print(f"Max_size: {max_size}")
padded_list = []
for phrase in phrase_list:
phrase_encoded = tokenizer_phrase.encode_plus(phrase, max_length=max_size, pad_to_max_length=True)
padded_list.append(phrase_encoded)
return padded_list
```
```
def predict_batch_outputs(phrase_list):
"""
Receive list of phrases and return model prediction WITHOUT softmax
"""
with torch.no_grad():
phrases_padded = padding(phrase_list)
input_ids = torch.tensor([pad['input_ids'] for pad in phrases_padded])
token_type_ids = torch.tensor([pad['token_type_ids'] for pad in phrases_padded])
attention_mask = torch.tensor([pad['attention_mask'] for pad in phrases_padded])
labels = torch.tensor([[1] for i in range(0, len(input_ids))])
outputs = model_phrase(input_ids.to('cuda'), token_type_ids = token_type_ids.to('cuda'), attention_mask = attention_mask.to('cuda'), labels = labels.to('cuda'))
return outputs[1].tolist()
```
My question is: I'm not seeing any difference in speed between the two models. For instance, I have a script that reads a csv dataset, breaks every row into a list of phrases using nltk, and then sends those phrases to a model (either Bert or Albert) and prints its prediction. Using the same script, with the same methods, same dataset, and changing only which model is doing the prediction, I have the opposite result than expected: Bert can predict 11.89 docs per second, while Albert can predict 9.13 docs per second. I've done other tests and they also showed Albert being SLOWER than Bert.
Can someone share their experiences between Bert and Albert in matters of speed? Thanks. | 02-28-2020 21:43:08 | 02-28-2020 21:43:08 | Hi, there's no reason ALBERT would be faster than BERT. They have the same number of layers, ALBERT just uses repeating layers (its `n` layers are just a single layer reused) instead of different layers (BERT's `n` layers are `n` different layers).<|||||>Hi @LysandreJik !
If ALBERT is a lot smaller than BERT (in terms of parameters), wouldn't it be faster? Taking up less memory, allowing bigger batch sizes, etc. Isn't this "scalability" one of the advantages of ALBERT over BERT?
Thanks!<|||||>If training speeds "proportionality" is similar to inferencing, then Section 4.3 and Table 2, page 7 of the latest Albert paper, [https://arxiv.org/pdf/1909.11942.pdf](https://arxiv.org/pdf/1909.11942.pdf) compares "Speedup" of BERT & ALBERT models. For example, Albert_xxlarge "speed of data throughput" is 0.3x of BERT_large the baseline, so only 30% the throughput.<|||||>> If training speeds "proportionality" is similar to inferencing, then Section 4.3 and Table 2, page 7 of the latest Albert paper, https://arxiv.org/pdf/1909.11942.pdf compares "Speedup" of BERT & ALBERT models.
Interesting. Section 4.3 also says:
>ALBERT models have higher data throughput compared to their corresponding BERT models. If we
use BERT-large as the baseline, we observe that ALBERT-large is about 1.7 times faster in iterating
through the data while ALBERT-xxlarge is about 3 times slower because of the larger structure.
Judging from this Section and from Table 2, some versions of ALBERT are indeed faster than BERT.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,066 | closed | Documentation and code mismatch in BertForMaskedLM forward method | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
BertForMaskedLM
Language I am using the model on (English, Chinese ...):
Not important.
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
I am trying to obtain LM probabilities for research purposes.
## To reproduce
Steps to reproduce the behavior:
1. According to the documentation, the code below should work as expected.
However, masked_lm_loss and ltr_lm_loss values are in fact in 2nd and 1st positions. This is apparent if the code is inspected, i.e. lm_labels related code is executed after masked_lm_labels related code.
See https://github.com/huggingface/transformers/blob/908fa43b543cf52a3238129624f502240725a6a6/src/transformers/modeling_bert.py#L1001-L1014
and
https://github.com/huggingface/transformers/blob/908fa43b543cf52a3238129624f502240725a6a6/src/transformers/modeling_bert.py#L1014
The following code is an excerpt from my code.
```python
model = BertForMaskedLM.from_pretrained('bert-base-cased')
with torch.no_grad():
outputs = model(token_ids_in_sentence_tensor,
masked_lm_labels=token_ids_in_sentence_tensor,
lm_labels=token_ids_in_sentence_tensor,
token_type_ids=segments_tensors)
masked_lm_loss = outputs[0]
ltr_lm_loss = outputs[1]
predictions = outputs[2]
```
## Expected behavior
Explained above.
## Environment info
- `transformers` version:
- Platform: Darwin XXX-3.local 19.3.0 Darwin Kernel Version 19.3.0: Thu Jan 9 20:58:23 PST 2020; root:xnu-6153.81.5~1/RELEASE_X86_64 x86_64
- Python version: 3.7.4
- PyTorch version (GPU?): no GPU, 1.3.1
- Tensorflow version (GPU?): 2.0.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 02-28-2020 20:59:04 | 02-28-2020 20:59:04 | |
transformers | 3,065 | closed | XLM-RoBERTa can't add new tokens. | # 🐛 Bug
Model I am using (Bert, XLNet ...): XLM-RoBERTa
## To reproduce
Steps to reproduce the behavior:
1. tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-large', do_lower_case=True)
2. tokenizer.add_tokens(['<a1>', '</a1>', '<a2>', '</a2>'])
3. tokenizer.convert_tokens_to_ids('<a1>')
It always respond 1 as ids for the new tokens.
The problem seems connected to:
if (
token != self.unk_token
and self.convert_tokens_to_ids(token) == self.convert_tokens_to_ids(self.unk_token)
and token not in to_add_tokens
):
in the second condition.
When it calls self.convert_tokens_to_ids(token), it always returns 1 instead of 3.
3 is the id for unk_token.
1 is the id for pad. | 02-28-2020 18:45:11 | 02-28-2020 18:45:11 | Can you post a minimal verifiable example that we can just copy-and-paste to try? I guess you don't really try to add empty strings as tokens?<|||||>Hi @BramVanroy, no ofc, probably i've pasted the wrong snippet of code.
If you try to expand the vocabulary of the tokenizer with the following code:
```
tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-large', do_lower_case=True)
tokenizer.add_tokens(['[A1]', '[/A1]', '[A2]', '[/A2]'])
```
the size of tokenizer remains the same.
Obviously if you try the same identical code with Bert or DistilBert (the ones i'm testing) all works fine.
All seems connected with the second condition of if-else block:
```
if (
token != self.unk_token
and self.convert_tokens_to_ids(token) == self.convert_tokens_to_ids(self.unk_token)
and token not in to_add_tokens
):
```
This condition seems to return the wrong id for self.unk_token.
Removing this condition lets me add the new tokens and extend the tokenizer.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,064 | closed | Add LM capabilities to TFTransfoXLLMHead | # 🚀 Feature request
It is currently not possible to generate language with the TFTransfoXLLMHeadModel because
the `lm_head` is not implemented (see https://github.com/huggingface/transformers/blob/908fa43b543cf52a3238129624f502240725a6a6/src/transformers/modeling_tf_transfo_xl.py#L745)
Doing:
```
from transformers import TFTransfoXLLMHeadModel
model = TFTransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')
model(tf.convert_to_tensor([[1, 5, 8]]))
```
currently leads to an error.
## Motivation
Pytorch has TransfoXLLMHead model working - TF should has well.
## Your contribution
@LysandreJik , @thomwolf I could implement it if needed.
| 02-28-2020 17:52:28 | 02-28-2020 17:52:28 | |
transformers | 3,063 | closed | Add generate() functionality to TF 2.0 | I added the `_generate_no_beam_search` functionality for TF 2.0. It works for the following models:
'gpt2', 'openai', 'xlnet', 'xlm', 'ctrl'. Only for the model 'transfo-xl' it doesn't work because the
lm_head is not implemented yet in TF 2.0 (added an issue here: #3064 ).
Also I checked whether the pytorch 'distilgpt2' and TF 2.0 'distilgpt2' generate the same output (added one Integration test for this). Will add other integration tests in a future PR.
Setting only certain indices to values is much less straightforward in TF 2.0 than in PyTorch, which is why I added more code for the TF 2.0 version; a small illustration of the difference is below.
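Just a sketch of the point, not the code in this PR:
```python
import tensorflow as tf

batch_size, vocab_size, forbidden_id = 2, 10, 3
next_token_logits = tf.random.uniform((batch_size, vocab_size))

# PyTorch allows in-place masked assignment: next_token_logits[:, forbidden_id] = -float("inf")
# TF 2.0 tensors are immutable, so one option is to build an additive mask instead:
vocab_mask = tf.cast(tf.equal(tf.range(vocab_size), forbidden_id), next_token_logits.dtype) * -1e9
next_token_logits = next_token_logits + vocab_mask   # broadcasts over the batch dimension
```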
Would be very happy about some feedback @LysandreJik @thomwolf
EDIT: There was also a bug in TFCTRL where tf.concat uses a pytorch argument 'dim' instead of 'axis'
## TODO
- [x] Discuss how to change the test_torch_tf_conversion.py() @LysandreJik @thomwolf
## Future PR:
- [ ] Adapt all LMHead Integration Tests to greedy generate to be able to compare PT & TF
- [ ] Add generate() to TFTransfoXL (see Issue #3064 ) | 02-28-2020 14:37:10 | 02-28-2020 14:37:10 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3063?src=pr&el=h1) Report
> Merging [#3063](https://codecov.io/gh/huggingface/transformers/pull/3063?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eec5ec807135ae61fa2266f3c7ad947cc207abda?src=pr&el=desc) will **increase** coverage by `0.22%`.
> The diff coverage is `90.1%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3063?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3063 +/- ##
==========================================
+ Coverage 77.59% 77.82% +0.22%
==========================================
Files 98 98
Lines 16250 16422 +172
==========================================
+ Hits 12610 12780 +170
- Misses 3640 3642 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3063?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.91% <100%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.22% <100%> (-0.02%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <100%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `96.14% <100%> (+1.47%)` | :arrow_up: |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.07% <100%> (-0.09%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `88.98% <100%> (+0.6%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `99.57% <100%> (+1.74%)` | :arrow_up: |
| [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `75.63% <100%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `91.13% <20%> (-1%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.56% <89.85%> (-1.22%)` | :arrow_down: |
| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3063?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3063?src=pr&el=footer). Last update [eec5ec8...b996a97](https://codecov.io/gh/huggingface/transformers/pull/3063?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>> This is cool, good job on the TensorFlow implementation! Regarding the use of the `past` and `mems`, I don't think they're actually implemented in the models though?
I think the tf models also have the `past` and `mems` functionality implemented (when looking into the tf modeling files the `past` and `mems` variables are used in the code.<|||||>Good to merge for me!
Changed all `<tf_tensor>.shape` to `shape_list(<tf_tensor>)` to make function compatible with both eagermode and no eagermode after discussion with @thomwolf and @jplu .
Will add additional Integration tests (so far only `tf_gpt2`) for other LMHead models and add beam_search once completed in torch version. |
transformers | 3,062 | closed | Should weight distribution change more when fine-tuning transformers-based classifier? | ## ❓Should weight distribution change more when fine-tuning transformers-based classifier?
This question was posted on DataScience stack exchange:
[https://datascience.stackexchange.com/questions/68641/should-weight-distribution-change-more-when-fine-tuning-transformers-based-class](https://datascience.stackexchange.com/questions/68641/should-weight-distribution-change-more-when-fine-tuning-transformers-based-class)
## Details
I'm using pre-trained DistilBERT model with custom classification head, which is almost the same as in the [reference implementation ](https://github.com/huggingface/transformers/blob/fb560dcb075497f61880010245192e7e1fdbeca4/src/transformers/modeling_distilbert.py#L579)
```python
class PretrainedTransformer(nn.Module):
def __init__(
self, target_classes):
super().__init__()
base_model_output_shape=768
self.base_model = DistilBertModel.from_pretrained("distilbert-base-uncased")
self.classifier = nn.Sequential(
nn.Linear(base_model_output_shape, out_features=base_model_output_shape),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(base_model_output_shape, out_features=target_classes),
)
for layer in self.classifier:
if isinstance(layer, nn.Linear):
layer.weight.data.normal_(mean=0.0, std=0.02)
if layer.bias is not None:
layer.bias.data.zero_()
def forward(self, input_, y=None):
X, length, attention_mask = input_
base_output = self.base_model(X, attention_mask=attention_mask)[0]
base_model_last_layer = base_output[:, 0]
cls = self.classifier(base_model_last_layer)
return cls
```
During training, I use linear LR warmup schedule with max LR=5-e5 and cross entropy loss. In general, the model is able to learn on my dataset and reach high precision/recall metrics.
**My question is:**
Should weights distributions and biases in classification layers change more during training? It seems like the weights almost do not change at all, even when I do not initialize them as in the code (to mean=0.0 and std=0.02). Is this an indication that something is wrong with my model or it's just because the layers I've added are redundant and model does not learn nothing new?
Take look at the image of weight from the tensorboard:
<img width="1021" alt="Screenshot 2020-02-24 at 20 56 36" src="https://user-images.githubusercontent.com/6958772/75526050-4bbf6600-5a11-11ea-8d62-37407f968e06.png"> | 02-28-2020 09:03:16 | 02-28-2020 09:03:16 | I am curious: what kind of behaviour do you see when you freeze the whole base model, and only train the classifier? Also, you may want avoid the use of the `cls` variable name because it is a reserved keyword for the class. In general, the classifier is trained quite quickly (often in two or epochs or less), so you are right in saying that in relative terms the weights of the layers that you add matter very little compared to the rest of the model.<|||||>When I freeze the base model, the overall learning pace is drastically slower - 5 epochs is only enough to reach fraction of the quality when base model not frozen.
Histograms when base model is frozen:
<img width="929" alt="Screenshot 2020-03-02 at 13 56 24" src="https://user-images.githubusercontent.com/6958772/75678405-dc09df00-5c8d-11ea-995e-853fbaa15b71.png">
<|||||>@BramVanroy do you have any further insights on this?<|||||>Something else that might cause this is that your layers are stuck in some local optimum, or that they are nullified by ReLU. What happens if you use, e.g., gelu instead of relu? But that can't explain everything (because the first linear layer also barely changes its weights. So I'm not sure. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,061 | closed | Bad word list for text generation | # 🚀 Feature request
Add a word of lists that you do not want the model to generate for whatever reason.
## Motivation
When creating a text generation model, especially if you will serve that model publicly, it is desirable to have assurances that a model is physically incapable of outputting certain tokens.
Such tokens would include profanity of all kinds, prejudicial language, or even just out-of-domain vocabulary.
## Your contribution
I am unsure as to your coding style guide, but I will detail how I would implement this below.
Firstly, you cannot simply set the offending word's values to `-Inf` as done here https://github.com/huggingface/transformers/blob/908fa43b543cf52a3238129624f502240725a6a6/src/transformers/modeling_utils.py#L1114 and here https://github.com/huggingface/transformers/blob/908fa43b543cf52a3238129624f502240725a6a6/src/transformers/modeling_utils.py#L1131
when calculating `top_k_top_p_filtering` as it would not capture multi-token words.
As an example, I will use the common Irish exclamation "feck" (see [here ](https://en.wikipedia.org/wiki/Feck) for an off-topic discussion of the word).
'feck' is tokenized into `['fe', 'ck']` in the `gpt2-xl` tokenizer. It would be unreasonable to simply set the logit values of any mention of the token `'fe'` or `'ck'` to -Inf as that would stop the creation of purely harmless words such as 'feasible' (`['fe', 'as', 'ible']`) and 'peck' (`['pe', 'ck']`). Therefore, I suggest a list of tokens that are not allowed at any one instance, and are updated, depending on the previous tokens.
This functionality would mean that a model would be allowed to generate the `'fe'` token, but would not be able to follow it with a `'ck'` token straight after.
I would operationalize this by having:
* a `bad_words_list` which the user has passed into the generate function
* a `possible_bad_words` dict which describes all the possible bad words that you can make with the current token history and how far along they are
* a `prohibited_tokens_list` which would prevent the model from choosing those tokens at a given time.
E.g.
Let's say that we pass the following list to the .generate function:
`bad_words_list = ["feck", "arse", "booze", "up yours"]`
which would be then tokenized and made into ids in the .generator function before the generation starts:
`bad_words_list = [tokenizer.encode(x, add_special_tokens=False) for x in bad_words_list]`
Then, the following function would be run just before the model outputs are obtained from both beam and no beam generators
```
def update_possible_bad_words(previous_token, bad_words_list, possible_bad_words):
    # Start with an empty list of token ids that must not be generated at this step
    prohibited_tokens_list = []
    unmatched_bad_words = []
    # Single-token bad words are always prohibited. Multi-token bad words whose first token
    # matches the previous token become candidates tracked in possible_bad_words.
    # (Token id sequences are stored as tuples so they can be used as dict keys.)
    for bad_word in map(tuple, bad_words_list):
        if len(bad_word) == 1:
            prohibited_tokens_list.append(bad_word[0])
        elif previous_token == bad_word[0] and bad_word not in possible_bad_words:
            possible_bad_words[bad_word] = 0
    # Advance each candidate whose next expected token matches the previous token
    for bad_word, bad_word_index in possible_bad_words.items():
        if previous_token == bad_word[bad_word_index]:
            new_bad_word_index = bad_word_index + 1
            # If only the last token of the bad word is still missing, forbid it now and mark
            # the candidate so it can be cleaned up below.
            if len(bad_word) == new_bad_word_index + 1:
                prohibited_tokens_list.append(bad_word[-1])
                unmatched_bad_words.append(bad_word)
            # Store the new incremented index
            possible_bad_words[bad_word] = new_bad_word_index
        else:
            # The generated sequence diverged from this bad word, so mark it as unmatched
            unmatched_bad_words.append(bad_word)
    # Drop candidates that diverged or were fully handled, unless the previous token could
    # restart a match from the beginning of the bad word.
    for unmatched_bad_word in unmatched_bad_words:
        if previous_token == unmatched_bad_word[0]:
            possible_bad_words[unmatched_bad_word] = 0
        else:
            del possible_bad_words[unmatched_bad_word]
    return prohibited_tokens_list
```
and I would call this function like so:
`prohibited_tokens_list = update_possible_bad_words(input_ids[-1], bad_words_list, possible_bad_words)`
here
https://github.com/huggingface/transformers/blob/908fa43b543cf52a3238129624f502240725a6a6/src/transformers/modeling_utils.py#L827
and here
https://github.com/huggingface/transformers/blob/908fa43b543cf52a3238129624f502240725a6a6/src/transformers/modeling_utils.py#L934
where `possible_bad_words` has been initialized as an empty dict directly before the generation loop.
Finally, we would pass `prohibited_tokens_list` to `top_k_top_p_filtering`
https://github.com/huggingface/transformers/blob/908fa43b543cf52a3238129624f502240725a6a6/src/transformers/modeling_utils.py#L1100
and would simply perform `logits[prohibited_tokens_list] = filter_value` before or after the top p and k filtering in that function. | 02-28-2020 04:54:12 | 02-28-2020 04:54:12 | I like this idea. Perhaps it can be added as an example, or even as an argument to the generation script. |
transformers | 3,060 | closed | How to init a subclass of BertForTokenClassification | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I want to build a subclass of BertForTokenClassification and also want to use the weights of pretrained model
```python
class SeqLabelClassifier(BertForTokenClassification):
    def __init__(self, pretrained_model_name, config):
        super(SeqLabelClassifier, self).__init__(config)
        self.lstm = nn.LSTM(...)

config = BertConfig()
model = SeqLabelClassifier(pretrained_model_name, config)
model = model.from_pretrained(args.pretrained_model_name, config=config)
```
But I get this error
> File "/home/haoran/anaconda3/envs/nsd/lib/python3.8/site-packages/transformers/modeling_utils.py", line 466, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
TypeError: __init__() missing 1 required positional argument: 'config'
How to correctly pass the args?
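For illustration, a minimal sketch of a signature that `from_pretrained` can call — since `from_pretrained` instantiates the class as `cls(config, *model_args, **model_kwargs)` (see the traceback above), the subclass `__init__` should take `config` first; the model name and LSTM sizes below are placeholders:
```python
import torch.nn as nn
from transformers import BertForTokenClassification

class SeqLabelClassifier(BertForTokenClassification):
    def __init__(self, config):
        super().__init__(config)
        # extra (randomly initialised) head on top of the pretrained encoder
        self.lstm = nn.LSTM(config.hidden_size, config.hidden_size, batch_first=True)

model = SeqLabelClassifier.from_pretrained("bert-base-uncased")
```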
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 02-28-2020 03:08:00 | 02-28-2020 03:08:00 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,059 | closed | Bart-CNN | ## Sources
- copy pastes code from generate but does not share very much in an effort to simplify. there is no big abstraction yet.
- also copy pastes some code from fairseq
- encoder_outputs and previous decoder attentions are cached.
## Differences with PretrainedModel.generate
These are all because that's the way fairseq does it!
- BeamHypotheses(early_stopping=True)
- assumptions about various token_ids being present
- force decoder to start with EOS, then predict BOS
- decoder only considers the most recently generated token bc everything else is cached.
- prevent predictions of various special tokens at inopportune moments (all the -inf stuff)
- force eos if you hit max length
- max_length is about how many tokens you want to generate. Doesn't matter how many you have.
- min_len parameter to prevent short summaries
- no_ngram_repetition parameter (set to 3 in Bart-CNN) to prevent repetition — a standalone sketch of this blocking logic is included below.
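For reference, a sketch of this kind of n-gram blocking (a hypothetical helper, not the code in this PR):
```python
def get_banned_tokens(prev_tokens, no_repeat_ngram_size=3):
    # ban any token that would recreate an n-gram already present in prev_tokens
    banned = []
    if len(prev_tokens) < no_repeat_ngram_size:
        return banned
    prefix = tuple(prev_tokens[-(no_repeat_ngram_size - 1):])
    for i in range(len(prev_tokens) - no_repeat_ngram_size + 1):
        ngram = tuple(prev_tokens[i:i + no_repeat_ngram_size])
        if ngram[:-1] == prefix:
            banned.append(ngram[-1])
    return banned
```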
## TODO
- [ ] docstrings
- [ ] Mystery: results are identical to fairseq 98.6% of the time, 1.4% of the time they differ by a few words.
- [ ] run rouge metrics, compare run time to fairseq.
- [ ] Resist pressure to make big seq2seq abstraction before there are more callers
- [ ] Deeper dive on the MaskedLM.tie_weights hack, what is right way to do it?
| 02-28-2020 02:28:56 | 02-28-2020 02:28:56 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3059?src=pr&el=h1) Report
> :exclamation: No coverage uploaded for pull request base (`master@271344f`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).
> The diff coverage is `86.95%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3059?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3059 +/- ##
=========================================
Coverage ? 76.59%
=========================================
Files ? 98
Lines ? 16219
Branches ? 0
=========================================
Hits ? 12423
Misses ? 3796
Partials ? 0
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3059?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3059/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.08% <ø> (ø)` | |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3059/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.05% <86.95%> (ø)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3059?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3059?src=pr&el=footer). Last update [271344f...6e13b56](https://codecov.io/gh/huggingface/transformers/pull/3059?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>> ## Differences with PretrainedModel.generate
> these are all bc thats the way fairseq does it!
>
> * BeamHypotheses(early_stopping=True)
I think we have that option as well
> * assumptions about various token_ids being present
> * force decoder to start with EOS, then predict BOS
That's weird, no?
> * decoder only considers the most recently generated token bc everything else is cached.
> * prevent predictions of various special tokens at inopportune moments (all the -inf stuff)
> * force eos if you hit max length
We had this in our code before as well - I deleted it because I think unfinished sentences (sentences that were finished because they hit `max_length`) should not be ended with an EOS.
> * max_length is about how many tokens you want to generate. Doesn't matter how many you have.
This makes sense since encoder-decoder models always start from 0 `input_ids` for the decoder model and only have `encoder_input_ids`, whereas the standard "only-decoder" models (GPT2) have `decoder_input_ids` and append their output to it
|
transformers | 3,058 | closed | Fast tokenizers calculate wrong offsets when special characters are present | Example:
```
>>> import transformers
>>> t_fast = transformers.AutoTokenizer.from_pretrained("bert-base-cased", use_fast=True, add_special_tokens=False)
>>> sentence = "A, naïve [MASK] AllenNLP sentence."
>>> tokenized = t_fast.encode_plus(sentence, add_special_tokens=False, return_offsets_mapping=True)
>>> for start, end in tokenized['offset_mapping']:
... print(repr(sentence[start:end]))
'A'
','
'naïve'
' [MASK'
' Alle'
'nN'
'L'
' sentenc'
'e'
```
As you can see, after the word "naïve", the offsets go off the rails. | 02-28-2020 01:19:57 | 02-28-2020 01:19:57 | @mfuntowicz, would it make sense for me to integrate our tokenizer tests into your code, so you can see these things immediately? I'd be happy to do so.<|||||>Looks like this is a duplicate of #2917.<|||||>On second reading, this is not the same issue as #2917, though they may be related.<|||||>Roberta has another related issue:
```
>>> import transformers
>>> t_fast = transformers.AutoTokenizer.from_pretrained("roberta-base", use_fast=True, add_special_tokens=False)
>>> sentence = "I went to the zoo yesterday, but they had only one animal."
>>> tokenized = t_fast.encode_plus(sentence, add_special_tokens=False, return_offsets_mapping=True)
>>> for start, end in (t for t in tokenized['offset_mapping'] if t is not None):
... print(repr(sentence[start:end]))
'I'
' went'
' to'
' the'
' zoo'
' yesterday'
','
' but'
' they'
' had'
' only'
' one'
' animal'
'.'
```
There are two problems here. `add_special_tokens` is being ignored (#2919), but also, it adds those extra spaces at the front of the words.<|||||>Hi @dirkgr,
Thanks for your report.
Regarding the integration of your tests, it is definitely a good idea; if you can put @LysandreJik and myself as reviewers of the PR, we'll have a look 👍.
Regarding `add_special_tokens`, the behaviour on the Rust side is quite different, as it's a parameter that needs to be provided at construction time, whereas Python allows it at tokenisation time. We should make it clearer in the doc.
```python
>>> t = transformers.BertTokenizerFast.from_pretrained('bert-base-cased')
>>> tokenized = t.encode_plus(sentence, add_special_tokens=False, return_offsets_mapping=True)
>>> tokenized['input_ids']
[101, 138, 117, 22607, 103, 4522, 20734, 2101, 5650, 119, 102]
>>> t = transformers.BertTokenizerFast.from_pretrained('bert-base-cased', add_special_tokens=False)
>>> tokenized = t.encode_plus(sentence, add_special_tokens=False, return_offsets_mapping=True)
>>> tokenized['input_ids']
>>> [138, 117, 22607, 103, 4522, 20734, 2101, 5650, 119]
```
For Roberta we're aware of this extra space being included, cc'ing @n1t0 for keeping track of this.<|||||>Ok some more context, `GPT2TokenizerFast` has also the same behaviour, so I would extrapolate this occurs more generally on BPE model.
Adding `<ModelFast>.from_pretrained(..., add_prefix_space=True)` doesn't append the space before but after the token:
```python
'I '
'went '
'to '
'the '
'zoo '
'yesterday,'
' '
'but '
'they '
'had '
'only '
'one '
'animal.'
```<|||||>Hi @dirkgr!
So there are multiple different things here:
- Your first example is due to the space between `naïve` and `[MASK]` not being trimmed out. We are aware of this behavior and are currently working on a fix.
- The fact that `add_prefix_space=True` moves the space at the end is actually a bug too. This happens because we mess with the offsets while adding the prefix. I am working on a fix for this too.
- Now, the second example you gave is actually expected behavior:
```python
import transformers
t_fast = transformers.AutoTokenizer.from_pretrained("roberta-base", use_fast=True, add_special_tokens=False)
sentence = "I went to the zoo yesterday, but they had only one animal."
tokenized = t_fast.encode_plus(sentence, return_offsets_mapping=True)
offsets = tokenized['offset_mapping']
tokens = t_fast.tokenize(sentence)
for token, (start, end) in (t for t in zip(tokens, offsets) if t[1] is not None):
print(repr(token))
print(repr(sentence[start:end]))
print(repr(t_fast.decode(t_fast.convert_tokens_to_ids([token]))))
```
will give the following output:
```
'I'
'I'
'I'
'Ġwent'
' went'
' went'
'Ġto'
' to'
' to'
'Ġthe'
' the'
' the'
'Ġzoo'
' zoo'
' zoo'
'Ġyesterday'
' yesterday'
' yesterday'
','
','
','
'Ġbut'
' but'
' but'
'Ġthey'
' they'
' they'
'Ġhad'
' had'
' had'
'Ġonly'
' only'
' only'
'Ġone'
' one'
' one'
'Ġanimal'
' animal'
' animal'
'.'
'.'
'.'
```
Here you can see that the space is actually part of these tokens. That's just the way the byte-level BPE used by GPT-2 and Roberta works. The `Ġ` is actually an encoded space. Does it make sense?<|||||>We hit a similar issue when we add new tokens. The input ids are correct, but offsets after the new token are off.
Example: https://colab.research.google.com/drive/1e2a3iyLF9NSMWZR50pnDYRhpizcsRV_6
```
text = "A [test] C"
print(tokenizer.encode(text, add_special_tokens=True))
results = tokenizer.encode_plus(text,
return_offsets_mapping=True,
pad_to_max_length=False,
max_length=128,
return_overflowing_tokens=False,
add_special_tokens=True)
for se in results['offset_mapping']:
if se:
print(text[se[0]:se[1]], se)
```
```
[101, 1037, 30522, 1039, 102]
A (0, 1)
[test (1, 7)
(8, 9)
```
Potentially related issue huggingface/tokenizers#143
<|||||>I think all of the mentioned bugs on this issue should now be fixed on `master`<|||||>```
>>> import transformers
>>> t_fast = transformers.AutoTokenizer.from_pretrained("bert-base-cased", use_fast=True, add_special_tokens=False)
>>> sentence = "A, naïve [MASK] AllenNLP sentence."
>>> tokenized = t_fast.encode_plus(sentence, add_special_tokens=False, return_offsets_mapping=True)
>>> for start, end in tokenized['offset_mapping']:
... print(repr(sentence[start:end]))
'A'
','
'naïve'
'[MASK]'
'Allen'
'NL'
'P'
'sentence'
'.'
```
and
```
>>> import transformers
>>> t_fast = transformers.AutoTokenizer.from_pretrained("roberta-base", use_fast=True, add_special_tokens=False)
>>> sentence = "I went to the zoo yesterday, but they had only one animal."
>>> tokenized = t_fast.encode_plus(sentence, add_special_tokens=False, return_offsets_mapping=True)
>>> for start, end in (t for t in tokenized['offset_mapping'] if t is not None):
... print(repr(sentence[start:end]))
'I'
'went'
'to'
'the'
'zoo'
'yesterday'
','
'but'
'they'
'had'
'only'
'one'
'animal'
'.'
```
and the last one:
```
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
tokenizer.add_tokens(['[test]'])
text = "A [test] C"
print(tokenizer.encode(text, add_special_tokens=True))
results = tokenizer.encode_plus(text,
return_offsets_mapping=True,
pad_to_max_length=False,
max_length=128,
return_overflowing_tokens=False,
add_special_tokens=True)
for se in results['offset_mapping']:
if se:
print(text[se[0]:se[1]], se)
```
gives
```
[101, 1037, 30522, 1039, 102]
(0, 0)
A (0, 1)
[test] (2, 8)
C (9, 10)
(0, 0)
``` |
transformers | 3,057 | closed | Fast tokenizers don't properly tokenize special tokens | Slow tokenizers:
```
>>> import transformers
>>> t_slow = transformers.AutoTokenizer.from_pretrained("roberta-base", use_fast=False)
>>> t_slow.encode_plus("A <mask> sentence.")
{'input_ids': [0, 83, 50264, 3645, 4, 2],
'token_type_ids': [0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1]}
>>> t_slow.convert_ids_to_tokens([0, 83, 50264, 3645, 4, 2])
['<s>', 'ĠA', '<mask>', 'Ġsentence', '.', '</s>']
```
Fast tokenizers:
```
>>> import transformers
>>> t_fast = transformers.AutoTokenizer.from_pretrained("roberta-base", use_fast=True)
>>> t_fast.encode_plus("A <mask> sentence.")
{'input_ids': [0, 250, 1437, 50264, 3645, 4, 2],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1]}
>>> t_fast.convert_ids_to_tokens([0, 250, 1437, 50264, 3645, 4, 2])
['<s>', 'A', 'Ġ', '<mask>', 'Ġsentence', '.', '</s>']
``` | 02-28-2020 01:08:46 | 02-28-2020 01:08:46 | I think it's a duplicate of #2919. Btw, I should have told you when I saw you open the PR, [I opened a bunch of issues related to AllenNLP usage](https://github.com/huggingface/transformers/issues?utf8=%E2%9C%93&q=is%3Aissue+author%3Abryant1410). I think one that was closed it's not completely solved, but not sure.<|||||>It's not the same issue. This one is about special tokens in text form in the middle of your string. #2919 is about `[CLS]` and `[SEP]` being added to the beginning and end. Also, #2919 has already been fixed.<|||||>Ohh, :+1: <|||||>This is now fixed on `master`:
```
>>> import transformers
>>> t_fast = transformers.AutoTokenizer.from_pretrained("roberta-base", use_fast=True)
>>> t_fast.encode_plus("A <mask> sentence.")
{'input_ids': [0, 83, 50264, 3645, 4, 2], 'attention_mask': [1, 1, 1, 1, 1, 1]}
>>> t_fast.convert_ids_to_tokens([0, 83, 50264, 3645, 4, 2])
['<s>', 'ĠA', '<mask>', 'Ġsentence', '.', '</s>']
``` |
transformers | 3,056 | closed | (Gross WIP) Bart-CNN | 02-27-2020 21:29:05 | 02-27-2020 21:29:05 | ||
transformers | 3,055 | closed | Pipeline doc | This PR adds documentation to the pipelines and slightly modifies their behavior:
- `modelcard` is no longer available when using the `pipeline` factory method. As discussed with @thomwolf and @mfuntowicz, it doesn't serve any purpose for the user.
- All task-specific pipelines can now be instantiated without any model/tokenizer, instead relying on the defaults defined for the `pipeline` factory method. | 02-27-2020 20:39:37 | 02-27-2020 20:39:37 | Hmm, why do we want to be able to spawn a `NerPipeline` or a `QuestionAnsweringPipeline` without a model or a tokenizer?
Isn't this what `pipeline("ner")` or `pipeline("question-answering")` is for?<|||||>I think the question is rather why can't we spawn a `NerPipeline` or a `QuestionAnsweringPipeline` even though we have defined defaults for them?
What I see in the `pipeline` factory is the ability to simply specify a task and get the appropriate pipeline. I don't see a strong reason for the task-specific pipelines to not be able to load a default, but I may be missing part of the picture.
If you think this is unnecessary I can revert the code - we'll just need to make sure that the doc explains what the `pipeline` factory is for and how it handles defaults compared to task-specific pipelines, because I was misled.<|||||>I'll let @mfuntowicz and @thomwolf chime in, but for me, the subclasses of Pipeline are the actual implementations – preferably well-typed – that do not expose too much magic.
I don't see the point of having two public APIs that do exactly the same thing.
E.g., the logic behind [get_defaults](https://github.com/huggingface/transformers/pull/3055/files#diff-1e87b75d7b313550a38be1daecd653f7R485-R504) is a duplication of what's already in `pipeline()`
In any case, the subclasses of Pipeline don't accept a model/tokenizer of type `str`, in contradiction to the doc (it crashes), because the spawning of model/tokenizer with `from_pretrained()` is only inside the `pipeline` wrapper <|||||>Ok, this makes sense. I'll revert that later today. Thanks @julien-c
transformers | 3,054 | closed | Bart: Use bool attention_mask for encoder | Wasn't breaking because bool(-1e4) is True, but clearer this way. | 02-27-2020 19:21:57 | 02-27-2020 19:21:57 | I believe the `bool` operator was introduced in PyTorch 1.2.0, won't this break compatibility with PyTorch 1.0.0?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3054?src=pr&el=h1) Report
> Merging [#3054](https://codecov.io/gh/huggingface/transformers/pull/3054?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8bcb37bfb80d77e06001f989ad982c9961a69c31?src=pr&el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3054?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3054 +/- ##
==========================================
- Coverage 77.2% 77.18% -0.02%
==========================================
Files 98 98
Lines 16063 16063
==========================================
- Hits 12401 12399 -2
- Misses 3662 3664 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3054?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `84.58% <100%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.22% <0%> (-0.33%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3054?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3054?src=pr&el=footer). Last update [8bcb37b...2116c2b](https://codecov.io/gh/huggingface/transformers/pull/3054?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,053 | closed | Changes to NER examples for PLT and TPU | * Simplify the NER example to support new features added for us by pytorch-lightning.
* Pull out all rank and backend special casing in the code base.
* Setup data so that TPU examples work using the new code base.
Testing:
* Verify that standard examples of training work.
* Confirm that new TPU code works and runs https://colab.research.google.com/drive/1dBN-wwYUngLYVt985wGs_OKPlK_ANB9D
In my example, I see a 4x speedup over colab GPU and multi-gpu k40, but a slow down on loading and saving model. So certainly a win for larger datsets. | 02-27-2020 17:16:54 | 02-27-2020 17:16:54 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3053?src=pr&el=h1) Report
> Merging [#3053](https://codecov.io/gh/huggingface/transformers/pull/3053?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8bcb37bfb80d77e06001f989ad982c9961a69c31?src=pr&el=desc) will **decrease** coverage by `1.03%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3053?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3053 +/- ##
==========================================
- Coverage 77.2% 76.16% -1.04%
==========================================
Files 98 98
Lines 16063 16063
==========================================
- Hits 12401 12234 -167
- Misses 3662 3829 +167
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3053?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3053/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3053/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3053/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3053/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3053/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3053/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.22% <0%> (-0.33%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3053?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3053?src=pr&el=footer). Last update [8bcb37b...9dc0964](https://codecov.io/gh/huggingface/transformers/pull/3053?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@LysandreJik This doesn't touch the main code, so should be fine to merge. |
transformers | 3,052 | closed | Is ALBERT the right implementation from the paper? | I read the ALBERT paper and the code from Google: github.com/google-research/ALBERT. One of the main contributions of ALBERT is cross-layer parameter sharing, and I can see it in Google's code. But I can't see the sharing in this code: every layer (or block) is a new object, so their parameters would differ after fine-tuning.
Is the implementation wrong, or am I misunderstanding the parameter sharing? | 02-27-2020 16:45:36 | 02-27-2020 16:45:36 | I read the code carefully, and it does share. |
transformers | 3,051 | closed | Adding Docker images for transformers + notebooks | Docker images are as follow:
- transformers-cpu (PyTorch + TF)
- transformers-gpu (PyTorch + TF)
- transformers-pytorch-cpu
- transformers-pytorch-gpu
- transformers-tensorflow-cpu
- transformers-tensorflow-gpu
Images are tagged according to the version of the library they bring and always use the latest version of DL frameworks.
Notebooks introduce:
- How to use tokenizers
- Overall transformers overview
- How to use pipelines
Currently notebooks are added to the repo, let's discuss internally if it's a good idea. | 02-27-2020 16:01:57 | 02-27-2020 16:01:57 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3051?src=pr&el=h1) Report
> Merging [#3051](https://codecov.io/gh/huggingface/transformers/pull/3051?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/53ce3854a16ad2a715bc6ac8af3e30c18b5a1d11?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3051?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3051 +/- ##
======================================
Coverage 76.1% 76.1%
======================================
Files 98 98
Lines 15946 15946
======================================
Hits 12136 12136
Misses 3810 3810
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3051?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3051?src=pr&el=footer). Last update [53ce385...ff701d9](https://codecov.io/gh/huggingface/transformers/pull/3051?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>On the Docker files, do we really need conda? (I try to stay away from conda as much as possible)<|||||>Well, conda brings Intel MKL with just `conda install mkl, mkl-devel` which improves PyTorch and TF by a significant factor.
Depending on what level of performances we want to provide:
- I totally remove MKL and install some Open BLAS/LAPACK libraries
- I'll build MKL in the images and include into PATH<|||||>MKL is also available on PyPi. Have a looks-y [here](https://software.intel.com/en-us/articles/installing-the-intel-distribution-for-python-and-intel-performance-libraries-with-pip-and) to check whether everything you need is there. This might also be of interest: https://software.intel.com/en-us/distribution-for-python/choose-download/linux
|
transformers | 3,050 | closed | Should be able to turn off logging | # 🚀 Feature request
When doing a simple pipeline, I want to supress:
```
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 230/230 [00:00<00:00, 136kB/s]
convert squad examples to features: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 241.08it/s]
add example index and unique id: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 7037.42it/s]
```
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
My code is pretty straightforward:
```
args = parse_args()
f = open(args.text_path, "r")
context = f.read()
# print(context)
tokenizer = AutoTokenizer.from_pretrained(args.model)
model = AutoModelForQuestionAnswering.from_pretrained(args.model)
qa = pipeline('question-answering',
model='distilbert-base-uncased-distilled-squad', tokenizer='bert-base-cased')
response = qa(context=context,
question=args.question)
print(response['answer'])
``` | 02-27-2020 14:57:39 | 02-27-2020 14:57:39 | Any progress on this? Has anyone found a way to disable this logging?
The issue appears to be tqdm. A work-around is to disable it before importing transformers:
```
import tqdm
def nop(it, *a, **k):
return it
tqdm.tqdm = nop
import transformers
```<|||||>I agree that it's a valid requirement, we'll look into it<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Wish this would stay open.<|||||>Logger could be really annoying when it comes to applications. Should have some way to turn it off.<|||||>Here you go:
```
# To control logging level for various modules used in the application:
import logging
import re
def set_global_logging_level(level=logging.ERROR, prefices=[""]):
    """
    Override logging levels of different modules based on their name as a prefix.
    It needs to be invoked after the modules have been loaded so that their loggers have been initialized.
    Args:
        - level: desired level. e.g. logging.INFO. Optional. Default is logging.ERROR
        - prefices: list of one or more str prefices to match (e.g. ["transformers", "torch"]). Optional.
          Default is `[""]` to match all active loggers.
          The match is a case-sensitive `module_name.startswith(prefix)`
    """
    prefix_re = re.compile(fr'^(?:{ "|".join(prefices) })')
    for name in logging.root.manager.loggerDict:
        if re.match(prefix_re, name):
            logging.getLogger(name).setLevel(level)
```
Usage:
1. override all module-specific loggers to a desired level (except whatever got logged during modules importing)
```
import everything, you, need
import logging
set_global_logging_level(logging.ERROR)
```
2. In case of transformers you most likely need to call it as:
```
import transformers, torch, ...
import logging
set_global_logging_level(logging.ERROR, ["transformers", "nlp", "torch", "tensorflow", "tensorboard", "wandb"])
```
add/remove modules as needed.
To disable logging globally - place at the beginning of the script
```
import logging
logging.disable(logging.INFO) # disable INFO and DEBUG logging everywhere
# or
# logging.disable(logging.WARNING) # disable WARNING, INFO and DEBUG logging everywhere
```
If desired, `set_global_logging_level` could be expanded to be a scope manager too.<|||||>Will that kill tqdm? I want to keep tqdm!<|||||>> Will that kill tqdm? I want to keep tqdm!
```
set_global_logging_level(logging.ERROR, ["transformers", "nlp", "torch", "tensorflow", "tensorboard", "wandb"])
from tqdm import tqdm
for i in tqdm(range(10000)): x = i**i
```
works just fine
and so does disable all:
```
set_global_logging_level(logging.ERROR)
from tqdm import tqdm
for i in tqdm(range(10000)): x = i**i
```
or in the case of "total logging silence" setting:
```
import logging
logging.disable(logging.INFO) # disable INFO and DEBUG logging everywhere
from tqdm import tqdm
for i in tqdm(range(10000)): x = i**i
```
works too.
I don't think it uses `logging`.
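(Side note for later readers: recent `transformers` releases also expose a dedicated verbosity API, the change referenced in the PRs below, which is usually enough on its own. A minimal sketch, assuming a version where `transformers.logging` is available; note that the tqdm progress bars are not routed through `logging` and are handled separately:)
```
import transformers

# drop everything below ERROR that is emitted by the library's own loggers
transformers.logging.set_verbosity_error()
```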
<|||||>PR with the proposed code, plus adding the ability to do that during `pytest` https://github.com/huggingface/transformers/pull/6816<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This was resolved by https://github.com/huggingface/transformers/pull/6434 |
transformers | 3,049 | closed | Regarding attention received by the distilbert model | I receive the attention of all the 6 layers with all the 12 heads while exporting the tfdistilbert model. I just want to take the attention of dimension equal to the sequence length. Which layer and head attention is the most effective attention value that I should take in order to get the best value of attention scores with respect to my sentence. | 02-27-2020 13:34:07 | 02-27-2020 13:34:07 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
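For reference, a short sketch of how the per-layer attentions can be inspected with the TF model; which layer/head works best is task-dependent and usually has to be probed empirically, and the model name and head-averaging step below are only illustrative:
```python
import tensorflow as tf
from transformers import DistilBertTokenizer, TFDistilBertModel

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = TFDistilBertModel.from_pretrained("distilbert-base-uncased", output_attentions=True)

input_ids = tokenizer.encode("a short example sentence", return_tensors="tf")
outputs = model(input_ids)
attentions = outputs[-1]                # tuple of 6 tensors, one per layer
# each tensor has shape (batch, num_heads, seq_len, seq_len)
avg_last_layer = tf.reduce_mean(attentions[-1], axis=1)[0]   # (seq_len, seq_len)
```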
|
transformers | 3,048 | closed | Set specific hidden_size for ClassificationHead | # 🚀 Feature request
Add a config term `head_hidden_size` to the model configurations that will be used for the head of models such as `RobertaForSequenceClassification`
## Motivation
HuggingFace transformers library provides a very accessible API and PyTorch models that can be used "plug n play" for various task such as classification.
In many cases, varying the hidden size of the last layer (the one that outputs the logits) is one of the first things we tweak to improve performance on such a task.
Currently, the dense layer uses the `hidden_size` config parameter, which is the same as the one used in the transformer (BERT). One cannot change the hidden size of the last layer without changing the hidden size of the entire transformer model behind it.
This means we have to code a new PyTorch module in order to do something as simple as that.
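A minimal sketch of what such a head could look like; note that `head_hidden_size` is the config attribute *proposed* here, not an existing field, so the sketch falls back to `hidden_size` when it is absent:
```python
import torch
import torch.nn as nn

class CustomClassificationHead(nn.Module):
    """Head whose inner dense layer uses `head_hidden_size` instead of the transformer's `hidden_size`."""

    def __init__(self, config):
        super().__init__()
        head_size = getattr(config, "head_hidden_size", config.hidden_size)
        self.dense = nn.Linear(config.hidden_size, head_size)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.out_proj = nn.Linear(head_size, config.num_labels)

    def forward(self, features):
        x = features[:, 0, :]            # hidden state of the <s> / [CLS] token
        x = self.dropout(x)
        x = torch.tanh(self.dense(x))
        x = self.dropout(x)
        return self.out_proj(x)
```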
## Your contribution
I could PR this should the change be welcomed. | 02-27-2020 13:07:14 | 02-27-2020 13:07:14 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,047 | closed | [WIP] deleted special tokens as attributes from model config | Given our conversation in #3011 , I thought about deleting all model special config attributes to make sure that no data is duplicated between the tokenizer of a model and the model.
There are two instances where config special attributes were used (e.g. self.config.pad_token_id)
1. the generate() function. But the generate function also takes all those tokens as attributes, so it should not at all rely on self.config.pad_token_id. This one is trivial to fix.
2. the bart model. It seems like the pad_token_id is actually an integral part of the bart model itself, so to me it seems very hard to disentangle the pad_token_id from the bart model.
I see three options:
1) Leave the code as it is and leave default attributes self.config.pad_token_id, self.config.bos_token_id, self.config.eos_token_id = None in the PretrainedConfig class.
2) Remove the self.config.pad_token_id, ... from the PretrainedConfig class, make the generate function independent of those variables, but add those variables to the BartConfig only.
3) Make the models completely independent from all special tokens. This probably would mean that the bart model class needs quite a lot of changes.
I tend toward option 2) or 3). I like the idea of separating token ids from internal model behavior completely, but I cannot really estimate whether this is feasible for Bart (@sshleifer, you probably have a better opinion on this).
What do you think? @LysandreJik , @julien-c , @thomwolf , @sshleifer | 02-27-2020 13:04:44 | 02-27-2020 13:04:44 | Not feasible for bart at the moment, sadly.<|||||>As mentioned in #3011, I think #3011 is the way to go. |
transformers | 3,046 | closed | [WIP] add generate tests to more models | - [ ] add prepare_input_for_generation() to all bert masked lm models
- [ ] add slow integration tests to check results (also include camembert for this one) | 02-27-2020 11:10:46 | 02-27-2020 11:10:46 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3046?src=pr&el=h1) Report
> Merging [#3046](https://codecov.io/gh/huggingface/transformers/pull/3046?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/298bed16a841fae3608d334441ccae4d9043611f?src=pr&el=desc) will **increase** coverage by `<.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3046?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3046 +/- ##
==========================================
+ Coverage 77.18% 77.19% +<.01%
==========================================
Files 98 98
Lines 16063 16063
==========================================
+ Hits 12399 12400 +1
+ Misses 3664 3663 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3046?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3046/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.38% <0%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3046?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3046?src=pr&el=footer). Last update [298bed1...e295138](https://codecov.io/gh/huggingface/transformers/pull/3046?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,045 | closed | [docs] Provide a barebones GPT-2 colab notebook | Please provide a barebones "pick up and go" GPT-2 colab notebook for text generation, [just like gpt-2-simple does](https://colab.research.google.com/drive/1VLG8e7YSEwypxU-noRNhsv5dW4NfTGce) | 02-27-2020 07:02:44 | 02-27-2020 07:02:44 | Do you mean just for inference? Or fine-tuning too, like in the one you linked?<|||||>Yes, fine-tuning as well.<|||||>The notebook described in #2676 is a good example of something that could work; however the current implementation is not very user friendly, which was the design goal of the `gpt-2-simple` notebook. (my text generating package which extends `transformers` will have it as a feature)<|||||>> The notebook described in #2676 is a good example of something that could work; however the current implementation is not very user friendly, which was the design goal of the `gpt-2-simple` notebook. (my text generating package which extends `transformers` will have it as a feature)
@minimaxir your provided notebook has external dependencies (`examples/run_lm_finetuning.py`), which is a no-no for this case, all the source has to be laid out in the notebook's code blocks, **just like in gpt-2-simple**'s.<|||||>Agreed. The issue is that there is no functional training interface in the library itself, which is why I'm creating one that extends it (as it's a narrow use case).<|||||>@minimaxir so perhaps you can make a notebook that **fully** satisfies this issue in this case?<|||||>so, guys, can you give me an approx ETA for this issue? Kinda need that fix now<|||||>> so, guys, can you give me an approx ETA for this issue? Kinda need that fix now
I don't think there are currently specific plans to create a GPT-2 notebook. If you have a look at all the pull requests (https://github.com/huggingface/transformers/pulls) you can see that the team is hard at work on a range of different features and fixes. One of those is ready-to-go docker images with notebooks (https://github.com/huggingface/transformers/pull/3051) but as far as I can see GPT-2 doesn't have a special place there.
You can always try to create this yourself or ask specific questions on [Stack Overflow](https://stackoverflow.com/).
That being said, you can have a look at https://github.com/huggingface/transformers/pull/3063 which is currently implementing generation for GPT-2 and others in Tensorflow.<|||||>> which is currently implementing generation for GPT-2 and others in Tensorflow.
that actually sucks, since i'm targeting pytorch<|||||>If you *really* need to generate text in PyTorch on short notice, you can finetune the GPT-2 model using gpt-2-simple, and run the TF -> PyTorch conversion scripts in transformers, then you can load that and generate it from it.<|||||>The #3063 that Bram mentioned targets TensorFlow because it's already implemented in [PyTorch](https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.generate).<|||||>> If you _really_ need to generate text in PyTorch on short notice, you can finetune the GPT-2 model using gpt-2-simple, and run the TF -> PyTorch conversion scripts in transformers, then you can load that and generate it from it.
except finetuning should be done later (if at all), as for right now it's either distilgpt2 or gpt-2-large, pretrained.<|||||>So essentially there's nothing so far. Even after ~~7~~ ~~14~~ ~~21~~ ~~28~~ ~~35~~ 42+ days, issue is still hot 🔥 <|||||>I'm interested on this<|||||>upd: despite not providing any feedback in this issue they've sneakily added **at least** [**something**](https://huggingface.co/transformers/notebooks.html)<|||||>> upd: despite not providing any feedback in this issue they've sneakily added **at least** [**something**](https://huggingface.co/transformers/notebooks.html)
Please be aware that this is a large open-source repository that is maintained by a company that has many other concerns, too. However, being open-source, collaboration is encouraged. Because of the huge interest in NLP and specifically this library, it is incredibly hard to monitor all new issues while also fixing bugs and taking care of all other responsibilities, i.e. a day-to-day job.
Bumping this topic by complaining does not help anyone, but the team is very open to receiving and reviewing PRs, so feel free to add your contributions to make the library better. Alternatively, you can encourage others to help you out by sharing this issue on other platforms. I have marked the issue as a "Good first issue', encouraging others to give it a go.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
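For anyone landing here from search, a minimal "pick up and go" PyTorch snippet for generating text with a pretrained GPT-2 through the `generate` API mentioned above; no fine-tuning, and the prompt and sampling settings are only illustrative:
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")   # or "gpt2-large"
model = GPT2LMHeadModel.from_pretrained("distilgpt2")
model.eval()

input_ids = tokenizer.encode("In a shocking finding, scientists", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=60,
        do_sample=True,
        top_k=50,
        top_p=0.95,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```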
|
transformers | 3,044 | closed | How can I use the this result? | I want to call the next sentence prediction function on new data,
and this webpage shows how:
https://stackoverflow.com/questions/55111360/using-bert-for-next-sentence-prediction
```
input_ids = torch.LongTensor([[31, 51, 99], [15, 5, 0]])
input_mask = torch.LongTensor([[1, 1, 1], [1, 1, 0]])
token_type_ids = torch.LongTensor([[0, 0, 1], [0, 1, 0]])
config = BertConfig(vocab_size_or_config_json_file=32000, hidden_size=768,
                    num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072)
model = BertForNextSentencePrediction(config)
seq_relationship_logits = model(input_ids, token_type_ids, input_mask)
```
but when I run this demo, I get the following result (`seq_relationship_logits`):
```
(tensor([[-0.0728,  0.1863],
         [ 0.3190, -0.1528]], grad_fn=<AddmmBackward>),)
```
How do I use it as the prediction result (i.e., if sentence B follows sentence A the predicted label is 0, otherwise the label is 1)? | 02-27-2020 07:01:17 | 02-27-2020 07:01:17 | That answer links to an old version of the codebase. Here is an example using the current-day implementation. The documentation can be improved for this particular task, though, since as currently written the given example doesn't seem to do any NSP.
```python
from torch.nn.functional import softmax
from transformers import BertForNextSentencePrediction, BertTokenizer
seq_A = 'I like cookies !'
seq_B = 'Do you like them ?'
# load pretrained model and a pretrained tokenizer
model = BertForNextSentencePrediction.from_pretrained('bert-base-cased')
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
# encode the two sequences. Particularly, make clear that they must be
# encoded as "one" input to the model by using 'seq_B' as the 'text_pair'
encoded = tokenizer.encode_plus(seq_A, text_pair=seq_B, return_tensors='pt')
print(encoded)
# {'input_ids': tensor([[ 101, 146, 1176, 18621, 106, 102, 2091, 1128, 1176, 1172, 136, 102]]),
# 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]]),
# 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
# NOTE how the token_type_ids are 0 for all tokens in seq_A and 1 for seq_B,
# this way the model knows which token belongs to which sequence
# a model's output is a tuple, we only need the output tensor containing
# the relationships which is the first item in the tuple
seq_relationship_logits = model(**encoded)[0]
# we still need softmax to convert the logits into probabilities
# index 0: sequence B is a continuation of sequence A
# index 1: sequence B is a random sequence
probs = softmax(seq_relationship_logits, dim=1)
print(seq_relationship_logits)
print(probs)
# tensor([[9.9993e-01, 6.7607e-05]], grad_fn=<SoftmaxBackward>)
# very high value for index 0: high probability of seq_B being a continuation of seq_A
# which is what we expect!
```<|||||>Got it,thanks a lot! |
transformers | 3,043 | closed | Issue with Makefile | # 🐛 Bug
## Information
Model I am using (Bert ...):
Language I am using the model on (English):
The problem arises when using:
* [x ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. pip install -e ".[testing]"
2. pip install -r examples/requirements.txt
3. make test-examples
Running step 3 throws a few errors and warnings, including an `AttributeError: 'TestCaseFunction' object has no attribute 'get_marker'`. This wasn't happening a few days ago. What should I do? | 02-27-2020 06:33:20 | 02-27-2020 06:33:20 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,042 | closed | Fix spelling of strictly in error messages | 02-27-2020 05:54:01 | 02-27-2020 05:54:01 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3042?src=pr&el=h1) Report
> Merging [#3042](https://codecov.io/gh/huggingface/transformers/pull/3042?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b370cc7e99c5b8c7436154d4694c33b461ea0f08?src=pr&el=desc) will **decrease** coverage by `<.01%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3042?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3042 +/- ##
==========================================
- Coverage 77.28% 77.27% -0.01%
==========================================
Files 98 98
Lines 16038 16038
==========================================
- Hits 12395 12394 -1
- Misses 3643 3644 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3042?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3042/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.22% <100%> (-0.17%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3042?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3042?src=pr&el=footer). Last update [b370cc7...5bdaba9](https://codecov.io/gh/huggingface/transformers/pull/3042?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 3,041 | closed | Fix batch_encode_plus | Fix #3037 | 02-27-2020 04:37:21 | 02-27-2020 04:37:21 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3041?src=pr&el=h1) Report
> Merging [#3041](https://codecov.io/gh/huggingface/transformers/pull/3041?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b370cc7e99c5b8c7436154d4694c33b461ea0f08?src=pr&el=desc) will **increase** coverage by `<.01%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3041?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3041 +/- ##
==========================================
+ Coverage 77.28% 77.29% +<.01%
==========================================
Files 98 98
Lines 16038 16037 -1
==========================================
Hits 12395 12395
+ Misses 3643 3642 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3041?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3041/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.7% <100%> (+0.13%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3041?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3041?src=pr&el=footer). Last update [b370cc7...59d9a21](https://codecov.io/gh/huggingface/transformers/pull/3041?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,040 | closed | Knowledge distillation from internal representation GPT2 | # ❓ Questions & Help
I am trying to implement the above paper for the GPT-2 model. The attention outputs are softmax probabilities, as seen in modeling_gpt2.py (lines 152-162). If the KLD loss is computed on these values from teacher and student, I am getting a negative KLD value, indicating that the inputs are not probability distributions.
The attention output has dimensions (bs, nh, sl, sl) with nh=12; I am just flattening the output and computing the KLD loss.
Is my understanding correct?
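For reference, a minimal sketch of one way to compute this term so that each softmax row is treated as a proper distribution (the tensor names are illustrative); a frequent cause of negative values is handing probabilities, rather than log-probabilities, to `F.kl_div` as its first argument:
```python
import torch.nn.functional as F

# teacher_attn / student_attn: (bs, nh, sl, sl) tensors of softmax probabilities
def attention_kld(student_attn, teacher_attn, eps=1e-12):
    # F.kl_div expects log-probabilities as its first argument
    log_student = (student_attn + eps).log()
    # every row over the last dim already sums to 1, so no flattening or renormalizing is needed
    return F.kl_div(log_student, teacher_attn, reduction="batchmean")
```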
| 02-27-2020 03:30:06 | 02-27-2020 03:30:06 | The attention outputs are indeed softmax probabilities and should all be > 0. Could you post a code snippet that reproduces negative attention outputs? Or a code snippet of what exactly you are doing with the attention outputs? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,039 | closed | Paragraph re-ranking using MS MARCO dataset | How can I use transformers package to train on MS MARCO dataset by adding my own domain data. Or can I use pre trained model to add my own domain data?
SO link: https://stackoverflow.com/questions/60424723/paragraph-re-ranking-using-ms-marco-dataset | 02-27-2020 00:43:56 | 02-27-2020 00:43:56 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,038 | closed | AttributeError: 'Model2Model' object has no attribute 'prepare_model_kwargs' in 2.5.1 | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
Model2Model
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
based on the quick start guide for Model2Model.
If I create a model using any of the following, I get an exception during the forward call:
```
model = Model2Model.from_pretrained('bert-base-uncased')
model = PreTrainedEncoderDecoder.from_pretrained('bert-base-uncased','bert-base-uncased')
decoder_config = BertConfig.from_pretrained('bert-base-uncased', is_decoder=True)
model = PreTrainedEncoderDecoder.from_pretrained('bert-base-uncased', 'bert-base-uncased', decoder_config=decoder_config)
```
```
model(torch.tensor([[10,20,300,4,500,600]]).cuda(), torch.tensor([[400,500]]).cuda(), decoder_lm_labels=torch.tensor([[400,500]]).cuda())[0]
```
this started happening in 2.5.1
2.5.0 didn't throw the error
stacktrace:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-13-77add3526cdd> in <module>()
----> 1 model(torch.tensor([[10,20,300,4,500,600]]).cuda(), torch.tensor([[400,500]]).cuda(), decoder_lm_labels=torch.tensor([[400,500]]).cuda())[0]
2 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_encoder_decoder.py in forward(self, encoder_input_ids, decoder_input_ids, **kwargs)
221 kwargs: (`optional`) Remaining dictionary of keyword arguments.
222 """
--> 223 kwargs_encoder, kwargs_decoder = self.prepare_model_kwargs(**kwargs)
224
225 # Encode if needed (training, first prediction pass)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
574 return modules[name]
575 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 576 type(self).__name__, name))
577
578 def __setattr__(self, name, value):
AttributeError: 'Model2Model' object has no attribute 'prepare_model_kwargs'
```
## Expected behavior
No error should occur.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: google colab
- Python version: 3.6
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| 02-27-2020 00:28:28 | 02-27-2020 00:28:28 | Looks like `PreTrainedEncoderDecoder.prepare_model_kwargs()` was removed in #2745
Is there a reason for that? Either `prepare_model_kwargs` should be added back, or the line:
`kwargs_encoder, kwargs_decoder = self.prepare_model_kwargs(**kwargs)`
should be removed from the forward call. I'd be happy to submit a PR with that guidance<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,037 | closed | Wrong logic in `batch_encode_plus()` | # 🐛 Bug
I'm trying to use `batch_encode_plus()` with already-tokenized IDs as input.
So I have a `list of list of int` as input. For example:
```python
x = [[0, 83, 313, 11, 10551, 2278, 16, 2183, 1958, 804, 7, 916, 11, 13933, 982, 4, 286, 68, 5046, 6, 37, 40, 3627, 231, 2697, 9, 1958, 11, 41, 36027, 27894, 1001, 21466, 424, 2233, 4, 2], [0, 20, 291, 212, 13989, 191, 3772, 42, 983, 4, 815, 34, 1714, 8617, 187, 63, 17692, 11, 8008, 4, 993, 864, 549, 1492, 2624, 5391, 9686, 8, 12291, 240, 7, 464, 4, 2]]
```
When I call `batch_encode_plus()` with this input, I receive `AssertionError` :
```
tokenizer.batch_encode_plus(x)
# >>> line 1130, in batch_encode_plus
# assert len(ids_or_pair_ids) == 2
```
---
After reviewing the code, it seems there is an error in the logic. Here :
https://github.com/huggingface/transformers/blob/b370cc7e99c5b8c7436154d4694c33b461ea0f08/src/transformers/tokenization_utils.py#L1128-L1137
since I gave only tokenized ID and not a tuple of tokenized ID (not a pair), the logic should bring me to the `else` clause. But instead, I enter the `if` clause, leading to the assertion error.
**It's currently not possible to use `batch_encode_plus()` with already tokenized text / IDs** if the input is not a pair.
---
I think this line :
https://github.com/huggingface/transformers/blob/b370cc7e99c5b8c7436154d4694c33b461ea0f08/src/transformers/tokenization_utils.py#L1129
Should be changed to this :
```python
if isinstance(ids_or_pair_ids, (list, tuple)) and len(ids_or_pair_ids) == 2:
```
| 02-27-2020 00:03:09 | 02-27-2020 00:03:09 | |
transformers | 3,036 | closed | Cannot use `transformers.GradientAccumulator` with `tf.function` | # 🐛 Bug
I'm using gradient accumulation to get larger batch sizes when pretraining BERT, and GradientAccumulator works wonderfully. Thanks for including that!
However, tf.AutoGraph fails when I wrap the gradient accumulation step in tf.function. I have to write code like this:
```python
@tf.function
def batch_step(batch):
    step_grads = ...  # compute gradients for this micro-batch
    return step_grads

@tf.function
def allreduce():
    grads = [hvd.allreduce(grad) for grad in gradient_accumulator.gradients]
    optimizer.apply_gradients(grads)
    gradient_accumulator.reset()

def step():
    for i in range(gradient_accumulation_steps):
        step_grads = batch_step(dataset[i])
        gradient_accumulator(step_grads)
    allreduce()
```
This code works. However, it is very inefficient because TensorFlow stores the GradientAccumulator on CPU rather than GPU. My PCI throughput is around 9 GiB/s (on 8-GPU single-node training) when using gradient accumulation, as opposed to in the kilobytes when everything is wrapped in a tf.function. GPU utilization also drops because the bottleneck is transfer of gradients between GPU and CPU. I also suspect that it's causing a memory leak somewhere, but that's another story :) Being able to wrap the GradientAccumulator in a tf.function (or somehow pin it to the GPU, using `tf.device()` doesn't seem to be working) would be wonderful. Any tips?
Sample PCI throughput, ideally RX (GPU-to-CPU) should be close to 0.
<img width="682" alt="Screen Shot 2020-02-26 at 11 21 47 AM" src="https://user-images.githubusercontent.com/4564897/75380114-5223d580-588b-11ea-9594-e85755ac0db0.png">
## Information
Model I am using (Bert, XLNet ...): TFAlbert
Language I am using the model on (English, Chinese ...): English
## Environment info
- `transformers` version: 2.5.0
- Platform: Ubuntu 18.04
- Python version: 3.6.9
- PyTorch version (GPU?): None
- Tensorflow version (GPU?): 2.1
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes, but it also applies to single-node
| 02-26-2020 19:30:52 | 02-26-2020 19:30:52 | `transformers`'s implementation of the GradientAccumulator has been inspired by the OpenNMT implementation. Can you try their (updated) version and report the results? You can find it here:
https://github.com/OpenNMT/OpenNMT-tf/blob/0311e473d9788b0363ca663cafbdfaf7777e53f9/opennmt/optimizers/utils.py#L64
You can also see how they use it in their training script, e.g.
https://github.com/OpenNMT/OpenNMT-tf/blob/0311e473d9788b0363ca663cafbdfaf7777e53f9/opennmt/optimizers/utils.py#L64
Disclaimer: I'm a PyTorch guy, so I can't help with TF. It just seems like something to test.<|||||>@jarednielsen Can you try moving the call to `gradient_accumulator(step_grads)` directly inside `batch_step()` and avoid returning the gradients from the function?
The implementation over at OpenNMT-tf was indeed updated because the previous one (and included in `transformers`) used the `device_map` property which is removed in recent TensorFlow versions.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
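A rough sketch of that suggestion, keeping the accumulator update inside the traced function so that no gradients are returned to the host between micro-batches; `compute_loss`, `model`, `accumulation_steps` and the accumulator/optimizer instances are placeholders from the surrounding training script:
```python
import tensorflow as tf

@tf.function
def batch_step(batch):
    with tf.GradientTape() as tape:
        loss = compute_loss(model, batch)                 # placeholder loss function
    grads = tape.gradient(loss, model.trainable_variables)
    gradient_accumulator(grads)                           # accumulate on-device, return nothing

@tf.function
def apply_accumulated():
    grads = [g / tf.cast(accumulation_steps, g.dtype) for g in gradient_accumulator.gradients]
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    gradient_accumulator.reset()
```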
|
transformers | 3,035 | closed | Force pad_token_id to be set before padding for standard tokenizer | I think batch_encode_plus with a proper padding strategy should not be allowed if the pad_token_id is not set. I don't feel like it helps the user to have a python list of lists with None values that he can't transform to a torch.Tensor anyways.
As a remedy I think it is alright if a new pad_token is added or whether it is set to an existing special_token.
This behavior is already enforced for FastTokenizer, so the PR should also make it easier to transition from Tokenizer to FastTokenizer.
I will fix the tests and add a new one if you guys agree.
@mfuntowicz @thomwolf @LysandreJik | 02-26-2020 17:48:27 | 02-26-2020 17:48:27 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3035?src=pr&el=h1) Report
> Merging [#3035](https://codecov.io/gh/huggingface/transformers/pull/3035?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/298bed16a841fae3608d334441ccae4d9043611f?src=pr&el=desc) will **decrease** coverage by `1.02%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3035?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3035 +/- ##
==========================================
- Coverage 77.18% 76.16% -1.03%
==========================================
Files 98 98
Lines 16063 16065 +2
==========================================
- Hits 12399 12236 -163
- Misses 3664 3829 +165
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3035?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.72% <100%> (+0.02%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3035?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3035?src=pr&el=footer). Last update [298bed1...2879683](https://codecov.io/gh/huggingface/transformers/pull/3035?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Is good to merge for me after checking adapted tests in tokenization_utils.py @LysandreJik |
transformers | 3,034 | closed | fix several typos in Distil* readme | Hi, feel free to reject if you think these changes aren't necessary. Just saw a couple typos while reading the documentation and wanted to help 😄. The first change looks a bit hard to tell what changed, but it was 'superseeds' to 'supersedes'. | 02-26-2020 17:01:22 | 02-26-2020 17:01:22 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3034?src=pr&el=h1) Report
> Merging [#3034](https://codecov.io/gh/huggingface/transformers/pull/3034?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9df74b8bc42eedc496f7148b9370728054ca3b6a?src=pr&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3034?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3034 +/- ##
==========================================
+ Coverage 77.27% 77.29% +0.01%
==========================================
Files 98 98
Lines 16037 16037
==========================================
+ Hits 12393 12395 +2
+ Misses 3644 3642 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3034?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.54% <0%> (+0.32%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3034?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3034?src=pr&el=footer). Last update [9df74b8...1482c03](https://codecov.io/gh/huggingface/transformers/pull/3034?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,033 | closed | Fix attn mask gpt2 when using past | - fixed issue #3031
- updated doc-string for GPT2
- added two tests for GPT2 that I think are important:
- check that when using past as an input to speed up decoding, the results are equivalent to not using past.
- check that when using past and attn_mask as an input to speed up decoding, the results are equivalent to not using past, where an input_id slice that is masked by the attn_mask was corrupted. | 02-26-2020 16:32:38 | 02-26-2020 16:32:38 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3033?src=pr&el=h1) Report
> Merging [#3033](https://codecov.io/gh/huggingface/transformers/pull/3033?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bb7c46852051f7d031dd4be0240c9c9db82f6ed9?src=pr&el=desc) will **decrease** coverage by `1.03%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3033?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3033 +/- ##
==========================================
- Coverage 77.26% 76.23% -1.04%
==========================================
Files 98 98
Lines 16047 16048 +1
==========================================
- Hits 12399 12234 -165
- Misses 3648 3814 +166
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3033?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.16% <100%> (+0.04%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.22% <0%> (-0.17%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3033?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3033?src=pr&el=footer). Last update [bb7c468...0909d8e](https://codecov.io/gh/huggingface/transformers/pull/3033?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,032 | closed | Loading custom weights for BERT in pytorch | Is it possible to load custom pre-trained weights for BERT, other than the one you provide ? | 02-26-2020 15:21:09 | 02-26-2020 15:21:09 | Yes. Have a look at the documentation for [`from_pretrained`](https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.from_pretrained), particularly the `pretrained_model_name_or_path ` argument.<|||||>Yes sorry I looked after I asked and found everything I needed it |
transformers | 3,031 | closed | Forward pass with GPT2 using both past and attention_mask as an input leads to dimension error | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT2
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ x ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ x ] my own task or dataset: (give details below)
## To reproduce run the following code:
```
from transformers import GPT2LMHeadModel
import torch
model = GPT2LMHeadModel.from_pretrained('gpt2')
input_ids = torch.tensor([8, 8, 0, 50256, 50256]).unsqueeze(0)
attn_mask = torch.tensor([1, 1, 1, 0, 0]).unsqueeze(0)
# first step there is no past so lets get it from the model and append new embedding id to inputs and extend the attn_mask
logits_output, past = model(input_ids, attention_mask=attn_mask)
next_token = torch.argmax(logits_output[:, -1, :]).unsqueeze(0)
input_ids = torch.cat([input_ids, next_token.unsqueeze(-1)], dim=-1)
attn_mask = torch.cat([attn_mask, torch.ones((attn_mask.shape[0], 1)).long()], dim=1)
# now we have a past so we can use it to speed up training
model_inputs = model.prepare_inputs_for_generation(input_ids=input_ids, past=past)
logits_output, past = model(**model_inputs, attention_mask=attn_mask) # this leads to an error which it should not
```
## Expected behavior
No error should be thrown and the forward pass to get `logits_output` and `past` should work
## Environment info
- `transformers` version: 2.5.1
- Platform: Linux-5.3.0-40-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 02-26-2020 14:39:57 | 02-26-2020 14:39:57 | FYI - I created a stackoverflow question about this here: https://stackoverflow.com/questions/60459292/using-past-and-attention-mask-at-the-same-time-for-gpt2<|||||>@Damiox - answered you stackoverflow question :-) |
transformers | 3,030 | closed | run_tf_glue with AdamW optimizer and distributed training | # ❓ Questions & Help
Can you please update the run_tf_glue example (TensorFlow version) so that it includes the Adam optimizer with warm-up and weight decay? It would also be great to show how to run the code on multiple GPUs and on TPU.
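In the meantime, a rough sketch of what this could look like; `create_optimizer` is the TF helper shipped with the library (AdamW with linear warm-up and decay), but its exact signature and return value have changed between versions, and the step counts, learning rate and model name below are placeholders:
```python
import tensorflow as tf
from transformers import TFBertForSequenceClassification, create_optimizer

strategy = tf.distribute.MirroredStrategy()      # multi-GPU; use a TPUStrategy on TPU
num_train_steps = 10000                          # placeholder

with strategy.scope():
    model = TFBertForSequenceClassification.from_pretrained("bert-base-cased")
    # recent releases return the optimizer together with its LR schedule
    optimizer, lr_schedule = create_optimizer(
        init_lr=3e-5,
        num_train_steps=num_train_steps,
        num_warmup_steps=int(0.1 * num_train_steps),
        weight_decay_rate=0.01,
    )
    model.compile(
        optimizer=optimizer,
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
```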
| 02-26-2020 14:37:22 | 02-26-2020 14:37:22 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,029 | closed | How to add data to pretrained model. | I want to add wiki data to a pretrained model,
i.e., add Korean wiki data on top of the XLM-RoBERTa model weights.
I think the files in /transformers/templates/adding_a_new_model do this.
Is that right?
Please let me know the way to add this data.
| 02-26-2020 12:01:15 | 02-26-2020 12:01:15 | Have a look at the language modeling example:
https://huggingface.co/transformers/examples.html#language-model-training
https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py<|||||>Thank you for your answering.
I did that^^<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,028 | closed | AttributeError: 'Tensor' object has no attribute 'size' | This is the model that I have defined:
```
input_layer = keras.layers.Input(shape = (attention_mask.shape[1],), dtype='int64')
bert = DistilBertModel.from_pretrained("distilbert-base-cased")(input_layer)
bert = bert[0][:,0,:]
# bert = keras.layers.Dense(units=10, activation='relu')(bert)
classifier = keras.layers.Dense(units=1, activation='sigmoid')(bert)
model = keras.models.Model(inputs=input_layer, outputs=classifier)
model.summary()
```
This is the error I am getting.
```
AttributeError Traceback (most recent call last)
<ipython-input-12-6d7e88036056> in <module>()
1 input_layer = keras.layers.Input(shape = (attention_mask.shape[1],), dtype='int64')
----> 2 bert = DistilBertModel.from_pretrained("distilbert-base-cased")(input_layer)
3 bert = bert[0][:,0,:]
4 # bert = keras.layers.Dense(units=10, activation='relu')(bert)
5 classifier = keras.layers.Dense(units=1, activation='sigmoid')(bert)
1 frames
/usr/local/lib/python3.6/dist-packages/transformers/modeling_distilbert.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds)
449 raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
450 elif input_ids is not None:
--> 451 input_shape = input_ids.size()
452 elif inputs_embeds is not None:
453 input_shape = inputs_embeds.size()[:-1]
AttributeError: 'Tensor' object has no attribute 'size'
```
The same code works fine when distilbert is replaced with bert. What to do in this case. | 02-26-2020 10:44:28 | 02-26-2020 10:44:28 | You are using TF. Did you mean to use `TFDistilBertModel` instead of `DistilBertModel`?<|||||>@BramVanroy got it. Thanks.<|||||>I wonder if it would be possible to know in advance whether a model supports TF or pyTorch, or both... and throw an exception accordingly? |
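A minimal sketch of the fix implied above, reusing `input_layer` from the snippet in the question; the PyTorch class is swapped for the TF one so that Keras tensors are accepted:
```python
from transformers import TFDistilBertModel

distilbert = TFDistilBertModel.from_pretrained("distilbert-base-cased")
bert = distilbert(input_layer)[0][:, 0, :]   # last hidden state of the first token
```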
transformers | 3,027 | closed | This class and module cannot be found | I want to use BertForNextSentencePrediction in ' modeling_bert.py' line 1020 ,but when I run Examples in line 1065, error happens
File "E:\work\pycharm\transformers-master\src\transformers\tokenization_bert.py", line 24, in <module>
from tokenizers import BertWordPieceTokenizer
ImportError: No module named 'tokenizers'
where I can find this module tokenizers,
thanks !!
| 02-26-2020 06:42:31 | 02-26-2020 06:42:31 | You can do `pip install tokenizers` or [install from source](https://github.com/huggingface/tokenizers).<|||||>> You can do `pip install tokenizers` or [install from source](https://github.com/huggingface/tokenizers).
OK, I've got it ,thank you<|||||>> pip install tokenizers
```bash
Requirement already satisfied: tokenizers in ./anaconda3/envs/dnabert/lib/python3.6/site-packages (0.8.1rc2)
```
I did it,but it also has the error<|||||>in my another computer, the env displays its version is 0.5.0, so why give me 0.8.1rc2? sometimes i want to say, the doc is hard to understand, when understanded, the errors occurs one by one because of some small tips, is it my fault? maybe, but why not write it straightly? |
transformers | 3,026 | closed | language_modeling.py doesn't continue from last global step | # 🐛 Bug
Hey, The checkpoint's suffix is the last optimization step rather than the last global step (I'm working with accumulation steps)
## Information
The problem arises when using:
* [ ] the official example scripts: language_modeling.py
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: language_modeling.py
## To reproduce
Steps to reproduce the behavior:
1. run a model on language_modeling.py script with an accumulation step > 0
2. save a checkpoint after x > 0 steps and exit
3. try to continue training and it will continue from the last optimization step rather than global step
```
roberta-base-openai-detector, roberta-large-openai-detector). Assuming 'tmlm_roberta_output/checkpoint-480' is a path, a model identifier, or url to a directory containing tokenizer files.
02/26/2020 07:39:49 - INFO - transformers.tokenization_utils - Didn't find file tmlm_roberta_output/checkpoint-480/added_tokens.json. We won't load it.
02/26/2020 07:39:49 - INFO - transformers.tokenization_utils - loading file tmlm_roberta_output/checkpoint-480/vocab.json
02/26/2020 07:39:49 - INFO - transformers.tokenization_utils - loading file tmlm_roberta_output/checkpoint-480/merges.txt
02/26/2020 07:39:49 - INFO - transformers.tokenization_utils - loading file None
02/26/2020 07:39:49 - INFO - transformers.tokenization_utils - loading file tmlm_roberta_output/checkpoint-480/special_tokens_map.json
02/26/2020 07:39:49 - INFO - transformers.tokenization_utils - loading file tmlm_roberta_output/checkpoint-480/tokenizer_config.json
02/26/2020 07:39:49 - INFO - transformers.modeling_utils - loading weights file tmlm_roberta_output/checkpoint-480/pytorch_model.bin
init tud head
02/26/2020 07:40:08 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=512, cache_dir=None, config_name=None, device=device(type='cuda'), do_eval=True, do_train=True, eval_all_checkpoints=False, eval_data_file='/specific/netapp5_2/gamir/advml19/yuvalk/project/transformers/examples/lm_data/wiki.test.raw.time_filter.normalized', evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=64, learning_rate=5e-05, line_by_line=False, local_rank=-1, logging_steps=500, max_grad_norm=1.0, max_steps=-1, mlm=True, mlm_probability=0.15, model_name_or_path='tmlm_roberta_output/checkpoint-480', model_type='roberta', n_gpu=4, no_cuda=False, num_train_epochs=1.0, output_dir='tmlm_roberta_output', overwrite_cache=False, overwrite_output_dir=True, per_gpu_eval_batch_size=4, per_gpu_train_batch_size=1, save_steps=80, save_total_limit=None, seed=42, server_ip='', server_port='', should_continue=True, tokenizer_name=None, train_data_file='/specific/netapp5_2/gamir/advml19/yuvalk/project/transformers/examples/lm_data/wiki.train.raw.time_filter.normalized', warmup_steps=0, weight_decay=0.0)
02/26/2020 07:40:08 - INFO - __main__ - Loading features from cached file /specific/netapp5_2/gamir/advml19/yuvalk/project/transformers/examples/lm_data/roberta_cached_lm_510_wiki.train.raw.time_filter.normalized
02/26/2020 07:40:16 - INFO - __main__ - ***** Running training *****
02/26/2020 07:40:16 - INFO - __main__ - Num examples = 163046
02/26/2020 07:40:16 - INFO - __main__ - Num Epochs = 1
02/26/2020 07:40:16 - INFO - __main__ - Instantaneous batch size per GPU = 1
02/26/2020 07:40:16 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 256
02/26/2020 07:40:16 - INFO - __main__ - Gradient Accumulation steps = 64
02/26/2020 07:40:16 - INFO - __main__ - Total optimization steps = 636
02/26/2020 07:40:16 - INFO - __main__ - Continuing training from checkpoint, will skip to saved global_step
02/26/2020 07:40:16 - INFO - __main__ - Continuing training from epoch 0
02/26/2020 07:40:16 - INFO - __main__ - Continuing training from global step 480
02/26/2020 07:40:16 - INFO - __main__ - Will skip the first 480 steps in the first epoch
```
## Expected behavior
I expect it to run from the last global step. I.e. optimization steps * gradient accumulation steps. Note that optimization steps == checkpoint suffix.
## I made the following changes and it seems to work ok:
Former code:
```
global_step = int(checkpoint_suffix)
epochs_trained = global_step // (len(train_dataloader) // args.gradient_accumulation_steps)
steps_trained_in_current_epoch = global_step % (len(train_dataloader) // args.gradient_accumulation_steps)
```
New code:
```
global_step = int(checkpoint_suffix) * args.gradient_accumulation_steps
epochs_trained = global_step // len(train_dataloader)
steps_trained_in_current_epoch = global_step % len(train_dataloader)
```
- `transformers` version: latest
- Platform:
- Python version: 3.7
- PyTorch version (GPU?): GPU
- Using GPU in script?: yes
| 02-26-2020 06:12:59 | 02-26-2020 06:12:59 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,025 | closed | Why do I run example/run_ner.py with no output | # ❓ Questions & Help
## Details
### Here is the log from my run. Any ideas? Please help me.

**A link to original question on Stack Overflow**: | 02-26-2020 02:45:35 | 02-26-2020 02:45:35 | Please don't post a screenshot. Copy-and-paste your code or output instead.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@Vvegetables , does this problem still exist?<|||||>it's fine,thanks
|
transformers | 3,024 | closed | Fix (non-slow) tests on GPU (torch) | Fixes ~29 failing tests
Also cf previous commit from december: https://github.com/huggingface/transformers/pull/2055/commits/61978c1dd3f340a545e74537c3dae41a4514e867
(not sure why T5 and the common methods were not failing back then) | 02-26-2020 01:26:29 | 02-26-2020 01:26:29 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3024?src=pr&el=h1) Report
> Merging [#3024](https://codecov.io/gh/huggingface/transformers/pull/3024?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bb7c46852051f7d031dd4be0240c9c9db82f6ed9?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3024?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3024 +/- ##
=======================================
Coverage 77.26% 77.26%
=======================================
Files 98 98
Lines 16047 16047
=======================================
Hits 12399 12399
Misses 3648 3648
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3024?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `84.58% <100%> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3024?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3024?src=pr&el=footer). Last update [bb7c468...9c14bc7](https://codecov.io/gh/huggingface/transformers/pull/3024?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,023 | closed | BART : host `bart-large-cnn` | # 🌟 New model addition
BART pretrained model `bart-large` is currently provided, as well as fine-tuned BART model `bart-large-mnli`.
**How about [`bart-large-cnn`](https://github.com/pytorch/fairseq/tree/master/examples/bart#pre-trained-models) ?** | 02-26-2020 01:20:25 | 02-26-2020 01:20:25 | IIRC @sshleifer is working on getting summarization working with that model. |
transformers | 3,022 | closed | Make format consistent with that of PreTrainedTokenizer | 02-25-2020 22:57:50 | 02-25-2020 22:57:50 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3022?src=pr&el=h1) Report
> :exclamation: No coverage uploaded for pull request base (`master@c913eb9`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3022?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3022 +/- ##
=========================================
Coverage ? 77.26%
=========================================
Files ? 98
Lines ? 16040
Branches ? 0
=========================================
Hits ? 12393
Misses ? 3647
Partials ? 0
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3022?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3022/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.57% <100%> (ø)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3022?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3022?src=pr&el=footer). Last update [c913eb9...34be93e](https://codecov.io/gh/huggingface/transformers/pull/3022?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
|
transformers | 3,021 | closed | Can GPT2LMHeadModel do batch inference with variable sentence lengths? | Given GPT2 tokenizer do not have an internal pad_token_id, how do I pad sentences and do batch inference using GPT2LMHeadModel?
Specifically my code as:
```
prompt_text = [
'in this paper we',
'we are trying to',
'The purpose of this workshop is to check whether we can', ]
tokens = [tokenizer.convert_tokens_to_ids(tokenizer.tokenize(x, add_prefix_space=True)) for x in prompt_text]
inputs = pad_sequence([torch.LongTensor(x) for x in tokens], batch_first = True, padding_value=tokenizer.eos_token_id)
outputs, past = model(input_ids=inputs, attention_mask=None)
```
This will return non-relevant predictions since GPT2 will consider the eos_tokens and start a new sentence in the batch.
Can anyone please share sample codes that using GPT2LMHeadModel to do batch inference with various sentence lengths?
Thanks!
| 02-25-2020 22:05:02 | 02-25-2020 22:05:02 | It seems possible to by-pass this issue by setting appropriate `attention_mask` so that no tokens will attend the positions that are supposed to be paddings, this way you can use whatever token as padding. I'm working on this issue too, will try to follow up if it works out.<|||||>I tried a rough version, basically adding attention mask to the padding positions and keep updating this mask as generation grows. One thing worth noting is that in the first step instead of extract the -1-th positions output for each sample, we need to keep track of the real prompt ending position, otherwise sometimes the output from padding positions will be extracted and produce random results.
Code snippet:
```
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
prompt_text = [
'in this paper we',
'we are trying to',
'The purpose of this workshop is to check whether we can', ]
batch_size = len(prompt_text)
max_length = 30
eos_token_id = tokenizer.eos_token_id
model = model.cuda()
token_ids = [tokenizer.encode(s, add_special_tokens=False) for s in prompt_text]
prompt_lengths = [len(s) for s in token_ids]
max_prompt_len = max(prompt_lengths)
# use 0 as padding id, shouldn't matter
padded_tokens = [tok_ids + [0] * (max_prompt_len - len(tok_ids)) for tok_ids in token_ids]
input_ids = torch.LongTensor(padded_tokens).cuda()
attn_mask = torch.zeros(input_ids.shape).long().cuda()
for ix, tok_ids in enumerate(token_ids):
attn_mask[ix][:len(tok_ids)] = 1
unfinished_sents = input_ids.new(batch_size).fill_(1)
past = None
cur_len = input_ids.shape[1]
def post_processing(input_ids, attn_mask):
"""Remove padding tokens in the middle of the sequence."""
input_ids_proc = []
for ix, seq in enumerate(input_ids):
input_ids_proc.append([tok_id for tok_id, mask in zip(seq, attn_mask[ix]) if mask != 0])
return input_ids_proc
input_lengths_index = torch.tensor([x - 1 for x in prompt_lengths]).cuda()
input_lengths_index = input_lengths_index.view(-1, 1).repeat(1, 50257).unsqueeze(1)
while cur_len < max_length:
model_inputs = model.prepare_inputs_for_generation(input_ids, past=past, attention_mask=attn_mask)
outputs = model(**model_inputs)
if cur_len == max_prompt_len:
# at first step we can't directly extract the -1-th position's
# prediction for next word, since for some samples the -1-th
# token is PAD. Instead we keep track of the real prompt ending.
next_token_logits = outputs[0].gather(1, input_lengths_index).squeeze(1)
else:
next_token_logits = outputs[0][:, -1, :]
past = outputs[1]
next_token = torch.argmax(next_token_logits, dim=-1)
tokens_to_add = next_token * unfinished_sents + 0 * (1 - unfinished_sents)
input_ids = torch.cat([input_ids, tokens_to_add.unsqueeze(-1)], dim=-1)
attn_mask = torch.cat([attn_mask, torch.ones((batch_size, 1)).long().cuda()], dim=1)
unfinished_sents.mul_(tokens_to_add.ne(eos_token_id).long())
cur_len += 1
if unfinished_sents.max() == 0:
break
input_ids = post_processing(input_ids, attn_mask)
for item in input_ids:
print(tokenizer.decode(item))
```
Also a minor change to `src/transformers/modeling_gpt2.py`:
line 422: `attention_mask = attention_mask.view(-1, input_shape[-1])`
change to `attention_mask = attention_mask.view(input_shape[0], -1)`
(not sure if this change will break other things)
Output:
`in this paper we have a very good idea of how to use the data to make predictions about the future. We`
`we are trying to get the best possible deal for the best price. We are not going to be able to offer`
`The purpose of this workshop is to check whether we can make a difference in the lives of people who are struggling with mental illness.`
<|||||>@schizism Concerning LM inference on batches of different lengths is actually a problem we are currently looking at. Ideally, you should be able to simple put your input_ids (and an attention_mask) to model.generate() to make it work.
@XinyuHua thanks for your great contribution to make LM inference work on batches having different lengths. Also it seems like you found a bug, when using the `past` and `attention_mask` variables as an input in GPT2. That's great! I will open a new issue for that and take a look :-)
Below, I am adding a simplified code snippet using simpler tokenization functions.
In this code, no `past` variable is used related to the bug found by @XinyuHua.
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2', pad_token='<PAD>')
# IMPORTANT: Note that setting the <PAD> token like this itn the constructor gives the
# pad_token the pad_token_id = 50256, which normally belongs to <BOS> token_ids in GPT2
# This is a very ugly way that works at the moment of setting the pad_token_id to the <BOS> token that is already included in the vocab size. This will be updated in the coming weeks! # noqa: E501
prompt_text = [
'in this paper we',
'we are trying to',
'The purpose of this workshop is to check whether we can']
# encode plus batch handles multiple batches and automatically creates attention_masks
seq_len = 11
encodings_dict = tokenizer.batch_encode_plus(prompt_text, max_length=seq_len, pad_to_max_length=True)
# ideally we should be able to just input the following two variables to the function model.generate() ... => to be implemented soon! # noqa: E501
input_ids = torch.tensor(encodings_dict['input_ids'])
attn_mask = torch.tensor(encodings_dict['attention_mask'])
num_tokens_to_produce = 20
pad_token_id = tokenizer.pad_token_id
eos_token_id = tokenizer.eos_token_id
eos_not_in_sents = torch.ones(input_ids.shape[0]).long()
# we need to get the token ids of the last non-padded value
last_non_masked_idx = torch.sum(attn_mask, dim=1) - 1
start_idx = inp_idx = (last_non_masked_idx).view(-1, 1).repeat(1, tokenizer.vocab_size).unsqueeze(1)
past = None
# get correct position ids
position_ids = torch.tensor([list(range(seq_len)) for i in range(input_ids.shape[0])])
for i, position_ids_slice in enumerate(position_ids):
position_ids_slice[last_non_masked_idx[i]:] = position_ids_slice[last_non_masked_idx[i]]
for step in range(num_tokens_to_produce):
outputs = model(input_ids, attention_mask=attn_mask, position_ids=position_ids)
# in the first decoding step, we want to use the 'real' last position for each sentence
if step == 0:
next_token_logits = outputs[0].gather(1, start_idx).squeeze(1)
else:
next_token_logits = outputs[0][:, -1, :]
next_tokens = torch.argmax(next_token_logits, dim=-1)
# this updates which sentences have not seen an <EOS> token so far
# if one <EOS> token was seen the sentence is finished
eos_not_in_sents.mul_(next_tokens.ne(eos_token_id).long())
# either append a padding token here if <EOS> has been seen or append next token
tokens_to_add = next_tokens * (eos_not_in_sents) + pad_token_id * (1 - eos_not_in_sents)
# Update input_ids, attn_mask and position_ids
input_ids = torch.cat([input_ids, tokens_to_add.unsqueeze(-1)], dim=-1)
attn_mask = torch.cat([attn_mask, torch.ones((attn_mask.shape[0], 1)).long()], dim=1)
position_ids = torch.cat([position_ids, (position_ids[:, -1] + 1).unsqueeze(-1)], dim=1)
[print(tokenizer.decode(output, skip_special_tokens=True)) for output in input_ids]
```
<|||||>Thanks for this much cleaned version @patrickvonplaten! Just one quick issue, I forgot to modify the position ids for each sample, so the padding will add up to the position ids and future tokens will get wrong position ids. This might cause issues when the prompt lengths in a batch are very different.<|||||>Fixed the issue #3033 regarding the attention mask with your proposed solution @XinyuHua - thanks! <|||||>> Thanks for this much cleaned version @patrickvonplaten! Just one quick issue, I forgot to modify the position ids for each sample, so the padding will add up to the position ids and future tokens will get wrong position ids. This might cause issues when the prompt lengths in a batch are very different.
added the correct position ids. Feel free to review and comment! <|||||>Thank you @XinyuHua @patrickvonplaten! These are very helpful!<|||||>@patrickvonplaten It looks like `tokens_to_add` in your script is unused, should that be used in place of `next_tokens` in the line `input_ids = torch.cat([input_ids, next_tokens.unsqueeze(-1)], dim=-1)`?<|||||>Uups! Yeah definitely - thanks a lot for pointing this out. Edited the script :-) <|||||>Hi, padding still seems to be an issue with LMHeads in case of just perplexity calculation (and not generation). I am trying to run examples/run_language_modelling.py and having a hard time using GPT2LMHeadModel and same is the case with transformer-XL. I am running it in just evaluation mode (by setting --do_eval).
That example code uses training.py and data/data_collator.py, which throws the following error while batching sentences:
"ValueError: You are attempting to pad samples but the tokenizer you are using (TransfoXLTokenizer) does not have one."
Any idea where I could be going wrong?
Thanks<|||||>@bajajahsaas Are you using the `--line_by_line` flag? Can you post the exact command you're running?<|||||>@julien-c I just ran into the exact same issue and I am indeed using the `--line_by_line` flag. The exact command I'm using:
```
python run_language_modeling.py \
--output_dir='/content/drive/My Drive/finetuned_models/run1' \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--save_total_limit=5 \
--num_train_epochs=1.0 \
--overwrite_output_dir \
--do_train \
--evaluate_during_training \
--logging_steps=1000 \
--save_steps=1000 \
--train_data_file=/content/train.txt \
--line_by_line \
--do_eval \
--eval_data_file=/content/valid.txt \
--per_gpu_train_batch_size=2 \
--per_gpu_eval_batch_size=2 \
```
If I take the `--line_by_line` flag out, the command executes fine.<|||||>HI @julien-c, thanks for checking this. I am using `--line_by_line` and my exact command is as below:
`python run_lm.py --model_type gpt2 --model_name_or_path gpt2 --do_eval --eval_data_file ../../data/wikitext-103/valid.txt --line_by_line --output_dir logslm `
I am just running inference on wikitext-103 dataset, and both xlnet and transformer-xl are throwing this error. However, since the error is caused by: https://github.com/huggingface/transformers/blob/4e817ff41885063e08bb3bcd63e5adfd835b9911/src/transformers/data/data_collator.py#L106
I tried a simple workaround using: `tokenizer.pad_token = "<pad>"`. I am not sure if this is a correct fix and even perplexity scores are not matching on standard datasets. Note: I am not doing any training, just perplexity calculation.<|||||>Yes GPT2 is not compatible with the LineByLineDataset, because it doesn't have a padding token out of the box.
Feel free to propose an update to the error's wording if you think of a clearer way to express that.<|||||>Sure, thanks for looking into this. Moreover, how shall we use this example code (run_language_modelling.py) for such models? I tried removing `--line_by_line` for wikitext-103 dataset, but that screws up the data processing in my opinion.<|||||>This is not a real fix, more of a hack, but if you change the code in `transformers.data.data_collator.DataCollatorForLanguageModelling._tensorize_batch`
from:
```
if self.tokenizer._pad_token is None:
raise ValueError(...)
```
to:
```
if self.tokenizer._pad_token is None:
return pad_sequence(examples, batch_first=True)
```
The language modelling script will run fine with the --line_by_line. In practice, it means it does padding with zeros, which is the default value for padding_value.
This "error" was introduced a week ago with the commit to master dd9d483d03962fea127f59661f3ae6156e7a91d2 by @julien-c that refactored the LM train script. I was using the LM script with the same data before that and it was working.
I am not sure how "wrong" this is, but I'm using a dataset of relatively short texts (up to 400 words each, often shorter), and I'm getting decent results. I get a bunch of "!" (the token 0) at the end of the generation sometimes, but other than that, it looks good.
I tried an alternative of separating the short texts with <|endoftext|> tokens, and training without the --line_by_line option, but the results I get in generation are qualitatively much worse.<|||||>Hi @jorgemcgomes, thanks for checking. However, check this [issue](https://github.com/huggingface/transformers/issues/586), it seems tokenizers have 0 index pointed to some vocab token.<|||||>How about using left side padding for GPT-2, and use attention mask to avoid attending to those padded words? Of course, position_ids shall be set properly to avoid impacting position embeddings. This approach could work with past state since padding word will not be in the middle after appending generated word.
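A minimal sketch of the position-id bookkeeping that left-padding suggestion implies (my own illustration, not code from this thread): derive the position ids from the attention mask so the padding on the left does not shift the real tokens.
```python
import torch

attention_mask = torch.tensor([[0, 0, 1, 1, 1],
                               [1, 1, 1, 1, 1]])       # two left-padded prompts
position_ids = attention_mask.long().cumsum(-1) - 1     # real tokens get 0, 1, 2, ...
position_ids.masked_fill_(attention_mask == 0, 0)       # dummy id for padded slots (they are masked out anyway)
# position_ids -> [[0, 0, 0, 1, 2], [0, 1, 2, 3, 4]]
```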
<|||||>> How about using left side padding for GPT-2, and use attention mask to avoid attending to those padded words? Of course, position_ids shall be set properly to avoid impacting position embeddings. This approach could work with past state since padding word will not be in the middle after appending generated word.
@tianleiwu this worked for me! Saved me HOURS in compute time, thank you!
```python
tokenizer.padding_side = "left"
encoded_prompt_dict = tokenizer.batch_encode_plus(input, return_tensors="pt", pad_to_max_length=True)
encoded_prompt = encoded_prompt_dict['input_ids'].to(args.device)
encoded_mask = encoded_prompt_dict['attention_mask'].to(args.device)
```<|||||>> @schizism Concerning LM inference on batches of different lengths is actually a problem we are currently looking at. Ideally, you should be able to simple put your input_ids (and an attention_mask) to model.generate() to make it work.
>
> @XinyuHua thanks for your great contribution to make LM inference work on batches having different lengths. Also it seems like you found a bug, when using the `past` and `attention_mask` variables as an input in GPT2. That's great! I will open a new issue for that and take a look :-)
>
> Below, I am adding a simplified code snippet using simpler tokenization functions.
> In this code, no `past` variable is used related to the bug found by @XinyuHua.
>
> ```
> from transformers import GPT2LMHeadModel, GPT2Tokenizer
> import torch
>
> model = GPT2LMHeadModel.from_pretrained('gpt2')
> tokenizer = GPT2Tokenizer.from_pretrained('gpt2', pad_token='<PAD>')
> # IMPORTANT: Note that setting the <PAD> token like this itn the constructor gives the
> # pad_token the pad_token_id = 50256, which normally belongs to <BOS> token_ids in GPT2
> # This is a very ugly way that works at the moment of setting the pad_token_id to the <BOS> token that is already included in the vocab size. This will be updated in the coming weeks! # noqa: E501
>
> prompt_text = [
> 'in this paper we',
> 'we are trying to',
> 'The purpose of this workshop is to check whether we can']
>
> # encode plus batch handles multiple batches and automatically creates attention_masks
> seq_len = 11
> encodings_dict = tokenizer.batch_encode_plus(prompt_text, max_length=seq_len, pad_to_max_length=True)
>
> # ideally we should be able to just input the following two variables to the function model.generate() ... => to be implemented soon! # noqa: E501
> input_ids = torch.tensor(encodings_dict['input_ids'])
> attn_mask = torch.tensor(encodings_dict['attention_mask'])
>
> num_tokens_to_produce = 20
> pad_token_id = tokenizer.pad_token_id
> eos_token_id = tokenizer.eos_token_id
> eos_not_in_sents = torch.ones(input_ids.shape[0]).long()
>
> # we need to get the token ids of the last non-padded value
> last_non_masked_idx = torch.sum(attn_mask, dim=1) - 1
> start_idx = inp_idx = (last_non_masked_idx).view(-1, 1).repeat(1, tokenizer.vocab_size).unsqueeze(1)
> past = None
>
> # get correct position ids
> position_ids = torch.tensor([list(range(seq_len)) for i in range(input_ids.shape[0])])
> for i, position_ids_slice in enumerate(position_ids):
> position_ids_slice[last_non_masked_idx[i]:] = position_ids_slice[last_non_masked_idx[i]]
>
> for step in range(num_tokens_to_produce):
> outputs = model(input_ids, attention_mask=attn_mask, position_ids=position_ids)
>
> # in the first decoding step, we want to use the 'real' last position for each sentence
> if step == 0:
> next_token_logits = outputs[0].gather(1, start_idx).squeeze(1)
> else:
> next_token_logits = outputs[0][:, -1, :]
>
> next_tokens = torch.argmax(next_token_logits, dim=-1)
>
> # this updates which sentences have not seen an <EOS> token so far
> # if one <EOS> token was seen the sentence is finished
> eos_not_in_sents.mul_(next_tokens.ne(eos_token_id).long())
>
> # either append a padding token here if <EOS> has been seen or append next token
> tokens_to_add = next_tokens * (eos_not_in_sents) + pad_token_id * (1 - eos_not_in_sents)
>
> # Update input_ids, attn_mask and position_ids
> input_ids = torch.cat([input_ids, tokens_to_add.unsqueeze(-1)], dim=-1)
> attn_mask = torch.cat([attn_mask, torch.ones((attn_mask.shape[0], 1)).long()], dim=1)
> position_ids = torch.cat([position_ids, (position_ids[:, -1] + 1).unsqueeze(-1)], dim=1)
>
> [print(tokenizer.decode(output, skip_special_tokens=True)) for output in input_ids]
> ```
@patrickvonplaten Thanks for sharing this, I wonder if inputting `input_ids` and `attn_mask` to `model.generate` is possible now? is this feature available now?
I've tried it and I think there should be some concerns regarding positional_embedding since I don't get meaningful result.
On the other hand when I try setting `tokenizer.padding_side = "left"` as suggested/tried by @tianleiwu @AADeLucia, I get the same output for different hyper parameters like k_sampling, p_sampling, length, ...
@AADeLucia @tianleiwu have you been successful on this? did you take any action regarding position_ids?
Would appreciate any pointer.
<|||||>@fabrahman I realize I have huggingface version 2.8 installed, which was not working with `generate()`. I used the left-side padding with p-sampling and it worked for me (i.e. the outputs were reasonable for the settings and I was not getting the same issues as when I did not use left-side padding). I took no action regarding position_ids and I only provided the attention mask. Maybe the newest version of huggingface implemented `generate()` correctly?
What do you mean you get the same output? Can you post your code?<|||||>@AADeLucia thanks for you quick reply. When you say it was not working with `generate()`, does that mean you got errors when passing `encoded_prompt ` and 'encoded_mask` to generate function?
Actually, I resolved same outputs with different decoding issue, but now I get similar outputs if I sample 5 times `(num_return_sequences=5)`. That is the returning sequences are the same:
This is the code I am trying as an example:
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2', pad_token='<PAD>')
prompt_text = [
'in this paper we',
'we are trying to',
'The purpose of this workshop is to check whether we can']
# encode plus batch handles multiple batches and automatically creates attention_masks
seq_len = 11
tokenizer.padding_side = "left"
encodings_dict = tokenizer.batch_encode_plus(prompt_text, max_length=seq_len, pad_to_max_length=True)
input_ids = torch.tensor(encodings_dict['input_ids'])
attn_mask = torch.tensor(encodings_dict['attention_mask'])
outputs = model.generate(input_ids, attention_mask=attn_mask, do_sample=True, max_length=40, top_k=10, num_return_sequences=5)
outputs = [tokenizer.decode(output, skip_special_tokens=True) for output in outputs]
outputs = [text[:text.find(".")+1] for text in outputs if "." in text]
outputs
```
and here is the output results:
```
['in this paper we present a new approach to the problem of the "unconscious" and the "conscious" in the study of the unconscious.',
'in this paper we present a new approach to the problem of the "unconscious" and the "conscious" in the study of the unconscious.',
'in this paper we present a new approach to the problem of the "unconscious" and the "conscious" in the study of the unconscious.',
'in this paper we present a new approach to the problem of the "unconscious" and the "conscious" in the study of the unconscious.',
'in this paper we present a new approach to the problem of the "unconscious" and the "conscious" in the study of the unconscious.',
'we are trying to get a new version of the game to work on the PC.',
'we are trying to get a new version of the game to work on the PC.',
'we are trying to get a new version of the game to work on the PC.',
'we are trying to get a new version of the game to work on the PC.',
'we are trying to get a new version of the game to work on the PC.',
'The purpose of this workshop is to check whether we can make a difference in the lives of people who are struggling with mental illness.',
'The purpose of this workshop is to check whether we can make a difference in the lives of people who are struggling with mental illness.',
'The purpose of this workshop is to check whether we can make a difference in the lives of people who are struggling with mental illness.',
'The purpose of this workshop is to check whether we can make a difference in the lives of people who are struggling with mental illness.',
'The purpose of this workshop is to check whether we can make a difference in the lives of people who are struggling with mental illness.']
```<|||||>@faiazrahman By "not working" I mean I would pass in padded prompts and masks and the model would generate as if the mask was not there. So the padded prompts were like
```
<|startoftext|>Hello there<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>
```
(I padded with `<|endoftext|>` but it shouldn't matter as long as the attention mask is working)
And then the output would see the multiple `<|endoftext|>` padding tokens and start generating `<|startoftext|>` instead of continuing from the prompts!
Hmm I only generated 1 sequence for each input. But I just tried to generate multiple outputs as a test. I run into the same repetition issue as you with top-k but not with top-p. <|||||>I believe Alexandra meant to tag @fabrahman :) <|||||>> @faiazrahman By "not working" I mean I would pass in padded prompts and masks and the model would generate as if the mask was not there. So the padded prompts were like
>
> ```
> <|startoftext|>Hello there<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>
> ```
>
> (I padded with `<|endoftext|>` but it shouldn't matter as long as the attention mask is working)
> And then the output would see the multiple `<|endoftext|>` padding tokens and start generating `<|startoftext|>` instead of continuing from the prompts!
>
> Hmm I only generated 1 sequence for each input. But I just tried to generate multiple outputs as a test. I run into the same repetition issue as you with top-k but not with top-p.
@AADeLucia I actually found the issue. It is because I am passing both `top_p=0` and `top_k=10`. When I removed top_p in case of topk_sampling the problem resolved. I updated my code snippet.
BTW my transformer version is `2.11.0` in case you wanted to try.
@patrickvonplaten Would you please confirm if [this](https://github.com/huggingface/transformers/issues/3021#issuecomment-669511291) is the right approach and doesn't crash anything?<|||||>@fabrahman,
I did not use generate() method but batch inference works for me like the following way:
(1) Get input_ids and attention_mask from tokenizer.batch_encode_plus directly. The padding strategy does not matter.
```
position_ids = (attention_mask.long().cumsum(-1) - 1)
position_ids.masked_fill_(position_ids < 0, 0)
past = None
```
(2) Use model to do inference and get outputs including past. For example, we can construct new inputs like:
* update past tensor from the outputs
* input_ids is the generated tokens with shape (batch_size, 1)
```
position_ids = (position_ids[:,-1] + 1).reshape(batch_size,1)
attention_mask = torch.cat([attention_mask, torch.ones([self.batch_size, 1]).type_as(attention_mask)], 1).to(device)
```
Loop this step until exit condition is satisfied.
I have a [notebook](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/notebooks/Inference_GPT2_with_OnnxRuntime_on_CPU.ipynb) shows example of batch generation<|||||>Sorry for the wrong tag! And @fabrahman , glad you found the bug!<|||||>For GPT2LMHeadModel, I think we can do this:
```python
def prepare_inputs_for_generation(self, input_ids, past=None, **kwargs):
# only last token for inputs_ids if past is defined in kwargs
if past:
input_ids = input_ids[:, -1].unsqueeze(-1)
attention_mask = kwargs.get("attention_mask", None)
if attention_mask is not None:
position_ids = (attention_mask.long().cumsum(-1) - 1)
position_ids.masked_fill_(attention_mask==0, 0) # can be filled with anything >= 0
if past:
position_ids = position_ids[:, -1].unsqueeze(-1)
else:
position_ids = None
return {
"input_ids": input_ids,
"past_key_values": past,
"use_cache": kwargs.get("use_cache"),
"position_ids": position_ids,
"attention_mask": attention_mask, # I forgot to add this line and it took me hours debugging.
}
```
here:
https://github.com/huggingface/transformers/blob/4bd7be9a4268221d2a0000c7e8033aaeb365c03b/src/transformers/modeling_gpt2.py#L665-L674
So we don't need to care about position ids in `generate()`, since it calls`prepare_inputs_for_generation`.
https://github.com/huggingface/transformers/blob/4bd7be9a4268221d2a0000c7e8033aaeb365c03b/src/transformers/generation_utils.py#L534-L536
And in `examples/text-generation/run_generation.py`,
use `tokenizer.padding_side = "left"` to avoid this:
> ```python
> for step in range(num_tokens_to_produce):
> outputs = model(input_ids, attention_mask=attn_mask, position_ids=position_ids)
>
> # in the first decoding step, we want to use the 'real' last position for each sentence
> if step == 0:
> next_token_logits = outputs[0].gather(1, start_idx).squeeze(1)
> else:
> next_token_logits = outputs[0][:, -1, :]
>
> next_tokens = torch.argmax(next_token_logits, dim=-1)
> ```
and use `tokenizer.batch_encode_plus` to get attention_mask and pass to `generate()`.
@patrickvonplaten What do you think? I see you are working on this. 😃
<|||||>@cccntu have you tested this changes? do they work?
If it works, I think would be very useful to many folks out there (i.e., including me 😊). If so, maybe just send a pull request.<|||||>@andreamad8 I haven't tried it.😅 Maybe I will try it next week, idk. Feel free to try it yourself, and let me know the results! 😃 <|||||>I hope to be able to tackle the problem of batch generation soon. @cccntu your approach looks very interesting. Before we will add this feature, I think the generate function needs a bigger refactoring though...=> see https://github.com/huggingface/transformers/pull/6949<|||||>@andreamad8 I tried it and after some debugging, it seems to work! The code above is updated.
@patrickvonplaten My approach does not involve `generation_utils.py`, so I guess I will submit a pr later this week.
note: I only tested it using greedy search to verify that the results are the same for batch size = 1, 2.<|||||>Hi @cccntu . Do you have a pull request you could share with the above working?
Also, out of curiosity, did you find that being able to do batch inference sped things up a lot for you? I'm wondering what batch size you managed to fit in memory.
Thanks!<|||||>Hi @daphnei . My code above works, but I think I need a minimal example to show how it works, so I haven't submit a pr yet.
I am able to fit 16~64 lines (probably depends on the lengths) in 11 GB 2080ti (model_name="gpt2"), with at least 20x seed-up with batch size = 64, did not calculate the speed up specifically, just some rough calculation base on my memory.<|||||>Hi, thank you so much for your solution for batch inference in GPT-2 Model @XinyuHua @patrickvonplaten.
After reading your codes, I find the main idea of the solution is to use the `attention_mask` to ignore the `[PAD]` tokens during generation.
Before knowing your solutions in this issue, I also use a similar way to do the batch inference for myself. But I am not sure about it. Can you guys help me to check it?
The main idea of my solution is to pad in front of the sequences instead of the end of the sequences, for example:
```python
sentences = [
'I have a dog',
'My dog is very cute and good looking',
'A good boy'
]
tokens_after_padding = [
[0, 0, 0, 0, 1045, 2031, 1037, 3899],
[2026, 3899, 2003, 2200, 10140, 1998, 2204, 2559],
[0, 0, 0, 0, 0, 1037, 2204, 2879],
]
attention_mask = [
[0, 0, 0, 0, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1]
]
```
After the processing, all the sentences have the same length, and the batch inference is the same as the batch training. Besides, I think this way is easier than yours. During my testing, I found this way is just okay, but I am not sure about it. Can you guys give me some suggestions?<|||||>Thanks @cccntu for the response! I will try it out soon.
Would be super if something like this was default eventually.<|||||>Thanks for this answer! It was very helpful for batching with variable sequence lengths.
Reopening with the question:
Can GPT2LMHeadModel do batch inference with variable sentence lengths **AND with usage of past_key_values?**
IE:
batch item 1 inputs:
input_ids = [1, 2, 3]
past_key_values = None
batch item 2 inputs:
input_ids = [1]
past_key_values = (tensor of shape (2, batch_size = 1, num_heads, sequence_length, embed_size_per_head))
As you explained above (thanks), input ids for batching would combine to be:
[[1, 2, 3],
[1, 0, 0]]
and attention_mask:
[[1, 1, 1],
[1, 0, 0]]
Is it possible to combine past_key_values in the same way?
Or is batching only possible with same-sized past_key_values?
Two considerations to torch.cat() the individual past_key_values together:
1) representing the None past_key_values as a tensor
2) padding all past_key_values so that they all have the same sequence_length (dim=3) size
I searched deeply through the source code and couldn't find what None becomes represented as.
Thanks for the help!<|||||>CC @patrickvonplaten is there anything obvious I can do to make the above" batching with variable pasts" inference work?
If I get something functional I could add it in a PR for the prepare_inputs_for_generation() function<|||||>Hey @erik-dunteman could you add a code snippet that currently fails, describing your problem in more detail?<|||||>@JulesGM Right padding will not work when using HF's inbuilt generate() function because it samples the last token id in the sequence by default. The last token could be a padding token when using right padding.
https://github.com/huggingface/transformers/blob/de8548ebf3242305d0f9792dacb6f86b196a3a33/src/transformers/generation_utils.py#L1725
As a result, using left padding for GPT2 is needed as it ensures that the logit from the last token is a non-padded token. <|||||>@JulesGM left-padding should work just as expected since the position ids are moved correctly. Or did you run into any problem with left-padding?<|||||>Sorry but isn't left-padding altering the result of the generation?<|||||>> @JulesGM left-padding should work just as expected since the position ids are moved correctly. Or did you run into any problem with left-padding?
The gpt2 tips section https://huggingface.co/docs/transformers/model_doc/gpt2 still mentions that
>GPT-2 is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.
should i update it then?<|||||>@patrickvonplaten I think that people don't realize that you need to left pad to do batch generation with causal models. I think that that's likely a (silent) huge issue. <|||||>left pad *<|||||>tagging @thejaminator as well. Ofc you need to pad to the left because otherwise some of the next token predictions will be done over a padding token, which is untrained/undefined behavior. There should be a warning / exception when this is done imho, when a causalt model does batch generation with right padding.<|||||>that's pretty important<|||||>I agree here @JulesGM!
Given the importance of decoder-only models and batched inference and the difficulty to create good docs for this, I think we should add a warning to `generate` if we detect that batched generation is done that was not left padded.
We could do something like the following:
```python
if self.__class__ in AutoModelForCausalLM:
if is_input_batched and (input_ids[:, -1] == self.pad_token_id).any():
logger.warn(<throw very comprehensive warning that explains that the user should use left-padding in the tokenizer>)
```
@gante wdyt?<|||||>Hey @patrickvonplaten @JulesGM 👋
We do have precisely that warning in place ([here](https://github.com/huggingface/transformers/blob/f2a2616b7462c2f213dbc93332ddf81cae2ef874/src/transformers/generation/utils.py#L1255)), since v4.25. Sadly, users ignore most warnings :D
@thejaminator At train time, though, we should use right-padding UNLESS you pass a manually crafted `position_ids`, in which case you can use any type of padding. If `position_ids` is not passed, [the forward creates that tensor for you, assuming right-padding](https://github.com/huggingface/transformers/blob/f2a2616b7462c2f213dbc93332ddf81cae2ef874/src/transformers/models/gpt2/modeling_gpt2.py#L800). So the advice makes sense for fine-tuning the model. Nevertheless, the docs should be enhanced to reflect all these details!<|||||>Could it somehow break if generation is about to use predictions made from
a padding token as input, unless the eos token has been reached? This is never correct. If it is really needed, there could be a switch to turn this off.
>
<|||||>I'm really preoccupied by the invalid scientific contributions that this
will have otherwise. I work at a big lab (Yoshua Bengio's Mila, for senior member Jackie Cheung) in the NLP group, and I've had to educate a lot of people on this.
And now, with more and more people coming to the field because of ChatGPT, I really think making this more new-user friendly is crucial to good science.
>>
>
<|||||>@JulesGM I share your concern.
This is why we push `pipeline()` and not `.generate()` to entry-level users on our [tasks page](https://huggingface.co/tasks) -- `pipeline()` does all the right parameterization for the particular inference use-case. On the other hand, `.generate()` must stay simultaneously compatible with decoder-only LLMs, encoder-decoder LLMs, image-to-text models, speech-to-text models, and others. Raising the severity from warning to exception is a no-go, as users might be relying on the no-exception behavior (as you mentioned).
Other than more/better documentation and argument validation (to the extent that is possible), we're low on ideas. Lets us know if you have suggestions :)<|||||>The people I'm talking about are pro nlp researchers at one of the top AI
institutions.
I definitely think that having it break for people who unintentionally make this mistake matters more than the people who somehow rely on generating from pad tokens. I literally can't think of a situation where that's OK; flipping a flag in the *very* rare cases where it's needed would make more sense to me.
Huggingface detects whether the model is seq2seq or causal and behaves differently in multiple places, doesn't it? So I don't think that's the issue.
> ***@***.***>
>
<|||||>@JulesGM one of the most important aspects for us, as developers of an open-source tool, is to keep a consistent API and behavior. Especially regarding features that have been in for a while -- [Hyrum's law](https://www.hyrumslaw.com/) tells us that someone likely depends on this particular behavior. This means we can't simply start raising exceptions for an out-of-distribution model use (which is what generating after a pad token typically is).
As I've written above, if improper use of the tool is a concern, `pipeline()` is a preferred interface to `.generate()`. `.generate()` is intentionally flexible to account for all sorts of projects using autoregressive token generation, with minimal guard rails.<|||||>I would argue that the behavior was a silent error that we're just making not silent anymore. This is common practice in the open source works. There's no reason to generate taking the output of a pad token at input! none. That's undefined behavior.
At the opposite end, the silent error is everywhere, and silently breaking a lot of stuff, even with professional users.
@patrickvonplaten curious about your view on this<|||||>> Hey @patrickvonplaten @JulesGM 👋
>
> We do have precisely that warning in place ([here](https://github.com/huggingface/transformers/blob/f2a2616b7462c2f213dbc93332ddf81cae2ef874/src/transformers/generation/utils.py#L1255)), since v4.25. Sadly, users ignore most warnings :D
>
> @thejaminator At train time, though, we should use right-padding UNLESS you pass a manually crafted `position_ids`, in which case you can use any type of padding. If `position_ids` is not passed, [the forward creates that tensor for you, assuming right-padding](https://github.com/huggingface/transformers/blob/f2a2616b7462c2f213dbc93332ddf81cae2ef874/src/transformers/models/gpt2/modeling_gpt2.py#L800). So the advice makes sense for fine-tuning the model. Nevertheless, the docs should be enhanced to reflect all these details!
So at train time it should be right padded. But in generation it should be left padded? I can see why it'll get confusing.
Could I make an MR so that at train time we'll at least use the attention mask, and account for them in the position_ids?
It would be similar to what is done during generation https://github.com/huggingface/transformers/pull/7552/files
Otherwise the left pad tokens will use up the position_ids. E.g. they'll take up the position_ids [1,2,3,4]. which offsets the rest of the tokens position_ids. We'll encode this as a dummy position_id like what the library currently does during `generate`. And make sure the padding tokens don't cause the normal token to get offset.
This would work for properly for right padding too, since it'll get encoded as dummy position_ids, and get masked out by the attention mask anyways.
<|||||>@JulesGM I agree this issue brings confusion frequently. I'm just stating that raising an exception for a long-standing behavior is not the path we can take as maintainers of a library that is used in production. It is our responsibility to nudge the users in the right direction, but not to force them -- exceptions are terrible in production, especially when caused by a sudden change in library behavior. This means that maintaining an open-source library sometimes gets in the way of safeguarding textbook correctness :)
I can share two opposite examples of the importance of avoiding exceptions and changing behavior:
1. TensorFlow used to have a policy as you are suggesting -- see problems, fix problems. As a consequence, everyone started to hate TensorFlow releases;
2. PyTorch, Numpy, and TensorFlow allow undefined operations such as division by 0 to happen. Do they lead to silent errors? Definitely. Is it better than an exception? Also yes.
You might be wondering -- so what sort of exceptions do we raise, in lower-level tools like `.generate()`?
1. Reraise exceptions that would happen anyways, but with a more detailed message. For instance, you can't generate if your model doesn't have a language modeling head, so we raise an exception enumerating valid alternative classes.
2. Incompatible parameterization. For instance, you can't set a minimum length larger than the maximum length.
3. Hardware-specific quirks. For instance, in TF, the embedding layer on GPU doesn't complain if you access an index that doesn't exist. On CPU, TF raises the exception.
Pretty much anything else, including bad tensor values and numerical problems, is up to the user or the abstraction layers above :) |
transformers | 3,020 | closed | [ci] Run all tests on (self-hosted) GPU | 02-25-2020 21:12:03 | 02-25-2020 21:12:03 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3020?src=pr&el=h1) Report
> Merging [#3020](https://codecov.io/gh/huggingface/transformers/pull/3020?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e8ce63ff2163259276fc0a4a2f35b836fe9f4aa0?src=pr&el=desc) will **decrease** coverage by `0.03%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #3020 +/- ##
==========================================
- Coverage 77.25% 77.21% -0.04%
==========================================
Files 98 98
Lines 16040 16040
==========================================
- Hits 12392 12386 -6
- Misses 3648 3654 +6
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3020?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3020/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.71% <0%> (-0.86%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3020/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.38% <0%> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3020?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3020?src=pr&el=footer). Last update [e8ce63f...2a48145](https://codecov.io/gh/huggingface/transformers/pull/3020?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>**Update:**
The remaining failing tests:
- [ ] Test_doc_samples: multilingual.rst and modeling_flaubert.py (@LysandreJik)
- [ ] test_modeling_auto.py::AutoModelTest::test_model_for_pretraining_from_pretrained (@thomwolf https://github.com/huggingface/transformers/commit/0e31e06a75b0022171056e51c8b5d53078ac5170)
- [x] test_modeling_roberta.py::RobertaModelIntegrationTest::test_inference_masked_lm (@julien-c https://github.com/huggingface/transformers/commit/9d0603148bc34255fad0cad73ce438ecd7306322)
- [ ] test_tokenization_roberta.py::RobertaTokenizationTest::test_sequence_builders (@LysandreJik https://github.com/huggingface/transformers/commit/634a3172d869e2ff772b2e0813169641ca9e6cc5)
- [ ] test_tokenization_xlm_roberta.py:: XLMRobertaTokenizationIntegrationTest::test_tokenization_base_hard_symbols (@patrickvonplaten https://github.com/huggingface/transformers/commit/c913eb9c3894b4031dc059d22b42e38a5fcef989)<|||||>
> * test_tokenization_xlm_roberta.py::XLMRobertaTokenizationIntegrationTest::test_tokenization_base_hard_symbols (@patrickvonplaten [c913eb9](https://github.com/huggingface/transformers/commit/c913eb9c3894b4031dc059d22b42e38a5fcef989))
I think this test currently fails because there is a problem with the xlm_roberta_tokenizer . In general so far there are no tests at all for the xlm_roberta_tokenizer . I can try to add those (will prob need a bit help from @mfuntowicz and @LysandreJik)
<|||||>Update on my failing unit (integration) test on RoBERTa: the max absolute diff between expected and actual output is `0.004` whereas it used to be under `1e-3` (both CPU and cuda) – should I dive in to why this changed, or should i just lazily bump up to tolerance?
(The same integration test without the maskedLM head still passes with `1e-5` abs diff)<|||||>Could this be due to the bias? https://github.com/huggingface/transformers/pull/2958
It could have been computed twice when creating the integration test.<|||||>I'm not sure how to exactly get the expected logits from the fairseq **LM** Models. It's very easy to get the last embeddings for verification of the no **LM** model via:
```
import torch
roberta = torch.hub.load('pytorch/fairseq', 'roberta.large')
roberta.eval() # disable dropout (or leave in train mode to finetune)
last_layer_embeddings = roberta.extract_features(input_ids) # shape [1, seq_len, 1024]
```
The integration test that were implemented for Roberta LMHeadModels did they correspond to the original weights?
<|||||>@patrickvonplaten https://github.com/huggingface/transformers/blob/53ce3854a16ad2a715bc6ac8af3e30c18b5a1d11/tests/test_modeling_roberta.py#L324 is the test (I think)<|||||>Ok found a resolution strategy with @LysandreJik, will push fix soon.<|||||>Getting closer:
> ===== 6 failed, 939 passed, 32 skipped, 84 warnings in 1508.72s (0:25:08) ======
|
|
transformers | 3,019 | closed | Delete Model2Model | - the quickstart code doesn't work
- the tests don't test a forward pass
if you need it `git checkout e8ce63ff` | 02-25-2020 20:49:43 | 02-25-2020 20:49:43 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3019?src=pr&el=h1) Report
> Merging [#3019](https://codecov.io/gh/huggingface/transformers/pull/3019?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e8ce63ff2163259276fc0a4a2f35b836fe9f4aa0?src=pr&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `100%`.
```diff
@@ Coverage Diff @@
## master #3019 +/- ##
==========================================
+ Coverage 77.25% 77.27% +0.01%
==========================================
Files 98 98
Lines 16040 16030 -10
==========================================
- Hits 12392 12387 -5
+ Misses 3648 3643 -5
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3019?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/3019/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `21.05% <ø> (-4.33%)` | :arrow_down: |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/3019/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.91% <100%> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3019?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3019?src=pr&el=footer). Last update [e8ce63f...835a807](https://codecov.io/gh/huggingface/transformers/pull/3019?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Hi, why is the Model2Model deleted? I am using it and it works great for me. <|||||>It was not super well documented or tested and we didn't want to maintain it. If you're interested in sending a PR with working quickstart code and tests (could start by reverting this) we would definitely be happy to add it back! Sorry! |
transformers | 3,018 | closed | [WIP] Updates to simplify PLT example and use new features | Some changes to PLT to support our workflow. Prelim support for TPUs.
Still testing. | 02-25-2020 20:10:49 | 02-25-2020 20:10:49 | I can't seem to find who is pushing the commits, but can whoever is doing that please add commit messages? This is really messy and impossible to go over as currently written.<|||||>Sorry, @BramVanroy . I didn't realize people were reviewing this branch. This is a WIP We are still trying to work this out with new pytorch-lightning / TPU changes. (Closing for now and will switch to a new review branch when it is working). <|||||>@srush No worries! Wasn't really reviewing, I was just curious which changes were being made - but then I saw those commit messages and I didn't know what to expect. 😄 |
transformers | 3,017 | closed | [WIP] Update to use new pytorch-lightning features | Updates to utilize a bunch of new features the PLT wrote to support us. Also support for TPU. Still in testing phase. | 02-25-2020 20:08:45 | 02-25-2020 20:08:45 | |
transformers | 3,016 | closed | Use my own pretrained BERT model | How can I use my own pretrained BERT model in SQUAD finetuing? How to do this in the code? Can anyone provide any instructions? | 02-25-2020 19:45:47 | 02-25-2020 19:45:47 | Hi, where did you obtain this model? Is it using the official google-research implementation or using huggingface/transformers?<|||||>@LysandreJik Thank you for your reply! I have pretrained BERT base using my own pytorch code and generated checkpoint for BERT base. Then I want to do squad finetuning using my own pretrained BERT base model but in huggingface/transformers framework. I saw that huggingface/transformers by default download a pretrained BERT base model from amazon aws. How can I change this to use my own checkpoint? Thanks a lot!<|||||>Are you able to load your model using `model = BertModel.from_pretrained("model_dir")`? The architecture of the model would need to be the same, and have the same layer names so that the torch state dict may be loaded onto one of our architectures.<|||||>Thank you! Let me try that.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
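For what it's worth, a rough sketch of one way to do this (the paths here are hypothetical, and it assumes the checkpoint's parameter names already match the Hugging Face BERT layout): save the custom checkpoint in a directory that `from_pretrained` understands, then point the SQuAD example script at that directory.
```python
# Hypothetical sketch: wrap a custom-pretrained BERT checkpoint so run_squad.py can load it.
import torch
from transformers import BertConfig, BertModel, BertTokenizer

config = BertConfig()                                               # must mirror the pretrained architecture
model = BertModel(config)
state_dict = torch.load("my_bert_base.ckpt", map_location="cpu")    # hypothetical checkpoint path
model.load_state_dict(state_dict, strict=False)                     # works only if parameter names line up
model.save_pretrained("my_bert_dir")
BertTokenizer.from_pretrained("bert-base-uncased").save_pretrained("my_bert_dir")
# Then fine-tune on SQuAD with the example script, e.g.:
#   python run_squad.py --model_type bert --model_name_or_path my_bert_dir --do_train ...
```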
|
transformers | 3,015 | closed | Latest version of transformers available via conda? | # 🚀 Feature request
I notice that the version of transformers available via conda-forge is quite outdated (v2.1.1). Could you make later versions available there too? Thanks!
| 02-25-2020 18:33:16 | 02-25-2020 18:33:16 | It would be nice if it had it's own channel like PyTorch has, as put a distribution of the packages (transformers, tokenizers, etc.) there.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Any update on this request?
The latest version in conda forge is 2.1.1.
Thanks. |
transformers | 3,014 | closed | Add integration tests for xlm roberta modelling and xlm roberta tokenzier | It's quite easy to get real numbers for the XLM-R moder from [fairseq](https://github.com/pytorch/fairseq/tree/master/examples/xlmr), so I added integration tests for `xlm_roberta_modeling.py` and `xlm_roberta_tokenization.py`
Since `XLMRobertaModel` is the same model as `RobertaModel`, I think integration tests for `xlm_roberta_modeling.py` are enough.
Regarding `XLMRobertaTokenizer` there were no tests so far, so in this file there should definitely be more "fast" tests. I would need some help on those (@mfuntowicz ?).
Regarding the results of the integration tests:
The tests for `xlm_roberta_modeling.py` all pass.
One of the two tests (called hard_token_symbols) for `xlm_roberta_tokenization.py` fails. @LysandreJik @mfuntowicz | 02-25-2020 17:42:40 | 02-25-2020 17:42:40 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3014?src=pr&el=h1) Report
> Merging [#3014](https://codecov.io/gh/huggingface/transformers/pull/3014?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e693cd1e877aa191d3317faed33e87d1558c9406?src=pr&el=desc) will **decrease** coverage by `1.03%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3014?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3014 +/- ##
==========================================
- Coverage 77.25% 76.22% -1.04%
==========================================
Files 98 98
Lines 16040 16040
==========================================
- Hits 12392 12226 -166
- Misses 3648 3814 +166
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3014?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.22% <0%> (-0.17%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3014?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3014?src=pr&el=footer). Last update [e693cd1...fc5fe85](https://codecov.io/gh/huggingface/transformers/pull/3014?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,013 | closed | Generation with gpt-2 | # ❓ Questions & Help
## Details
Hi @julien-c and all,
we are training a GPT-2 language model on our custom dataset.
Firstly, we train a custom Tokenizer with tokenizers ( as described [here](https://github.com/huggingface/tokenizers/issues/166) ) .
Now I have a dummy GPT-2 model trained on 1 epoch but my problem is on inference.
The result is always the same when I feed the model with the starting text.
Code (same as at the link above):
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch

tokenizer = GPT2Tokenizer.from_pretrained("my-gpt2-tokenizer")
model = GPT2LMHeadModel.from_pretrained('my-gpt2-model')
generated = tokenizer.encode("The Manhattan bridge")
context = torch.tensor([generated])
past = None
for i in range(100):
    print(i)
    output, past = model(context, past=past)
    token = torch.argmax(output[..., -1, :])
    generated += [token.tolist()]
    context = token.unsqueeze(0)
sequence = tokenizer.decode(generated)
print(sequence)
```
Using this code, the output is repeated:
```
The Manhattan bridge<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit
```
I think there are basically two problems:
1. the first is that at every step I take only the argmax token (greedy decoding) and never sample from the predicted distribution.
2. the second is that after the first step the model is only fed its own predicted token, so it gets stuck in a loop (a rough sampling sketch is given right after this list).
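For reference, a minimal sampling-based decoding loop could look roughly like this (the top-k value of 40 and the temperature of 0.7 are purely illustrative choices, not recommended settings, and the model/tokenizer paths are the same placeholders as above):
```python
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("my-gpt2-tokenizer")  # placeholder path
model = GPT2LMHeadModel.from_pretrained("my-gpt2-model")        # placeholder path
model.eval()

generated = tokenizer.encode("The Manhattan bridge")
context = torch.tensor([generated])
past = None

with torch.no_grad():
    for _ in range(100):
        output, past = model(context, past=past)
        logits = output[..., -1, :] / 0.7                # temperature (illustrative)
        top_logits, top_idx = torch.topk(logits, k=40)   # keep only the 40 most likely tokens
        probs = F.softmax(top_logits, dim=-1)
        next_token = top_idx[0, torch.multinomial(probs[0], num_samples=1)]
        generated.append(next_token.item())
        context = next_token.unsqueeze(0)

print(tokenizer.decode(generated))
```
Sampling instead of always taking the argmax usually breaks the exact-repetition loop, although a model trained for only one epoch may still produce low-quality text.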
I don't have any other ideas. Is there an example of inference with huggingface lib?
Thanks
| 02-25-2020 17:03:26 | 02-25-2020 17:03:26 | There is an example: https://github.com/huggingface/transformers/blob/master/examples/run_generation.py<|||||>Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,012 | closed | batch_encode_plus with pad_to_max_length is not padding the output | # 🐛 Bug
## Information
Model I am using (BertTokenizer,):
Language I am using the model on: English
The problem arises when using:
* [ ] the official example scripts.
The tasks I am working on is:
* [ ] Simple batch tokenization task
## To reproduce
```
from transformers import BertTokenizer, BertForQuestionAnswering
import torch
import numpy as np
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
qt1 = ('What is the name of the repository?', 'pipeline have been included in the huggingface/transformers repository')
qt2 = ('What is the name of the repository?', 'What can be done with this???')
inp_raw = [qt1, qt2]
tck_temp = tokenizer.batch_encode_plus(inp_raw, max_length=20, pad_to_max_length=True)
print(tck_temp)
inp_ids = tck_temp['input_ids']
tck_type_ids = tck_temp['token_type_ids']
print(len(inp_ids[0]) == len(inp_ids[1]))
## This is coming false
```
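A slightly more explicit check of the same behaviour (added for clarity; the `[20, 20]` result is what I would expect once padding works):
```python
lengths = [len(ids) for ids in tck_temp['input_ids']]
print(lengths)  # expected: [20, 20] when pad_to_max_length is honoured
assert len(set(lengths)) == 1, "batch_encode_plus did not pad to a common length"
```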
## Expected behavior
The code snippet should print True
But it prints False
## Environment info
- `transformers` version: 2.3
- Platform: Linux
- Python version: 3.6
- PyTorch version (1.4CPU)
[EDIT] : correct transformers version | 02-25-2020 16:59:18 | 02-25-2020 16:59:18 | The code snippet prints `True` for me.
I have the versions:
transformers version: 2.5.1
Platform: Linux
Python version: 3.6.9
PyTorch version: 1.4.0+cpu
Can you try `pip install --upgrade transformers` and see whether the error is still there?
(Posted on behalf of @patrickvonplaten who's having issues with Github)<|||||>Thanks @TevenLeScao it seems the issue was only for transformers ~2.3 |
transformers | 3,011 | closed | Add models special tokens to its pretrained configs | I think the token_ids for each specific model should also be added to their pretrain configs. This would also make the function generate() much easier to use for the user.
If ok, I can also add these tokens for models not having a LMHead. | 02-25-2020 15:41:04 | 02-25-2020 15:41:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3011?src=pr&el=h1) Report
> Merging [#3011](https://codecov.io/gh/huggingface/transformers/pull/3011?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e693cd1e877aa191d3317faed33e87d1558c9406?src=pr&el=desc) will **decrease** coverage by `1.02%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3011?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3011 +/- ##
==========================================
- Coverage 77.25% 76.22% -1.03%
==========================================
Files 98 98
Lines 16040 16048 +8
==========================================
- Hits 12392 12233 -159
- Misses 3648 3815 +167
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3011?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.05% <ø> (-0.33%)` | :arrow_down: |
| [src/transformers/configuration\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbS5weQ==) | `96.36% <100%> (+0.13%)` | :arrow_up: |
| [src/transformers/configuration\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.22% <100%> (+0.16%)` | :arrow_up: |
| [src/transformers/configuration\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3RyYW5zZm9feGwucHk=) | `92.59% <100%> (+0.13%)` | :arrow_up: |
| [src/transformers/configuration\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbmV0LnB5) | `93.87% <100%> (+0.39%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3011?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3011?src=pr&el=footer). Last update [e693cd1...f5b50c6](https://codecov.io/gh/huggingface/transformers/pull/3011?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Isn’t this duplicating info that’s already in the tokenizers?<|||||>@julien-c yeah it's duplicating information.
The main reason I added them is because when calling the function `model.generate()` one always has to put the special token ids into the function with:
`model.generate(pad_token_id=tokenizer.pad_token_id, ...)`
The second reason was that every pretrained model already has the attributes `pad_token_id`, `eos_token_id` and `bos_token_id` which are just set to None no matter which pretrained model is loaded.
Or we could just delete the attributes pad_token_id from the config - I think the generate() function is anyways the only function using self.config.pad_token_id
<|||||>maybe the `generate()` method should actually be a pipeline? cf. the `FillMaskPipeline`?<|||||>Ok, I see your point! Should we maybe then also delete the self.pad_token_id, self.eos_token_id and self.bos_token_id in https://github.com/huggingface/transformers/blob/5bc99e7f33c83b23b88740877283098ef7964b73/src/transformers/configuration_utils.py#L78
So that it is clear that the tokens are no attribute of the models at all. @julien-c & @LysandreJik <|||||>Yes, I think that would make sense. Are these used anywhere?<|||||>They are only used in the generate() function, but since there is also a pad_token_id argument to the function, they are not needed at all. I will open a new PR to delete them. <|||||>As I mentioned before, I feel like we should go toward having a single configuration file for both models and tokenizers (at least for pretrained model, for newly initialized model this may imply forcing the user to supply a configuration object when creating the model/tokenizer).
In this case I don't see any problem with having token_id attributes in this configuration file that the model could use, this doesn't means we are gathering tokenizer and model, just that they are depending on a single configuration object.
I do agree that we need to think this carefully though. |
transformers | 3,010 | closed | Possible improvement of padding logic in generate | I wanted to try out an approach to the padding logic (previously discussed with @thomwolf and @mfuntowicz) which is **different** from what is currently done (see #2885 description).
Instead of setting the `pad_token_id` to `eos_token_id` if `pad_token_id` is not defined and `eos_token_id` is defined, the following logic could be applied:
If there is no `pad_token_id`, the user has to add the `pad_token_id` to the tokenizer and resize the model token embedding matrix to add an additional token vector to the weight matrix.
During my PR today, I encountered two problems:
1. Adding a token embedding vector causes some pretrained models (`gpt2` and `openai-gpt`) to often generate the newly added token (since input & output token embeddings are tied). A remedy for this is to simply always set the produced logits of the pad_token_id to -Inf (a minimal sketch of this masking trick follows the list). This works in that the `pad_token` is never generated, but it does change the internal behavior of `gpt2` and `openai-gpt`.
2. TransfoXL uses an adaptive token embedding matrix, which means that the function `resize_embedding_matrix` produces an error when used with `TransfoXLHeadModel`. This means the `token_embedding_matrix` of TransfoXL cannot be changed! Therefore, this approach doesn't work at all for TransfoXL (which is a pity as TransfoXL generates very good language).
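For clarity, the -Inf masking mentioned in point 1 could be sketched like this (a minimal illustration of the idea, not the exact code in this PR; `pad_token_id` is assumed to be the id of the newly added padding token):
```python
import torch

def mask_pad_logits(logits: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    # `logits` has shape (batch_size, vocab_size) for the next-token distribution.
    logits = logits.clone()
    logits[:, pad_token_id] = -float("inf")  # the pad token can never be picked by argmax or sampling
    return logits
```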
In the current PR, some tests fail (TransfoXL and the slow tests of gpt2 and openai-gpt). I wanted to hear your opinion on this logic vs. the logic that is implemented now (see #2885) @thomwolf, @LysandreJik and @mfuntowicz
| 02-25-2020 15:23:39 | 02-25-2020 15:23:39 | |
transformers | 3,009 | closed | add crf layer to BERT, RoBERTa, XLMR | Building on pull request [ bert(+lstm)+crf #2249 ], I make it more generic and similar to run_ner.py . There is a CRF layer adapted from https://github.com/jiesutd/LatticeLSTM , which does not have additional package requirements, and was published in ACL 2018. | 02-25-2020 11:09:18 | 02-25-2020 11:09:18 | What do you think about just using https://github.com/harvardnlp/pytorch-struct (I'm biased)? It should remove the need for most of this code.
Also we now have a pytorch-lightning NER example which should make this much simpler.
Would also be helpful to know if any of this improves prediction accuracy.<|||||>@srush thanks for the comment, I'll definitely check out the repo. Regarding accuracy, there are studies that suggest using CRF yields better results in some languages, for example in Portuguese and Slavic languages:
@article{souza2019portuguese,
  title={Portuguese Named Entity Recognition using BERT-CRF},
  author={Souza, F{\'a}bio and Nogueira, Rodrigo and Lotufo, Roberto},
  journal={arXiv preprint arXiv:1909.10649},
  year={2019}
}
@article{arkhipov2019tuning,
  title={Tuning Multilingual Transformers for Named Entity Recognition on Slavic Languages},
  author={Arkhipov, Mikhail and Trofimova, Maria and Kuratov, Yuri and Sorokin, Alexey},
  journal={BSNLP’2019},
  pages={89},
  year={2019}
}
<|||||>this looks promising. is this going to be merged?<|||||>So as per my above comment, I'm not going to merge this example which feels too complex to me. But I will start an issue to put together some work `crf/` examples for ner/chunking/parsing that people can build on. |
transformers | 3,008 | closed | [WIP] Remove tokenizers dependency | In v2.5.1 the tokenizers don't default to the fast implementations (as introduced in v2.5.0), which I think is a good thing when looking at the issues that have arisen from it. Even though the performance of the new tokenizers is phenomenal (and I complement everyone who has been working on it!), it seems a bit premature to make `tokenizers` a dependency. (In addition, see for instance this topic concerning installation issues: https://github.com/huggingface/transformers/issues/2980.)
Even though the fast implementation isn't the default any more, it is still part of the dependencies in setup.py. This PR removes `tokenizers` from the dependency list but indicates in the documentation that having `tokenizers` installed and using `use_fast` can result in great performance improvements.
Generally, I am not satisfied with how this PR has been implemented (quite some duplication across the different tokenizers), but I don't really see another way. Alternative ideas are definitely welcome. If, on the other hand, you decide to keep the dependency, that is fine too.
Note: my flake8 keeps failing with an obscure error so can't do the manual code checking now. Might try again later.
Note: tests will fail for the fast tokenizers (and perhaps on some other imports). I'll look further into this if you decide that this PR is welcome. | 02-25-2020 10:40:46 | 02-25-2020 10:40:46 | |
transformers | 3,007 | closed | [Benchmark] Pipeline for question answering | # 🖥 Benchmarking `transformers`
## Benchmark
I'm trying to benchmark a QA model with `bert-large-uncased-whole-word-masking-finetuned-squad`, but it seems extremely slow, e.g. 3-4 seconds for 1 question with 2 contexts.
I feel there is something I'm missing in the pipeline.
## Sample Code:
```
from typing import List

from transformers import BertTokenizer, BertForQuestionAnswering, pipeline


def answer(self, contexts: List[str], question: str, **kwargs):
    # tokenizer, model and pipeline are all cached in the actual implementation
    # via [reify](https://docs.pylonsproject.org/projects/pyramid/en/latest/api/decorator.html),
    # so model loading is not the bottleneck.
    tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', max_len=500)
    model = BertForQuestionAnswering.from_pretrained(
        'bert-large-uncased-whole-word-masking-finetuned-squad')
    nlp = pipeline('question-answering', model=model, tokenizer=tokenizer)
    pipeline_input = []
    for c in contexts:
        pipeline_input.append({
            'question': question,
            'context': c
        })
    answers = nlp(pipeline_input)
    return answers
```
## Set-up
CPU: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
memory: 16GB
| 02-25-2020 09:47:00 | 02-25-2020 09:47:00 | Same experience with a 2080TI. It's like it's not batched...<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>As @pommedeterresautee mentioned, the samples are not batched. I've made a [simple modification](https://gist.github.com/vinicius-cleves/1b2d79a9665d165ac22451c225b2f8b0) to do batch inference in PyTorch, but it didn't seem to help much with the total processing time.
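For anyone trying to reproduce the numbers, a rough timing harness could look like this (the model identifier, the toy contexts and `device=-1` for CPU are assumptions made for the sketch; as noted above, the samples are not batched internally, so the per-sample cost dominates):
```python
import time
from transformers import pipeline

nlp = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
    device=-1,  # -1 = CPU, 0 = first GPU
)

question = "What is the name of the repository?"
contexts = ["The pipeline has been included in the huggingface/transformers repository."] * 8
inputs = [{"question": question, "context": c} for c in contexts]

start = time.time()
answers = nlp(inputs)
print(f"{len(inputs)} question/context pairs took {time.time() - start:.2f}s")
```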
|
transformers | 3,006 | closed | [FIX] not training when epoch is small | Closes https://github.com/huggingface/transformers/issues/2995
Solving bug where for small epochs and large gradient_accumulation_steps we never train.
Explained further here: https://github.com/huggingface/transformers/issues/2995 | 02-25-2020 08:10:01 | 02-25-2020 08:10:01 | Cool, thanks @mataney! Do you mind running `make style` at the root of the repository to reformat the file according to `black` and `isort`, as indicated in our [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests)?
This is so the `check_code_quality` passes.<|||||>@LysandreJik Thanks for your reply.
Recommitted with the black formatting.
It is now failing because of something else. Do you think you can rerun the tests? I believe it has something to do with the indeterminism of the tests.
<|||||>Can I have your help here @LysandreJik |
transformers | 3,005 | closed | Output of pipeline feature extraction | Hi,
I am using the new pipeline feature of transformers for feature extraction and I have to say it's amazing. However I would like to alter the output of the pipeline slightly but I am not sure how to and I was hoping some people of the community could point me into the right direction.
I am using the following code snippet in my script:
```
import h5py
import numpy as np
from transformers import pipeline

# `args`, `key` and `value` come from the surrounding script
nlp = pipeline('feature-extraction', model=args.model, config=args.config, tokenizer=args.tokenizer, device=args.device)
features = nlp(value)
features = np.squeeze(features)
features = features.astype('float32')
h5f = h5py.File(args.output_path, 'a')
h5f.create_dataset(key, data=features)
h5f.close()
```
The output for every value I put into the pipeline has the shape of (512, 768) no matter the length of the sentence I put into the pipeline.
I understand that the 512 is caused by padding and the 768 is the model's hidden size.
However I would like to have the output to be (15, 768) or (312, 768) for example depending on the input instead of always (512, 768). I know this is not standard but for my purpose I need this format.
Could someone please point me to the right direction how to achieve this?
Thanks a lot! I'm really at a loss here.
| 02-25-2020 07:39:31 | 02-25-2020 07:39:31 | I cannot reproduce this. Can you post a verifiable example with full code that I can just copy and paste?
Here you see that I just get 7 tokens back:
```python
import numpy as np
from transformers import AutoTokenizer, AutoModel, pipeline
model = AutoModel.from_pretrained('distilbert-base-uncased')
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
nlp = pipeline('feature-extraction', model=model, tokenizer=tokenizer)
features = nlp('Do you like cookies ?')
print(features)
features = np.squeeze(features)
print(features.shape)
# (7, 768)
```<|||||>@BramVanroy Hi thanks for the quick reply.
I have used the code example you provided and get the same output again:
```
import numpy as np
from transformers import AutoTokenizer, AutoModel, pipeline
model = AutoModel.from_pretrained('distilbert-base-uncased')
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
nlp = pipeline('feature-extraction', model=model, tokenizer=tokenizer)
features = nlp('Do you like cookies ?')
features = np.squeeze(features)
print(features.shape)
# (512, 768)
```
Yesterday I deleted my transformers directory and cloned the github repo of transformers again and used pip --install . to set everything up so I should be on the most recent version with no differences.<|||||>Hm, that is odd. I have tested with 2.5.0 and 2.5.1 and both give me 7 tokens back.
Can you run `python transformers-cli env` and paste the result here?<|||||>I actually have a similar issue, but with the Fast Tokenizer for the `bert-base-uncased` model
```
model = BertModel.from_pretrained('bert-base-uncased')
tokenizer_fast = BertTokenizerFast.from_pretrained('bert-base-uncased',
add_special_tokens=False)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased',
add_special_tokens=False)
sentence = 'We are very happy to include pipeline into the transformers repository.'
nlp = pipeline('feature-extraction', model=model, tokenizer=tokenizer, device=0)
nlp2 = pipeline('feature-extraction', model=model, tokenizer=tokenizer_fast, device=0)
tnsor = np.squeeze(nlp(sentence))
# (14, 768)
tnsor = np.squeeze(nlp2(sentence))
# (512, 768)
```
The "slow" tokenizer gives me the expected 14 tokens (which is strange too because I set `add_special_tokens=False` but not relevant for this question) but the fast tokenizer gives me the padded 512 tokens.
Any ideas? Thanks!<|||||>cc @mfuntowicz Users in this thread report that the behaviour of the fast tokenizer differs from the slow ones with respect to the pipeline. When the pipeline is used for feature extraction, the fast tokenizers return the fully padded output (512) instead of the expected output (number of subtokens). Not sure if related to https://github.com/huggingface/transformers/pull/2961<|||||>@BramVanroy I decided to clone and rebuild transformers again to make 100% sure I'm on the most recent version and have a clean working environment. After doing so I got the expected result of shape (<512, 768).
In the end I'm not sure what the problem was. Should I close this issue or keep it open for @mabergerx ?
@mabergerx Try cloning the code into a new directory and rebuild from source. This ended up fixing the problem for me.<|||||>You can keep it open for now, because it seems to indicate some inconsistencies between versions or even commits. I will have a closer look when I find the time.<|||||>Regarding `add_special_tokens` behaviour, this is one of the major difference between fast and slow tokenizers. Fast requires the parameter to be defined at creation time (as you did), for the slow ones, it should be provided while calling methods like `encode`, `encode_plus`, `batch_encode_plus`.
For the initial padding issue, we fixed some stuff related to padding on transformers 2.5.1 which might also have resolved your issue. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,004 | closed | BART : How can I train and evaluate BART on CNN/DM dataset | # ❓ Questions & Help
@sshleifer
I found examples for text summarization on CNN/DM dataset using BERT, but I couldn't find any example using BART.
**Are you going to add it later, or update the existing example to add BART, or it's not scheduled ?** | 02-25-2020 07:36:09 | 02-25-2020 07:36:09 | BART has only just been implemented and is not part of release code yet. I'm sure that examples will be added later on, you just need a bit more patience.<|||||>How is this going? I would like to train BART for text summarization with my own dataset and I have no idea of what preprocessing is needed or what inputs the model needs. I would appreciate the help @sshleifer.<|||||>No training code yet, but we have an eval example. Its very new obviously, so make an issue and tag me if it breaks :)<|||||>https://github.com/huggingface/transformers/blob/b3e0a1cf0573a909f1769b5f1c2b7273faf36cf4/examples/summarization/bart/evaluate_cnn.py<|||||>The associated PR I opened has the training code. Just in case you want to test it out and run some experiments/give feedback. I based it on [this colab](https://colab.research.google.com/drive/1C4jEf0fnLiz6Xdx4TDz1OoO4BRCjCx1m) that I wrote. <|||||>I couldn't reach the accuracy in the paper with CNNDM dataset using the pre-trained model (facebook/bart-large-cnn). Can anybody reproduce the accuracy properly? |
transformers | 3,003 | closed | Test correct tokenizers after default switch | 02-24-2020 23:38:43 | 02-24-2020 23:38:43 | ||
transformers | 3,002 | closed | Tokenizer Fast False by default | 02-24-2020 23:29:32 | 02-24-2020 23:29:32 | ||
transformers | 3,001 | closed | Changing the loss function in BERT | # ❓ Questions & Help
Hello,
I'm trying to replace the _CrossEntropyLoss_ with the _KLDivLoss_ for a sentence classification task using some golden logits and the logits from the BERT model.
```
import torch.nn.functional as F
from torch.nn import KLDivLoss

golden_logits = …  # the "golden"/teacher logits, elided here
outputs = model(**inputs)
# the logits from BERT (index 1 assumes the model also returned a loss at index 0)
logits = outputs[1]
loss_fct = KLDivLoss(reduction='sum')
loss = loss_fct(F.log_softmax(logits, dim=-1), F.softmax(golden_logits, dim=-1))
```
Am I doing this correctly ? | 02-24-2020 21:40:15 | 02-24-2020 21:40:15 | This is a very general PyTorch question. Please post it on Stack Overflow. https://stackoverflow.com/
About the BERT part: `outputs` is a tuple. `[0]` is the last hidden state (sequence output), `[1]` is the pooled output.
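For illustration, a minimal self-contained sketch with a classification head (so that the tuple indexing is unambiguous) could look like this — the model choice, `num_labels=3`, the random golden logits and the `batchmean` reduction are all assumptions made for the example, not a recommendation:
```python
import torch
import torch.nn.functional as F
from torch.nn import KLDivLoss
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

inputs = tokenizer.encode_plus("An example sentence.", return_tensors="pt")
outputs = model(**inputs)          # no labels passed, so outputs[0] are the logits
logits = outputs[0]

golden_logits = torch.randn_like(logits)      # stand-in for the real golden logits

loss_fct = KLDivLoss(reduction="batchmean")   # "batchmean" matches the mathematical KL definition
loss = loss_fct(F.log_softmax(logits, dim=-1), F.softmax(golden_logits, dim=-1))
loss.backward()
```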
|
transformers | 3,000 | closed | unk_token not set when loading TransfoXLTokenizer.from_pretrained() from a save_pretrained() | Using `TransfoXLTokenizer.from_pretrained()` when loading from `.save_pretrained()` generated files.
Python complains about missing <unknown> token:
```
line 175, in _build_from_file
raise ValueError("No <unkown> token in vocabulary")
ValueError: No <unkown> token in vocabulary
``` | 02-24-2020 21:27:52 | 02-24-2020 21:27:52 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,999 | closed | Create README.md for the new model fine tuned for Spanish POS tagging | 02-24-2020 19:32:07 | 02-24-2020 19:32:07 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2999?src=pr&el=h1) Report
> Merging [#2999](https://codecov.io/gh/huggingface/transformers/pull/2999?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8194df8e0cff8e5866ec2bcbda34e3892f10eb39?src=pr&el=desc) will **decrease** coverage by `1.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2999?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2999 +/- ##
==========================================
- Coverage 77.16% 76.14% -1.02%
==========================================
Files 98 98
Lines 15999 15999
==========================================
- Hits 12345 12182 -163
- Misses 3654 3817 +163
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2999?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.53% <0%> (+0.32%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2999?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2999?src=pr&el=footer). Last update [8194df8...b00f716](https://codecov.io/gh/huggingface/transformers/pull/2999?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 2,998 | closed | kwargs are passed to both model and configuration in AutoModels | AutoModel doc says it passes arguments to config, but it actually also passes them to the models which makes them crash. | 02-24-2020 19:19:21 | 02-24-2020 19:19:21 | |
transformers | 2,997 | closed | add explaining example to XLNet LM modeling | 1) adds example explaining how XLNet could be used for standard auto-regressive modelling.
2) sets add_special_tokens=False in the simple example to make sure no <sep> and <cls> are added to the input. They are used for special training setups (similar to BERT) and might be confusing to the user | 02-24-2020 19:05:10 | 02-24-2020 19:05:10 | > Cool! Indeed, if I remember the paper correctly there is no need to shift the labels. Maybe we could update the [associated docstring](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_xlnet.py#L980-L985) to make sure no one gets confused again?
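For readers landing here, a rough sketch of the kind of example being discussed (the prompt text, the `<mask>` position and the single-token target are illustrative, not the exact docstring that was merged):
```python
import torch
from transformers import XLNetLMHeadModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")

# Predict the last token; the permutation mask hides the target position from every token.
input_ids = torch.tensor(
    tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=False)
).unsqueeze(0)

perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)
perm_mask[:, :, -1] = 1.0  # previous tokens cannot attend to the last token

target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float)
target_mapping[0, 0, -1] = 1.0  # only the last token is predicted

outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)
next_token_logits = outputs[0]  # shape (1, 1, vocab_size); no label shifting is involved here
```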
Done :-) <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2997?src=pr&el=h1) Report
> Merging [#2997](https://codecov.io/gh/huggingface/transformers/pull/2997?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/21d8b6a33ebf96680b6a0aabd27fa7eaa068da93?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2997?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2997 +/- ##
=======================================
Coverage 77.19% 77.19%
=======================================
Files 98 98
Lines 16013 16013
=======================================
Hits 12361 12361
Misses 3652 3652
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2997?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2997/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <ø> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2997?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2997?src=pr&el=footer). Last update [21d8b6a...1868b91](https://codecov.io/gh/huggingface/transformers/pull/2997?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,996 | closed | Trying to Use AlbertTokenizer With my own custom Vocab file | # ❓ Questions & Help
## Details
Good Afternoon All,
I am trying to use AlbertTokenizer to tokenize my corpus with a custom vocab file. I tried the command : **AlbertTokenizer("my-custom-vocab", do_lower_case=True, keep_accents=False, bos_token='[CLS]', eos_token='[SEP]', unk_token='<unk>', sep_token='[SEP]', pad_token='<pad>', cls_token='[CLS]', mask_token='[MASK]', )** but it gives an error : " **_RuntimeError: Internal: /sentencepiece/src/sentencepiece_processor.cc(73) [model_proto->ParseFromArray(serialized.data(), serialized.size())]"_** . A subset of my Vocab file is attached, what am I doing wrong?
Regards,
[vocab.txt](https://github.com/huggingface/transformers/files/4246094/vocab.txt)
**A link to original question on Stack Overflow**: | 02-24-2020 18:24:18 | 02-24-2020 18:24:18 | Hi, our Albert implementation only handles SentencePiece vocabulary.<|||||>Thank you for your response. I tried generating a sentencepiece vocab model via ..**_spm.SentencePieceTrainer.Train('--input=ALBERT_PEP3V.txt
--model_prefix=albertv1 --vocab_size=10000 --hard_vocab_limit=False')_**....and a model is successfully generated, however when I run your tokenization all sentences get the exact same encoding; for example **_" I am running" will be [ 0, 3, 0, 3]_** and .....**_" the dog jumps" will be the same [0,3, 0, 3]_**. Any idea why this is happening?
Thanks again.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Can we use huggingface for sentence pair classification?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,995 | closed | No optimizer steps when gradient_accumulation_steps smaller than epoch_iterator length | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Doesn't matter.
Language I am using the model on (English, Chinese ...): Doesn't matter.
The problem arises when using:
* [x] the official example scripts
* [ ] my own modified scripts
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: Any of the tasks.
* [ ] my own task or dataset
## To reproduce
Steps to reproduce the behavior:
1. Run `run_glue.py` where number of batches for each epoch is smaller than gradient_accumulation_steps.
In https://github.com/huggingface/transformers/blob/8194df8e0cff8e5866ec2bcbda34e3892f10eb39/examples/run_glue.py#L233
`step` is at most `len(train_dataloader)/batch_size`. If the latter is small, then we would never enter the if condition and never call `optimizer.step()` and so on.
I know it's the user's responsibility to be aware of this, but this can be easily solved by altering the condition to be:
```
if (step + 1) % args.gradient_accumulation_steps == 0 or ((step + 1) < args.gradient_accumulation_steps and (step + 1) == len(epoch_iterator)):
```
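To make the failure mode concrete, here is a tiny simulation (the numbers are arbitrary) showing that with the original condition the optimizer is never stepped:
```python
# Tiny simulation of the original update condition (numbers are arbitrary).
gradient_accumulation_steps = 8
num_batches_per_epoch = 3   # len(epoch_iterator) -- smaller than the accumulation steps

optimizer_steps = 0
for step in range(num_batches_per_epoch):
    if (step + 1) % gradient_accumulation_steps == 0:
        optimizer_steps += 1

print(optimizer_steps)  # 0 -- optimizer.step() is never reached for this epoch
```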
Should I create a small PR? | 02-24-2020 15:32:01 | 02-24-2020 15:32:01 | In this scenario, the user would do a single optimizer step over the whole batch?
Sure, we would welcome a PR! Could you add a comment as well, so that the purpose of this line may be clear for the user?<|||||>After the change we would do a single optimizer step over each *epoch*.
Added a comment and created a PR.
https://github.com/huggingface/transformers/pull/3006
Cheers. |
transformers | 2,994 | closed | Tf qa pipelines test | 02-24-2020 15:12:53 | 02-24-2020 15:12:53 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2994?src=pr&el=h1) Report
> Merging [#2994](https://codecov.io/gh/huggingface/transformers/pull/2994?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8194df8e0cff8e5866ec2bcbda34e3892f10eb39?src=pr&el=desc) will **increase** coverage by `0.06%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2994?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2994 +/- ##
==========================================
+ Coverage 77.16% 77.22% +0.06%
==========================================
Files 98 98
Lines 15999 15999
==========================================
+ Hits 12345 12355 +10
+ Misses 3654 3644 -10
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2994?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.37% <0%> (+0.16%)` | :arrow_up: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `71.64% <0%> (+0.75%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `74.5% <0%> (+5.88%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2994?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2994?src=pr&el=footer). Last update [8194df8...0e7d609](https://codecov.io/gh/huggingface/transformers/pull/2994?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@julien-c the TF_QA pipeline test currently fails with a segmentation error, and no way to debug it. We think it might be hardware related, hence the reduction in concurrency.<|||||>I think you inadvertently re-added `run_all_tests_torch_and_tf` (which should not be here anymore)<|||||>Maybe just mark the flaky test as `@slow`, and we'll see if it works more reliably on our own CI?<|||||>oops, indeed, my bad. Alright, that works as well. |
|
transformers | 2,993 | closed | Too many bugs in Version 2.5.0 | 1. It cannot be installed on MacOS. By running `pip install -U transformers`, I got the following errors:
> Building wheels for collected packages: tokenizers
> Building wheel for tokenizers (PEP 517) ... error
> ERROR: Command errored out with exit status 1:
> command: /anaconda/bin/python /anaconda/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /var/folders/5h/fr2vhgsx4jd8wz4bphzt22_8p1v0bf/T/tmpfh6km7na
> cwd: /private/var/folders/5h/fr2vhgsx4jd8wz4bphzt22_8p1v0bf/T/pip-install-fog09t3h/tokenizers
> Complete output (36 lines):
> running bdist_wheel
> running build
> running build_py
> creating build
> creating build/lib
> creating build/lib/tokenizers
> copying tokenizers/__init__.py -> build/lib/tokenizers
> creating build/lib/tokenizers/models
> copying tokenizers/models/__init__.py -> build/lib/tokenizers/models
> creating build/lib/tokenizers/decoders
> copying tokenizers/decoders/__init__.py -> build/lib/tokenizers/decoders
> creating build/lib/tokenizers/normalizers
> copying tokenizers/normalizers/__init__.py -> build/lib/tokenizers/normalizers
> creating build/lib/tokenizers/pre_tokenizers
> copying tokenizers/pre_tokenizers/__init__.py -> build/lib/tokenizers/pre_tokenizers
> creating build/lib/tokenizers/processors
> copying tokenizers/processors/__init__.py -> build/lib/tokenizers/processors
> creating build/lib/tokenizers/trainers
> copying tokenizers/trainers/__init__.py -> build/lib/tokenizers/trainers
> creating build/lib/tokenizers/implementations
> copying tokenizers/implementations/byte_level_bpe.py -> build/lib/tokenizers/implementations
> copying tokenizers/implementations/sentencepiece_bpe.py -> build/lib/tokenizers/implementations
> copying tokenizers/implementations/base_tokenizer.py -> build/lib/tokenizers/implementations
> copying tokenizers/implementations/__init__.py -> build/lib/tokenizers/implementations
> copying tokenizers/implementations/char_level_bpe.py -> build/lib/tokenizers/implementations
> copying tokenizers/implementations/bert_wordpiece.py -> build/lib/tokenizers/implementations
> copying tokenizers/__init__.pyi -> build/lib/tokenizers
> copying tokenizers/models/__init__.pyi -> build/lib/tokenizers/models
> copying tokenizers/decoders/__init__.pyi -> build/lib/tokenizers/decoders
> copying tokenizers/normalizers/__init__.pyi -> build/lib/tokenizers/normalizers
> copying tokenizers/pre_tokenizers/__init__.pyi -> build/lib/tokenizers/pre_tokenizers
> copying tokenizers/processors/__init__.pyi -> build/lib/tokenizers/processors
> copying tokenizers/trainers/__init__.pyi -> build/lib/tokenizers/trainers
> running build_ext
> running build_rust
> error: Can not find Rust compiler
> ----------------------------------------
> ERROR: Failed building wheel for tokenizers
> Running setup.py clean for tokenizers
> Failed to build tokenizers
> ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly
2. On Linux, it can be installed, but failed with the following code:
> import transformers
> transformers.AutoTokenizer.from_pretrained("bert-base-cased").save_pretrained("./")
> transformers.AutoModel.from_pretrained("bert-base-cased").save_pretrained("./")
> transformers.AutoTokenizer.from_pretrained("./")
> transformers.AutoModel.from_pretrained("./")
Actually, it is the second line that generates the following errors:
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> File "/anaconda/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 587, in save_pretrained
> return vocab_files + (special_tokens_map_file, added_tokens_file)
> TypeError: unsupported operand type(s) for +: 'NoneType' and 'tuple'
3. The vocabulary size of xlm-roberta is wrong, so it fails with the following code (this bug also exists in Version 2.4.1):
> import transformers
> tokenizer = transformers.AutoTokenizer.from_pretrained("xlm-roberta-base")
> tokenizer.convert_ids_to_tokens(range(tokenizer.vocab_size))
The error is actually caused by the wrong vocab size:
> [libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1506] CHECK failed: (index) < (current_size_):
> terminate called after throwing an instance of 'google::protobuf::FatalException'
> what(): CHECK failed: (index) < (current_size_):
> zsh: abort python | 02-24-2020 13:44:38 | 02-24-2020 13:44:38 | Hi! Indeed, there have been a few issues as this was the first release incorporating `tokenizers` by default. A new version of `tokenizers` and `transformers` will be available either today or tomorrow and should fix most of these.<|||||>For future reference, when you say that some code "fails", please also provide the stack trace. This helps greatly when debugging.<|||||>Thanks, stack trace provided...
I just noticed that in Version 2.5.0, `AutoTokenizer.from_pretrained()` takes a new argument `use_fast`, and defaults it to `True`. This seems to be the reason for the error, because when I set it to `False`, the loaded model can be correctly saved by `save_pretrained()`.
I wonder why this `use_fast` argument is added, and why it is default to `True`? <|||||>`use_fast` uses the `tokenizers` library which is a new, extremely fast implementation of different tokenizers. I agree that for the first few releases it might've been better to expose the argument but setting it to False by default as to catch errors only by early adopters. Now many errors are reported that could've otherwise been avoided. In the meantime, you can explicitly set it to False.<|||||>For Tokenizers library:
1, Where is the document about how to install and use it? The Readme is too brief...
2, I understand that it is designed as a combination of various tokenizers. But to use a pre-trained model, is it better to use the original tokenizer to avoid subtle differences like special tokens? If so, the Transformers library should not use the tokenizers from Tokenizers library by default...<|||||>`tokenizers` sits in its own repository. You can find it [here](https://github.com/huggingface/tokenizers) and its [Python](https://github.com/huggingface/tokenizers/tree/master/bindings/python) bindings here.
I think that the fast tokenizers are tested to get the exact same output as the other ones.<|||||>Thanks...
It seems that `tokenizers` has been installed together with `transformers` by `pip install transformers`?
In the future, will the tokenizer classes (e.g. BertTokenizer, AutoTokenizer, etc.) still be kept in the `transformers` library? Or they will be deprecated? <|||||>I cannot answer that, I don't know what the roadmap looks like.<|||||>Install Python 64-bit instead of 32-bit solved my same issue.<|||||>Which issue did you solved?
I think 64-bit Python is almost used by everyone...<|||||>1) This issue should be opened on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) as it is an installation issue with the `huggingface/tokenizers` library.
2) This issue is solved in the current master (and 2.5.1) as well.
3) This is fixed in https://github.com/huggingface/transformers/pull/3198 which will be merged in a bit.<|||||>i still have this prob, is anyone can tell me how to solve it?<|||||>which problem?<|||||>Still seeing the error
```
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1506] CHECK failed: (index) < (current_size_):
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): CHECK failed: (index) < (current_size_):
```
how do I work around this?<|||||>Hi @catyeo18, please provide the code that gets you this error, alongside the different versions of software you're using. Here's the [template](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title=) for bug reports. Thank you. |