repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 3,392 | closed | Add comparison table with older brother in family | 03-23-2020 11:27:53 | 03-23-2020 11:27:53 | ||
transformers | 3,391 | closed | Create card for the model | 03-23-2020 11:18:26 | 03-23-2020 11:18:26 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3391?src=pr&el=h1) Report
> Merging [#3391](https://codecov.io/gh/huggingface/transformers/pull/3391?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cf72479bf11bf7fbc499a518896dfd3cafdd0b21&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #3391 +/- ##
==========================================
+ Coverage 77.55% 77.56% +0.01%
==========================================
Files 100 100
Lines 16970 16970
==========================================
+ Hits 13161 13163 +2
+ Misses 3809 3807 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3391?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.72% <0.00%> (+0.13%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.09% <0.00%> (+0.17%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3391?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3391?src=pr&el=footer). Last update [cf72479...c941e45](https://codecov.io/gh/huggingface/transformers/pull/3391?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 3,390 | closed | adding --fp16 to run_language_modeling and increase batch size but cuda out of memory error | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 03-23-2020 11:00:33 | 03-23-2020 11:00:33 | Hi all
I am using Colab with 1 GPU (Tesla P100-PCIE-16GB).
The code below ran OK:
!python /content/transformers/examples/run_language_modeling.py \
--output_dir=/content/outputs \
--model_type=bert \
--model_name_or_path=bert-base-cased \
--num_train_epochs 1\
--do_train \
--do_eval \
--per_gpu_train_batch_size 152\
--train_data_file=/content/input_data/trn.txt \
--eval_data_file=/content/input_data/val.txt \
--evaluate_during_training \
--learning_rate 1e-4\
--overwrite_output_dir\
--tokenizer_name /content/token/ \
--block_size 64\
--mlm
(batch size 152 was the maximum I was able to run without a CUDA out-of-memory error)
Then I installed Apex with
%%writefile setup.sh
export CUDA_HOME=/usr/local/cuda-10.1
git clone https://github.com/NVIDIA/apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./apex
!sh setup.sh
then I added `--fp16 \` to the command, but I was not able to increase the batch size, even a bit.
Do you know why?
@thomwolf , @VictorSanh , @aaugustin , @BramVanroy , @julien-c , @LysandreJik<|||||>Is it also the case with a GTX 1080? Has anyone tried?
<|||||>And one more thing:
does any function in those scripts concatenate the short lines to each other,
so that each line is not forced to be padded so much? <|||||>Please don't mass-tag people — thanks.<|||||>Solved:
that was because I was using a P100. |
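On the question above about concatenating short lines: a minimal, generic sketch of packing tokenized lines into fixed-size blocks so that little padding is needed (an illustration only, not the script's actual implementation):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

def pack_into_blocks(lines, block_size=64):
    # Concatenate all token ids, then cut them into contiguous blocks of block_size.
    ids = []
    for line in lines:
        ids.extend(tokenizer.encode(line, add_special_tokens=True))
    return [ids[i : i + block_size] for i in range(0, len(ids) - block_size + 1, block_size)]

blocks = pack_into_blocks(["a short line", "another short line", "and one more"], block_size=8)
print(blocks)  # each block is exactly block_size ids, so no padding is required
```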
transformers | 3,389 | closed | 🐛Bugs in run_tf_ner.py | Found a bug in [run_tf_ner.py](https://github.com/huggingface/transformers/blob/master/examples/ner/run_tf_ner.py) at line 170 and 325:
```python
loss_fct = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
```
refer to [tf.keras.losses.SparseCategoricalCrossentropy](https://www.tensorflow.org/api_docs/python/tf/keras/losses/SparseCategoricalCrossentropy)
```python
tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=False, reduction=losses_utils.ReductionV2.AUTO,
name='sparse_categorical_crossentropy'
)
```
> `from_logits`: Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution. Note: Using from_logits=True may be more numerically stable.
So I think `loss_fct` should be initialized with `from_logits=True` if `TFBertForTokenClassification` just returns pure logits rather than softmax output. | 03-23-2020 10:50:16 | 03-23-2020 10:50:16 | Hey @jia-zhuang,
As you can see [here](https://github.com/huggingface/transformers/blob/master/examples/ner/run_tf_ner.py#L525), I updated the classification layer to add a softmax activation, so `from_logits=True` is not necessary.<|||||>@jplu Thanks for your reply! I learned a lot from your code. |
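For clarity, a minimal sketch of the two equivalent pairings discussed above (the shapes and layer sizes are assumptions for illustration, not the script's actual code):
```python
import tensorflow as tf

num_labels = 9  # assumed label count
hidden = tf.keras.layers.Input(shape=(128, 768))  # assumed encoder output shape

# Option A: raw logits paired with from_logits=True (usually more numerically stable).
logits = tf.keras.layers.Dense(num_labels)(hidden)
loss_a = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

# Option B: softmax activation in the layer paired with the default from_logits=False.
probs = tf.keras.layers.Dense(num_labels, activation="softmax")(hidden)
loss_b = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=False, reduction=tf.keras.losses.Reduction.NONE)
```
Either pairing is consistent; the bug is only in mixing raw logits with `from_logits=False`.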
transformers | 3,388 | closed | Lazy text dataset loading for language modelling with PyTorch | #3083
Added a lazy text dataset using linecache to run_language_modeling.py. Slightly refactored collate_fn construction to accommodate the different collate functions needed for a lazy dataset vs. an in-memory dataset. | 03-23-2020 10:40:11 | 03-23-2020 10:40:11 | Can anyone advise on the failed tests? Seems to be failing in parts of the code-base I haven't touched.<|||||>@GCHQResearcher92457 Yes, the failures are unrelated.<|||||>Most recent failures are unrelated.<|||||>Quick question in passing because I am working on something close, did you run some benchmark to see how this behaves speedwise?<|||||>> Quick question in passing because I am working on something close, did you run some benchmark to see how this behaves speedwise?
I've found in practice so far that training iterations are the same speed using this method as the previous methods, i.e. the bottleneck seems to be later on. Just did some crude tests on a file of 100 lines to test only the data loading performance.
Instantiating lazy dataset: 39.1 µs ± 408 ns
Instantiating cached dataset: 17.4 ms ± 41.6 µs
Random access to single item (lazy): 3.34 µs ± 171
Random access to single item (cached): 3.18 µs ± 86.3 ns
Creating a single tokenized batch (lazy): 1.33 ms ± 3.5 µs
Creating a single tokenized batch (cached): 81.5 µs ± 5.15 µs<|||||>I tested out the LazyLineByLineTextDataset and quickly ran out of memory.
It looks like linecache isn't capable of efficiently indexing into large files. My ~6GB training data causes linecache to stall & use up 7+ GB of RAM.
Saw a similar issue [here](https://stackoverflow.com/questions/620367/how-to-jump-to-a-particular-line-in-a-huge-text-file/2727585). Might be better to use a system similar to the second answer in that post where you create a map of line breaks in the file and seek to them.
```
class LineSeekableFile:
    def __init__(self, seekable):
        self.fin = seekable
        self.line_map = list()  # Map from line index -> file position.
        self.line_map.append(0)
        while seekable.readline():
            self.line_map.append(seekable.tell())

    def __getitem__(self, index):
        # NOTE: This assumes that you're not reading the file sequentially.
        # For that, just use 'for line in file'.
        self.fin.seek(self.line_map[index])
        return self.fin.readline()
```<|||||>> I tested out the LazyLineByLineTextDataset and quickly ran out of memory.
>
> It looks like linecache isn't capable of efficiently indexing into large files. My ~6GB training data causes linecache to stall & use up 7+ GB of RAM.
>
> Saw a similar issue [here](https://stackoverflow.com/questions/620367/how-to-jump-to-a-particular-line-in-a-huge-text-file/2727585). Might be better to use a system similar to the second answer in that post where you create a map of line breaks in the file and seek to them.
>
> ```
> class LineSeekableFile:
>     def __init__(self, seekable):
>         self.fin = seekable
>         self.line_map = list()  # Map from line index -> file position.
>         self.line_map.append(0)
>         while seekable.readline():
>             self.line_map.append(seekable.tell())
>
>     def __getitem__(self, index):
>         # NOTE: This assumes that you're not reading the file sequentially.
>         # For that, just use 'for line in file'.
>         self.fin.seek(self.line_map[index])
>         return self.fin.readline()
> ```
Did you run into out-of-memory issues, or did the process simply use a lot of memory? It is likely to be the latter, and that is exactly what linecache is supposed to do: it reads as much of the file into memory as it can (considering the available memory) for quick access, and then does its work.
LineSeekableFile can be an alternative but definitely not a good replacement imo (it'll be slower, and expects a file handle to always be open which you often would not want).<|||||>> Did you run into out-of-memory issues, or did the process simply use a lot of memory? It is likely to be the latter, and that is exactly what line_cache_ is supposed to do: it reads as much of the file into memory as it can for quick access as much as possible (considering the available memory), and then does its work.
>
> LineSeekableFile can be an alternative but definitely not a good replacement imo (it'll be slower, and expects a file handle to always be open which you often would not want).
I ran the code on a GCP VM instance with 13 GB of RAM. My RAM quickly went to 0 and I was kicked out of SSH. I was forced to restart the instance in order to regain access.
From what I'm seeing, it seems like linecache is primarily designed to be used on Python source files, not large text files. From what I can tell, the [source code](https://github.com/python/cpython/blob/10dabbf8d2c1c929f6ac395e19c64b361bd58fdd/Lib/linecache.py#L82) reads all the lines in the file into memory, without any consideration for available memory. <|||||>@ceremonious I tested this locally with a 50+GB file on my 32GB RAM system and it works as expected. Memory usage goes up to around 95% and stays there. Reproducible code:
```python
import linecache
import random
def get_n_lines(fin, size=65536):
    # borrowed from https://stackoverflow.com/a/9631635/1150683
    def blocks(files):
        while True:
            b = files.read(size)
            if not b:
                break
            yield b

    with open(fin, encoding="utf-8") as fhin:
        n_lines = sum(bl.count("\n") for bl in blocks(fhin))

    return n_lines

def main(fin):
    n_lines = get_n_lines(fin)
    while True:
        idx = random.randint(1, n_lines+1)
        line = linecache.getline(fin, idx)
        print(line)

if __name__ == '__main__':
    f = r'path/to/huge/file.txt'
    main(f)
```
I haven't dug into the source code (though I do see a MemoryError check in it), but I have used this for many projects on our own servers and I can tell you that it works (it will utilise as much RAM as it can but won't throw OOM errors). It is good to know that this won't work well with GCP, though! A note should be included in the class's docstring.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3388?src=pr&el=h1) Report
> Merging [#3388](https://codecov.io/gh/huggingface/transformers/pull/3388?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d22894dfd40d5c858e8398e2783545103d191b47&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #3388 +/- ##
=======================================
Coverage 78.26% 78.26%
=======================================
Files 106 106
Lines 17964 17964
=======================================
Hits 14060 14060
Misses 3904 3904
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3388?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3388?src=pr&el=footer). Last update [d22894d...1ead846](https://codecov.io/gh/huggingface/transformers/pull/3388?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>The merge conflicts are a bit of a mess because of datasets and collation being moved outside the main script in `master`. I've opened a new PR where the code used here has been slotted in to the more modular format of this script. Please see PR #4009.<|||||>Any updates on this PR? Lazy loading sounds like an important functionality for massive datasets.<|||||>@misrasaurabh1 This PR is closed. See https://github.com/huggingface/transformers/pull/4009 for the continuation. |
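For context, a generic sketch of a linecache-backed lazy dataset of the kind discussed in this PR (an illustration with assumed names and a recent tokenizer API, not the PR's actual implementation):
```python
import linecache
import torch
from torch.utils.data import Dataset

class LazyLineDataset(Dataset):
    # One training example per line; lines are read lazily via linecache.
    def __init__(self, file_path, tokenizer, block_size=512):
        self.file_path = file_path
        self.tokenizer = tokenizer
        self.block_size = block_size
        with open(file_path, encoding="utf-8") as f:
            self.num_lines = sum(1 for _ in f)

    def __len__(self):
        return self.num_lines

    def __getitem__(self, idx):
        line = linecache.getline(self.file_path, idx + 1).rstrip("\n")
        ids = self.tokenizer.encode(line, truncation=True, max_length=self.block_size)
        return torch.tensor(ids, dtype=torch.long)
```
A padding collate_fn is still needed to batch the variable-length examples.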
transformers | 3,387 | closed | Finetuning of T5 on SQuAD 1.1 including code examples | Hi, I am implementing T5 model on the SQuAD 1.1 dataset.
When I fine-tune the model with the Adam or AdaFactor optimizer, validation accuracy goes down.
But, the training accuracy goes up.
Could you give to me any advice for me?
I feed into the model as below:
input_ids = ['(QUESTION)', Q_W1, Q_W2, ..., '(CONTEXT)', C_W1, C_W2, ..., '(PAD)', ...]
attention_masks = [1, 1, 1, ..., 1, 1, 1, ..., 0, ...]
decoder_input_ids = ['(PAD)', W1, W2, ..., '(PAD)', '(PAD)', ...]
decoder_attention_masks = [1, 1, 1, ..., 0, 0, ...]
lm_labels = [W1, W2, ..., '(EOS)', '(PAD)', ...]
I matched the shape between 'decoder_input_ids' and 'lm_labels'. (No shift is used.)
And, '(PAD)' in 'lm_labels' is converted into -100 in the loss calculation.
For the generation, 'decoder_input_ids' is generated from the decoder except for an initial token '(PAD)'.
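For illustration, a minimal sketch of the -100 masking step described above (hypothetical target text, and assuming a recent tokenizer API rather than the 2.5.1 one used in this thread):
```python
import torch
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")

answer = "Denver Broncos"  # hypothetical target answer
enc = tokenizer(answer, max_length=16, padding="max_length", truncation=True, return_tensors="pt")

lm_labels = enc.input_ids.clone()
# Padding positions must not contribute to the loss.
lm_labels[lm_labels == tokenizer.pad_token_id] = -100
print(lm_labels)
```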
The result from the 'T5-Small' pretrained weights on the dataset is
{
"exact": 71.03122043519394,
"f1": 81.08158598580584,
"total": 10570,
"HasAns_exact": 71.03122043519394,
"HasAns_f1": 81.08158598580584,
"HasAns_total": 10570
} | 03-23-2020 09:48:15 | 03-23-2020 09:48:15 | **Update:**
When I pre-process the tokens without applying lower-casing, the EM from the initial weights was almost 76.85. (I can't remember the exact score.)
In the TensorFlow version, the initial weights output 76.30 EM.
However, their pre-processing applies lower-casing to the question, document, and answer.
I got an EM of 74.02 with lower-casing at the initial step.
And I used the answer text as the target input and output instead of using the answer extracted from a span.
If an answer is in an extracted document, it will be used as an example in the training step.
But the serious problem is that the validation performance decreases after a few training steps.
When I validated my trained model at 100 steps, its score went down to almost 62.XX.
I think the problem is one of: the batch size, wrong pre-processing, or bugs in the T5 model.
For the optimizer, I tested both AdaFactor and Adam, but the results are the same.
There is one thing I didn't understand while implementing it.
The loaded pre-trained weights don't have a weight for the 'lm_head' layer.
I guess that is for a user who wants to use their own vocabulary.
But I think this is a reason the validation accuracy is lower than the TensorFlow version at the initial step. (The lm_head layer would then be randomly initialized.)
When I applied a mask to the outputs and inputs_embeds of the encoder and decoder, the validation accuracy goes up. [Value * (mask == 1).float().unsqueeze(2); the features of (PAD) should be zero.]
But I have to train the T5 model more to prove whether this is correct or not.
And a low learning rate is better than the learning rate from the original paper. (Original paper: 1e-3, but I used 5e-5, as mentioned in the BERT paper.)
Last, in the previous comment I forgot to write something for my inputs.
input_ids = ['(QUESTION)', Q_W1, Q_W2, ..., '(CONTEXT)', C_W1, C_W2, (EOS), '(PAD)', ...]
attention_masks = [1, 1, 1, ..., 1, 1, 1, 1, ..., 0, ...]
decoder_input_ids = ['(PAD)', W1, W2, ..., '(PAD)', '(PAD)', ...]
decoder_attention_masks = [1, 1, 1, ..., 0, 0, ...]
lm_labels = [W1, W2, ..., '(EOS)', '(PAD)', ...]
EOS token should be added in a context.
And, tokens of the input_ids are [question : Q_W1, Q_W2, ..., context : 'C_W1, C_W2, (EOS), (PAD)', ...]
I hope it will be helpful for someone who is implementing it.
And, I will write more about it when I finish to train my model.<|||||>Hi @h19920918,
Thanks for the in-detail report. Could you quickly post your environment information here as well?
You can simply run `python transformers-cli env` in the root folder of your cloned transformers repo and copy paste it below.
And do you use T5 with Tensorflow or PyTorch?
Also it would be great if you could copy paste your code for the above experiment here :-) <|||||>@patrickvonplaten Thank you for your answer.
Unfortunately, I didn't use all codes in yours.
I partly used your code to implement it.
First, my environment is below:
Python == 3.6.4
Pytorch == 1.4.0+cu92
CUDA == 9.2
CuDNN == 6 or 7? (I don't know exactly.)
Transformer == 2.5.1
Actually, I solved the problem.
Paper:
T5-Small: EM: 79.10 || F1: 87.24
Own:
T5-Small: EM: 79.03 || F1: 87.35
I suspected four things:
1. Batch size
The original paper used a batch size of 128 to train the model, but I trained with a smaller batch size due to insufficient resources. In my training process, I used a batch size of 72.
2. Learning rate
I adjusted the learning rate from 1e-3 to 1e-4 with the AdaFactor optimizer.
3. Masking for the 'inputs_embeds', 'encoder_outputs', and 'decoder_outputs'
I masked these three things with [Value * (mask == 1).float().unsqueeze(2)].
4. Loss scale
Originally, the loss is calculated by dividing by the number of tokens.
But I changed this to dividing by the batch size.
Additionally, I changed the pre-processing to use an example if the answer is in the extracted document.
However, this can be a problem, since some of the documents contain the answer but it is not a reasonable answer.
The reason to do this is that some of the spanned answers are a little bit different from the original answer. (e.g. "answer," != "answer")
Also, some of the spanned answers were converted into (UNK) tokens. (I'm not sure this is fixed after changing my pre-processing code.)
I will upload my code on the GitHub as soon as possible.<|||||>Great, happy that you solved it :-)
I think this will be very useful for others. If you could link your uploaded GitHub code to this issue this would be very helpful :-) <|||||>I upload my Github, you can see the code in https://github.com/h19920918/T5_SQuAD.
But it is quite dirty code...
So I will point out which parts to look at.
Most of the implementation came from your code.
1. Mask
https://github.com/h19920918/T5_SQuAD/blob/c75d44544c3f18b87a4d8d09ed320742f9aaab36/models/modeling_t5.py#L556
https://github.com/h19920918/T5_SQuAD/blob/c75d44544c3f18b87a4d8d09ed320742f9aaab36/models/modeling_t5.py#L659
2. Loss
https://github.com/h19920918/T5_SQuAD/blob/c75d44544c3f18b87a4d8d09ed320742f9aaab36/models/modeling_t5.py#L941
https://github.com/h19920918/T5_SQuAD/blob/c75d44544c3f18b87a4d8d09ed320742f9aaab36/models/modeling_t5.py#L943
3. Pre-processing
https://github.com/h19920918/T5_SQuAD/blob/c75d44544c3f18b87a4d8d09ed320742f9aaab36/datasets/squad.py#L101
The links above are my modifications.
With these modifications, my model could be trained.
I'm sorry not to provide clean code, since I'm still working on something in this code.
I hope it will be helpful for someone.
P.S. I still have to do an ablation study to find out which part is the real problem.
@patrickvonplaten I have a question.
Are the T5 checkpoints pre-trained with the TensorFlow version or with yours?
In other words, I want to know whether the checkpoints were converted from somewhere or not.
I forgot to write something.
The results from the initial checkpoint are the same.
However, I don't understand since the 'lm_head' layer should be initialized randomly. (I used different seed for each result.)<|||||>Thanks for linking your code! I think especially the pre-processing code can be very useful for others!
The T5 checkpoints are the official Google checkpoints pre-trained by the T5 team: https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints .
These checkpoints were attained by pretraining on both the unsupervised C4 dataset (using a denoising objective) and the mixed multi-task supervised dataset (see [paper](https://arxiv.org/abs/1910.10683)). The PyTorch weights were retrieved by conversion from these weights but correspond 1-to-1 to the same values as the original TF weights.
Does that make sense? <|||||>Thank you for your detail answer.
Still, I don't understand how the pre-trained weights output the same results with different seeds.
As I understand it, the 'lm_head' layer is used in the inference process for generating tokens.
However, the layer would be initialized randomly if it is not in the pre-trained weights.
My guess is that the pre-trained weights dominate all the features, and therefore the outputs are the same regardless of the 'lm_head' layer.
Is my inference correct?<|||||>The `lm_head` layer corresponds to the "inverse" token embeddings. It is tied to the input embeddings. It should not be randomly initialized when loading weights from the pretrained models.<|||||>Thank you for your answer.
Sorry, it is my mistake. |
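As a quick check of the weight tying mentioned above, a small sketch (assuming a recent transformers version; the class and method names may differ in 2.5.1):
```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")
# With tied word embeddings, the LM head shares its weight tensor with the input
# embeddings, so it is not randomly initialized when loading a pretrained checkpoint.
print(model.get_input_embeddings().weight is model.get_output_embeddings().weight)  # expected: True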
transformers | 3,386 | closed | Model conversion from PyTorch to TF2 doesn't work properly for ALBERT | # 🐛 Bug
## Information
The model conversion script `convert_pytorch_checkpoint_to_tf2.py` seems to be not working properly for ALBERT models.
It fails on the pre-trained models officially released by Google which are converted to PyTorch models with `convert_albert_original_tf_checkpoint_to_pytorch.py`.
## To reproduce
```
$ wget https://storage.googleapis.com/albert_models/albert_base_v2.tar.gz
$ tar xzf albert_base_v2.tar.gz
$ cd albert_base/
$ python -m transformers.convert_albert_original_tf_checkpoint_to_pytorch --tf_checkpoint_path model.ckpt-best --pytorch_dump_path ./pytorch_model.bin --albert_config_file albert_config.json
$ python -m transformers.convert_pytorch_checkpoint_to_tf2 --tf_dump_path ./ --model_type albert --pytorch_checkpoint_path ./pytorch_model.bin --config_file albert_config.json --compare_with_pt_model
...
Max absolute difference between models outputs 17.709423065185547
Traceback (most recent call last):
File "/home/m-suzuki/.pyenv/versions/3.7.4/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/m-suzuki/.pyenv/versions/3.7.4/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/m-suzuki/.pyenv/versions/Python-3.7.4/lib/python3.7/site-packages/transformers/convert_pytorch_checkpoint_to_tf2.py", line 499, in <module>
only_convert_finetuned_models=args.only_convert_finetuned_models,
File "/home/m-suzuki/.pyenv/versions/Python-3.7.4/lib/python3.7/site-packages/transformers/convert_pytorch_checkpoint_to_tf2.py", line 428, in convert_all_pt_checkpoints_to_tf
compare_with_pt_model=compare_with_pt_model,
File "/home/m-suzuki/.pyenv/versions/Python-3.7.4/lib/python3.7/site-packages/transformers/convert_pytorch_checkpoint_to_tf2.py", line 351, in convert_pt_checkpoint_to_tf
assert diff <= 2e-2, "Error, model absolute difference is >2e-2: {}".format(diff)
AssertionError: Error, model absolute difference is >2e-2: 17.709423065185547
```
Same error for ALBERT v1 models.
## Expected behavior
Max absolute difference between models outputs should be <= 2e-2
## Environment info
Observed on both of the following environments:
- `transformers` version: 2.5.1
- Platform: Darwin-19.3.0-x86_64-i386-64bit
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
<!-- -->
- `transformers` version: 2.5.1
- Platform: Linux-4.15.0-58-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 03-23-2020 07:22:37 | 03-23-2020 07:22:37 | Proposed a fix that didn't work out-of-the-box with official ALBERT models. Still looking into it, will keep you posted.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Closing since it seems solved in #4076. Thank you! |
transformers | 3,385 | closed | minor website fix | On this page [https://huggingface.co/transformers/notebooks.html](https://huggingface.co/transformers/notebooks.html)
The first link is fine. The others give 404. Just thought you would like to know | 03-23-2020 01:25:12 | 03-23-2020 01:25:12 | Indeed, those notebooks are not up-to-date and should be deprecated.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,384 | closed | gpt2 - convert examples to features(tensorflow 2) | I'm trying to fine-tune GPT2 to generate shakespeare text.
I have a variable "train_examples", which is a list of InputExamples:
```
>> print(train_examples)
<__main__.InputExample at 0x7f55e0fafd68>,
<__main__.InputExample at 0x7f55e0fafda0>,
<__main__.InputExample at 0x7f55e0fafef0>,
<__main__.InputExample at 0x7f55e0fafeb8>,
<__main__.InputExample at 0x7f55e0f6aeb8>,
```
I created the examples using the following function:
```
def _create_examples(self, lines, set_type):
    """Creates examples for the training and dev sets."""
    examples = []
    for (i, line) in enumerate(lines):
        guid = "%s-%s" % (set_type, i)
        # guid = i
        text_a = line[1]
        examples.append(
            InputExample(guid=guid, text_a=text_a))
    return examples

class InputExample(object):
    def __init__(self, guid, text_a):
        self.guid = guid
        self.text_a = text_a
```
As I understand it, I need to convert the examples to 'features' before I call the fit function. But how can I convert the examples to features? I saw many examples for BERT, but I couldn't find one for GPT-2.
I tried:
```
from transformers import glue_convert_examples_to_features
input_train_tensor_data = glue_convert_examples_to_features(train_examples, gpt2_tokenizer, max_length=128, task='mrpc')
```
But got:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-118-d04221b84b4f> in <module>()
1 from transformers import glue_convert_examples_to_features
2
----> 3 input_train_tensor_data = glue_convert_examples_to_features(train_examples, gpt2_tokenizer, max_length=128, task='mrpc')
/usr/local/lib/python3.6/dist-packages/transformers/data/processors/glue.py in glue_convert_examples_to_features(examples, tokenizer, max_length, task, label_list, output_mode, pad_on_left, pad_token, pad_token_segment_id, mask_padding_with_zero)
120
121 if output_mode == "classification":
--> 122 label = label_map[example.label]
123 elif output_mode == "regression":
124 label = float(example.label)
KeyError: None
``` | 03-23-2020 00:54:18 | 03-23-2020 00:54:18 | Hi @yagelardan ,
In order to fine-tune gpt2 you should be able to use this example [script](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py)
Also, you can refer to this issue https://github.com/huggingface/transformers/issues/1407 where people already trained gpt2 on different languages.
And this issue https://github.com/huggingface/transformers/issues/2008 could help you out :-) |
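As an aside for later readers: `glue_convert_examples_to_features` targets labelled classification tasks, which is why it fails here. For causal LM fine-tuning, the tokenizer alone is enough; a minimal sketch (assuming a recent tokenizer API and reusing the hypothetical `train_examples` list from the post above):
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

texts = [example.text_a for example in train_examples]  # hypothetical, from the InputExample objects above
enc = tokenizer(texts, max_length=128, padding="max_length", truncation=True, return_tensors="tf")

# For causal language modelling, the labels are simply the input ids themselves.
features = {"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]}
labels = enc["input_ids"]
```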
transformers | 3,383 | closed | Clean Encoder-Decoder models with Bart/T5-like API and add generate possibility | Bert-Bert Encoder-Decoder models can now be used as is shown in the test cases:
`tests/test_modeling_encoder_decoder.py`.
Tests include:
- forward `input_ids` and `decoder_input_ids` for Bert-Bert
- backprop using masked language model loss for Bert-Bert
- backprop using "conventional" language model loss for Bert-Bert
- using the `generate()` fn with Bart-Bart
- saving and loading of Encoder-Decoder models.
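(For illustration, in later released versions of the library the resulting usage looks roughly like the sketch below; this is an assumption based on the `generate()` and save/load tests listed above, not the exact test code.)
```python
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")

input_ids = tokenizer("Some input text.", return_tensors="pt").input_ids
generated = model.generate(input_ids, decoder_start_token_id=tokenizer.cls_token_id, max_length=20)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```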
Before merging a couple of things have to be agreed on as mentioned in the comments below.
UPDATE:
This branch is IMO now fully functional for Bert-2-Bert models.
I will finish the PR (clean the code, make a pretty docstring, etc...) once we agreed on the issues I mentioned further down. Would be very happy if you can review @thomwolf @LysandreJik @sshleifer @julien-c @yjernite | 03-22-2020 23:58:59 | 03-22-2020 23:58:59 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3383?src=pr&el=h1) Report
> Merging [#3383](https://codecov.io/gh/huggingface/transformers/pull/3383?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/857ccdb259b7e46c60cf86c58b7ab038c63e4d4e&el=desc) will **increase** coverage by `0.29%`.
> The diff coverage is `88.46%`.
```diff
@@ Coverage Diff @@
## master #3383 +/- ##
==========================================
+ Coverage 78.61% 78.90% +0.29%
==========================================
Files 106 105 -1
Lines 17953 17973 +20
==========================================
+ Hits 14114 14182 +68
+ Misses 3839 3791 -48
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3383?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `89.01% <75.00%> (+0.60%)` | :arrow_up: |
| [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/3383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `84.41% <92.30%> (+63.36%)` | :arrow_up: |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/3383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.98% <100.00%> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.76% <0.00%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3383?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3383?src=pr&el=footer). Last update [857ccdb...83f3d10](https://codecov.io/gh/huggingface/transformers/pull/3383?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>UPDATE:
I really liked the idea of adding an encoder_decoder config (@thomwolf) and having 2 `from_pretrained` fns for both the encoder_decoder config and model:
1. The standard one which is used thanks to inheritence to `PretrainedConfig` and `PretrainedModel`
2. a `from_encoder_decoder_pretrained` fn (@sshleifer)
To understand how to use the encoder decoder class please confer to the added tests.<|||||>Code is cleaned: added type hints, cleaned the docstring and added a encoder-decoder model page.
Just need to resolve the issue with importing Bert's model tester. @sshleifer found a solution. If everybody is fine with it - I'll go for it :-) <|||||>LGTM!<|||||>Ok Good to merge for me! If @sshleifer it's ok for you I will use your PR #4027 for the new test proposition.<|||||>Yes! |
transformers | 3,382 | closed | When I used the add_special_tokens function in the BertTokenizer, it assigns 2 different tokens with the same ID. Is this done on purpose? | 03-22-2020 23:36:43 | 03-22-2020 23:36:43 | We would need a bit more information to understand the issue. A reproducible code example would be even better.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
|
transformers | 3,381 | closed | [BART] test_dummy_inputs fails on GPU | ```
RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select
```
Easy fix, but putting here so I don't forget! | 03-22-2020 16:48:22 | 03-22-2020 16:48:22 | |
transformers | 3,380 | closed | Can't save DistilBert model. | Model:
```
input_layer = tf.keras.layers.Input(shape = (attention_mask.shape[1],), dtype='int64')
bert = TFDistilBertModel.from_pretrained("distilbert-base-cased")(input_layer)
bert = bert[0][:,0,:]
bert = tf.keras.layers.Dense(units=20, activation='relu')(bert)
classifier = tf.keras.layers.Dense(units=train_y.shape[1], activation='softmax')(bert)
model = tf.keras.models.Model(inputs=input_layer, outputs=classifier)
model.summary()
```
After training when I try to save the model using:
```
# serialize model to JSON
model_json = model.to_json()
with open("model.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")
```
I am getting this error:
```
File "train_DistilBERT_model.py", line 138, in <module>
model_json = model.to_json()
File "/home/v-mohit.saini/anaconda3/envs/flask/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py", line 1254, in to_json
model_config = self._updated_config()
File "/home/v-mohit.saini/anaconda3/envs/flask/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py", line 1232, in _updated_config
config = self.get_config()
File "/home/v-mohit.saini/anaconda3/envs/flask/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py", line 918, in get_config
return copy.deepcopy(get_network_config(self))
File "/home/v-mohit.saini/anaconda3/envs/flask/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py", line 1993, in get_network_config
layer_config = serialize_layer_fn(layer)
File "/home/v-mohit.saini/anaconda3/envs/flask/lib/python3.6/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 198, in serialize_keras_object
config = instance.get_config()
File "/home/v-mohit.saini/anaconda3/envs/flask/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py", line 917, in get_config
raise NotImplementedError
NotImplementedError
```
How to fix this? | 03-22-2020 15:42:06 | 03-22-2020 15:42:06 | Could you provide all the information related to your environment as the bug template recommends?
Can you try installing from master? In the latest pypi version, we only handled the `save_pretrained` method, not the `save`/`save_weights` methods. This should have been changed with #3103.<|||||>The changes in #3103 only address serialization of `TF*MainLayer` classes, used within a general Functional/Sequential API Keras model (which was my use case). Looking at [the Network docstring](https://github.com/tensorflow/tensorflow/blob/5c4931bbf69e0f006f210c6382a234e83dd4dc8e/tensorflow/python/keras/engine/network.py#L89-L97) it seems like the `TF*Model` classes, being “subclass models”, need more work in order to support Keras serialization.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
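For anyone hitting the same `NotImplementedError`: a common workaround (a generic Keras sketch, assuming the architecture is simply rebuilt by re-running the same construction code) is to skip `to_json()` and rely on `save_weights`/`load_weights`:
```python
import tensorflow as tf
from transformers import TFDistilBertModel

def build_model(seq_len, num_labels):
    input_layer = tf.keras.layers.Input(shape=(seq_len,), dtype="int64")
    bert_out = TFDistilBertModel.from_pretrained("distilbert-base-cased")(input_layer)[0][:, 0, :]
    hidden = tf.keras.layers.Dense(units=20, activation="relu")(bert_out)
    classifier = tf.keras.layers.Dense(units=num_labels, activation="softmax")(hidden)
    return tf.keras.models.Model(inputs=input_layer, outputs=classifier)

model = build_model(seq_len=128, num_labels=5)
model.save_weights("model.h5")                      # works even when to_json() does not

restored = build_model(seq_len=128, num_labels=5)   # rebuild the architecture in code
restored.load_weights("model.h5")
```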
transformers | 3,379 | closed | Data Processor should not include in the package | # 🚀 Feature request
For a clean, flexible package, I think there is no need to include data processors for specific datasets.
## Motivation
In the past, it was easy to use transformers for research, since the modules were explicit and clear.
When reading an example, it was easy to trace the code and apply it to a new dataset.
However, transformers recently merged some unrelated modules into the package, like the Data Processor, making it hard to modify the preprocessing stage. | 03-22-2020 15:40:36 | 03-22-2020 15:40:36 | Agree and I follow you. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,378 | closed | test_resize_tokens_embeddings does not inspect `get_output_embeddings` | Here is the existing logic: copied below for convenience
https://github.com/huggingface/transformers/blob/bbf26c4e619cf42106163e1e2cd5ff98b936ff93/tests/test_modeling_common.py#L489)
```python
model_embed = model.resize_token_embeddings(config.vocab_size)
cloned_embeddings = model_embed.weight.clone()
# Check that resizing the token embeddings with a larger vocab size increases the model's vocab size
model_embed = model.resize_token_embeddings(model_vocab_size + 10)
self.assertEqual(model.config.vocab_size, model_vocab_size + 10)
self.assertEqual(model_embed.weight.shape[0], cloned_embeddings.shape[0] + 10)
```
Since we never test the return value of `get_output_embeddings` after resizing, the model can avoid setting it to the new vocab size.
BART did this by overwriting `tie_weights` to do nothing (fix proposed in https://github.com/huggingface/transformers/pull/3323)
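A minimal sketch of the missing assertion (an assumption of how it could look, not the actual patch):
```python
# After resizing, the output embeddings should also reflect the new vocab size.
output_embeds = model.get_output_embeddings()
if output_embeds is not None:
    self.assertEqual(output_embeds.weight.shape[0], cloned_embeddings.shape[0] + 10)
```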
| 03-22-2020 14:56:23 | 03-22-2020 14:56:23 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,377 | closed | RobertaTokenizer doesn't have 'batch_encode_plus' | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): RoBERTa
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
Following the tutorial on how to train your own RoBERTa model in this [link](https://huggingface.co/blog/how-to-train)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
Fill mask
## To reproduce
Steps to reproduce the behavior:
1. I pretrained my own tokenizer and roberta model
2. The tokenizer and model are loaded fine and both seems to have the training information
3. However, when I join them together in the pipeline step as in:
`tokenizer = RobertaTokenizer.from_pretrained('./eo_data')`
`rmodel = RobertaForMaskedLM.from_pretrained('./output_dir')`
`fill_mask = pipeline(
"fill-mask",
model=rmodel,
tokenizer= tokenizer
)
`
I get the following error:
`
AttributeError: 'RobertaTokenizer' object has no attribute 'batch_encode_plus'
`
It seems that RobertaTokenizer doesn't have the batch_encode_plus function that BertTokenizer has.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: Ubuntu
- Python version: 3.6.8
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
Thank you
| 03-22-2020 14:15:09 | 03-22-2020 14:15:09 | Hi, this is probably because you're using an old version of `transformers`. Version 2.8 doesn't exist, the latest is 2.5.1 ...<|||||>Sorry, I accidently put 2.8. The version I have is: 2.5.1
I am going to edit the post. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Started getting the same issue today. Is there a known solution? |
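For later readers: in recent transformers versions, batching works on both slow and fast tokenizers by calling the tokenizer directly (a sketch assuming a current API rather than 2.5.1):
```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
batch = tokenizer(
    ["This is the first sentence.", "And a second, longer example sentence."],
    padding=True,
    truncation=True,
    return_tensors="pt",
)
print(batch["input_ids"].shape)
```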
transformers | 3,376 | closed | Added scibert-nli model card | 03-22-2020 13:26:17 | 03-22-2020 13:26:17 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3376?src=pr&el=h1) Report
> Merging [#3376](https://codecov.io/gh/huggingface/transformers/pull/3376?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cf72479bf11bf7fbc499a518896dfd3cafdd0b21&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #3376 +/- ##
=======================================
Coverage 77.55% 77.56%
=======================================
Files 100 100
Lines 16970 16970
=======================================
+ Hits 13161 13162 +1
+ Misses 3809 3808 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3376?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3376/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.09% <0.00%> (+0.17%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3376?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3376?src=pr&el=footer). Last update [cf72479...0c19c77](https://codecov.io/gh/huggingface/transformers/pull/3376?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 3,375 | closed | Add camembert integration tests | Add integration tests for camembert comparing results to original fairseq code. | 03-22-2020 12:39:00 | 03-22-2020 12:39:00 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3375?src=pr&el=h1) Report
> Merging [#3375](https://codecov.io/gh/huggingface/transformers/pull/3375?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8becb732931bbab5dd75cca5f5e7c75b2516d10b&el=desc) will **increase** coverage by `0.06%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #3375 +/- ##
==========================================
+ Coverage 77.64% 77.71% +0.06%
==========================================
Files 100 100
Lines 16979 16979
==========================================
+ Hits 13184 13195 +11
+ Misses 3795 3784 -11
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3375?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3375/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.50% <0.00%> (+0.13%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3375/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.52% <0.00%> (+1.73%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3375?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3375?src=pr&el=footer). Last update [8becb73...74e09c3](https://codecov.io/gh/huggingface/transformers/pull/3375?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,374 | closed | closed | closed | 03-22-2020 04:27:40 | 03-22-2020 04:27:40 | |
transformers | 3,373 | closed | Add example code for CRF heads |
# Add example `crf/` with heads that force structural output dependencies.
(Mostly a note to myself as a side project)
## Model description
As requested in: https://github.com/huggingface/transformers/pull/3009 there are some tasks and languages where it is useful to have final layer structural dependencies.
Using https://github.com/harvardnlp/pytorch-struct/ we can add these with minimal changes to the code and no new model parameters.
Target:
* example code for ner / parsing (sota).
| 03-22-2020 04:23:53 | 03-22-2020 04:23:53 | Also https://github.com/huggingface/transformers/pull/2249/<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,372 | closed | BERT pretrained checkpoints | # ❓ Questions & Help
Did you pretrain the BERT cased checkpoints with huggingface or convert it from google's checkpoints? | 03-22-2020 04:12:28 | 03-22-2020 04:12:28 | We converted them from Google's checkpoints. |
transformers | 3,371 | closed | [Bart/Memory] Two separate, smaller decoder attention masks | ### Background
The bart decoder requires two masks: one to ignore padding tokens, the other (`causal_mask`), to avoid attending to future tokens during training.
Previously, `_prepare_bart_decoder_inputs` combined these two masks into one float_mask of shape `(bsz, 1, tgt_len, tgt_len)` filled with -inf for tokens that should be ignored. This mask was subsequently added to the attention activations.
Now, we return the two masks separately:
`decoder_padding_mask`: shape `(bs, tgt_len)`, `bool`
`causal_mask`: shape `(tgt_len, tgt_len)`, `float`
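For illustration, a minimal sketch of how two such masks can be built (an assumption-level example, not the PR's exact code):
```python
import torch

bs, tgt_len, pad_token_id = 2, 5, 1
decoder_input_ids = torch.tensor([[0, 4, 7, 1, 1], [0, 5, 6, 8, 2]])

decoder_padding_mask = decoder_input_ids.eq(pad_token_id)            # (bs, tgt_len), bool
causal_mask = torch.triu(                                            # (tgt_len, tgt_len), float
    torch.full((tgt_len, tgt_len), float("-inf")), diagonal=1)
```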
### Impact
saves 800 MB for bs=6, tgt_len=1024, with negligible speed impact.
### Notes
- The distinct data types (bool and float) are used to minimize code change. | 03-22-2020 02:47:04 | 03-22-2020 02:47:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3371?src=pr&el=h1) Report
> Merging [#3371](https://codecov.io/gh/huggingface/transformers/pull/3371?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cf72479bf11bf7fbc499a518896dfd3cafdd0b21&el=desc) will **decrease** coverage by `0.02%`.
> The diff coverage is `93.75%`.
```diff
@@ Coverage Diff @@
## master #3371 +/- ##
==========================================
- Coverage 77.55% 77.52% -0.03%
==========================================
Files 100 100
Lines 16970 16957 -13
==========================================
- Hits 13161 13146 -15
- Misses 3809 3811 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3371?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.59% <93.75%> (-0.50%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3371?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3371?src=pr&el=footer). Last update [cf72479...8db65c1](https://codecov.io/gh/huggingface/transformers/pull/3371?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,370 | closed | [Seq2Seq Generation] Call encoder before expanding input_ids | Proposing to call model.encoder before expanding `input_ids` to `effective_batch_size*num_beams`.
For Bart, this saves 1.5 GB of GPU mem on batch_size=6. Savings probably similar for T5 (untested).
Requires knowing which index of the encoder_outputs is associated with the batch dim (we need to expand this dimension), which is different between `Bart` and `T5`. This difference is encoded in the `self.encoder_outputs_batch_idx` variable.
This PR is WIP because `encoder_outputs_batch_idx` could be avoided if we transposed Bart's encoder_outputs, which I haven't tried.
| 03-21-2020 20:45:10 | 03-21-2020 20:45:10 | Like the change a lot!
One question I asked myself: with this change, the `encoder_outputs` that are identical all point to the same memory address -> could that lead to problems? Probably not, because the `encoder_outputs` are never changed, right?
I'd just propose some renaming. |
transformers | 3,369 | closed | [Bart/Memory] SelfAttention only returns weights if config.output_attentions | **Previously**, `SelfAttention` would always return `attn_weights`, and then `BartDecoder` and `BartEncoder` would decide whether to return them to the user.
The `attn_weights` tensor is fairly large, with shape = `(bs, num_heads, tgt_len, src_len)`
This meant that the memory allocated for `attn_weights` could not be freed until after the forward pass of `BartDecoder`.
Now: `SelfAttention` returns (output, None) if `config.output_attentions=False` and the memory can be freed
Impact: memory can be freed after SelfAttention returns. -600MB peak GPU consumption for batch_size=6, tgt_len=src_len=1024, num_heads=16
Speed impact: negligible | 03-21-2020 19:53:38 | 03-21-2020 19:53:38 | |
transformers | 3,368 | closed | Why does huggingface bert pooler hack make mixed precission training stable? | # ❓ Questions & Help
## Details
Huggingface's BERT implementation has a hack to remove the pooler from the optimizer.
https://github.com/huggingface/transformers/blob/b832d5bb8a6dfc5965015b828e577677eace601e/examples/run_squad.py#L927
```
# hack to remove pooler, which is not used
# thus it produce None grad that break apex
param_optimizer = [n for n in param_optimizer if 'pooler' not in n[0]]
```
We are trying to run pretraining on huggingface BERT models. The code always diverges later during the training if this pooler hack is not applied. Every time, the reason is that the apex loss scaler becomes zero.
After using the above hack, there is no divergence issue.
The pooler layer is a FFN with tanh activation
```
class BertPooler(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.activation = nn.Tanh()

    def forward(self, hidden_states):
        # We "pool" the model by simply taking the hidden state corresponding
        # to the first token.
        first_token_tensor = hidden_states[:, 0]
        pooled_output = self.dense(first_token_tensor)
        pooled_output = self.activation(pooled_output)
        return pooled_output
```
I even tried replacing the tanh activation with GELU and adding layer norm in the pooler layer, but the loss scaler became zero even faster.
My question is: why does this pooler hack make mixed-precision training numerically stable?
**https://stackoverflow.com/questions/60743907/why-does-huggingface-bert-pooler-hack-make-mixed-precission-training-stable**: | 03-21-2020 19:33:57 | 03-21-2020 19:33:57 | Any update on this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,367 | closed | [Generate] Add bad words list argument to the generate function | The `bad_words_ids` argument allows to insert a list of lists of `input_ids` that cannot be generated, *e.g.* bad words.
That's a proposed feature request (I think there were actually multiple ones):
#3061
Also adds tests for all language models to verify behavior. | 03-21-2020 14:25:13 | 03-21-2020 14:25:13 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3367?src=pr&el=h1) Report
> Merging [#3367](https://codecov.io/gh/huggingface/transformers/pull/3367?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ae6834e028ecdf7fdbe886c1f86d0e02d5fef6f0&el=desc) will **increase** coverage by `0.06%`.
> The diff coverage is `91.30%`.
```diff
@@ Coverage Diff @@
## master #3367 +/- ##
==========================================
+ Coverage 77.80% 77.87% +0.06%
==========================================
Files 100 100
Lines 17064 17127 +63
==========================================
+ Hits 13277 13338 +61
- Misses 3787 3789 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3367?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3367/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.98% <87.50%> (+0.17%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3367/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.63% <94.44%> (+0.47%)` | :arrow_up: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3367/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.99% <100.00%> (+0.02%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3367?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3367?src=pr&el=footer). Last update [ae6834e...19d6acd](https://codecov.io/gh/huggingface/transformers/pull/3367?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Sadly, the TF test seems flaky, see: https://github.com/huggingface/transformers/commit/b38d552a92a0a201c005afae0e1b861ae6de9ce0
Might need to revert the commit. |
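For reference, a minimal usage sketch of the new argument (illustrative words and settings, not taken from the PR's tests):
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("The weather today is", return_tensors="pt")
# Each entry is a sequence of token ids that must never appear in the generated text.
bad_words_ids = [tokenizer.encode(" awful"), tokenizer.encode(" terrible")]

output = model.generate(input_ids, max_length=20, bad_words_ids=bad_words_ids)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```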
transformers | 3,366 | closed | GPT2TokenizerFast does not preserve special tokens' ids after a save and load. | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT2 Fast Tokenizer
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
This problem only happens with `GPT2TokenizerFast` and not `GPT2Tokenizer`
## To reproduce
Steps to reproduce the behavior:
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```python
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
print('special tokens: ', tokenizer.additional_special_tokens, tokenizer.additional_special_tokens_ids)
tokenizer.add_special_tokens({'additional_special_tokens': ['<special_token>'], 'pad_token': '<pad>'})
print('special tokens: ', tokenizer.additional_special_tokens, tokenizer.additional_special_tokens_ids)
print(tokenizer.pad_token, tokenizer.convert_tokens_to_ids(tokenizer.pad_token))
tokenizer.save_pretrained('./save_dir/')
tokenizer = GPT2TokenizerFast.from_pretrained('./save_dir/')
print('special tokens: ', tokenizer.additional_special_tokens, tokenizer.additional_special_tokens_ids)
print(tokenizer.pad_token, tokenizer.convert_tokens_to_ids(tokenizer.pad_token))
```
It outputs
```
special tokens: [] []
special tokens: ['<special_token>'] [50257]
<pad> 50258
special tokens: ['<special_token>'] [50258]
<pad> 50257
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
```
special tokens: [] []
special tokens: ['<special_token>'] [50257]
<pad> 50258
special tokens: ['<special_token>'] [50257]
<pad> 50258
```
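A compact round-trip check, distilled from the reproduction snippet above (a sketch, not part of the original report), makes the expected invariant explicit:

```python
# sketch: special-token ids should survive save_pretrained / from_pretrained
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
tok.add_special_tokens({"additional_special_tokens": ["<special_token>"], "pad_token": "<pad>"})
before = {t: tok.convert_tokens_to_ids(t) for t in ["<special_token>", "<pad>"]}

tok.save_pretrained("./save_dir/")
reloaded = GPT2TokenizerFast.from_pretrained("./save_dir/")
after = {t: reloaded.convert_tokens_to_ids(t) for t in ["<special_token>", "<pad>"]}

# per this report the assertion fails for GPT2TokenizerFast but passes for GPT2Tokenizer
assert before == after, f"ids changed on reload: {before} != {after}"
```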
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: Linux-5.4.19-100.fc30.x86_64-x86_64-with-fedora-30-Thirty
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
| 03-21-2020 00:00:57 | 03-21-2020 00:00:57 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,365 | closed | fixes lr_scheduler warning | For more details, see https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate | 03-20-2020 21:41:43 | 03-20-2020 21:41:43 | Not sure why we missed this one. Thanks! |
transformers | 3,364 | closed | Generate all possible sentences using a fine-tuned GPT-2 model | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Is there a way to generate all possible sentences using a fine-tuned GPT-2 model given a certain sampling technique? For some reason I want to exhaust all possible combinations of tokens given a fine-tuned GPT-2 model with a certain sampling technique. Is it doable? If it is not, how do we get an estimate of how many possible sentences there are in the latent space?
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 03-20-2020 21:33:29 | 03-20-2020 21:33:29 | What do you mean exactly by all possible sentences?
The space of possible sentences that could be generated grows exponentially with the length of the sentences. Having **V** words in your vocabulary, there are **V^N** possible sentences of length **N** that could be sampled. If **V ~ 50,000** and **N = 10**, you are already at > 10^40 possibilities, which is intractable.<|||||>Thanks for the reply. By all possible sentences I meant all possible sentences under a certain sampling technique. For example, if I want all possible sentences with top-k=1, there would be just one in the space. I can control the desired number of possible sentences by choosing a sampling technique at a certain level of strictness. However, I don't want to do sampling; I want to find all of them by some BFS or DFS. What I plan to do is find a sampling technique so that there are, say, 1 million unique sentences in the space, then find all of them. The provided util in the package only does sampling, which could generate duplicate sentences. Does that make sense? |
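A rough sketch of the BFS/DFS idea described above, under strong assumptions: a very small top-k and a hard length cap so the enumeration stays finite, and the stock `gpt2` checkpoint rather than a fine-tuned one.

```python
# DFS over every sequence reachable when each step may only pick one of the top-k tokens
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def enumerate_topk(prefix_ids, k=2, max_len=8, results=None):
    if results is None:
        results = []
    if len(prefix_ids) >= max_len:
        results.append(tokenizer.decode(prefix_ids))
        return results
    with torch.no_grad():
        logits = model(torch.tensor([prefix_ids]))[0]      # (1, seq_len, vocab_size)
    allowed = torch.topk(logits[0, -1], k).indices.tolist()  # the k tokens top-k sampling could pick
    for token_id in allowed:                                 # depth-first over each allowed continuation
        enumerate_topk(prefix_ids + [token_id], k, max_len, results)
    return results

sentences = enumerate_topk(tokenizer.encode("The meaning of life"), k=2, max_len=8)
```

With `k` allowed tokens per step the number of continuations still grows as `k^N`, so this only stays tractable for tiny `k` and short `max_len`, consistent with the V^N point made earlier.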
transformers | 3,363 | closed | Added total_save_limit feature similar to run_language_modeling.py | Added args.total_save_limit in order to keep only the most recent checkpoints, similar to the feature in run_language_modeling.py. This might be helpful for a student like me who has a limited storage quota on the school's remote server. | 03-20-2020 20:23:43 | 03-20-2020 20:23:43 | I feel it's not necessary |
transformers | 3,362 | closed | New model, new model cards | Trained another squad model! Added details in card. | 03-20-2020 16:57:46 | 03-20-2020 16:57:46 | |
transformers | 3,361 | closed | TF Camembert not improving over epochs | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
`jplu/tf-camembert-base`
Language I am using the model on (English, Chinese ...):
French
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Get a custom multi-class dataset with imbalanced data
2. Train TFCamembertForSequenceClassification on this dataset
3. Try with and without `class_weight` or under-sample biggest classes (accuracy and loss change but still don't improve over epochs)
```python
import tensorflow as tf
from transformers import TFCamembertForSequenceClassification, CamembertTokenizer
model = TFCamembertForSequenceClassification.from_pretrained("jplu/tf-camembert-base", num_labels=len(labels))
tokenizer = CamembertTokenizer.from_pretrained("jplu/tf-camembert-base")
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0),
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=["accuracy"],
)
model.fit(
custom_generator(), # generator yield encoded sample (by tokenizer) and encoded label (by OneHotEncoder)
epochs=10,
max_queue_size=2,
steps_per_epoch=25,
#class_weight=class_weights,
validation_data=custom_generator(),
validation_steps=4
)
```
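For reference, one way the commented-out `class_weights` above could have been built (an assumption on my side; the report does not show it) is scikit-learn's balanced weighting over the integer class ids, before one-hot encoding:

```python
# sketch: derive a Keras-style class_weight dict from (assumed) integer labels
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y_train = np.array(train_labels_int)  # hypothetical integer class ids, before the OneHotEncoder
classes_ = np.unique(y_train)
weights = compute_class_weight(class_weight="balanced", classes=classes_, y=y_train)
class_weights = {int(c): float(w) for c, w in zip(classes_, weights)}
```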
## Expected behavior
The classifier should improve with each epoch. In this case it stays at the same accuracy and loss; it just varies by roughly 5% accuracy.
To compare, I tried to run the same code but with `TFFlaubertForSequenceClassification.from_pretrained("jplu/tf-flaubert-base-cased")` and it worked as expected.
## Environment info
- `transformers` version: 2.5.1
- Platform: Linux-4.9.0-12-amd64-x86_64-with-debian-9.12 (Google AI Platform)
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
For information, I already posted this problem on [Stack Overflow](https://stackoverflow.com/questions/60761761/hugging-face-transformer-classifier-fail-on-imbalance-dataset) which lead me here. | 03-20-2020 15:40:19 | 03-20-2020 15:40:19 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@bourrel
Why do you use `jplu/tf-flaubert-base-cased`? (https://huggingface.co/jplu/tf-flaubert-base-cased)
Any particular reason not to use `flaubert/flaubert_base_cased`? (https://huggingface.co/flaubert/flaubert_base_cased)<|||||>It was 2 years ago, I don't remember sorry 😅 |
transformers | 3,360 | closed | RuntimeError: CUDA out of memory. Tried to allocate 786.00 MiB (GPU 0; 14.73 GiB total capacity; 13.33 GiB already allocated; 575.88 MiB free; 13.38 GiB reserved in total by PyTorch) | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT2
Language I am using the model on (English, Chinese ...): eng
**The tasks I am working on is:**
!python /content/transformers/examples/run_language_modeling.py --train_data_file=shakespeare.txt --model_type=gpt2 --model_name_or_path=gpt2 --output_dir=output --do_train
## To reproduce
Steps to reproduce the behavior:
import os
import requests
file_name = "shakespeare.txt"
if not os.path.isfile(file_name):
    url = "https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt"
    data = requests.get(url)
    with open(file_name, 'w') as f:
        f.write(data.text)
!python /content/transformers/examples/run_language_modeling.py --train_data_file=shakespeare.txt --model_type=gpt2 --model_name_or_path=gpt2 --output_dir=output --do_train
03/20/2020 13:36:52 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False
03/20/2020 13:36:53 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json from cache at /root/.cache/torch/transformers/4be02c5697d91738003fb1685c9872f284166aa32e061576bbe6aaeb95649fcf.699bbd1c449e9861456f359d6daa51bd523ac085b4b531ab0aad5a55d091e942
03/20/2020 13:36:53 - INFO - transformers.configuration_utils - Model config GPT2Config {
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"bos_token_id": null,
"do_sample": false,
"embd_pdrop": 0.1,
"eos_token_ids": null,
"finetuning_task": null,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_epsilon": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_layer": 12,
"n_positions": 1024,
"num_beams": 1,
"num_labels": 2,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": null,
"pruned_heads": {},
"repetition_penalty": 1.0,
"resid_pdrop": 0.1,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"use_bfloat16": false,
"vocab_size": 50257
}
03/20/2020 13:36:54 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json from cache at /root/.cache/torch/transformers/f2808208f9bec2320371a9f5f891c184ae0b674ef866b79c58177067d15732dd.1512018be4ba4e8726e41b9145129dc30651ea4fec86aa61f4b9f40bf94eac71
03/20/2020 13:36:54 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt from cache at /root/.cache/torch/transformers/d629f792e430b3c76a1291bb2766b0a047e36fae0588f9dbc1ae51decdff691b.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda
03/20/2020 13:36:54 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-pytorch_model.bin from cache at /root/.cache/torch/transformers/4295d67f022061768f4adc386234dbdb781c814c39662dd1662221c309962c55.778cf36f5c4e5d94c8cd9cefcf2a580c8643570eb327f0d4a1f007fab2acbdf1
03/20/2020 13:37:02 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=1024, cache_dir=None, config_name=None, device=device(type='cuda'), do_eval=False, do_train=True, eval_all_checkpoints=False, eval_data_file=None, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=5e-05, line_by_line=False, local_rank=-1, logging_steps=500, max_grad_norm=1.0, max_steps=-1, mlm=False, mlm_probability=0.15, model_name_or_path='gpt2', model_type='gpt2', n_gpu=1, no_cuda=False, num_train_epochs=1.0, output_dir='output', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=4, per_gpu_train_batch_size=4, save_steps=500, save_total_limit=None, seed=42, server_ip='', server_port='', should_continue=False, tokenizer_name=None, train_data_file='shakespeare.txt', warmup_steps=0, weight_decay=0.0)
03/20/2020 13:37:02 - INFO - __main__ - Loading features from cached file gpt2_cached_lm_1024_shakespeare.txt
03/20/2020 13:37:02 - INFO - __main__ - ***** Running training *****
03/20/2020 13:37:02 - INFO - __main__ - Num examples = 330
03/20/2020 13:37:02 - INFO - __main__ - Num Epochs = 1
03/20/2020 13:37:02 - INFO - __main__ - Instantaneous batch size per GPU = 4
03/20/2020 13:37:02 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 4
03/20/2020 13:37:02 - INFO - __main__ - Gradient Accumulation steps = 1
03/20/2020 13:37:02 - INFO - __main__ - Total optimization steps = 83
Epoch: 0% 0/1 [00:00<?, ?it/s]
Iteration: 0% 0/83 [00:00<?, ?it/s]Traceback (most recent call last):
File "/content/transformers/examples/run_language_modeling.py", line 799, in <module>
main()
File "/content/transformers/examples/run_language_modeling.py", line 749, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "/content/transformers/examples/run_language_modeling.py", line 353, in train
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 602, in forward
shift_logits = lm_logits[..., :-1, :].contiguous()
RuntimeError: CUDA out of memory. Tried to allocate 786.00 MiB (GPU 0; 14.73 GiB total capacity; 13.33 GiB already allocated; 575.88 MiB free; 13.38 GiB reserved in total by PyTorch)
Epoch: 0% 0/1 [00:00<?, ?it/s]
Iteration: 0% 0/83 [00:00<?, ?it/s]
| 03-20-2020 13:40:24 | 03-20-2020 13:40:24 | @mariamabarham didn't you have a similar issue? <|||||>Yes I encountered the same issue. I solved it by adding --fp16(need to install apex first). You can also reduce the block_size to 512. Both worked out for me.<|||||>You should probably set the `per_gpu_train_batch_size` to 1. That is the default behavior for `gpt-2-simple` to prevent OOM. (I am not a fan of the default batch_size of 4 in `run_language_modeling.py`)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>You can also try using gradient accumulation steps.
Basically, if you want a batch size of 32 but your GPU can only fit 16, you run two forward/backward passes of 16 samples each, accumulate the gradients, and only take the optimizer step after those 2 micro-batches.
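For readers who want to see that spelled out, here is a generic PyTorch sketch of gradient accumulation (not the script's exact code; in `run_language_modeling.py` itself the `--gradient_accumulation_steps` flag visible in the args dump above does this for you; the dummy data below is only for illustration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
data = TensorDataset(torch.randint(0, 50257, (64, 128)))   # dummy token ids just for illustration
loader = DataLoader(data, batch_size=16)

accumulation_steps = 2                        # 2 micro-batches of 16 -> effective batch of 32
model.train()
optimizer.zero_grad()
for step, (input_ids,) in enumerate(loader):
    loss = model(input_ids, labels=input_ids)[0]
    (loss / accumulation_steps).backward()    # scale so gradients average over the effective batch
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                      # one optimizer update per effective batch
        optimizer.zero_grad()
```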
|
transformers | 3,359 | closed | Some community models are broken and can't be downloaded | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Community Models
Language I am using the model on (English, Chinese ...): Multiple different ones
Quite a few community models can't be loaded. The stats are here:
## Stats
1. **68** can't load either their config (n)or their tokenizer:
- a) **34** models can't even load their config file. The reasons for this are either:
- i. **11/34**: Model identifier is wrong, e.g. `albert-large` does not exist anymore, it seems like it was renamed to `albert-large-v1`. These models are listed under a different name online than the one saved on AWS.
- ii. **23/34**: There is an unrecognized `model_type` in the config.json, `e.g.`
> "Error: Message: Unrecognized model in hfl/rbtl3. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: t5, distilbert, albert, camembert, xlm-roberta, bart, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl
> "
- b) **33** models can load their config, but cannot load their tokenizers. The error message is almost always the same:
> TOK ERROR: clue/roberta_chinese_base tokenizer can not be loaded
> Message: Model name 'clue/roberta_chinese_base' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'clue/roberta_chinese_base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.
- i. Here: the model has neither of:
- `vocab_file`
- `added_tokens_file`
- `special_tokens_map_file`
- `tokenizer_config_file`
2. **79** currently have wrong `pad_token_id`, `eos_token_id`, `bos_token_id` in their configs. IMPORTANT: The reason for this is that we used to have the wrong defaults saved in `PretrainedConfig()` - see e.g. [here](https://github.com/huggingface/transformers/pull/2885/commits/77d958ac7f0b008df17656e3652246f602aef095)
the default value for **any** model for `pad_token_id` was 0. People trained a model with the lib, saved it and the resulting config.json now had a `pad_token_id = 0` saved. This was then uploaded. But it's wrong and should be corrected.
3. For **162** models everything is fine!
Here the full analysis log [here](https://github.com/patrickvonplaten/files_to_link_to/blob/master/results.txt)
Here the code that created this log (simple comparison of loaded tokenizer and config with default config): [here](https://github.com/patrickvonplaten/files_to_link_to/blob/master/test_all_community_models.py)
### HOW-TO-FIX-STEPS (in the following order):
- [x] Fix 1 a) i. first: All models that have a wrong model identifier path should get the correct one. Need to update some model identifier paths on `https://huggingface.co/models` like changing `bertabs-finetuned-xsum-extractive-abstractive-summarization` to `remi/bertabs-finetuned-xsum-extractive-abstractive-summarization`. Some of those errors are very weird, see #3358
- [ ] Fix 1 a) ii. should be quite easy: add the correct `model_type` to the config.json (see the sketch after this list)
- [ ] Fix 1 b) Not sure how to fix the lacking tokenizer files most efficiently @julien-c
- [x] Fix 2) Create automated script that:
    - 1. If `tokenizer.pad_token_id != default_config.pad_token_id` -> set `config.pad_token_id = tokenizer.pad_token_id`, else remove `pad_token_id`.
- 2. Removes all `eos_token_ids` -> they don't exist anymore
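A rough sketch of what such a clean-up script could look like, combining the `model_type` fix from 1 a) ii. with the two steps of Fix 2 (the file handling and the `model_type` value are assumptions that would need to be set per model):

```python
# sketch: patch a community model's config.json as described in the checklist above
import json

def patch_config(config_path, tokenizer, default_config, model_type=None):
    with open(config_path) as f:
        config = json.load(f)
    if model_type is not None and "model_type" not in config:
        config["model_type"] = model_type          # e.g. "bert" for BERT-like checkpoints
    if tokenizer.pad_token_id is not None and tokenizer.pad_token_id != default_config.pad_token_id:
        config["pad_token_id"] = tokenizer.pad_token_id
    else:
        config.pop("pad_token_id", None)           # drop the stale default of 0
    config.pop("eos_token_ids", None)              # removed from the library, see step 2
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)
```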
| 03-20-2020 13:11:55 | 03-20-2020 13:11:55 | - Item `1) a) i.` is fixed (list of model ids below for reference)
- For models which don't have a tokenizer, or an auto-detected model type, we'll add a notice on their model page (and remove the code sample which is misleading as it lists AutoModel and AutoConfig)
```
albert-base
albert-large
albert-xlarge
albert-xxlarge
bert-base-multilingual-cased-finetuned-conll03-dutch
bert-base-multilingual-cased-finetuned-conll03-spanish
mlm-100-1280
mlm-17-1280
bertabs-finetuned-cnndm-extractive-abstractive-summarization
bertabs-finetuned-extractive-abstractive-summarization
bertabs-finetuned-xsum-extractive-abstractive-summarization
```<|||||>## UPDATE:
### Stats
1. **61** can't load either their config (n)or their tokenizer:
- a) **23** models can't load their config file. The reason is as follows: there is an unrecognized `model_type` in the config.json, e.g.
> "Error: Message: Unrecognized model in hfl/rbtl3. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: t5, distilbert, albert, camembert, xlm-roberta, bart, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl
> "
- b) **38** models can load their config, but cannot load their tokenizers. The error message is always the same:
> TOK ERROR: clue/roberta_chinese_base tokenizer can not be loaded
> Message: Model name 'clue/roberta_chinese_base' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'clue/roberta_chinese_base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.
- Here: the model has neither of:
- `vocab_file`
- `added_tokens_file`
- `special_tokens_map_file`
- `tokenizer_config_file`
2. For **254** models everything is fine!
Here the full analysis log [here](https://github.com/patrickvonplaten/files_to_link_to/blob/master/results.txt)
Here the code that created this log (simple comparison of loaded tokenizer and config with default config): [here](https://github.com/patrickvonplaten/files_to_link_to/blob/master/test_all_community_models.py)
## NEXT STEPS
1a) and 1b) cannot really be fixed by us because for 1a) we don't know which `model_type` is used and for 1b) if the tokenizer does not work or does not exist it should be fixed or uploaded by the author. These **61** models can probably still be used if the correct model class is used instead of `AutoModel.from_pretrained(...)`
We could contact the authors or add a `warning` sign to the model page. <|||||>the problem of denpa92/bert-base-cantonese is not solved.
<|||||>hey @liuchenbaidu , I'd recommend contacting the author of the model in this case.<|||||>When I use ernie model pretained by BaiDu, I had the same problem.
My solution is to add "model_type":"bert" to the configuration file, It worked, but I don't know if it's reasonable.<|||||>> When I use ernie model pretained by BaiDu, I had the same problem.
> My solution is to add "model_type":"bert" to the configuration file, It worked, but I don't know if it's reasonable.
Hi, @XiangQinYu. I'm a bit of a newbie with Huggingface. Can you say more about how you did this? I guess you mean adding "model_type":"bert" to a file like [this](https://huggingface.co/adamlin/ClinicalBert_all_notes/blob/main/config.json). But how did you edit the file? Did you download the whole model repository, and edit and run it locally?
EDIT: Nevermind, figured it out with help of a commenter on [a question I asked on SO](https://stackoverflow.com/questions/68682786/is-it-possible-to-use-the-allennlp-semantic-role-labeler-with-bert-large-instead?noredirect=1#comment121759210_68682786). |
transformers | 3,358 | closed | Downloading mlm-17-1280 community model | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): mlm-17-1280
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```
from transformers import AutoConfig
conf = AutoConfig.from_pretrained('mlm-17-1280')
```
## Expected behavior
The config should be loaded correctly.
All files exist and seem to be correct. There seems to be a problem with the etag.
When debugging, the call jumps into this statement:
https://github.com/huggingface/transformers/blob/8becb732931bbab5dd75cca5f5e7c75b2516d10b/src/transformers/file_utils.py#L449
and never manages to store the `config.json` file. Not sure what's going on here.
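One debugging step that might be worth trying here (my suggestion, not something the report ran): bypass the local cache so a stale etag entry can be ruled out.

```python
from transformers import AutoConfig

# force_download re-fetches the file instead of trusting the cached copy / etag
conf = AutoConfig.from_pretrained("mlm-17-1280", force_download=True)
```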
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: Linux-5.3.0-40-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0+cpu (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 03-20-2020 12:46:40 | 03-20-2020 12:46:40 | @julien-c <|||||>This was a bogus model file, rm'ed it. |
transformers | 3,357 | closed | License information by model | Hi,
First of all, thanks for the good work. Very useful.
Would it be possible to add the license information for each model listed on https://huggingface.co/transformers/pretrained_models.html?
The reason is that for production, I need to know which models can be bundled in my app. Some licenses do not allow bundling...
I may have missed it, but I could not find licensing information in the doc or code.
If that information is not centralized, I am happy to do the research myself (and share results!). I would be interested in hints if you have some.
Cheers,
Alex | 03-20-2020 10:31:05 | 03-20-2020 10:31:05 | Good question. As far as I can tell (I don't think there's a definitive a.k.a legally tested answer to that question) a model is not considered as a derivative work from the dataset(s) it was trained on, so the person who trained the model can choose whatever licensing option they want.
For instance, the original [BERT weights](https://github.com/google-research/bert) mention:
> We will not be able to release the pre-processed datasets used in the paper.
> [...]
> These models are all released under the same license as the source code (Apache 2.0).
Please share your findings if you conduct more extensive research.<|||||>Thanks. That's what I assumed as well. Based on the link you shared, it seems that all BERT models are under the same license. If that's the case for other model architectures, the investigation should be simple. I will look at the ~15 architectures supported and share my findings this week.
I have a separate question on language support by model, but I will submit it as a separate issue.
Have a great day,
Alex<|||||>Hi @julien-c,
I really liked your suggestion on #3397 to add it to model cards.
Could I add license information in the same way, using tags on the model card?
Cheers,
Alex<|||||>Yes @alexcombessie feel free to do some research and open a PR. You can add a `license: x` tag to the metadata, where `x` is an identifier found in https://help.github.com/en/github/creating-cloning-and-archiving-repositories/licensing-a-repository
A few additional data points:
- Camembert: MIT (source: https://camembert-model.fr/), trained on Oscar (https://traces1.inria.fr/oscar/) whose license is `cc0`
- same for all models from Fairseq: https://github.com/pytorch/fairseq#license<|||||>FYI @MobiusLooper<|||||>The Distil* models trained at Hugging Face are released under Apache 2.0. |
transformers | 3,356 | closed | Update run_language_modeling.py to handle writes on networked filesystem better | In the case of multi-node distributed training, reads and writes typically happen to a common networked filesystem.
In the current version of the `run_language_modeling.py` script, processes that have `local_rank` as 0 perform the writes to disk (tensorboard, dataset cache and model checkpointing). In the case of multi-node distributed training, there ends up being one process per node having `local_rank` as 0, hence multiple processes try writing to the filesystem at the same time, resulting in errors depending on the filesystem.
This pull request updates the script such that only the process having a `global_rank` of 0 does the writing. `global_rank` isn't a variable directly accessible in the script, it is obtained by calling `torch.distributed.get_rank()`.
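A minimal sketch of that guard (illustrative only — `is_global_master` is a hypothetical helper name; the real change lives in the PR's diff):

```python
import torch.distributed as dist

def is_global_master(local_rank: int) -> bool:
    """True only for the single rank-0 process across all nodes (hypothetical helper)."""
    if local_rank == -1:               # not running with torch.distributed
        return True
    return dist.get_rank() == 0        # global rank, not the per-node local_rank

# inside the training script (sketch):
# if is_global_master(args.local_rank):
#     tb_writer = SummaryWriter()
#     model.save_pretrained(args.output_dir)
```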
I've tested the script in 4 different cases and they work without any error in these cases: multi-node training with DDP, single-node training with DDP, single-node training with DP and single gpu training. | 03-20-2020 05:49:14 | 03-20-2020 05:49:14 | For some reference, check out https://github.com/pytorch/pytorch/issues/12042 and https://github.com/facebookresearch/maskrcnn-benchmark/pull/40. These address the same issue.
Also, one of the checks that failed, `check_code_quality`, would fail for the existing version of the script as well. There's a check for a line length of 119, and there are already many lines exceeding that.<|||||>I did think about the other scripts. Are those already set up with `DistributedDataParallel`? Because one would think those tasks aren't that heavy and wouldn't benefit much from running across multiple GPUs.
Also, I have one or 2 more fixes along the lines of this one for distributed training. I was wondering if I should rename this PR and add those in, or create a new one for each of those fixes. One of them is about loading checkpoints (of the optimizer and scheduler) while resuming training.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Closing this as `run_language_modeling.py` is now based on the trainer. Thank you for your contribution!! |
transformers | 3,355 | closed | Bug? NaN loss after training for a while using for BERT Encoded sentences. | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I have a model which is taken from the HF Examples and slightly modified.
```
from transformers import TFBertModel, TFBertForSequenceClassification, BertTokenizer
# configuration = BertConfig()
def build_bert(batch_size=1, use_logits=True):
    num_labels = max(max(train_label), max(test_label))
    print(f"Number of labels: {num_labels}")
    optimizer = tf.keras.optimizers.Adam(learning_rate=1e-5, epsilon=1e-06, clipnorm=1.0)
    if num_labels == 1:
        loss = tf.keras.losses.MeanSquaredError()
    else:
        loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=use_logits)
    print(f"loss used: {loss}")
    macc = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
    vacc = tf.keras.metrics.SparseCategoricalAccuracy('val_accuracy')
    config = BertConfig.from_pretrained("bert-base-cased", num_labels=num_labels, batch_size=batch_size)
    bert_model = TFBertForSequenceClassification.from_pretrained("bert-base-cased", config=config)
    bert_model.compile(optimizer=optimizer, loss=loss, metrics=[macc, vacc])
    return bert_model
```
I am training on data which is encoded by BERT into 48 token arrays and encoded with the HF Bert Encoder.
```bert_model = build_bert(1000, False)
bert_model.fit([encodings[30000:40000], train_attn_mask[30000:40000]], classes[30000:40000],
epochs=1, validation_split=.1, shuffle=False)
```

My model will train for a while (Please pardon the output... I have no idea why jupyter lab does this.)

Then at some point (different every run) the loss drops to NaN.

<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
The solution for this problem on SO is varied. I tried changing the optimizer learning rates as well as altering the epsilon. I have validated that my data does not contain nan values, negative classifications, or invalid encodings. I have removed all non unicode characters.
My concern is that this has uncovered a bug within the framework.
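One data-side thing that may be worth ruling out (an assumption, not a confirmed diagnosis): with 0-indexed labels, `num_labels` needs to be `max(label) + 1`; if the largest class id equals `num_labels`, the sparse cross-entropy has no matching logit and the loss can go to NaN. A quick check, reusing the names from the snippets above:

```python
import numpy as np

labels = np.asarray(classes[30000:40000])                    # same slice as in model.fit above
num_labels = max(max(train_label), max(test_label))          # as computed inside build_bert
assert labels.min() >= 0, "negative class ids break SparseCategoricalCrossentropy"
assert labels.max() < num_labels, (
    f"label {labels.max()} has no logit when num_labels={num_labels}; NaN loss would be expected"
)
```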
| 03-19-2020 21:31:19 | 03-19-2020 21:31:19 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Do you solve the problem? |
transformers | 3,354 | closed | Export ALBERT main layer in TensorFlow | closes #3262 | 03-19-2020 17:12:01 | 03-19-2020 17:12:01 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3354?src=pr&el=h1) Report
> Merging [#3354](https://codecov.io/gh/huggingface/transformers/pull/3354?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3bedfd334763cb5676c2fe92705390ac57d8de5f&el=desc) will **increase** coverage by `0.07%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3354?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3354 +/- ##
==========================================
+ Coverage 77.61% 77.68% +0.07%
==========================================
Files 100 100
Lines 16938 16938
==========================================
+ Hits 13146 13159 +13
+ Misses 3792 3779 -13
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3354?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/3354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.92% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.55% <0.00%> (+2.32%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3354?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3354?src=pr&el=footer). Last update [3bedfd3...77a2a4c](https://codecov.io/gh/huggingface/transformers/pull/3354?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,353 | closed | Handle pinned version of isort | The CONTRIBUTING file pins to a specific version of isort, so we might as well install that in `dev`. This makes it easier for contributors so they don't have to manually install the specific commit. | 03-19-2020 13:30:03 | 03-19-2020 13:30:03 | I didn't know that worked, thanks @BramVanroy. @aaugustin what do you think?<|||||>I'm not familiar with the syntax, but if it works, go for it. I really hope we have a release of isort and we can remove this soon.<|||||>Works on my machine so I'll merge :)
Thanks @BramVanroy, this will simplify @LysandreJik and @thomwolf's lives a lot! |
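For readers unfamiliar with the syntax discussed in that PR: a `setup.py` extra can pin a Git commit with a PEP 508 direct reference, roughly like the sketch below (the commit hash is a placeholder, not necessarily the one the PR used):

```python
# sketch of an extras_require entry pinning isort to a specific commit
extras_require = {
    "dev": [
        "isort @ git+https://github.com/timothycrosley/isort.git@<commit-sha>#egg=isort",
    ],
}
```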
transformers | 3,352 | closed | Add model cards for huseinzol05/bert-base-bahasa-cased | 03-19-2020 12:35:13 | 03-19-2020 12:35:13 | Looks good! [**Model page**](https://huggingface.co/huseinzol05/bert-base-bahasa-cased)
I've also added a filter for the Malay language here:
<img width="792" alt="Screenshot 2020-03-19 at 15 28 42" src="https://user-images.githubusercontent.com/326577/77106980-58beec00-69f6-11ea-8145-4d273d605693.png">
|
|
transformers | 3,351 | closed | Reformer | ## Add the Reformer
Paper: (https://arxiv.org/pdf/2001.04451.pdf)
### First steps to take:
- [x] Copy Bert PT code to Reformer PT file.
- [x] Replace self-attention with LSH attention
- [x] Make forward pass work for Bert Layer
### Forward-Pass: Get 1-to-1 same outputs as original Flax code for forward pass
- [x] for LSH attention layer
- [x] for Bert Layer with RevNet
- [x] for different attention masks
- [x] for feed forward chunking layer
- [x] for whole Reformer model
- [x] for sinusoidal position encodings
- [x] for axial position encodings
- [x] for local blocking attention (chunked attention)
- [x] for pretrained weights from official reformer model: ReformerLM model was trained using https://colab.research.google.com/github/google/trax/blob/master/trax/models/reformer/text_generation.ipynb and weights were loaded into https://huggingface.co/patrickvonplaten/reformer-crime-and-punish and checked that a single forward pass is identical. `predict_mem_len` had to be adapted to make functions equal.
- [x] Add optional attention mask
- [x] Add support for fp16
- [ ] Speed up incremental generation. This is needed for generation and will not be trivial since the buckets have to be ordered correctly and there is a chunk length parameter.
### Backpropagation:
- [x] Make backpropagation work
- [x] Check that backpropagation works with chunked feed forward layers
- [x] Implement RevResLayers for backprop
- [x] Implement code using and
- [x] Get identical results for forward pass
- [x] Make sure backprop works
- [x] Implement bucket caching
- [x] Implement random seed caching to have deterministic dropout for backward pass: https://github.com/RobinBruegger/RevTorch/pull/4
- [ ] Make rev resnet work for multi-gpu training
- [x] Check that RevReslayers backprop works on CPU
- [x] Check that RevReslayers backprop works on GPU
- [x] Get same gradients as original trax code
- [x] Train model on crime-and-punishment text and check that model performs reasonable afterwards
### Tokenizer
- [x] Copy sentence piece tokenizer from T5
- [x] Add vanilla sentence piece tokenizer for crime-and-punishment pretrained tokenizer: https://console.cloud.google.com/storage/browser/_details/trax-ml/reformer/cp.320.model
- [ ] Check how many tokenizers are needed
- [ ] Get pretrained tokenizers
### Optimize time and memory efficiency
- [x] Compare memory & time complexity to standard Bert: check https://github.com/huggingface/transformers/pull/3186
- [x] Check and improve memory and speed when training
- [ ] Move "on-the-fly" created masks in LSHSelfAttention to using them as an input
- [ ] Optimize away unnecessary calculations
### Pretrained Models
- [ ] Check if pretrained model on C4 is added soon: https://github.com/google/trax/commit/b1f0c176a281d35e285137a45ff117b8c5495173
- [ ] Add Reformer / Bert in trax
Useful code resources:
- Original trax code: https://github.com/google/trax/tree/master/trax/models/reformer
- Working trax notebook: https://github.com/google/trax/blob/master/trax/models/reformer/machine_translation.ipynb
- Working PyTorch implementation: https://github.com/lucidrains/reformer-pytorch
- Great to implement for backprop. https://github.com/lucidrains/reformer-pytorch/blob/master/reformer_pytorch/reversible.py
- Pretrained weights: https://console.cloud.google.com/storage/browser/trax-ml/reformer
Useful blog/paper resources:
- Original paper: https://arxiv.org/pdf/2001.04451.pdf
- Google AI blog: https://ai.googleblog.com/2020/01/reformer-efficient-transformer.html
- Good blog post 1: https://www.pragmatic.ml/reformer-deep-dive/
- Good blog post 2: https://towardsdatascience.com/illustrating-the-reformer-393575ac6ba0
Previous Discussions:
- #2341
## Update
The code is clean and ready for review now.
Small ToDos before merging:
- [x] Fill in TODOs in docs
- [ ] Check whether more pre-trained weights can be used
- [ ] Train on fp16 once
- [ ] Update notebook showing how to use Reformer
### Review
I added quite some docstrings to explain the new methods introduced by the Reformer (Axial Position Encoding, LSH Attention, Local Attention, Feed Forward chunking), so it might be better to first go through the doctsrings. Docstrings are easier to read when switching to this branch and creating the docs locally. | 03-19-2020 11:37:06 | 03-19-2020 11:37:06 | Memory complexity ReformerLayer
vs BertLayer:

<|||||>Time complexity ReformerLayer vs. BertLayer:

<|||||>## Experiment
I tested training the Reformer model on 0.5M tokens per sample on the novel "Crime and Punishment" using conventional LM training. I essentially translated the official trax notebook: https://colab.research.google.com/github/google/trax/blob/master/trax/models/reformer/text_generation.ipynb into hugging face code: https://colab.research.google.com/drive/1jR6hA2CQXDbucJXdiDXhmxmyoQmM2Pws
The only differences to the official notebook are:
- The gradient is accumulated over 8 samples and then updated whereas in the official notebook 8 TPUs are used and the gradient is calculated in parallel and then averaged together.
- The learning rate is 0.005 instead of 0.01 (because already at 0.005, the gradient seems to become too big).
## Results
My training starts similarly around **6.2** and goes down smoothly in the beginning.
At some point though the gradient seem to explode and the loss goes up again and that even at a learning rate of "only" 0.05.
The attached plots are here:
### Loss

### Accuracy

### Learning rate (cosine scheduler)

When lowering the learning rate more, to **0.0005** e.g. the loss keeps going down but only reaches something around 2.3 in the end.
**Comparison**
The training in the official trax notebook is very smooth.
Loss starts at **6.2** something and goes down smoothly to **0.8** while the accuracy reaches **>80%** in the end for a learning rate of **0.01**.
## Analysis
- It is confirmed that the forward pass is identical with the trax implementation thanks to integration tests. Things that are not fully tested for the backward pass are:
- **Dropout**: the dropout used in the official trax library does not seem to correspond to the "usual" `nn.Dropout` used in PyTorch but sometimes drop specific dimensions only or whole matrices. It is tested though that the dropout used here is deterministic for both the "normal" forward pass and the forward pass used in the backward pass to recalculate the activations, by means of setting the random seed used for the first forward pass. Nevertheless, there could still be small bugs.
- **Reversible Layers**: Because Reformer uses reversible layers, I had to fiddle with a customized backward function here. This is IMO quite prone to errors. I checked multiple times that from a logical point of view everything is correct and compared my code with: https://github.com/RobinBruegger/RevTorch and https://github.com/lucidrains/reformer-pytorch which do similar / the same architecture. IMO, it quite hard to test this for correctness. One could also write the whole code without having reversible layers and then see whether the gradient is the same (Seems actually not like a bad idea to me).
- **Attention mask**: The official trax code does not seem to use a user-specific attention mask for the LSH Attn Layer, but only for the Local Attn Layer. I tested that the attn mask is correct for the local attention task by integration tests and checked that the attn mask for the LSH layer works correctly (input with mask gives the same result as input without mask), but maybe the LSH Attn mask has to be removed. But don't really see a reason why ?!
- **Initialization**: The initialization scheme used in the trax library is different from what is normally done in `transformers`, so there are small changes in my code. But I doubt that this is an issue, especially since the training looks very similar in the beginning.
- **Training parameters**: It might also be simply due to different training / optimization parameters. Maybe there are some under-the-hood training parameters that I didn't notice (special gradient clipping, ...)<|||||>> https://colab.research.google.com/drive/1jR6hA2CQXDbucJXdiDXhmxmyoQmM2Pws
Tried to train model over longer time, but getting [error](http://prntscr.com/s79e04)
> Forward got unexcepted keyword "lm_labels" after calling trainer.train()
P: Fixed the typo. I will change the model into half-precision soon so that the memory will be sufficient :-) <|||||>I get some good results with the following parameters: https://gist.github.com/flozi00/b491b41a9865733e5f8bb4032c313540
the best eval loss is about 1.654, but is increasing now again the same as yours
will have a look in a few hours again

<|||||>> I get some good results with the following parameters: https://gist.github.com/flozi00/b491b41a9865733e5f8bb4032c313540
>
> the best eval loss is about 1.654, but is increasing now again the same as yours
> will have a look in a few hours again
>
> 
Awesome that's already much better than what I got! If you manage to get it under 1 (loss) / >75% (accuracy) that would be great. Also feel free to change the hyper-parameters as you wish! Especially the adam betas and co.
I also added support for fp16 - so the notebook now only needs 8GB of RAM.
(You might have to reset the environment and re-install the github branch though)<|||||>Sounds very great.
Trying to decrease sequence length, cause while increasing number of hashes or heads getting memory error.
Training on 24GB GPU
Read that 4 hashes are good and 8 brings the best quality.
Trained on some configurations now and everytime the loss goes to ~1 but then increases to 4 very fast and keeps on there for minimum 1000 steps.
Any idea about it ?<|||||>> Sounds very great.
> Trying to decrease sequence length, cause while increasing number of hashes or heads getting memory error.
> Training on 24GB GPU
>
> Read that 4 hashes are good and 8 brings the best quality.
>
> Trained on some configurations now and everytime the loss goes to ~1 but then increases to 4 very fast and keeps on there for minimum 1000 steps.
> Any idea about it ?
My guess is that since it's such a small dataset (0.5M tokens is tiny) the model needs very well-calibrated hyperparameter tuning. When the learning rate is low enough, this actually does not happen anymore but also the loss only gets to about ~2. But I ran very few experiments and didn't do any hyperparameter search.
Also, I use slightly different dropouts, then were used in the official code so maybe using weight decay instead of dropout could work better.
Will check that the gradients are correct in the next days and then hopefully be ready soon. <|||||>@patrickvonplaten I'm excited to see a lot of progress here!
The loss curves above could be due to poor hyperparameter choice, but they're also very similar to what you see when the reverse pass of the network doesn't match the forward pass. For example, failing to cache hash bucket assignments (for exact re-use in the backward pass) leads to a failure mode with loss rebounds very similar to the figures you posted above. I also once had a bug where the wrong random seed was used for dropout in the backward pass, which IIRC manifested itself in the same way.<|||||>> @patrickvonplaten I'm excited to see a lot of progress here!
>
> The loss curves above could be due to poor hyperparameter choice, but they're also very similar to what you see when the reverse pass of the network doesn't match the forward pass. For example, failing to cache hash bucket assignments (for exact re-use in the backward pass) leads to a failure mode with loss rebounds very similar to the figures you posted above. I also once had a bug where the wrong random seed was used for dropout in the backward pass, which IIRC manifested itself in the same way.
Thanks for taking a look @nkitaev. I just found a bug in the `backward()`. I now have 1-to-1 the same gradients as your trax code. Will retrain tonight and should get better results :-) <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3351?src=pr&el=h1) Report
> Merging [#3351](https://codecov.io/gh/huggingface/transformers/pull/3351?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7b5bec373ca2b442a7ac8ac46f8eac6e8003e2ae&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3351?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3351 +/- ##
=======================================
Coverage 79.13% 79.13%
=======================================
Files 117 117
Lines 19517 19517
=======================================
Hits 15444 15444
Misses 4073 4073
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3351?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3351?src=pr&el=footer). Last update [7b5bec3...7b5bec3](https://codecov.io/gh/huggingface/transformers/pull/3351?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Training looks good now on Crime and Punishment. To verify that training works, the model was trained on over 1200 steps and with little regularization.
### Eval loss

### Eval accuracy

<|||||>https://gist.github.com/flozi00/b491b41a9865733e5f8bb4032c313540
This gist contains two notebooks, one of them with trainings batch = 2 --> error
in the other I tried to train model with pre-configured parameters, sequence length 4096 --> error
Is it mistake by me ?<|||||>> https://gist.github.com/flozi00/b491b41a9865733e5f8bb4032c313540
>
> This gist contains two notebooks, one of them with trainings batch = 2 --> error
> in the other I tried to train model with pre-configured parameters, sequence length 4096 --> error
>
> Is it mistake by me ?
how did you get past
```
# get a pretrained tokenizer
tokenizer = ReformerTokenizer.from_pretrained("patrickvonplaten/reformer-crime-and-punish")
```<|||||>@lapolonio
I just used the notebook posted by patrickvonplaten four days ago.
https://colab.research.google.com/drive/1jR6hA2CQXDbucJXdiDXhmxmyoQmM2Pws<|||||>The notebook is still under construction, so I would not waste too much time playing around with it at the moment @lapolonio @flozi00.
Thanks a lot for your good comments and remarks @lapolonio and @flozi00 :-) <|||||>This is looking awesome, thanks! is there plans to add an encoder-decoder version? <|||||>> This is looking awesome, thanks! is there plans to add an encoder-decoder version?
Yes, this should soon be possible with the encoder-decoder framework<|||||>@patrickvonplaten awesome! is there an issue or a PR I can follow for that? <|||||>> @patrickvonplaten awesome! is there an issue or a PR I can follow for that?
Not yet, this will probably still need 1,2 weeks :-) <|||||>Is it possible or are there any plans to implement reformer for question answering too ?
seq2seq and QA could be very great tasks for it<|||||>> Is it possible or are there any plans to implement reformer for question answering too ?
> seq2seq and QA could be very great tasks for it
Yeah, I will add a cross attention layer in another PR and then the Reformer can be used as a seq-2-seq model with our Encoder-Decoder framework: https://huggingface.co/transformers/model_doc/encoderdecoder.html<|||||>I'm not familiar with the Encoder-Decoder framework after the cross attention layer is added can the decoder be BertForSequenceClassification? Where do I ask questions like this?
<|||||>@patrickvonplaten Based on your merge, it seems like the input size for each batch is fixed in order to match the product of axial position embedding size? I am correct?<|||||>> @patrickvonplaten Based on your merge, it seems like the input size for each batch is fixed in order to match the product of axial position embedding size? I am correct?
For training, yes that's correct. For inference the input_size can also be smaller. Also check out: https://huggingface.co/transformers/model_doc/reformer.html<|||||>@patrickvonplaten , I wanted to train a language model for reformers on a custom dataset.
What are the steps, and any sample notebooks available for the same<|||||>Hi @prajwal-PHAI, there are a lot of [community notebooks covering T5 finetuning](https://github.com/huggingface/transformers/tree/master/notebooks#community-notebooks).<|||||>Thanks @LysandreJik
I was running into error loading other datasets, which were not there in the nlp library.<|||||>hey. thanks for your amazing work!
I'm running into error while trying the colab example:
https://colab.research.google.com/drive/1jR6hA2CQXDbucJXdiDXhmxmyoQmM2Pws#scrollTo=WskGtnXsnWdu
the problem is that it doesn't recognize the apex package:
ImportError Traceback (most recent call last)
<ipython-input-29-30584d4c4987> in <module>()
11
12 # train
---> 13 trainer.train()
/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in train(self, model_path)
384 if self.args.fp16:
385 if not is_apex_available():
--> 386 raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use fp16 training.")
387 model, optimizer = amp.initialize(model, optimizer, opt_level=self.args.fp16_opt_level)
388
ImportError: Please install apex from https://www.github.com/nvidia/apex to use fp16 training.
though I installed it...anyone know what to do?
<|||||>Linking a related git issue #16972. cc @patrickvonplaten |
transformers | 3,350 | closed | Reproducing SQuAD v1.1 with xlnet-base cased? | Hi, first of all, thanks for the great library you guys are providing. I'm currently using the latest version of huggingface/transformers, and I'm trying to get a score for the SQuAD V1.1 with XLNET base-cased. However, it seems that the performance I get is only 0.10 for EM and 0.64 for F1.
When getting the score with BERT base-cased, the score comes out appropriately. (F1 about 88.5)
Are there any bugs or anything else I should be aware of? | 03-19-2020 11:13:20 | 03-19-2020 11:13:20 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,349 | closed | Create model card for bert-small-finetuned-squadv2 | 03-19-2020 10:51:12 | 03-19-2020 10:51:12 | ||
transformers | 3,348 | closed | Create card for BERT-Mini finetuned on SQuAD v2 | 03-19-2020 10:40:04 | 03-19-2020 10:40:04 | ||
transformers | 3,347 | closed | Create card for BERT-Tiny fine-tuned on SQuAD v2 | - Only 17MB of Model weights!!
- The smallest model fine-tuned on SQuAD v2? | 03-19-2020 10:20:16 | 03-19-2020 10:20:16 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3347?src=pr&el=h1) Report
> Merging [#3347](https://codecov.io/gh/huggingface/transformers/pull/3347?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cec3cdda1599541b033e07a9838386189a5d0010&el=desc) will **increase** coverage by `1.15%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3347?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3347 +/- ##
==========================================
+ Coverage 76.46% 77.61% +1.15%
==========================================
Files 100 100
Lines 16948 16948
==========================================
+ Hits 12960 13155 +195
+ Misses 3988 3793 -195
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3347?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3347/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0.00%> (+1.34%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3347/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3347/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.24% <0.00%> (+2.63%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3347/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.65% <0.00%> (+5.18%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3347/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3347/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3347?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3347?src=pr&el=footer). Last update [cec3cdd...ed378f0](https://codecov.io/gh/huggingface/transformers/pull/3347?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,346 | closed | Created card for spanbert-finetuned-squadv1 | 03-19-2020 09:46:25 | 03-19-2020 09:46:25 | ||
transformers | 3,345 | closed | Fix input ids can be none attn mask | Make sure `batch_size` is correct for gpt2 and ctrl - these models need a slightly different behavior since the shape of `input_ids` can change depending on whether the past variable is inserted or not.
See also:
PR: https://github.com/huggingface/transformers/pull/3033
and its issue:
https://github.com/huggingface/transformers/issues/3031 | 03-19-2020 08:43:25 | 03-19-2020 08:43:25 | Thanks for pointing out @lazarevskiVsg and @julien-c !<|||||>@julien-c , will merge right away - small change!<|||||>Why don’t you just use input_shape (which is always defined), to be consistent with other models?<|||||>> Why don’t you just use input_shape (which is always defined), to be consistent with other models?
The problem is that GPT2 and CTRL have a different behavior (and the `input_ids` shape changes) when the `past` variable is inserted, which previously led to problems when the attention_mask is inserted as well:
#3031
Therefore this slightly weird implementation.<|||||>But in your code in this PR, batch_size is always input_shape[0] anyways, no?<|||||>> But in your code in this PR, batch_size is always `input_shape[0]` anyways, no?
I think in the case of CTRL and GPT2, it's actually a bigger inconsistency:
Let's say we have an input_ids tensor of shape `[batch_size, sequence_length] = [5, 4]`.
We call `GPT2Model` and save the last `output embeddings = outputs[0][:, -1, :]` **and** the `past` key/value states to speed up decoding = `outputs[1]`
Now if we want to use `past`, GPT expects the `input_ids` to be of shape `[batch_size, 1].squeezed(-1) = [batch_size]`. Therefore we have to adapt the attention mask here differently than in other models. What is weird (and a bit suboptimal in my opinion in GPT's and CTRL's API) is that the shape of `input_ids` differs depending on whether `past` is None or not.
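For illustration, a minimal sketch of the decoding pattern I mean (assuming the current API where the model returns `(logits, past)` and accepts a `past` keyword; shapes are illustrative):
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("Hello, my dog", return_tensors="pt")  # shape [1, seq_len]
logits, past = model(input_ids)                                     # first pass over the full sequence
next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)          # shape [1, 1]

# second pass: only the newly generated token is fed, together with `past`
logits, past = model(next_token, past=past)
```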
@julien-c |
transformers | 3,344 | closed | Fix wrong link for the notebook file | For the tutorial of "How to generate text", the URL link was wrong (it was linked to the tutorial of "How to train a language model").
I fixed the URL. | 03-19-2020 08:10:28 | 03-19-2020 08:10:28 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3344?src=pr&el=h1) Report
> Merging [#3344](https://codecov.io/gh/huggingface/transformers/pull/3344?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f6d813aaaa96cc43fcf55f255b9439ebc22a31a0&el=desc) will **not change** coverage by `%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3344?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3344 +/- ##
=======================================
Coverage 77.63% 77.63%
=======================================
Files 100 100
Lines 16943 16943
=======================================
Hits 13154 13154
Misses 3789 3789
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3344?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3344?src=pr&el=footer). Last update [f6d813a...f78b5f0](https://codecov.io/gh/huggingface/transformers/pull/3344?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks a lot for pointing this out @rudvlf0413 and @julien-c |
transformers | 3,343 | closed | Update 01-training-tokenizers.ipynb (typo issue) | I found there are two grammar errors or typo issues in the explanation of the encoding properties.
The original sentences:
- **If your was** made of multiple \"parts\" such as (question, context), then this would be a vector with for each token the segment it belongs to
- **If your has** been truncated into multiple subparts because of a length limit (for BERT for example the sequence length is limited to 512), this will contain all the remaining overflowing parts.
I think "**input**" should be inserted after the phrase "If your". | 03-19-2020 07:34:47 | 03-19-2020 07:34:47 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3343?src=pr&el=h1) Report
> Merging [#3343](https://codecov.io/gh/huggingface/transformers/pull/3343?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f6d813aaaa96cc43fcf55f255b9439ebc22a31a0&el=desc) will **decrease** coverage by `0.08%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3343?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3343 +/- ##
==========================================
- Coverage 77.63% 77.54% -0.09%
==========================================
Files 100 100
Lines 16943 16943
==========================================
- Hits 13154 13139 -15
- Misses 3789 3804 +15
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3343?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3343/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.68% <0.00%> (-2.69%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3343?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3343?src=pr&el=footer). Last update [f6d813a...9ecfde1](https://codecov.io/gh/huggingface/transformers/pull/3343?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks! |
transformers | 3,342 | closed | No Module named Transformers | # 🐛 Bug
No module named `transformers` found.
## Information
Package Version
------------------------ ----------
absl-py 0.9.0
astor 0.8.1
boto3 1.12.22
botocore 1.15.22
cachetools 4.0.0
certifi 2019.11.28
chardet 3.0.4
click 7.1.1
docutils 0.15.2
filelock 3.0.12
gast 0.2.2
google-auth 1.11.3
google-auth-oauthlib 0.4.1
google-pasta 0.2.0
grpcio 1.27.2
h5py 2.10.0
idna 2.9
jmespath 0.9.5
joblib 0.14.1
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.0
Markdown 3.2.1
numpy 1.18.1
oauthlib 3.1.0
opt-einsum 3.2.0
pandas 1.0.2
Pillow 7.0.0
pip 20.0.2
protobuf 3.11.3
pyasn1 0.4.8
pyasn1-modules 0.2.8
python-dateutil 2.8.1
pytorch-transformers 1.2.0
pytz 2019.3
pywin32 227
regex 2020.2.20
requests 2.23.0
requests-oauthlib 1.3.0
rsa 4.0
s3transfer 0.3.3
sacremoses 0.0.38
scipy 1.4.1
sentencepiece 0.1.85
setuptools 41.2.0
six 1.14.0
tensorboard 2.1.1
tensorflow 2.1.0
tensorflow-estimator 2.1.0
tensorflow-gpu 2.1.0
tensorflow-gpu-estimator 2.1.0
termcolor 1.1.0
tokenizers 0.5.2
torch 1.4.0
torchvision 0.5.0
tqdm 4.43.0
transformers 2.5.1
urllib3 1.25.8
Werkzeug 1.0.0
wget 3.2
wheel 0.34.2
wrapt 1.12.1
Using Bert on English language
## To reproduce
Steps to reproduce the behavior:
I just run the following code.
from transformers import BertTokenizer
# Load the BERT tokenizer.
print('Loading BERT tokenizer...')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-ff68f42f17c9> in <module>
----> 1 from transformers import BertTokenizer
2
3 # Load the BERT tokenizer.
4 print('Loading BERT tokenizer...')
5 tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
ModuleNotFoundError: No module named 'transformers'
## Expected behavior
Do the tokenization.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
C:\Users\David\anaconda3\python.exe: can't open file 'transformers-cli': [Errno 2] No such file or directory
- `transformers` version:transformers 2.5.1
- Platform: Windows 10
- Python version: 3.7.3b
- PyTorch version (GPU?):1.4
- Tensorflow version (GPU?):2.1
- Using GPU in script?:yes
- Using distributed or parallel set-up in script?:distributed
| 03-19-2020 04:58:41 | 03-19-2020 04:58:41 | Well this just indicates that you didn't correctly install the library. Try creating a new environment and installing from scratch.
<|||||>You have to install the library first to use any module from it
First type `pip install transformers` in your terminal and then you can import the necessary modules<|||||>I fixed it; I had to uninstall it and reinstall from source. I don't know why
the pip version didn't work.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>The error still occurs. I have reinstalled it from source, but it's still not working.
ENV details :
Windows 10
Anaconda
Pytorch <|||||>I don't think `transformers` can be installed using anaconda.
In any case, please open a new issue **with the filled-in issue template** for us to properly help you.<|||||>> I don't think `transformers` can be installed using anaconda.
> In any case, please open a new issue **with the filled-in issue template** for us to properly help you.
So how do I install it on my local system?<|||||>https://github.com/huggingface/transformers#installation<|||||>I had to downgrade to an older version to get this working; frankly, I did not find a solution for some reason.<|||||>Hi dacidotor, I am having the same issue. Which version did you downgrade to?
I tried upgrading TensorFlow and PyTorch and then installing everything again, and it did not work.
<|||||>Try this:
`from transformers.models.bert.modeling_bert import BertEmbeddings` |
transformers | 3,341 | closed | Simpler Error message when loading config/model with .from_pretrained() | 03-19-2020 04:14:03 | 03-19-2020 04:14:03 | tweaked version of #3247 <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3341?src=pr&el=h1) Report
> Merging [#3341](https://codecov.io/gh/huggingface/transformers/pull/3341?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f6d813aaaa96cc43fcf55f255b9439ebc22a31a0&el=desc) will **decrease** coverage by `0.19%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3341?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3341 +/- ##
==========================================
- Coverage 77.63% 77.44% -0.20%
==========================================
Files 100 100
Lines 16943 16943
==========================================
- Hits 13154 13121 -33
- Misses 3789 3822 +33
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3341?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3341/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.82% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3341/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `82.46% <0.00%> (-5.91%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3341?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3341?src=pr&el=footer). Last update [f6d813a...c9ce50c](https://codecov.io/gh/huggingface/transformers/pull/3341?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 3,340 | closed | Create README.md | roberta_chinese_large card | 03-19-2020 03:08:39 | 03-19-2020 03:08:39 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3340?src=pr&el=h1) Report
> Merging [#3340](https://codecov.io/gh/huggingface/transformers/pull/3340?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/20139b7c8d88f380f1e4e0ae2baf0b0ac9351039&el=desc) will **not change** coverage by `%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3340?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3340 +/- ##
=======================================
Coverage 77.63% 77.63%
=======================================
Files 100 100
Lines 16943 16943
=======================================
Hits 13154 13154
Misses 3789 3789
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3340?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3340?src=pr&el=footer). Last update [20139b7...b81687c](https://codecov.io/gh/huggingface/transformers/pull/3340?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I think the filepath for this one is incorrect.
Also could you add
```
---
language: chinese
---
```
at the top of the file? Thanks!<|||||>Merged in 73d6a2f9019960c327f19689c1d9a6c0fba31d86 |
transformers | 3,339 | closed | Create README.md | xlnet_chinese_large card | 03-19-2020 03:08:31 | 03-19-2020 03:08:31 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3339?src=pr&el=h1) Report
> Merging [#3339](https://codecov.io/gh/huggingface/transformers/pull/3339?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/20139b7c8d88f380f1e4e0ae2baf0b0ac9351039&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3339?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3339 +/- ##
==========================================
- Coverage 77.63% 77.63% -0.01%
==========================================
Files 100 100
Lines 16943 16943
==========================================
- Hits 13154 13153 -1
- Misses 3789 3790 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3339?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.19% <0.00%> (-0.18%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3339?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3339?src=pr&el=footer). Last update [20139b7...a6ee180](https://codecov.io/gh/huggingface/transformers/pull/3339?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>same issue as #3340 <|||||>Merged in 73d6a2f9019960c327f19689c1d9a6c0fba31d86 |
transformers | 3,338 | closed | Create README.md | roberta_chinese_base card | 03-19-2020 03:07:42 | 03-19-2020 03:07:42 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3338?src=pr&el=h1) Report
> Merging [#3338](https://codecov.io/gh/huggingface/transformers/pull/3338?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/20139b7c8d88f380f1e4e0ae2baf0b0ac9351039?src=pr&el=desc) will **decrease** coverage by `0.18%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3338?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3338 +/- ##
==========================================
- Coverage 77.63% 77.44% -0.19%
==========================================
Files 100 100
Lines 16943 16943
==========================================
- Hits 13154 13122 -32
- Misses 3789 3821 +32
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3338?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `82.46% <0%> (-5.91%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.36% <0%> (+0.13%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3338?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3338?src=pr&el=footer). Last update [20139b7...a7ff5ff](https://codecov.io/gh/huggingface/transformers/pull/3338?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,337 | closed | Create README.md | albert_chinese_tiny card | 03-19-2020 03:06:50 | 03-19-2020 03:06:50 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3337?src=pr&el=h1) Report
> Merging [#3337](https://codecov.io/gh/huggingface/transformers/pull/3337?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/20139b7c8d88f380f1e4e0ae2baf0b0ac9351039&el=desc) will **not change** coverage by `%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3337?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3337 +/- ##
=======================================
Coverage 77.63% 77.63%
=======================================
Files 100 100
Lines 16943 16943
=======================================
Hits 13154 13154
Misses 3789 3789
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3337?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.19% <0.00%> (-0.18%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.36% <0.00%> (+0.13%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3337?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3337?src=pr&el=footer). Last update [20139b7...7fbca7b](https://codecov.io/gh/huggingface/transformers/pull/3337?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,336 | closed | Create README.md | albert_chinese_small card | 03-19-2020 03:05:50 | 03-19-2020 03:05:50 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3336?src=pr&el=h1) Report
> Merging [#3336](https://codecov.io/gh/huggingface/transformers/pull/3336?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/20139b7c8d88f380f1e4e0ae2baf0b0ac9351039&el=desc) will **decrease** coverage by `0.03%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3336?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3336 +/- ##
==========================================
- Coverage 77.63% 77.60% -0.04%
==========================================
Files 100 100
Lines 16943 16943
==========================================
- Hits 13154 13148 -6
- Misses 3789 3795 +6
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3336?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3336/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.15% <0.00%> (-0.85%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3336/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.19% <0.00%> (-0.18%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3336/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.36% <0.00%> (+0.13%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3336?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3336?src=pr&el=footer). Last update [20139b7...38b38af](https://codecov.io/gh/huggingface/transformers/pull/3336?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,335 | closed | Fix #3305: run_ner only possible on ModelForTokenClassification models | Also, #3305 breaks (if i'm not mistaken) the ability to run the example script from a pip-installed instance of transformers (vs. from an instance installed from source) (This PR does not fix that second issue) | 03-19-2020 02:11:37 | 03-19-2020 02:11:37 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3335?src=pr&el=h1) Report
> Merging [#3335](https://codecov.io/gh/huggingface/transformers/pull/3335?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/20139b7c8d88f380f1e4e0ae2baf0b0ac9351039?src=pr&el=desc) will **decrease** coverage by `0.06%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3335?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3335 +/- ##
==========================================
- Coverage 77.63% 77.57% -0.07%
==========================================
Files 100 100
Lines 16943 16943
==========================================
- Hits 13154 13143 -11
- Misses 3789 3800 +11
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3335?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3335/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.4% <0%> (-1.97%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3335?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3335?src=pr&el=footer). Last update [20139b7...7fc00b6](https://codecov.io/gh/huggingface/transformers/pull/3335?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Ok I'll merge this @srush @LysandreJik as it's more correct (feel free to let me know of your feedback anyways)<|||||>Thanks. Why does it break the pip version?<|||||>Because `MODEL_MAPPING` from `modeling_auto` is not exposed in the package's [`__init__.py`](https://github.com/huggingface/transformers/blob/master/src/transformers/__init__.py) |
transformers | 3,334 | closed | transformers.PreTrainedTokenizer.tokenize does lower case work all the time and discards space and tab. Want this changed. | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):'albert_xxlarge_zh'
Language I am using the model on (English, Chinese ...):Chinese
Two problems are all related to method `transformers.PreTrainedTokenizer.tokenize`:
1. How can I not let this method automatically lower the case of the English words in the input sentence? `tokenizer.init_kwargs["do_lower_case"]=True` doesn't work...
2. How can I not let this method discard '\t' and space in default? Or is there any method that can solve this problem?
## To reproduce
Steps to reproduce the behavior:
`
tokenizer=BertTokenizer.from_pretrained("./albert_pytorch/prev_trained_model/albert_xxlarge_zh/")`
`print(tokenizer.init_kwargs.get("do_lower_case")) #output None`
`tokenizer.init_kwargs["do_lower_case"]=True`
`print(tokenizer.init_kwargs.get("do_lower_case")) #output True`
`seq=tokenizer.tokenize("我喜欢\tAPP和WIFI。")`
`print(seq)
`
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Expected output is:
['我', '喜', '欢',**'\t'**, **'APP'**, '和', **'WIFI'**, '。']
While the actual output is:
['我', '喜', '欢', **'app'**, '和', **'wifi'**, '。']
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:2.5.1
- Platform:ubuntu
- Python version:3.6.1
- PyTorch version (GPU?):1.1.0 cuda9
- Using GPU in script?:nope
- Using distributed or parallel set-up in script?:nope
**BTW**, `python transformers-cli env` didn't work, the callback :
> python: can't open file 'transformers-cli': [Errno 2] No such file or directory | 03-19-2020 02:02:11 | 03-19-2020 02:02:11 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,333 | closed | Finetuning T5 Model | Hi there. I am trying to fine tune T5, but I have noticed your documentation gives conflicting instructions.
In modeling_t5.py you say
> To match pre-training, T5 input sequence should be formatted with [CLS] and [SEP] tokens
but then in tokenization_t5.py both of those tokens are set to None and the only tokens that are defined are EOS, UNK, and PAD.
Additionally, the actual T5 implementation makes no mention of SEP and CLS as far as I can tell.
Given that, could you clarify how we should be formatting our training data for HuggingFace's T5 implementation? Thank you! | 03-18-2020 23:57:19 | 03-18-2020 23:57:19 | Hi @jkangsta,
Thanks for posting your question here. The docstring was out of date and an in-detail description for T5 will be added here #3507 . |
transformers | 3,332 | closed | run_tf_ner.py doesn't work with unlabelled test data | When running `run_tf_ner.py` in `predict` mode if all the labels in test data are `O`, script errors out with
> File "/home/himanshu/.local/lib/python3.7/site-packages/numpy/lib/function_base.py", line 423, in average
>     "Weights sum to zero, can't be normalized")
> ZeroDivisionError: Weights sum to zero, can't be normalized
This is because `pad_token_label_id` https://github.com/huggingface/transformers/blob/cae334c43c49aa770d9dac1ee48319679ee8c72c/examples/ner/run_tf_ner.py#L511 and the `label_id` for `O` are both zero, resulting in an empty `y_pred`
https://github.com/huggingface/transformers/blob/cae334c43c49aa770d9dac1ee48319679ee8c72c/examples/ner/run_tf_ner.py#L364-L367 Shouldn't the `pad_token_label_id` be different? | 03-18-2020 20:36:13 | 03-18-2020 20:36:13 | I have noticed the same issue and posted a question here: https://stackoverflow.com/questions/60732509/label-handling-confusion-in-run-tf-ner-example
I think `pad_token_label_id` should definitely not fall into the range of actual labels. Maybe we can make it `-1` or `num(label)` or something. Also as shown in `convert_examples_to_features()`, `pad_token_label_id` is not only used for pad tokens at the end of the sequence, but also for non-first tokens inside a word when the word is split up to multiple tokens. Accordingly, during prediction, only the label of the first token in each word is used. So I am wondering if we should modify `input_mask` so that the loss does not take into account non-first tokens in a word.
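To make sure we are talking about the same scheme, here is a rough sketch of the per-word labeling I mean (names are illustrative, not the exact `run_tf_ner.py` code):
```python
def label_wordpieces(words, labels, tokenizer, label_map, pad_token_label_id=0):
    # First wordpiece of each word gets the real label id; the remaining wordpieces
    # get pad_token_label_id, so only first pieces are used at prediction time.
    token_ids, label_ids = [], []
    for word, label in zip(words, labels):
        pieces = tokenizer.tokenize(word)
        if not pieces:
            continue
        token_ids.extend(tokenizer.convert_tokens_to_ids(pieces))
        label_ids.extend([label_map[label]] + [pad_token_label_id] * (len(pieces) - 1))
    return token_ids, label_ids
```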
I tried to set `pad_token_label_id = -1`, mask out non-first tokens in each word by changing `input_mask`, and change `num_labels` to `len(labels)` instead of `len(labels) + 1`. The training and evaluation can run, but the F1-score on the test set becomes much lower (on both conll03 English and Ontonotes English). I am still confused about this.<|||||>I also found the issue, `pad_token_label_id = 0` and first labels id also 0, seems a bug. @jplu <|||||>Hey !
The "fake" pad token id must be 0 and the first "true" token id must be 1. This is important to make the network able to make the difference between a padding and a word that is not part of an entity.
I just have tested the script on multiple NER datasets and works perfectly without any change, so I think, if there is an issue it is only with unlabeled data.
@0dust: the exception you mentioned does not point to where the script actually failed. I will try testing on unlabeled data to see if I get the same issue. To be honest, when I developed this script I never tried it with unlabeled test data.
@VDCN12593: -1 doesn't work because TF does not take negative ids into account.<|||||>@0dust Sorry, I cannot reproduce your issue, everything works fine for me even with unlabeled data... Please try to reproduce the example over germeval in the README by removing the label column in the test file. For me it works as expected.
If you still get the same issue, please provide an example of data for which it doesn't work :)<|||||>@jplu Thank you for your reply! Here are my thoughts:
1. In `run_tf_ner`, it doesn't make the "true" labels start from 1, so that's definitely a bug.
2. We can make `pad_token_label_id = -1`, as long as we also mask out all the non-first tokens inside each word (in `convert_examples_to_features()`) so that their softmax outputs are not taken into account by the loss. I think this makes more sense because we only use the first token (wordpiece) of each word for the tagging, and don't care about the output of the other tokens. This method is also supported by some people, like here: https://www.vamvas.ch/bert-for-ner/
Anyway, I know this script is unusual and difficult to properly follow. So, this weekend, once I have some time, I will fully review it and make it much easier to understand. I will let you know once done.<|||||>I have done all the changes that was raising some confusion in this PR https://github.com/huggingface/transformers/pull/3511. Basically, the pad token label id to -1 and removing the softmax. The training is a bit longer and results stay unchanged.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,331 | closed | Add model cards for FinBERT. | These are a copy of https://github.com/TurkuNLP/FinBERT/blob/master/README.md. | 03-18-2020 20:00:08 | 03-18-2020 20:00:08 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3331?src=pr&el=h1) Report
> Merging [#3331](https://codecov.io/gh/huggingface/transformers/pull/3331?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/20139b7c8d88f380f1e4e0ae2baf0b0ac9351039&el=desc) will **decrease** coverage by `0.06%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3331?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3331 +/- ##
==========================================
- Coverage 77.63% 77.57% -0.07%
==========================================
Files 100 100
Lines 16943 16943
==========================================
- Hits 13154 13143 -11
- Misses 3789 3800 +11
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3331?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.40% <0.00%> (-1.97%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3331?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3331?src=pr&el=footer). Last update [20139b7...d354a22](https://codecov.io/gh/huggingface/transformers/pull/3331?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Great. Could you add a metadata block at the top of the file with:
```
---
language: finnish
# optional thumbnail: ...
---
```
Thanks!<|||||>@haamis I have one question regarding the FinBERT training corpus: would it be possible to obtain the final pre-processed data that you've used for training the BERT model 🤔
I would really like to train an ELECTRA model and release it to the community :)<|||||>@stefan-it We can't publish the corpus due to licensing/copyright issues, but since we are also interested in training a Finnish ELECTRA maybe we could collaborate on this? Please send me an email sajvir(at)utu.fi. |
transformers | 3,330 | closed | Added model cards for SciBERT models uploaded under AllenAI org | 03-18-2020 19:32:53 | 03-18-2020 19:32:53 | ||
transformers | 3,329 | closed | CUDA Error when running run_language_modeling.py | I am trying to run this script[run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py)
I can load the pre-trained roberta model without a problem. However, when it starts training (loss.backward()), then there are issues like these:
> /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed.
> /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [31,0,0] Assertion `t >= 0 && t < n_classes` failed.
I assume it was caused by the `loss.backward()` line:
>
> File "2_fine_tune_bert.py", line 386, in train
> loss.backward()
> File "/home/lily/zl379/anaconda2/envs/py36/lib/python3.7/site-packages/torch/tensor.py", line 195, in backward
> torch.autograd.backward(self, gradient, retain_graph, create_graph)
> File "/home/lily/zl379/anaconda2/envs/py36/lib/python3.7/site-packages/torch/autograd/__init__.py", line 99, in backward
> allow_unreachable=True) # allow_unreachable flag
> RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)
Is it caused by the CUDA version or the PyTorch version? My PyTorch is 1.4.0 and CUDA is release 10.1, V10.1.243.
Thanks. | 03-18-2020 16:07:28 | 03-18-2020 16:07:28 | This seems related to the classes that you use in NLLLoss. That loss function expects a `torch.LongTensor` with values in the range [0, nb_classes-1], with no values left out in between.<|||||>Which version of `transformers` do you have installed? There was a recent change from using -1 to -100 for tokens that should be ignored during the calculation of loss. For example in this line: https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py#L218. Therefore, running the latest version of run_language_modeling.py with older versions of `transformers` will give an error similar to what you are seeing.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
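For reference, a minimal sketch of the -100 convention mentioned above (illustrative, not the exact example-script code):
```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)                # 4 token positions, 10 classes
labels = torch.tensor([3, 7, -100, -100])  # the last two positions are ignored by the loss
loss = F.cross_entropy(logits, labels, ignore_index=-100)
```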
<|||||>Problem solved, it was run with Python 3.6+. Return to Python 3.5, no more errors.<|||||>> Problem solved, it was run with Python 3.6+. Return to Python 3.5, no more errors.
That should not be the problem. The repo officially only supports 3.6+ any way. <|||||>I'm seeing exactly the same thing training Roberta using `run_mlm.py`. It's at the same step in the training cycle so I can reproduce but I've not tracked down what the issue is, either there's a problem with my input data in a single batch, or perhaps the training has diverged so forward() produces NaN's.
I'll keep digging. |
transformers | 3,328 | closed | Create README.md | 03-18-2020 14:34:09 | 03-18-2020 14:34:09 | ||
transformers | 3,327 | closed | improve doctstring for tf and pt generate() method | 03-18-2020 12:16:10 | 03-18-2020 12:16:10 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3327?src=pr&el=h1) Report
> Merging [#3327](https://codecov.io/gh/huggingface/transformers/pull/3327?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e8f44af5bf44a79f102678f5d7bb737cd6da3b52&el=desc) will **decrease** coverage by `0.10%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3327?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3327 +/- ##
==========================================
- Coverage 77.10% 77.00% -0.11%
==========================================
Files 100 100
Lines 16953 16953
==========================================
- Hits 13071 13054 -17
- Misses 3882 3899 +17
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3327?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3327/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.43% <ø> (-3.23%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3327/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.09% <ø> (+0.13%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3327?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3327?src=pr&el=footer). Last update [e8f44af...183952e](https://codecov.io/gh/huggingface/transformers/pull/3327?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 3,326 | closed | add link to blog post | 03-18-2020 11:42:21 | 03-18-2020 11:42:21 | ||
transformers | 3,325 | closed | Cubla Error on DistilBert | # 🐛 Bug
When using DistilBert I get `CUBLAS_STATUS_ALLOC_FAILED` when trying to run a forward pass.
## Information
The exact error is `RuntimeError: Cuda Error: CUBLAS_STATUS_ALLOC_FAILED when calling cublasCreate(handle)`, and the traceback indicates it happens on `output = input.matmul(weight.t())`, which is probably not informative, but the whole stack is filled with forward calls on a transformer.
I'm using distilbert-base-multilingual-cased with French
## Environment info
- `transformers` version: 2.5.1
- Platform: Linux-5.4.0-3amd-64-x89_64-with-debian-bullseye-sid
- Python version: 3.7.0
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| 03-18-2020 10:32:47 | 03-18-2020 10:32:47 | After some investigation, I have really no clue on what's happening. This issue is only referred through Tensorflow questions such as [this](https://stackoverflow.com/questions/41117740/tensorflow-crashes-with-cublas-status-alloc-failed) or [this issue](https://github.com/tensorflow/tensorflow/issues/9489) on Tensorflow's Github<|||||>I've got the same issue using the same environment as you.
I'm using model `"mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es"`
My error though is at:
```
RuntimeError Traceback (most recent call last)
<ipython-input-17-90ab5d6d4393> in <module>
13 outputs = model(**inputs, labels=labels)
14 loss, logits = outputs[:2]
---> 15 loss.backward()
16
17
/opt/conda/lib/python3.7/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
193 products. Defaults to ``False``.
194 """
--> 195 torch.autograd.backward(self, gradient, retain_graph, create_graph)
196
197 def register_hook(self, hook):
/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
97 Variable._execution_engine.run_backward(
98 tensors, grad_tensors, retain_graph, create_graph,
---> 99 allow_unreachable=True) # allow_unreachable flag
100
101
RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
```
I've tried most stuff I've found but really nothing seems to work.<|||||>Find similar issue when using regular BERTModel<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hi, @Ricocotam and @j6e
I got the same error with torch 1.6.0 on GPU with a DistilBert classification model. Would you advise if and how you fixed it?<|||||>So, I encountered the same issue.
This might sound stupid, but it happened because I was using the wrong tokenizer. So, for future readers: check that you really use the appropriate tokenizer.
However, I agree the error message is rather obscure. I'm not sure exactly *why* it is triggered (I guess the other tokenizer would produce some IDs that do not exist in the DistilBert vocabulary?), so I don't know if there is an easy fix for that |
transformers | 3,324 | closed | Error loading finetuned bert model AttributeError: 'NoneType' object has no attribute 'endswith' | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- -->
Unable to load finetuned bert model for language modeling task using run_language_modeling.py script
<!---->
I used the following code to load a fine-tuned model saved on my disk:
`BertForMaskedLM.from_pretrained('outputs/pytorch_model.bin', config=config, from_tf=True)`. However, I am getting the following error:
Traceback (most recent call last):
File "/home/mahdi/Desktop/pycharm-community-4.5.3/helpers/pydev/pydevd_vars.py", line 342, in evaluateExpression
compiled = compile(expression, '<string>', 'eval')
File "<string>", line 1
from transformers import WEIGHTS_NAME, BertForMaskedLM
^
SyntaxError: invalid syntax
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/mahdi/Desktop/pycharm-community-4.5.3/helpers/pydev/pydevd_comm.py", line 1071, in doIt
result = pydevd_vars.evaluateExpression(self.thread_id, self.frame_id, self.expression, self.doExec)
File "/home/mahdi/Desktop/pycharm-community-4.5.3/helpers/pydev/pydevd_vars.py", line 344, in evaluateExpression
Exec(expression, updated_globals, frame.f_locals)
File "/home/mahdi/Desktop/pycharm-community-4.5.3/helpers/pydev/pydevd_exec2.py", line 3, in Exec
exec(exp, global_vars, local_vars)
File "<string>", line 2, in <module>
File "/home/mahdi/anaconda3/envs/py36/lib/python3.6/site-packages/transformers/modeling_utils.py", line 482, in from_pretrained
if resolved_archive_file.endswith(".index"):
AttributeError: 'NoneType' object has no attribute 'endswith'
Transformers version:2.5.1
Pytorch version: 1.3.0
**A link to original question on Stack Overflow**:
| 03-18-2020 08:02:45 | 03-18-2020 08:02:45 | The `from_pretrained` method should point to a directory. Could you try to point it to a directory containing both the model weights and the config.json file?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
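For reference, a minimal sketch of the directory-based loading suggested above (`outputs/` is a placeholder; the directory must contain both `pytorch_model.bin` and `config.json`):
```python
from transformers import BertForMaskedLM

# Point from_pretrained at the directory, not at pytorch_model.bin itself.
model = BertForMaskedLM.from_pretrained("outputs/")
```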
|
transformers | 3,323 | closed | [Bart/Memory] don't create lm_head | ### Summary
previously we created a `self.lm_head = nn.Linear()` with the exact same weight matrix as the input embeddings, and then skipped `tie_weights`.
This presented 3 problems:
1) 200 MB of extra GPU RAM
2) Can't `tie_weights`
3) Can't `resize_token_embeddings`
This PR alleviates all the concerns by using `lm_logits = F.linear(decoder_outputs[0], self.shared)`. It also adds more aggressive test coverage that `resize_embeddings` is changing the shape of both input and output embeddings.
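For illustration, a simplified sketch of the weight-tying idea (not the actual Bart code; dimensions are made up):
```python
import torch
import torch.nn.functional as F

vocab_size, d_model = 50265, 1024
shared = torch.nn.Embedding(vocab_size, d_model)      # shared input embedding table
decoder_hidden = torch.randn(2, 7, d_model)           # (batch, seq_len, d_model)

# Reuse the shared embedding matrix as the output projection instead of a separate lm_head,
# so no extra parameter copy is created and resizing the embeddings stays consistent.
lm_logits = F.linear(decoder_hidden, shared.weight)   # (batch, seq_len, vocab_size)
```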
### Concerns
1) If I recall from an earlier PR, tying the input and output embeddings to a single parameter is unfriendly to torchscript.
However, neither `Bart` before this change nor `T5ForConditionalGeneration`, which uses the `self.lm_head = nn.Linear` technique, pass the common torchscript tests, which suggests that the weight tying in this PR is not removing functionality that existed before it.
2) The failing test here is caused by the fact that S3 has `lm_head` in the `state_dict`. I will update S3 right before this PR gets merged.
3) To pass unit tests and use `.generate`, `get_output_embeddings` must return `nn.Linear`. To satisfy this constraint, this PR makes the `nn.Linear` module on the fly when `get_output_embeddings` is called. I think (but am not sure) that this is fine because `resize_token_embeddings` works by resizing the input_embeddings then calling `tie_weights`, and we have stopped skipping `tie_weights`
(Note there is a separate but related issue that `test_common.py::test_resize_embeddings` is shallow, detailed in https://github.com/huggingface/transformers/issues/3378)
| 03-18-2020 06:53:22 | 03-18-2020 06:53:22 | @LysandreJik
```python
if hasattr(output_embeddings, "out_features") and hasattr(input_embeddings, "num_embeddings"):
output_embeddings.out_features = input_embeddings.num_embeddings
```
was breaking because bart-large-cnn doesn't have a mask token.
I can investigate more deeply if it's interesting to anyone.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3323?src=pr&el=h1) Report
> Merging [#3323](https://codecov.io/gh/huggingface/transformers/pull/3323?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5ad2ea06af898a95744a268332431f050c62a862&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3323?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3323 +/- ##
==========================================
- Coverage 77.83% 77.82% -0.01%
==========================================
Files 100 100
Lines 17051 17048 -3
==========================================
- Hits 13272 13268 -4
- Misses 3779 3780 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3323?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `98.07% <100.00%> (-0.02%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.73% <0.00%> (-0.14%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3323?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3323?src=pr&el=footer). Last update [5ad2ea0...0b8c252](https://codecov.io/gh/huggingface/transformers/pull/3323?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,322 | closed | [BART] torch 1.0 compatibility | This PR contains two minor fixes and one piece of cleanup for the BartModel.
**1.** Previously, Bart's encoder padding mask used -10000. to represent tokens that should be ignored, then called `masked_fill(mask.to(torch.bool), -inf)` to use the mask.
There are two problems with this:
- it's confusing to set a value to a large negative and then call `bool`. Why not just invert the mask and call `bool` immediately.
- `torch.bool` is released in pytorch 1.2, so this code breaks on earlier versions.
- Fix: let `torch.eq` make the mask the correct dtype at the beginning.
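A minimal sketch of the bool-style mask (simplified, not the exact Bart code):
```python
import torch

pad_token_id = 1
input_ids = torch.tensor([[5, 7, 9, pad_token_id, pad_token_id]])

# The comparison already yields a mask dtype that masked_fill accepts
# (uint8 on older torch, bool on 1.2+), so no explicit torch.bool cast is needed.
padding_mask = input_ids.eq(pad_token_id)
attn_scores = torch.zeros(1, 5)
attn_scores = attn_scores.masked_fill(padding_mask, float("-inf"))
```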
**2.** Explicit use of `F.gelu` is not available or broken in earlier torch versions. Let `ACT2FN` handle this logic.
**3.** An unreachable code branch is deleted.
Supplementary Material: torch v 1.2.0 [release notes](https://github.com/pytorch/pytorch/releases/tag/v1.2.0)
| 03-18-2020 05:11:31 | 03-18-2020 05:11:31 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3322?src=pr&el=h1) Report
> Merging [#3322](https://codecov.io/gh/huggingface/transformers/pull/3322?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/38a555a83c8aceae77895d325174af5bd576cec7&el=desc) will **decrease** coverage by `0.93%`.
> The diff coverage is `70.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3322?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3322 +/- ##
==========================================
- Coverage 77.14% 76.20% -0.94%
==========================================
Files 100 100
Lines 16972 16964 -8
==========================================
- Hits 13093 12928 -165
- Misses 3879 4036 +157
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3322?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.04% <70.00%> (+0.78%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0.00%> (-10.00%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0.00%> (-2.30%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.00% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.20% <0.00%> (-1.35%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.19% <0.00%> (+0.71%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3322?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3322?src=pr&el=footer). Last update [38a555a...d67639d](https://codecov.io/gh/huggingface/transformers/pull/3322?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>One note: you can add an `activation_function` attribute in `BartConfig` defaulting to "gelu" to be used when calling the `ACT2FN`. This lets people switch to "gelu_new" if they want a different trade-off accuracy versus speed/memory. |
transformers | 3,321 | closed | Init card for model | We create card for this model. | 03-18-2020 03:28:54 | 03-18-2020 03:28:54 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3321?src=pr&el=h1) Report
> Merging [#3321](https://codecov.io/gh/huggingface/transformers/pull/3321?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/38a555a83c8aceae77895d325174af5bd576cec7&el=desc) will **decrease** coverage by `0.03%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3321?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3321 +/- ##
==========================================
- Coverage 77.14% 77.10% -0.04%
==========================================
Files 100 100
Lines 16972 16972
==========================================
- Hits 13093 13087 -6
- Misses 3879 3885 +6
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3321?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3321/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.40% <0.00%> (-1.08%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3321?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3321?src=pr&el=footer). Last update [38a555a...1630a1f](https://codecov.io/gh/huggingface/transformers/pull/3321?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks! [`model page`](https://huggingface.co/clue/roberta_chinese_3L312_clue_tiny)
By the way, I sent an email to the address listed [in your paper](https://arxiv.org/pdf/2003.01355.pdf).
Let me know if you got it.<|||||>Thank you.
Yes, please. And my friend has sent an e-mail to you via that email.
best,
Junyi
--
Junyi Li
+ 86 136 0354 2466
dukeenglish.github.io
|
transformers | 3,320 | closed | TF BERT not FP16 compatible? | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): TFBertForQuestionAnswering
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] my own modified scripts:
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD
## To reproduce
Simple example to reproduce error:
```
import tensorflow as tf
from transformers import TFBertForQuestionAnswering
# turn on mp (fp16 operations)
tf.keras.mixed_precision.experimental.set_policy('mixed_float16')
model = TFBertForQuestionAnswering.from_pretrained('bert-base-uncased')
```
The error occurs here:
transformers/modeling_tf_bert.py", line 174, in _embedding
embeddings = inputs_embeds + position_embeddings + token_type_embeddings
And this is the error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: cannot compute AddV2 as input #1(zero-based) was expected to be a half tensor but is a float tensor [Op:AddV2] name: tf_bert_for_question_answering/bert/embeddings/add/
## Expected behavior
I want to use TF BERT with mixed precision (for faster inference on tensor core GPUs). I know that full fp16 is not working out-of-the-box, because the model weights need to be in fp16 as well. Mixed precision, however, should work because only operations are performed in fp16.
I get a dtype issue. It seems the model is not fp16-compatible yet? Will this be fixed in the future?
## Environment info
- `transformers` version: 2.5.0
- Platform: ubuntu 16.04
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (GPU)
- Tensorflow version (GPU?): 2.1.0 (GPU)
- Using GPU in script?: sort of
- Using distributed or parallel set-up in script?: nope
 | 03-18-2020 03:20:45 | 03-18-2020 03:20:45 | I've faced the same issue. Maybe the data type is hard-coded somewhere? Have you found a solution?<|||||>Tried this on Colab TPU, same error.<|||||>Same here, would be convenient as hell :)<|||||>Having the same error also for `transformers` version 2.11.0.
Here some code to easily reproduce the error:
```python
#!/usr/bin/env python3
from transformers import TFBertModel, BertTokenizer
from tensorflow.keras.mixed_precision import experimental as mixed_precision
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)
tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertModel.from_pretrained("bert-base-uncased")
input_ids = tok("The dog is cute", return_tensors="tf").input_ids
model(input_ids) # throws error on GPU
```<|||||>Encountering the same issue here:
```python3
import tensorflow as tf
from transformers.modeling_tf_distilbert import TFDistilBertModel
tf.keras.mixed_precision.experimental.set_policy('mixed_float16')
model = TFDistilBertModel.from_pretrained('distilbert-base-uncased')
```<|||||>Put this issue on my TF ToDo-List :-) <|||||>+1<|||||>Hi @patrickvonplaten, is this problem fixed?
I got the same error recently with version 3.0.2<|||||>This is still an open problem...I didn't find the time yet to take a look! Will link this issue to the TF projects.<|||||>This is already solved in the new version:
```python
position_embeddings = tf.cast(self.position_embeddings(position_ids), inputs_embeds.dtype)
token_type_embeddings = tf.cast(self.token_type_embeddings(token_type_ids), inputs_embeds.dtype)
embeddings = inputs_embeds + position_embeddings + token_type_embeddings
```
 |
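To make the quoted fix concrete, here is a minimal standalone illustration (my own sketch, not the library code) of why the cast is needed: under a mixed-precision policy some tensors end up float16 while others are still float32, and `AddV2` refuses to mix the two dtypes.

```python
import tensorflow as tf

# Pretend these come out of two different sub-layers under mixed precision:
inputs_embeds = tf.random.normal((1, 8, 768), dtype=tf.float16)       # half-precision activations
position_embeddings = tf.random.normal((1, 8, 768), dtype=tf.float32)  # float32 embedding output

# Casting to a common dtype before the addition avoids the AddV2 error.
embeddings = inputs_embeds + tf.cast(position_embeddings, inputs_embeds.dtype)
print(embeddings.dtype)  # float16
```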
transformers | 3,319 | closed | [BART] cleanup: remove redundant kwargs, improve docstrings | Small Bart code cleanups before pip release.
### Cleanup
- Deletes unused `value` argument for SelfAttention. (`value` is always the same as `key`.) This might be moderately controversial as most attention modules take query, key, value as arguments, but this change reduces the signature to just query and key (since key is always the same as value).
- Deletes redundant `static_kv` argument for SelfAttention. It is always the same as `self.encoder_decoder_attention`.
- Context: the `static_kv` variable decides whether we want to extend the keys and values in the cache or, if `True`, use them without modification.
- This PR keeps a local `static_kv` variable because that variable name describes the purpose of the variable better than `self.encoder_decoder_attention`. But `static_kv` is no longer a kwarg. This simplifies the API and avoids having the same logic in two places.
#### Two new fast tests
- test coverage for `dummy_inputs` (previously broken)
- test coverage for the default generate kwargs. | 03-18-2020 02:53:41 | 03-18-2020 02:53:41 | |
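A rough, single-head sketch of the reduced signature described in this PR; this is an illustration only, not the actual Bart module, and it omits caching entirely.

```python
import torch
import torch.nn as nn

class SelfAttentionSketch(nn.Module):
    """Callers pass only query and key; key is reused as value inside the module."""
    def __init__(self, embed_dim, encoder_decoder_attention=False):
        super().__init__()
        self.encoder_decoder_attention = encoder_decoder_attention
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)
        self.scale = embed_dim ** -0.5

    def forward(self, query, key):
        # `static_kv` stays a local alias of the attention type: for
        # encoder-decoder attention, cached keys/values would be reused
        # unchanged instead of extended (cache logic not shown here).
        static_kv = self.encoder_decoder_attention
        q = self.q_proj(query) * self.scale
        k = self.k_proj(key)
        v = self.v_proj(key)  # value comes from the same tensor as key
        attn_weights = torch.softmax(q @ k.transpose(-1, -2), dim=-1)
        return attn_weights @ v

attn = SelfAttentionSketch(embed_dim=16)
out = attn(torch.randn(2, 5, 16), torch.randn(2, 5, 16))
print(out.shape)  # torch.Size([2, 5, 16])
```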
transformers | 3,318 | closed | pipelines.ipynb mask should be [MASK] | In transformers/notebooks/03-pipelines.ipynb, the fill-mask task with the newest version: `<mask>` raises an error; it should be [MASK]. | 03-18-2020 02:49:13 | 03-18-2020 02:49:13 | Hi @shibing624,
Thanks for opening this issue.
`fill-mask` pipeline uses Roberta under the hood and the mask token is actually `<mask>` which is the one used in the notebook.
However, it's not the safest way to use the pipeline I agree. I'll update the notebook with
```python
nlp_fill = pipeline('fill-mask')
nlp_fill('Hugging Face is a French company based in ' + nlp_fill.tokenizer.mask_token)
```
This way it will be compatible with any model.
Morgan<|||||>Don't hesitate to reopen if I missed something :) |
transformers | 3,317 | closed | output value of XLNetModel changes for the same input | I was trying to use the pre-trained `XLNetModel` model. I found that for the same input, each time I run the model the output values are different, which is weird to me. Here is the code that I just used:
```
import torch
from transformers import XLNetTokenizer, XLNetModel

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetModel.from_pretrained("xlnet-base-cased")
sent = tokenizer.encode("I love my dog")
test = torch.tensor(sent)
test = test.view(1, test.shape[0])
```
Now, if I run the model, the output is different each time.
```
out=model(test)
print(out[0])
```
Why is this the behaviour?
I have another question: isn't this model the pre-trained transformer trained on the language modeling task (i.e. the transformer without the LM head)? | 03-17-2020 23:27:32 | 03-17-2020 23:27:32 | It seems that this can happen if I don't call `model.eval()` after fine-tuning.
|
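The `model.eval()` point above is the usual explanation: dropout layers are stochastic in training mode, so outputs vary run to run. A small check, reusing the model and tokenizer names from the snippet above:

```python
import torch
from transformers import XLNetModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetModel.from_pretrained("xlnet-base-cased")
model.eval()  # switch dropout (and other train-only behaviour) to deterministic inference mode

input_ids = torch.tensor(tokenizer.encode("I love my dog")).unsqueeze(0)
with torch.no_grad():
    out1 = model(input_ids)[0]
    out2 = model(input_ids)[0]
print(torch.allclose(out1, out2))  # True once the model is in eval mode
```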
transformers | 3,316 | closed | TextClassificationPipeline does not work with pretrained BERT model | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): 'nlptown/bert-base-multilingual-uncased-sentiment'
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
model = BertModel.from_pretrained(pretrained_model_name_or_path='nlptown/bert-base-multilingual-uncased-sentiment')
tokenizer = BertTokenizer.from_pretrained(pretrained_model_name_or_path='nlptown/bert-base-multilingual-uncased-sentiment')
sentiment_analyzer = TextClassificationPipeline(model=model, tokenizer=tokenizer)
sentiment_analyzer('This is awesome!')
/usr/local/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
504 def __call__(self, *args, **kwargs):
505 outputs = super().__call__(*args, **kwargs)
--> 506 scores = np.exp(outputs) / np.exp(outputs).sum(-1)
507 return [{"label": self.model.config.id2label[item.argmax()], "score": item.max()} for item in scores]
508
ValueError: operands could not be broadcast together with shapes (1,8,768) (1,8)
```
## Expected behavior
A sentiment score.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: Ubuntu 16.04
- Python version: 3.7
- PyTorch version (GPU?): 1.4.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
 | 03-17-2020 21:13:27 | 03-17-2020 21:13:27 | That would be because you're using a `BertModel` instead of a `BertForSequenceClassification`.<|||||>What model should we be using, and where can we download it from? |
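A minimal sketch of the fix suggested in the reply: load a sequence-classification head rather than the bare encoder, so the pipeline receives class logits instead of hidden states. The model identifier is the one from the report; the printed label is indicative only.

```python
from transformers import BertTokenizer, BertForSequenceClassification, TextClassificationPipeline

model = BertForSequenceClassification.from_pretrained('nlptown/bert-base-multilingual-uncased-sentiment')
tokenizer = BertTokenizer.from_pretrained('nlptown/bert-base-multilingual-uncased-sentiment')

sentiment_analyzer = TextClassificationPipeline(model=model, tokenizer=tokenizer)
print(sentiment_analyzer('This is awesome!'))  # e.g. [{'label': '5 stars', 'score': ...}]
```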
transformers | 3,315 | closed | how does masked_lm_labels work ? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 03-17-2020 21:02:22 | 03-17-2020 21:02:22 | Hi all
from the Hugging Face docs (https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm),
in this code:
```python
from transformers import BertTokenizer, BertForMaskedLM
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, masked_lm_labels=input_ids)
loss, prediction_scores = outputs[:2]
```
What happens in `outputs = model(input_ids, masked_lm_labels=input_ids)`?
Will it automatically [MASK] 15% of all the tokens in each sentence of each batch and calculate the loss just for them?
@thomwolf @tholor<|||||>No, I don't think so. You need to mask the tokens yourself and then pass them to the model; look here (e.g. the evaluate function) for an example:
https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py<|||||>@blackcat84, thanks, it helped a lot<|||||>@blackcat84
and one more thing:
does any function in those scripts concatenate the short lines to each other,
so that each line does not have to be padded so much?<|||||>It's been a while so I might be wrong, but I think you are correct; I don't remember in which function though. A simple way to be sure is to pass a dummy input to the script/function and check it yourself<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
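Since the answer above points at the masking done in run_language_modeling.py, here is a simplified sketch of that idea. It is my own reduction, not the script's exact `mask_tokens` function: roughly 15% of positions are masked and every other label position is set to -100 so the cross-entropy ignores it.

```python
import torch

def mask_tokens_sketch(inputs, tokenizer, mlm_probability=0.15):
    """Simplified MLM masking: inputs is a LongTensor of token ids."""
    labels = inputs.clone()
    probability_matrix = torch.full(labels.shape, mlm_probability)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100                    # only masked positions contribute to the loss
    inputs[masked_indices] = tokenizer.mask_token_id  # the real script also keeps/randomizes some tokens
    return inputs, labels
```

The masked inputs and labels would then be passed as `model(masked_inputs, masked_lm_labels=labels)`, with the older keyword used in this thread.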
transformers | 3,314 | closed | Mismatch in the accuracy figures | # ❓ Questions & Help
Hi, just wanted to know if the "bert-base-multilingual-cased" model and https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip are the same.
I tried using the tokenizer from multi_cased_L-12_H-768_A-12 together with the bert-base-multilingual-cased model, which performed better than using both the tokenizer and the model from the pretrained "bert-base-multilingual-cased".
I was trying to fine-tune on a sentiment analysis task, and the two tokenizers above gave different performance.
Can anybody shed light on this ? | 03-17-2020 19:20:38 | 03-17-2020 19:20:38 | |
transformers | 3,313 | closed | KeyError in GLUE data tokenization with RoBERTA | # 🐛 Bug
I'm getting a KeyError [here](https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/glue.py#L94) when using RoBERTa in [examples/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/run_glue.py) and trying to access `'token_type_ids'` while preprocessing the data, maybe from [this commit](https://github.com/huggingface/transformers/commit/5164ea91a7b4d35cb03867233527fa383a651775) removing `'token_type_ids'` from RoBERTa (and DistilBERT)?
I get the error when fine-tuning RoBERTa on CoLA and RTE. I haven't tried other tasks, but I think you'd get the same error.
I don't get the error when fine-tuning XLNet (presumably, since XLNet does use `'token_type_ids'`), and I don't get the error when I do `pip install transformers` instead of `pip install .` (which I think means the issue is coming from a recent commit).
Here's the full error message:
```bash
03/17/2020 11:53:58 - INFO - transformers.data.processors.glue - Writing example 0/13997
Traceback (most recent call last):
File "examples/run_glue.py", line 731, in <module>
main()
File "examples/run_glue.py", line 679, in main
train_dataset = load_and_cache_examples(args, args.task_name, tokenizer, evaluate=False)
File "examples/run_glue.py", line 419, in load_and_cache_examples
pad_token_segment_id=4 if args.model_type in ["xlnet"] else 0,
File "/home/ejp416/cmv/transformers/src/transformers/data/processors/glue.py", line 94, in glue_convert_examples_to_features
input_ids, token_type_ids = inputs["input_ids"], inputs["token_type_ids"]
KeyError: 'token_type_ids'
```
## Information
Model I am using (Bert, XLNet ...): RoBERTa. I think DistilBERT may run into the same issue as well.
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
I've made slight modifications to the training loop in the official [examples/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/run_glue.py), but I did not touch the data pre-processing, which is where the error occurs (before any training).
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
I've run into the error on CoLA and RTE, though I think the error should happen on all GLUE tasks.
## To reproduce
Steps to reproduce the behavior:
1. Install `transformers` using the latest clone (use `pip install .` not `pip install transformers`)
2. Download the RTE data (e.g., into `data/RTE` using the GLUE download scripts in this repo)
3. Run a command to train RoBERTa (base or large). I'm using:
```
python examples/run_glue.py --model_type roberta --model_name_or_path roberta-base --output_dir models/debug --task_name rte --do_train --evaluate_during_training --data_dir data/RTE --max_seq_length 32 --max_grad_norm inf --adam_epsilon 1e-6 --adam_beta_2 0.98 --weight_decay 0.1 --logging_steps 874 --save_steps 874 --num_train_epochs 10 --warmup_steps 874 --per_gpu_train_batch_size 1 --per_gpu_eval_batch_size 2 --learning_rate 1e-5 --seed 12 --gradient_accumulation_steps 16 --overwrite_output_dir
```
## Expected behavior
`load_and_cache_examples` (and specifically, the call to `convert_examples_to_features`) in `examples/run_glue.py` should run without error, to load, preprocess, and tokenize the dataset.
## Environment info
- `transformers` version: 2.5.1
- Platform: Linux-3.10.0-1062.12.1.el7.x86_64-x86_64-with-centos-7.7.1908-Core
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Error happens with both GPU and CPU
- Using distributed or parallel set-up in script?: No
| 03-17-2020 16:57:58 | 03-17-2020 16:57:58 | I also have this issue when i run run_multiple_choice.py in RACE data with RoBERTA.<|||||>I get the same error when I try to fine-tune Squad<|||||>Tagging @LysandreJik <|||||>>
>
> I also have this issue when i run run_multiple_choice.py in RACE data with RoBERTA.
Same here. Any solution?<|||||>@nielingyun @orena1 @Onur90 maybe try pulling again from the latest version of the repo and see if it works? The error went away after I pulled recently, not sure if that fixed it or something else I did - let me know if that worked<|||||>@ethanjperez by latest version you mean **latest commit** or the **latest release** (v2.6.0)? It is still not working with the **latest commit**. |
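A defensive variant of the failing lookup, written here as an assumption rather than the actual upstream patch, avoids the KeyError for tokenizers that no longer return `token_type_ids`:

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
inputs = tokenizer.encode_plus("premise text", "hypothesis text",
                               add_special_tokens=True, max_length=32)
input_ids = inputs["input_ids"]
# RoBERTa/DistilBERT no longer return token_type_ids, so fall back to zeros:
token_type_ids = inputs.get("token_type_ids", [0] * len(input_ids))
print(input_ids, token_type_ids)
```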
transformers | 3,312 | closed | GPT2Tokenizer doesn't include BOS or EOS token | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT-2
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Script:
```
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
encoded_dict = tokenizer.encode_plus(text="Hello I am Moin", add_special_tokens=True, \
max_length=512, truncation_strategy="longest_first", pad_to_max_length=False, \
return_tensors=None, return_token_type_ids=True, return_attention_mask=True, \
return_overflowing_tokens=False, return_special_tokens_mask=False)
print(tokenizer.bos_token_id)
print(encoded_dict['input_ids'])
```
You should see that the `input_ids` do not include the `bos_token_id`. Shouldn't `encode_plus` be doing this?
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The <|endoftext|> token would appear, since I included `add_special_tokens`.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Linux-4.15.0-54-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.2
- PyTorch version (GPU?): 1.3.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 03-17-2020 16:34:27 | 03-17-2020 16:34:27 | Accidental double post -- closing this in favour of #3311 |
transformers | 3,311 | closed | GPT2 -- build_inputs_with_special_tokens lacking BOS and EOS tokens. | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT-2
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Script:
```
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
encoded_dict = tokenizer.encode_plus(text="Hello I am Moin", add_special_tokens=True, \
max_length=512, truncation_strategy="longest_first", pad_to_max_length=False, \
return_tensors=None, return_token_type_ids=True, return_attention_mask=True, \
return_overflowing_tokens=False, return_special_tokens_mask=False)
print(tokenizer.bos_token_id)
print(encoded_dict['input_ids'])
```
You should see that the `input_ids` do not include the `bos_token_id`. Shouldn't `encode_plus` be doing this?
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The <|endoftext|> token would appear, since I included `add_special_tokens`.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Linux-4.15.0-54-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.2
- PyTorch version (GPU?): 1.3.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 03-17-2020 16:33:57 | 03-17-2020 16:33:57 | Hi @moinnadeem,
Thanks for posting this!
As it is implemented in the moment, you are right, GPT2 Tokenizer does not add the BOS in the beginning nor the EOS token in the end.
You can see e.g. that the XLNet tokenizer has a method that adds special tokens to the encoded input string (see https://github.com/huggingface/transformers/blob/4e4403c9b44324671cb795df2ef30e70fe3b606e/src/transformers/tokenization_xlnet.py#L241), whereas the GPT2 tokenizer does not have such a function and thus uses the default one which does not add any special tokens.
As far as I can see this could be a feature request, where a `build_inputs_with_special_tokens()` would be added to `tokenization_gpt2.py`.
The expected behavior could be:
input_string -> BOS + encoded(input_string) + EOS in the case of GPT2.
Feel free to open a PR to include this feature :-) In the meantime you can obviously just manually add the BOS and EOS token before encoding.
@mfuntowicz do you think such a PR would make sense? <|||||>I don't think this has been fixed, right?<|||||>It's not really a bug because the default behavior of GPT2 is to just not add bos or eos tokens. GPT2 is mainly used to generate text so it would not make a lot of sense to add a EOS of a input prompt. If one wants he could just manually add `gpt2_tokenizer.eos_token` to the input and the eos_token_id will be added<|||||>> It's not really a bug because the default behavior of GPT2 is to just not add bos or eos tokens. GPT2 is mainly used to generate text so it would not make a lot of sense to add a EOS of a input prompt. If one wants he could just manually add `gpt2_tokenizer.eos_token` to the input and the eos_token_id will be added
I think in the original GPT2 model, there *are* special tokens for bos and eos, both of which are `<|endoftext|>`, right? So if I want to finetune it, we should do the same thing -- add both bos and eos to the corpus for finetune, right?<|||||>@zhujl1991 - yes this is correct.
We also set bos and eos token to `<|endoftet|>` for GPT2 as you can verify as follows:
```python
from transformers import GPT2Tokenizer
tok = GPT2Tokenizer.from_pretrained("gpt2")
print(tok.eos_token)
print(tok.bos_token)
```
However, I don't think we plan on adding these tokens automatically when tokenizing an input string because the main use case for GPT2 is open-domain text generation where these tokens should not be added.
I agree that they could /should be added for fine-tuning.
So I'm not sure if we want to add any special "fine-tune" behavior to the GPT2Tokenizer. @LysandreJik - what do you think?<|||||>
> @zhujl1991 - yes this is correct.
> We also set bos and eos token to `<|endoftet|>` for GPT2 as you can verify as follows:
>
> ```python
> from transformers import GPT2Tokenizer
> tok = GPT2Tokenizer.from_pretrained("gpt2")
> print(tok.eos_token)
> print(tok.bos_token)
> ```
>
> However, I don't think we plan on adding these tokens automatically when tokenizing an input string because the main use case for GPT2 is open-domain text generation where these tokens should not be added.
> I agree that they could /should be added for fine-tuning.
>
> So I'm not sure if we want to add any special "fine-tune" behavior to the GPT2Tokenizer. @LysandreJik - what do you think?
The behavior of "set add_special_tokens to True but no special tokens are added while there are special tokens in the tokenizer" looks like a bug to me anyway. If the user doesn't want to add special tokens when tokenizing, e.g., as you said, when generating text, the user should set add_special_tokens to False.<|||||>I see what you mean @zhujl1991 -> Thinking about backwards compatibility and that by default `add_special_tokens` is set to `True`, I still do not think that we should add this feature to the `__call__` or `encode_plus` functions for GPT2. On the other hand such a functionality would be very useful for training/fine-tuning.
I see three options:
1) overwrite the __call__ method in GPT2 to have add_special_tokens=`False` by default and append BOS and EOS if set to `True` => I don't like this option as it's quite hacky and would still not be 100% backward compatible
2) Add a new method `prepare_for_training` where the input is prepared for fine-tuning / training as you said.
3) Don't do anything about it and let the user overwrite such a method himself.
I would be fine with option 2), but also don't think it's that important of a feature (option 3))....let's see what @LysandreJik @sgugger, @thomwolf and @sshleifer think<|||||>IMO this is something that should be written by the user for their specific needs (option 3). We can document more that the tokenizers are pre-set for the most common tasks the corresponding models are used for, to avoid any user being too surprised.
I feel that if we add a method, it will cover some use cases but not all and it will either be overly too complex or only used by a small percentage of the users.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Ran into this too – this seems like a bug to me, or at the least not intuitive behaviour.
If there's a tokeniser that has an EOS token, and I encode with `add_special_tokens=True`, I'd expect it to include the eos token at the end of sentence. <|||||>+1 on this.
The main issue here is really how opaque and unintuitive this has been for me. Here's my thought process:
```
from transformers import GPT2TokenizerFast
gpt2_tok = GPT2TokenizerFast.from_pretrained("gpt2")
gpt2_tok("Mary had a little lamb")["input_ids"]
# prints [24119, 550, 257, 1310, 19343]
```
Mh, weird, no special tokens? I've used HF before and I thought the default was to add them?
Then I went and looked up the function, and indeed the default is to have them on. Bah, whatevs. Let's explicitly pass it as on:
```
gpt2_tok("Mary had a little lamb", add_special_tokens=True)["input_ids"]
# No difference :D
```
This part is what got me massively confused. I think it's entirely fine to change the default behavior for GPT-2 if the majority of the users don't care/want those tokens, but it would be more intuitive to change the default to add_special_tokens=False, and actually add the special tokens when the option is passed explicitly! :)
<|||||>I had some thoughts over this question too.
In the end, I realized that the model has been trained using "full paragraphs/articles of text", which means that spaces and new line symbols were part of the training. The <|endoftext|> token was added between paragraphs/articles.
So the <|endoftext|> should only be added at the beginning and end of text paragraphs/articles for fine-tuning, but it seems to be a detail since in fact, it is just a kind of text formatting.
For text generation, usually you think of the "end of text" as a punctuation mark or newline character not as the <|endoftext|> token which denotes the end of a paragraph/article.
So I think that the code is perfectly right.
<|||||>@patrickvonplaten
Hi, I also believe that BOS should be prepended before an input sentence (w1, w2, ...) for two reasons:
1. Without BOS, the model cannot calculate the probability of generating the first token, i.e. P(w1|BOS).
2. BOS also affects the probability of generating the following words, e.g. P(w2|w1) != P(w2|w1, BOS).
For the second point, see the following example:
```
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
inputs = tokenizer("<|endoftext|>This", return_tensors="pt")
# inputs: {'input_ids': tensor([[50256, 1212]]), 'attention_mask': tensor([[1, 1]])}
outputs = model(**inputs, labels=inputs["input_ids"])
tokenizer.convert_ids_to_tokens(outputs.logits[0][1].topk(20)[1])
# ['Ġis', 'Ġarticle', 'Ġpost', 'Ġweek', 'Ġpage', 'Ġstory', 'Ġyear', 'Ġwas', 'Ġmonth', 'Ġsite', 'Ġbook', 'Ġpast', 'Ġitem', 'Ġproject', 'Ġblog', 'Ġstudy', 'Ġsection', 'Ġmorning', 'Ġvideo', 'Ġgame']
inputs = tokenizer("This", return_tensors="pt")
# {'input_ids': tensor([[1212]]), 'attention_mask': tensor([[1]])}
outputs = model(**inputs, labels=inputs["input_ids"])
tokenizer.convert_ids_to_tokens(outputs.logits[0][0].topk(20)[1])
# ['Ġis', ',', '.', 'Ċ', "'s", 'Ġwas', 'Ġto', 'Ġand', 'Ġthe', 'Ġin', 'Ġhas', 'Ġof', 'Ġwill', 'Ġa', ':', 'Ġare', 'Ġcan', 'Ġ(', '-', 'Ġfor']
```
Comparing these two generations, the prediction with "<|endoftext|>" seems more accurate (e.g. Without BOS, some punctuations are predicted as the next word of "This").
Due to the lack of documentation, I am not entirely sure if the "<|endoftext|>" token is actually used as a BOS token during training, but the following example suggests it may be the case.
```
inputs = tokenizer("<|endoftext|>", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
tokenizer.convert_ids_to_tokens(outputs.logits[0][0].topk(20)[1])
# ['Ċ', 'The', '"', 'A', 'I', 'In', '.', 'It', 'S', 'This', 'B', '-', 'C', 'We', '1', 'T', "'", 'P', '(', 'G']
```
Even if you opt not to prepend BOS, I believe these things should be clarified more in the documentation.<|||||>To add confirmation that `<|endoftext|>` is also a BOS token, the official repo uses it for inference as well: https://github.com/openai/gpt-2/blob/a74da5d99abaaba920de8131d64da2862a8f213b/src/generate_unconditional_samples.py#L60<|||||>Would be nice to add some documentation on this in the GPT2Tokenizer [docs](https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2Tokenizer).
(Self-note for future PR) <|||||>@patrickvonplaten
> However, I don't think we plan on adding these tokens automatically when tokenizing an input string because the main use case for GPT2 is open-domain text generation where these tokens should not be added.
Hi.
I'd like to ask you a question.
Could you explain how the model want to stop generation if there's no EOS token?<|||||>> @patrickvonplaten
>
> > However, I don't think we plan on adding these tokens automatically when tokenizing an input string because the main use case for GPT2 is open-domain text generation where these tokens should not be added.
>
> Hi. I'd like to ask you a question. Could you explain how the model want to stop generation if there's no EOS token?
I'm trying to train the model with EOS tokens at the end. Let's see if that works...
Shouldn't the EOS tokens be set by default when we use `DataCollatorForLanguageModeling(..., mlm=False)`? It makes sense to me that they should. If not, **at least** [this documentation](https://huggingface.co/docs/transformers/tasks/language_modeling) should be changed and the EOS token should be added at the end of each raw text.
<|||||>> > @patrickvonplaten
> > > However, I don't think we plan on adding these tokens automatically when tokenizing an input string because the main use case for GPT2 is open-domain text generation where these tokens should not be added.
> >
> >
> > Hi. I'd like to ask you a question. Could you explain how the model want to stop generation if there's no EOS token?
>
> I'm trying to train the model with EOS tokens at the end. Let's see if that works...
>
> Shouldn't the EOS tokens be set by default when we use `DataCollatorForLanguageModeling(..., mlm=False)`? It makes sense to me that they should. If not, **at least** [this documentation](https://huggingface.co/docs/transformers/tasks/language_modeling) should be changed and the EOS token should be added at the end of each raw text.
I agree with you. I really don't understand how CLM train without EOS token.<|||||>> I'm trying to train the model with EOS tokens at the end. Let's see if that works...
this worked for me, but I really had to make sure that the EOS token was always at the end of each sequence<|||||>> this worked for me, but I really had to make sure that the EOS token was always at the end of each sequence
Do you mean the inference is working? How the model decide to stop generate if there's no EOS token or there're multiple EOS tokens when they concat sequences as mentioned in [this documentation](https://huggingface.co/learn/nlp-course/chapter7/6#preparing-the-dataset)?
<|||||>> > this worked for me, but I really had to make sure that the EOS token was always at the end of each sequence
>
> Do you mean the inference is working? How the model decide to stop generate if there's no EOS token or there're multiple EOS tokens when they concat sequences as mentioned in [this documentation](https://huggingface.co/learn/nlp-course/chapter7/6#preparing-the-dataset)?
Tokenizers used in causal models don't append the EOS token by default, while the ones in encoder-decoder (like T5) do.
```
t5_tokenizer("My name is Sarah and I live in London")
Out[7]: {'input_ids': [499, 564, 19, 8077, 11, 27, 619, 16, 1524, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
t5_tokenizer.eos_token_id
Out[8]: 1
gpt2_tokenizer("My name is Sarah and I live in London")
Out[9]: {'input_ids': [3666, 1438, 318, 10490, 290, 314, 2107, 287, 3576], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1]}
gpt2_tokenizer.eos_token_id
Out[10]: 50256
```
To make the model generate EOS tokens at inference time, I had to tokenize my texts and then add the EOS token at the end, like: `tokenized_texts = tokenizer([t + tokenizer.eos_token for t in texts])`
If the tokenizer doesn't even have an EOS token, then you may have to create a new one, or rely on some heuristics to stop the generation. |
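A compact sketch that pulls the thread's suggestions together: manually appending `<|endoftext|>` to fine-tuning texts and passing `eos_token_id` at generation time. The exact call pattern reflects recent library versions and is an assumption on my part, not a quote from the thread.

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Fine-tuning data: the tokenizer does not add special tokens, so append EOS manually.
texts = ["My name is Sarah and I live in London"]  # placeholder corpus
encodings = tokenizer([t + tokenizer.eos_token for t in texts], return_tensors="pt")
print(encodings["input_ids"].shape)

# Generation: pass eos_token_id so generate() knows where to stop.
prompt = tokenizer("My name is", return_tensors="pt")
output = model.generate(prompt["input_ids"], max_length=30,
                        eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```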
transformers | 3,310 | closed | Add sample softmax possibility to TransfoXL model for TransfoXL training | # 🚀 Feature request
TransfoXL samples the logits during training if required. At the moment TransfoXL can only be used without sampling from the logits during training. A partly finished implementation can be found under the branch `add_sampling_and_training_to_transfo_xl_models`.
## Motivation
To be able to train TransfoXL correctly.
## Your contribution
Already looked into the issue. Could try to implement it correctly with help from @thomwolf and @LysandreJik . Not a priority at the moment though.
| 03-17-2020 15:14:48 | 03-17-2020 15:14:48 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,309 | closed | Create model card for CodeBERTaPy | 03-17-2020 14:20:30 | 03-17-2020 14:20:30 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3309?src=pr&el=h1) Report
> Merging [#3309](https://codecov.io/gh/huggingface/transformers/pull/3309?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2187c49f5cde57306c3fd1eb67dbc68fab9c6403&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3309?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3309 +/- ##
==========================================
+ Coverage 76.92% 76.93% +0.01%
==========================================
Files 100 100
Lines 16953 16953
==========================================
+ Hits 13041 13043 +2
+ Misses 3912 3910 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3309?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.09% <0.00%> (+0.26%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3309?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3309?src=pr&el=footer). Last update [2187c49...003d51b](https://codecov.io/gh/huggingface/transformers/pull/3309?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 3,308 | closed | Loading DistilBertModel with AutoModel gives 12 layers | I am using ``AutoModel`` to load ``distilbert-base-uncased`` and save the fine-tuned model after training using ``model.save_pretrained('path_to_save')``. However, when I load the fine-tuned model using ``AutoModel.from_pretrained('path_to_the_saved_model')``, it extracts 12 layers instead of 6 layers. I also checked the ``config.json`` file that was saved automatically and the number of layers is still 6. When I load the model with ``DistilBertModel.from_pretrained()`` it extracts 6 layers. In the following, I copied the ``config.json`` file. Does anyone know why this happens? am I lacking some packages or files when loading/saving the model?
> {
"activation": "gelu",
"architectures": [
"DistilBertModel"
],
"attention_dropout": 0.1,
"bos_token_id": 0,
"dim": 768,
"do_sample": false,
"dropout": 0.1,
"eos_token_ids": 0,
"finetuning_task": null,
"hidden_dim": 3072,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"num_beams": 1,
"num_labels": 2,
"num_return_sequences": 1,
"output_attentions": true,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"pruned_heads": {},
"qa_dropout": 0.1,
"repetition_penalty": 1.0,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"temperature": 1.0,
"tie_weights_": true,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"use_bfloat16": false,
"vocab_size": 40000
}
| 03-17-2020 10:37:42 | 03-17-2020 10:37:42 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,307 | closed | Make sacremoses dependency optional due to GPL license. | Closes #2453. | 03-17-2020 10:07:01 | 03-17-2020 10:07:01 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3307?src=pr&el=h1) Report
> Merging [#3307](https://codecov.io/gh/huggingface/transformers/pull/3307?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa5423b1695cd24856bcff47214172e0f540d924&el=desc) will **decrease** coverage by `0.94%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3307?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3307 +/- ##
==========================================
- Coverage 77.79% 76.85% -0.95%
==========================================
Files 145 145
Lines 25355 25356 +1
==========================================
- Hits 19726 19488 -238
- Misses 5629 5868 +239
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3307?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/3307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `83.00% <100.00%> (+0.06%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/3307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/3307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-1.76%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3307?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3307?src=pr&el=footer). Last update [fa5423b...e371b9a](https://codecov.io/gh/huggingface/transformers/pull/3307?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I agree with this, but will let others chime in.
However as discussed in https://github.com/huggingface/transformers/issues/2453#issuecomment-656103152 I think `sacremoses` is MIT-licensed<|||||>Now that `sacremoses` has changed the license (link from @julien-c and https://github.com/alvations/sacremoses/commit/90376dfaf0f41399a090e7620feb3c2494f865a6) the original reason for this pull request is gone.
Feel free to simply close this if you currently don't want to use this to reduce the default dependencies. For this use case it would probably make sense to also make the `xlnet` and `gpt` dependencies optional.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,306 | closed | Create README.md | 03-17-2020 08:14:43 | 03-17-2020 08:14:43 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3306?src=pr&el=h1) Report
> Merging [#3306](https://codecov.io/gh/huggingface/transformers/pull/3306?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/68ef0a111f8740f06ca4e5a00374ec4e2adb0a6d&el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3306?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3306 +/- ##
==========================================
- Coverage 77.48% 77.47% -0.02%
==========================================
Files 99 99
Lines 16799 16799
==========================================
- Hits 13017 13015 -2
- Misses 3782 3784 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3306?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3306/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.68% <0.00%> (-0.54%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3306/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.84% <0.00%> (+0.13%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3306?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3306?src=pr&el=footer). Last update [68ef0a1...6053b4e](https://codecov.io/gh/huggingface/transformers/pull/3306?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks for contributing @jannesgg – if this is a Swedish LM, could you add the language tag to the top of the model card:
```
---
language: swedish
---
```<|||||>Thanks! [`Model page`](https://huggingface.co/jannesg/bertsson) |
|
transformers | 3,305 | closed | Update examples/ner/run_ner.py to use AutoModel | This PR updates `run_ner.py` to use the AutoModel implementation. Refer to #3290; it is simpler than before.
Maybe @srush can review this. | 03-17-2020 02:55:05 | 03-17-2020 02:55:05 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3305?src=pr&el=h1) Report
> Merging [#3305](https://codecov.io/gh/huggingface/transformers/pull/3305?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b2028cc26b61a9dad960274d427e261af7c9bdc8&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3305?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3305 +/- ##
==========================================
- Coverage 77.47% 77.46% -0.01%
==========================================
Files 99 99
Lines 16799 16799
==========================================
- Hits 13015 13014 -1
- Misses 3784 3785 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3305?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.68% <0.00%> (-0.36%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.70% <0.00%> (+0.13%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3305?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3305?src=pr&el=footer). Last update [b2028cc...85e70d9](https://codecov.io/gh/huggingface/transformers/pull/3305?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,304 | closed | Error in loading albert-base-v2 | 

Help~~
The above problems appear in loading the pre-trained Albert by transformers. | 03-17-2020 02:29:22 | 03-17-2020 02:29:22 | @anjubaoGDUT If you can provide the code in text (instead of image) that can copy and paste, it is easy to test. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,303 | closed | Error in loading Albert model | 
Help~
The above problems appear in loading the pre-trained Albert by transformers. Albert pre-trained model download address is https://drive.google.com/file/d/1byZQmWDgyhrLpj8oXtxBG6AA52c8IHE-/view | 03-17-2020 02:21:56 | 03-17-2020 02:21:56 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,302 | closed | [BART] Delete redundant unit test | 03-17-2020 01:39:28 | 03-17-2020 01:39:28 | ||
transformers | 3,301 | closed | Add model card for Google AI's BERT Miniatures | This model card is intended to be shared among all models under google/bert_uncased_*
(We'll need some support from HuggingFace to get this card cross-linked from all models) | 03-16-2020 23:21:18 | 03-16-2020 23:21:18 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3301?src=pr&el=h1) Report
> Merging [#3301](https://codecov.io/gh/huggingface/transformers/pull/3301?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/47591763137f17021928e686ef171f25c240f076&el=desc) will **decrease** coverage by `0.35%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3301?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3301 +/- ##
==========================================
- Coverage 77.68% 77.33% -0.36%
==========================================
Files 99 99
Lines 16799 16799
==========================================
- Hits 13051 12991 -60
- Misses 3748 3808 +60
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3301?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3301/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0.00%> (-6.50%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3301/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `82.46% <0.00%> (-5.91%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3301/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.01% <0.00%> (-0.99%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3301?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3301?src=pr&el=footer). Last update [4759176...46b9c45](https://codecov.io/gh/huggingface/transformers/pull/3301?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I just symlinked all 24 under Google's namespace to this one in 68ef0a111f8740f06ca4e5a00374ec4e2adb0a6d.
Thanks for uploading the models @iuliaturc-google and @srush!
Example model page: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2<|||||>Thanks Sasha and Julien! |
transformers | 3,300 | closed | ImportError: cannot import name 'BartForConditionalGeneration' |
## Information
Hi, I am trying to use the BART model to sumamrize a text snippet.
The problem arises when using:
* from transformers import BartTokenizer, BartConfig, BartForConditionalGeneration
## To reproduce
Steps to reproduce the behavior:
1. Installed Tensorflow 2.0 and Pytorch
2. pip install transformers
3. from transformers import BartTokenizer, BartConfig, BartForConditionalGeneration
<!-- ImportError: cannot import name 'BartForConditionalGeneration'-->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: (2.5.1)
- Platform:
- Python version: Python 3.6.5 :: Anaconda, Inc.
- PyTorch version (GPU?): (1.3.1)
- Tensorflow version (GPU?): (2.0.0)
| 03-16-2020 19:33:30 | 03-16-2020 19:33:30 | I saw that to use the examples, it has to be installed from source. |
transformers | 3,299 | closed | add camembert for Question answering for examples | This one might have been accidently deleted in PR #2700 I think. | 03-16-2020 18:40:20 | 03-16-2020 18:40:20 | Wow that was quick :D |
transformers | 3,298 | closed | [generate] do_sample default back to False | This somewhat reverts the commit:
https://github.com/huggingface/transformers/commit/6c1b23554f8bb5b5e1f6c80969acab764c755678
and the decision taken in #2696
and sets the default sampling behavior of `generate()` to greedy - / beam search.
Pros:
- `False` is the more natural default value
- Prettier API (especially for encoder_decoder models which will mostly only use beam search generate())
Cons:
- Some people might aleady be used to the `do_sample=True` default value and this commit might break the logic of their code (but would be trivial to change for them)
I'm somewhat indifferent whether this PR should be merged, but I think @thomwolf and @sshleifer are in favor of it.
@LysandreJik @thomwolf @sshleifer | 03-16-2020 12:10:41 | 03-16-2020 12:10:41 | Not 100% sure how this results in a "Prettier API," but agree this isn't a big deal to fix downstream. (my current code explicitly sets `do_sample=True` just in case something like this happened.)
If you are creating any demo generation notebooks/tooling like Write With Transformer, I recommend explicitly noting this behavior. |
transformers | 3,297 | closed | Getting output of any hidden layer | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Is there a way to get output from any hidden layer of the model?
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
I'm working on the ALBERT transformer (more specifically `AlbertForSequenceClassification`), and when I print the model, this is the model's architecture:
```py
AlbertForSequenceClassification(
(albert): AlbertModel(
(embeddings): AlbertEmbeddings(
(word_embeddings): Embedding(30000, 128, padding_idx=0)
(position_embeddings): Embedding(512, 128)
(token_type_embeddings): Embedding(2, 128)
(LayerNorm): LayerNorm((128,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0, inplace=False)
)
(encoder): AlbertTransformer(
(embedding_hidden_mapping_in): Linear(in_features=128, out_features=768, bias=True)
(albert_layer_groups): ModuleList(
(0): AlbertLayerGroup(
(albert_layers): ModuleList(
(0): AlbertLayer(
(full_layer_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(attention): AlbertAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0, inplace=False)
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(ffn): Linear(in_features=768, out_features=3072, bias=True)
(ffn_output): Linear(in_features=3072, out_features=768, bias=True)
)
)
)
)
)
(pooler): Linear(in_features=768, out_features=768, bias=True)
(pooler_activation): Tanh()
)
(dropout): Dropout(p=0, inplace=False)
(classifier): Linear(in_features=768, out_features=2, bias=True)
)
```
I would like to get the outputs of middle / hidden layers, for example of layers `ffn_output` or `pooler`, but I'm not sure if that option exists. I've tried extracting `hidden_states` by setting `output_hidden_states` to True in AlbertConfig, but that doesn't bring me the result that I want.
I believe tf-hub has a method or attribute for this called by `model.get_layer(layer_name)`.
Is there a way to extract hidden layers?
| 03-16-2020 10:32:27 | 03-16-2020 10:32:27 | Yes, this will be quite hard and is not a feature that is implemented at the moment nor a feature that we plan on implementing soon.
An easy way to get what you want though, will be to clone the repo and adapt the code. You can easily add the layer outputs (e.g. `ffn_output`) you want to the `return` functions of the different Albert layers (you will probably have to return and retrieve it multiple times until you have it in the `AlbertForSequenceClassification.forward()` function)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
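An alternative that avoids editing the library code, not mentioned in the thread, is to register PyTorch forward hooks on the sub-modules shown in the printout above; the attribute paths are taken from that printout and may differ between library versions.

```python
import torch
from transformers import AlbertForSequenceClassification, AlbertTokenizer

model = AlbertForSequenceClassification.from_pretrained("albert-base-v2")
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")

captured = {}
def save_output(name):
    def hook(module, inputs, output):
        captured[name] = output
    return hook

# Paths follow the architecture print above.
model.albert.pooler.register_forward_hook(save_output("pooler"))
model.albert.encoder.albert_layer_groups[0].albert_layers[0].ffn_output.register_forward_hook(
    save_output("ffn_output"))

enc = tokenizer.encode("Hello world", add_special_tokens=True)
with torch.no_grad():
    model(torch.tensor([enc]))
print({k: v.shape for k, v in captured.items()})
```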
|
transformers | 3,296 | closed | Installation error: can not find Rust compiler | I used pip to install transformers like this:
pip install transformers
in the end, I got the error:
Can not find Rust compiler
However, I have installed Rust on my Apple computer. Please tell me how to deal with the problem, thank you!
| 03-16-2020 10:03:55 | 03-16-2020 10:03:55 | Can you please open an issue on https://github.com/huggingface/tokenizers?
Thanks!
cc @n1t0 @mfuntowicz |
transformers | 3,295 | closed | Create CodeBERTaJS model card | 03-16-2020 09:15:58 | 03-16-2020 09:15:58 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3295?src=pr&el=h1) Report
> Merging [#3295](https://codecov.io/gh/huggingface/transformers/pull/3295?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/af471ce5e8ca7c19183e70bb998561170addc276?src=pr&el=desc) will **increase** coverage by `0.19%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3295?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3295 +/- ##
==========================================
+ Coverage 77.82% 78.02% +0.19%
==========================================
Files 98 98
Lines 16666 16666
==========================================
+ Hits 12970 13003 +33
+ Misses 3696 3663 -33
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3295?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3295/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.58% <0%> (-0.14%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3295/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.4% <0%> (+0.4%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3295/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.37% <0%> (+5.9%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3295?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3295?src=pr&el=footer). Last update [af471ce...62cb24d](https://codecov.io/gh/huggingface/transformers/pull/3295?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 3,294 | closed | BertForPreTraining should compute only <MASKED> prediction_scores | # 🚀 Feature request
In the class `transformers.BertForPreTraining`, the forward pass computes `prediction_scores` for all tokens. In fact, we could calculate `prediction_scores` only on the <MASKED> tokens to save some computational cost.
## Motivation
In source code of class BertForPreTraining:
```python
outputs = self.bert(input_ids, attention_mask, ...)
sequence_output, pooled_output = outputs[:2]
prediction_scores, seq_relationship_score = self.cls(sequence_output, pooled_output)
```
It computes `prediction_scores` using `sequence_output` as the input to `self.cls()`. I was wondering if we could gather the MASKED indices (labels not equal to -100), pass only the MASKED positions of `sequence_output` into `self.cls`, and return the MASKED `prediction_scores`. Then pass them into `CrossEntropyLoss()` to compute the `masked_lm_loss`. In this way, we can save some computational cost and partially relieve the GPU OOM problem.
## My suggestion
We may change the code to
```python
outputs = self.bert(input_ids, attention_mask, ...)
sequence_output, pooled_output = outputs[:2]
# before gather_indexes, size of sequence_output: (batch_size, sequence_length, hidden_size)
# after gather_indexes, size of sequence_output: (batch_size, masked_lm_nums, hidden_size)
sequence_output = gather_indexes(sequence_output, masked_lm_labels)
prediction_scores, seq_relationship_score = self.cls(sequence_output, pooled_output)
```
Then pass `prediction_scores` and the correspondingly gathered `masked_lm_labels` into `CrossEntropyLoss` to compute the `masked_lm_loss` (the labels need to be gathered to the masked positions in the same way).
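For instance, a minimal sketch of the label-side gathering (the tensor values here are made up for illustration):
```python
import torch

# Hypothetical labels: -100 marks non-masked positions, as in the usual MLM convention.
masked_lm_labels = torch.tensor([[-100, 5, -100, 7],
                                 [   2, -100, -100, -100]])

flat_labels = masked_lm_labels.view(-1)
gathered_labels = flat_labels[flat_labels != -100]  # tensor([5, 7, 2])
```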
Note: if we adopt this method, the return of BertForPreTraining is changed since we will not compute prediction_scores on all tokens. | 03-16-2020 08:13:16 | 03-16-2020 08:13:16 | I had little modification on the source code and solve the problem. (But I may changed the behavior and the return format of class BertForPretraining)
```python
def gather_indexes_auto(sequence_tensor, masked_lm_labels):
"""Gathers the vectors according to masked_lm_labels over a minibatch.
Input
sequence_tensor: (batch_size, sequence_length, hidden_size)
masked_lm_labels: (batch_size, sequence_length)
Output
output_tensor: (-1, hidden_size)
output_lm_labels: (-1, )
"""
batch_size = sequence_tensor.size(0)
sequence_length = sequence_tensor.size(1)
hidden_size = sequence_tensor.size(2)
# Flatten sequence_tensor into (-1, hidden_size)
# Flatten masked_lm_labels into (-1, )
sequence_tensor_flat = sequence_tensor.view(batch_size*sequence_length, hidden_size)
masked_lm_labels_flat = masked_lm_labels.view(-1)
# Get non -100 index
# Note: the input index of torch.index_select is 1-D tensor
masked_lm_location = masked_lm_labels_flat.ge(0).nonzero().view(-1)
# Select corresponding values
output_tensor = torch.index_select(sequence_tensor_flat, dim=0, index=masked_lm_location)
output_lm_labels = torch.index_select(masked_lm_labels_flat, dim=0, index=masked_lm_location)
return output_tensor, output_lm_labels
```
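For what it's worth, a quick sanity check of the helper above (it assumes `gather_indexes_auto` from the previous block is already defined; the shapes and label values are made up):
```python
import torch

# Made-up shapes: batch of 2, sequence length 4, hidden size 8.
sequence_tensor = torch.randn(2, 4, 8)
# Two masked positions in the first example, one in the second; -100 elsewhere.
masked_lm_labels = torch.tensor([[-100, 11, -100, 12],
                                 [   7, -100, -100, -100]])

out_tensor, out_labels = gather_indexes_auto(sequence_tensor, masked_lm_labels)
print(out_tensor.shape)  # torch.Size([3, 8]) -- only the masked positions remain
print(out_labels)        # tensor([11, 12, 7])
```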
And in class `BertForPreTraining`:
```python
class BertForPreTraining(BertPreTrainedModel):
...
def forward(...):
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
)
sequence_output, pooled_output = outputs[:2]
sequence_output, output_lm_labels = gather_indexes_auto(sequence_output, masked_lm_labels)
prediction_scores, seq_relationship_score = self.cls(sequence_output, pooled_output)
outputs = (prediction_scores, seq_relationship_score,) + outputs[
2:
] # add hidden states and attention if they are here
if masked_lm_labels is not None and next_sentence_label is not None:
loss_fct = CrossEntropyLoss()
masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), output_lm_labels.view(-1))
next_sentence_loss = loss_fct(seq_relationship_score.view(-1, 2), next_sentence_label.view(-1))
total_loss = masked_lm_loss + next_sentence_loss
outputs = (total_loss,)
return outputs # (loss),
```
Note that this code only computes and returns the loss on the <MASKED> labels and thus saves a lot of computation and GPU memory.
Before this change, I could only run BERT-MEDIUM (L8 H512 A8) with batch_size = 64 on a P100 (16 GB RAM); after this change, I can run the pretraining with batch_size = 128.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,293 | closed | Create model card for spanbert-finetuned-squadv2 | 03-16-2020 08:02:08 | 03-16-2020 08:02:08 |