Dataset schema (per-record fields and value statistics):

| field | type | values / length range |
|---|---|---|
| repo | string | 1 class |
| number | int64 | 1 – 25.3k |
| state | string | 2 classes |
| title | string | 1 – 487 chars |
| body | string | 0 – 234k chars |
| created_at | string | 19 chars |
| closed_at | string | 19 chars |
| comments | string | 0 – 293k chars |
transformers
3,192
closed
Provide comprehensive guide & best-practices for run_language_modeling.py
# 🚀 Feature request Provide a comprehensive guide for running the scripts included in the repository, especially `run_language_modeling.py`, its parameters, and model configurations. ## Motivation 1. The current version has `argparse`-powered help, in which a lot of parameters are either mysterious or have variable runtime behaviour (e.g. `tokenizer_name` is sometimes a path, and the value the user provides is expected to supply different data for different models, e.g. for RoBERTa and BERT). Again, when it comes to `tokenizer_name` - it claims that `If both are None, initialize a new tokenizer.`, which does not work at all, e.g. when you use a RoBERTa model. It should handle training a new tokenizer on the provided `train_data` right away. 2. There is a bunch of parameters that are critical to run the script at all (!) which are not even mentioned in https://huggingface.co/blog/how-to-train or in the notebook https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb. For example, for RoBERTa, without `"max_position_embeddings": 514,` in the config, the script crashes with: ``` CUDA error: device-side assert triggered ``` I had to dig into GitHub to see some unresolved issues around this case and try out a few solutions before the script finally executed (https://github.com/huggingface/transformers/issues/2877). 3. Models with LM heads will train even though the head output size is different from the vocab size of the tokenizer - the script should warn the user or (better) raise an exception in such scenarios. 4. Describe what the input dataset should look like. Is it required to have one sentence per line, one article per line, or maybe one paragraph per line? 5. Using multi-GPU on a single machine together with the parameter `--evaluate_during_training` crashes the script - why? It might be worth an explanation. It's probably also a bug (https://github.com/huggingface/transformers/issues/1801). 6. Those are just off the top of my head - I will update this issue once I come up with more, or maybe someone else will also add something to this thread. Given the number of issues currently open, I suspect that I'm not the only one who struggles with the example script. **The biggest problem here is that running it without a proper configuration might really cost a lot, yet the script will still execute, yielding a garbage model.** Moreover - by improving the docs and providing a best-practices guide, you can give many people an even better toolkit for their research and business.
03-09-2020 17:29:51
03-09-2020 17:29:51
I too tried to follow the blog and train a LM from scratch, but the instructions are ambiguous. For example, the config file is passed as a command-line arg, but if it is passed, the script tries to load it and throws an error.<|||||>I've covered some of the parts here: https://zablo.net/blog/post/training-roberta-from-scratch-the-missing-guide-polish-language-model/<|||||>https://stackoverflow.com/questions/61232399/decoding-predictions-for-masked-language-modeling-task-using-custom-bpe I posted a question related to this on SO. Any help is appreciated! @marrrcin <|||||>bump!<|||||>> I've covered some of the parts here: https://zablo.net/blog/post/training-roberta-from-scratch-the-missing-guide-polish-language-model/ Hey Marcin, your post is very informative, thanks for that. Could you say a few words on the reasoning for the vocab size being exactly 32000? Are there any heuristics that helped your decision? Or can anyone here say a few words on whether there are good heuristics for choosing this hyperparameter? Thanks<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
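For context on the RoBERTa configuration pitfalls discussed in this thread, here is a minimal sketch of building and saving a config for `run_language_modeling.py`. The sizes and output directory are illustrative assumptions, not recommendations, and `vocab_size` must match the tokenizer you actually trained:

```python
from transformers import RobertaConfig

# Sketch only: all values below are assumptions.
config = RobertaConfig(
    vocab_size=52_000,              # must equal the tokenizer vocab / LM head output size
    max_position_embeddings=514,    # 512 positions + 2 offset tokens; the issue reports that omitting this triggers the CUDA device-side assert
    num_hidden_layers=6,
    num_attention_heads=12,
    type_vocab_size=1,
)
config.save_pretrained("./roberta-from-scratch")  # hypothetical output directory
```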
transformers
3,191
closed
Add integration tests lm generate torch tf
Add integration tests for all LM models that are able to generate language. - All integration tests use `do_sample=False` (greedy) generation and verify that TF 2.0 and PT yield the same results. - Fixed a small bug with TFXLMModelWithLMHead
03-09-2020 14:03:25
03-09-2020 14:03:25
Approx. how long do these tests take on a single V100?<|||||>> Approx. how long do these tests take on a single V100? Don't know how long they take on a V100. On a cpu, all tests combined (7 model tests for PT and 6 model tests for TF) take less than 10min (whereas `test_modeling_tf_xlnet.py`, `test_modeling_xlnet.py` and `test_modeling_transfo_xl.py` combined take ca. 8min)<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3191?src=pr&el=h1) Report > Merging [#3191](https://codecov.io/gh/huggingface/transformers/pull/3191?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e03129ad447ad7670fcc6206e5eb27a5435d4d86?src=pr&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3191/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3191?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3191 +/- ## ========================================== + Coverage 78.15% 78.16% +0.01% ========================================== Files 98 98 Lines 16641 16641 ========================================== + Hits 13006 13008 +2 + Misses 3635 3633 -2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3191?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/3191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `90.4% <100%> (ø)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.56% <0%> (+0.15%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.89% <0%> (+0.19%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3191?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3191?src=pr&el=footer). Last update [e03129a...9050ffe](https://codecov.io/gh/huggingface/transformers/pull/3191?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Good to merge for me
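A rough sketch of the kind of cross-framework check this PR describes is shown below; the checkpoint, prompt, and length are assumptions, not the actual test code from the PR:

```python
from transformers import GPT2LMHeadModel, TFGPT2LMHeadModel, GPT2Tokenizer

# Greedy (do_sample=False) generation is deterministic, so PyTorch and TF 2.0 outputs
# for the same checkpoint and prompt can be compared token by token.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
pt_model = GPT2LMHeadModel.from_pretrained("gpt2")
tf_model = TFGPT2LMHeadModel.from_pretrained("gpt2")

pt_ids = pt_model.generate(tokenizer.encode("Hello, my dog is", return_tensors="pt"), do_sample=False, max_length=20)
tf_ids = tf_model.generate(tokenizer.encode("Hello, my dog is", return_tensors="tf"), do_sample=False, max_length=20)

assert pt_ids[0].tolist() == tf_ids[0].numpy().tolist()
```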
transformers
3,190
closed
fix repetition penalty mask in tf
Fixed a bug with TF 2.0 `repetition_penalty` when doing generation and made `early_stopping` an argument to the `generate()` function.
03-09-2020 14:01:14
03-09-2020 14:01:14
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3190?src=pr&el=h1) Report > Merging [#3190](https://codecov.io/gh/huggingface/transformers/pull/3190?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b29fed790bdaa4be38b6d2c5de88e307474ea38d?src=pr&el=desc) will **increase** coverage by `0.1%`. > The diff coverage is `42.85%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3190/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3190?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3190 +/- ## ========================================= + Coverage 77.98% 78.09% +0.1% ========================================= Files 98 98 Lines 16641 16645 +4 ========================================= + Hits 12978 12999 +21 + Misses 3663 3646 -17 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3190?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.74% <100%> (+0.02%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.38% <33.33%> (+3.59%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.56% <0%> (-0.16%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3190?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3190?src=pr&el=footer). Last update [b29fed7...847d370](https://codecov.io/gh/huggingface/transformers/pull/3190?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Good to merge for me.<|||||>> Small typo bug otherwise it's good to go Thanks for spotting!
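An illustrative usage of the two options this PR touches (the checkpoint, prompt, and values are assumptions, not taken from the PR):

```python
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = TFGPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("The quick brown fox", return_tensors="tf")
output = model.generate(
    input_ids,
    num_beams=3,
    do_sample=False,
    repetition_penalty=1.3,  # values > 1.0 discourage repeating already-generated tokens
    early_stopping=True,     # the argument this PR exposes on generate()
    max_length=40,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```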
transformers
3,189
closed
How can I load a model from a path on my own computer?
Hi, I want to load a model from a path on my own computer. How do I write or change the code to do that? Can you give me an example?
03-09-2020 13:58:01
03-09-2020 13:58:01
There are a lot of examples in [the documentation](https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel).
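To complement the documentation pointer above, a minimal sketch of loading from a local directory is shown below; the path is an assumption, and the directory should contain files previously written by `save_pretrained()`:

```python
from transformers import BertModel, BertTokenizer

local_dir = "/home/me/models/my-bert"   # hypothetical path on your own computer

# from_pretrained() accepts a local directory as well as a hub model name.
tokenizer = BertTokenizer.from_pretrained(local_dir)
model = BertModel.from_pretrained(local_dir)
```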
transformers
3,188
closed
Beam search sometimes fails this assert error
# 🐛 Bug ## Information Model I am using: GPT2 with custom config (vocab=27) Language I am using the model on (English, Chinese ...): Molecules... (see https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system) The problem arises when using: the .generate(beam=2) function The task I am working on is: generating molecules ## The Problem Essentially, every so often the beam search fails with the assertion error shown below. ## To reproduce Steps to reproduce the behavior: Just do the generation with these args: ```args = { "num_beams": 3, "max_length": 50, "temperature": 1, "repetition_penalty": 1, "length_penalty": 1, "do_sample": true, "top_k": 50, "top_p": 1} ``` The generation runs fine for several batches, but then after hundreds of iterations, it sometimes bugs out with this error: ```File "/Users/laithani/anaconda3/envs/TransformerVAE/lib/python3.7/site-packages/transformers/modeling_utils.py", line 979, in _generate_beam_search assert len(next_batch_beam) == num_beams * (batch_ex + 1), f"{next_batch_beam}, {num_beams}, {batch_ex}" ``` I then added print statements to modeling_utils to try and see what is going on. I changed the assert line to: ``` assert len(next_batch_beam) == num_beams * (batch_ex + 1), f"{next_batch_beam}, {num_beams}, {batch_ex}" ``` And with this I got: ``` AssertionError: [(tensor(-19.8421), tensor(26), tensor(0)), (tensor(-20.9710), tensor(26), tensor(0)), (tensor(-30.5064), tensor(5), tensor(0)), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (tensor(-17.4236), tensor(11), tensor(9)), (tensor(-26.3645), tensor(16), tensor(9)), (tensor(-23.9410), tensor(16), tensor(9)), (0, 0, 0), (0, 0, 0), (0, 0, 0), (tensor(-58.0648), tensor(0), tensor(15))], 3, 5 ``` If you count up the length of the list, it is length 16, which is not equal to 3 * (5 + 1). Not sure what is going on here; I'm looking into the code now to try to figure it out. - `transformers` version: 2.4.1 - Platform: Ubuntu and Mac (problem occurs in both) - Python version: 3.6 and 3.7 (problem occurs in both) - PyTorch version (GPU?): 1.3.0 - Tensorflow version (GPU?): N/A - Using GPU in script?: Yes, either V100 or K80 - Using distributed or parallel set-up in script?: No
03-09-2020 12:50:21
03-09-2020 12:50:21
@Laksh1997 thanks a lot for reporting this error. Can you provide a code snippet and maybe a link to your data to easily reproduce this error? In the meantime: - There has been a lot of changes recently in the beam search decoding -> I would recommend using the master branch of beam search decoding! - Beam search is not really made for top_p_top_k sampling, when using beam search I recommend setting do_sample=False<|||||>Right, I'll try the master branch and inform any further problems. For generation of samples (without any context or input), there is no point of having sample set to False, as one will always generate the same sample as a result.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I'm running into this issue somewhat commonly as well. - Python 3.7 - transformers 3.0.2 - torch 1.5.1 - macOS 10.15 - Running on CPU Code to reproduce error: ```python import torch from transformers import MarianMTModel, MarianTokenizer torch.manual_seed(15) phrase = "Ich verstehe nicht, was du sagen willst. Sprich doch Deutsch!" model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-de-ZH") tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-de-ZH") # Nucleus sampling as per https://github.com/huggingface/blog/blob/master/notebooks/02_how_to_generate.ipynb input_ids = tokenizer.prepare_translation_batch([phrase]) token_ids_p = model.generate( **input_ids, do_sample=True, top_p=0.9, ) translated_p = [tokenizer.decode(string, skip_special_tokens=True) for string in token_ids_p] print(translated_p) ``` Error: ``` Traceback (most recent call last): File "temp.py", line 14, in <module> top_p=0.9, File "/Users/kaz/envs/venv-3.7/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context return func(*args, **kwargs) File "/Users/kaz/envs/venv-3.7/lib/python3.7/site-packages/transformers/generation_utils.py", line 459, in generate model_specific_kwargs=model_specific_kwargs, File "/Users/kaz/envs/venv-3.7/lib/python3.7/site-packages/transformers/generation_utils.py", line 757, in _generate_beam_search assert len(next_sent_beam) == num_beams, "Beam should always be full" ``` @patrickvonplaten Is it possible to revive this issue?
transformers
3,187
closed
Knowing the specific data set used for DistilBertForQuestionAnswering
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> Hi, I am using the pipeline for question answering and am wondering what dataset was used to train the underlying model. The loaded model is DistilBertForQuestionAnswering. But I need to know if it used Squad 1.1 or Squad 2.0 which includes the possibility of no answer? <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. -->
03-09-2020 12:37:31
03-09-2020 12:37:31
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
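As a hedged pointer (not an authoritative answer to which SQuAD version was used), one way to inspect which checkpoint and configuration the default question-answering pipeline loads is:

```python
from transformers import pipeline

qa = pipeline("question-answering")   # loads the default distilled SQuAD checkpoint
print(qa.model.config)                # the attached config shows the architecture and its settings
print(qa(question="Where does Sarah live?",
         context="My name is Sarah and I live in London."))
```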
transformers
3,186
closed
CPU/GPU memory benchmarking utilities - Remove support for python 3.5 (now only 3.6+)
This PR adds some utilities to benchmark (RAM) memory consumption of the models. This is actually a generic utility that can work with any arbitrary Python code. Example: ```python import torch from transformers import GPT2Model, GPT2Tokenizer from transformers import start_memory_tracing, stop_memory_tracing tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2Model.from_pretrained('gpt2') sequence = tokenizer.encode("Hello how are you", return_tensors='pt') # Line by line memory tracing (all code in the module `transformers`). trace = start_memory_tracing(modules_to_trace="transformers") output = model(sequence) summary = stop_memory_tracing(trace) # Summary contains three fields: # `sequential`: list of line by line consumption (with line code and location) # `cumulative`: list of cumulative line by line consumption (when lines are executed several times) ordered from the most memory consuming line to the least (also with line code and location) # `total`: total memory consumption of the script (defaults to summing the memory increase at each line and ignoring released memory; can be set to count both increases and releases, but that is less reliable on Ubuntu). # Each `Memory` object contains CPU, GPU and CPU + GPU memory, each both as an int and as a human readable string print(f"Total memory consumption: {summary.total}") top_line = summary.cumulative[0] print(f"Consumed {top_line.cpu_gpu}: {top_line.frame.line_text} at {top_line.frame.filename}:{top_line.frame.line_number}") ``` Incorporated in the `./examples/benchmarks.py` script. Example of a command-line run: ``` bash (py37) bash-3.2$ python ./examples/benchmarks.py --models gpt2 --torch --batch_sizes 1 --slice_sizes 64 256 512 512 512 --no_speed --verbose Running with arguments Namespace(amp=False, average_over=30, batch_sizes=[1], csv_filename=None, fp16=False, keras_predict=False, models=['gpt2'], no_memory=False, no_speed=True, save_to_csv=False, slice_sizes=[64, 256, 512, 512, 512], tensorflow=False, torch=True, torch_cuda=False, torchscript=False, verbose=False, xla=False) 1 / 1 Token indices sequence length is longer than the specified maximum sequence length for this model (2708 > 1024). Running this sequence through the model will result in indexing errors ....
/Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:487: mem 0.000B: presents = presents + (present,) /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:489: mem 0.000B: if self.output_attentions: /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:477: mem 0.000B: for i, (block, layer_past) in enumerate(zip(self.h, past)): /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:492: mem 0.000B: hidden_states = self.ln_f(hidden_states) /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:494: mem 0.000B: hidden_states = hidden_states.view(*output_shape) /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:496: mem 0.000B: if self.output_hidden_states: /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:499: mem 0.000B: outputs = (hidden_states,) /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:500: mem 0.000B: if self.output_past: /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:501: mem 0.000B: outputs = outputs + (presents,) /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:502: mem 0.000B: if self.output_hidden_states: /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:504: mem 0.000B: if self.output_attentions: /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:509: mem 0.000B: return outputs # last hidden state, (presents), (all hidden_states), (attentions) Top 5 script lines consuming the most memory: 0 => /Users/thomwolf/Documents/GitHub/transformers/src/transformers/activations.py:31: mem 276.004MB: return 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3)))) 1 => /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_utils.py:1311: mem 151.520MB: x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2 => /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:146: mem 146.004MB: w = w * b - 1e4 * (1 - b) 3 => /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:143: mem 132.004MB: w = w / math.sqrt(v.size(-1)) 4 => /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:187: mem 36.000MB: present = torch.stack((key.transpose(-2, -1), value)) # transpose to have same shapes for stacking 5 => /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:159: mem 33.000MB: outputs = [torch.matmul(w, v)] Memory increase computed by summing traced script lines: 843.758MB =========== RESULTS =========== ======= MODEL CHECKPOINT: gpt2 ======= ===== BATCH SIZE: 1 ===== gpt2/1/64: N/A 75.176MB gpt2/1/256: N/A 349.695MB gpt2/1/512: N/A 843.758MB gpt2/1/512: N/A 843.758MB gpt2/1/512: N/A 843.758MB ```
03-09-2020 12:06:04
03-09-2020 12:06:04
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3186?src=pr&el=h1) Report > Merging [#3186](https://codecov.io/gh/huggingface/transformers/pull/3186?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e03129ad447ad7670fcc6206e5eb27a5435d4d86&el=desc) will **decrease** coverage by `0.50%`. > The diff coverage is `30.76%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3186/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3186?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3186 +/- ## ========================================== - Coverage 78.15% 77.64% -0.51% ========================================== Files 98 99 +1 Lines 16641 16795 +154 ========================================== + Hits 13006 13041 +35 - Misses 3635 3754 +119 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3186?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3186/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.34% <15.38%> (-3.07%)` | :arrow_down: | | [src/transformers/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3186/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmtfdXRpbHMucHk=) | `31.74% <31.74%> (ø)` | | | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/3186/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.92% <100.00%> (+0.01%)` | :arrow_up: | | [src/transformers/configuration\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3186/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.29% <100.00%> (+0.07%)` | :arrow_up: | | [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3186/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.07% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3186/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.53% <0.00%> (-2.17%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3186?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3186?src=pr&el=footer). Last update [e03129a...cb67ca6](https://codecov.io/gh/huggingface/transformers/pull/3186?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,185
closed
Tokenizers v3.0.0
03-09-2020 10:34:57
03-09-2020 10:34:57
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3185?src=pr&el=h1) Report > Merging [#3185](https://codecov.io/gh/huggingface/transformers/pull/3185?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7420a6a9cc1750c2bd2c2c245d00048ec36d3bf0?src=pr&el=desc) will **decrease** coverage by `0.24%`. > The diff coverage is `80.42%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3185/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3185?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3185 +/- ## ========================================== - Coverage 77.79% 77.55% -0.25% ========================================== Files 100 100 Lines 17025 17105 +80 ========================================== + Hits 13245 13265 +20 - Misses 3780 3840 +60 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3185?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `95.33% <ø> (-1.7%)` | :arrow_down: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.36% <100%> (-5.64%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/3185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `74.51% <100%> (-0.28%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.67% <100%> (-0.43%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `86.18% <78.61%> (-5.81%)` | :arrow_down: | | ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/3185/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3185?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3185?src=pr&el=footer). Last update [7420a6a...860cf66](https://codecov.io/gh/huggingface/transformers/pull/3185?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Really like the new typings!
transformers
3,184
closed
`Failed to build tokenizers` when installing 2.5.1 version.
`Failed to build tokenizers` when I try to install transformers==2.5.1. `Failed to build tokenizers` when I try to install tokenizers==0.5.2. So I want to know: is `tokenizers==0.5.2` a must? Thanks~
03-09-2020 08:52:34
03-09-2020 08:52:34
`tokenizers` is only required when you wish to use the new, fast [tokenizers](https://github.com/huggingface/tokenizers). By default, though, the standard (slower) tokenizers are used. So you do not actually need the `tokenizers` library to run the `transformers` library. Related: https://github.com/huggingface/transformers/issues/2980 https://github.com/huggingface/transformers/issues/2831 <|||||>Can you please open an issue over at https://github.com/huggingface/tokenizers with your OS/Python details?
transformers
3,183
closed
About the examples document of bert with SQuAD 2.0
I'm wondering if there is a mistake. In the README document of /examples, in the training parameter settings for training on the SQuAD dataset, the first code block is: ``` export SQUAD_DIR=/path/to/SQUAD python run_squad.py \ --model_type bert \ --model_name_or_path bert-base-cased \ --do_train \ --do_eval \ --do_lower_case \ --train_file $SQUAD_DIR/train-v1.1.json \ --predict_file $SQUAD_DIR/dev-v1.1.json \ --per_gpu_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ ``` Because this model is bert-base-cased, which means it is cased, I think the ``--do_lower_case`` flag shouldn't be here. Is that a mistake?
03-09-2020 08:52:02
03-09-2020 08:52:02
Indeed, I think this is a mistake, I think it should be `bert-base-uncased` instead. I'm updating it.
transformers
3,182
closed
How can I use a pipeline with a pretrained AutoModel?
How can I use a pipeline with a pretrained auto model and tokenizer?
03-09-2020 07:55:42
03-09-2020 07:55:42
There are many examples in [the docs](https://huggingface.co/transformers/usage.html). The [pipeline reference](https://huggingface.co/transformers/main_classes/pipelines.html) will also be helpful.
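In addition to the docs linked above, a minimal sketch of wiring a pretrained auto model and tokenizer into a pipeline looks like this (the checkpoint name is an illustrative assumption):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

name = "distilbert-base-uncased-finetuned-sst-2-english"   # assumed example checkpoint
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

# pipeline() accepts already-instantiated model and tokenizer objects.
classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
print(classifier("This library makes NLP much easier."))
```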
transformers
3,181
closed
The implementation of the GPT2 masked attention mechanism can cause errors after the model has been trained for some iterations.
# 🐛 Bug ## Information Model I am using: GPT2 Language I am using the model on: Chinese Line 146 in modeling_gpt2.py: `w = w * b - 1e4 * (1 - b)` # here the masking bias `-1e4` is not negative enough; I suggest replacing it with `-1e10`. The values of `query * key` may be close to or below `-1e4`; then, after the softmax operation, the masked attention weights may still attend to "unseen" context. This causes inference errors when evaluating the model.
03-09-2020 03:22:27
03-09-2020 03:22:27
Thanks for the bug report @xunzi2020! Did you by any chance measure the degree to which results would differ when replacing `-1e4` by `-1e10`? <|||||>Yes, I trained with Chinese text (~30GB), and it happened. <|||||>Do you have a comparison such as:

| - | GPT2 with -1e4 vs. GPT2 with -1e10 |
| ------------- | ------------- |
| abs mean(softmax(logits) - softmax(logits)) | ???? |
| relative mean(softmax(logits) - softmax(logits)/softmax(logits)) | ???? |

Let's say averaged over 100 - 1000 input samples and all GPT2 logits (50256)? That would be great to quantify the impact this change would have. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
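To make the concern concrete, here is a tiny, self-contained numeric illustration (not a measurement on the actual model) of how an additive mask of only -1e4 can leave non-negligible weight on a masked position when unmasked logits are themselves strongly negative:

```python
import torch

# Unmasked logit of -9999 vs. a masked position penalised to -10000:
# the masked position still receives ~27% of the attention weight.
print(torch.softmax(torch.tensor([-9999.0, -10000.0]), dim=-1))

# With a -1e10 penalty the masked position is effectively zeroed out.
print(torch.softmax(torch.tensor([-9999.0, -1e10]), dim=-1))
```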
transformers
3,180
closed
NER - pl example
1. Solves most of the issues raised in #3159
2. Streamlines the shell script pipeline
3. PL logs-related changes
4. Added to the README
03-08-2020 23:37:09
03-08-2020 23:37:09
This looks great to me. Thanks @shubhamagarwal92. You need to just run the black command, I believe to is `make style` to fix formatting. Otherwise it looks good.<|||||>@srush I ran the `make style` which changed a lot of files, however, the `check_code_quality` is still failing! Do you want me to revert the last two commits and manually change the `'logs'` to `"logs"` BTW, `pip install -e ".[dev]"` is failing on both mac and linux for `tensorflow` and on `sentencepiece` for mac. I had to manually install the `["black", "isort", "flake8"]` packages. Python=3.8.2 in conda env. <|||||>> BTW, pip install -e ".[dev]" is failing on both mac and linux for tensorflow and on sentencepiece for mac. That would be because [TensorFlow only supports python 3.5-3.7](https://www.tensorflow.org/install/pip?lang=python3#system-requirements), unfortunately.<|||||>> > BTW, pip install -e ".[dev]" is failing on both mac and linux for tensorflow and on sentencepiece for mac. > > That would be because [TensorFlow only supports python 3.5-3.7](https://www.tensorflow.org/install/pip?lang=python3#system-requirements), unfortunately. @LysandreJik Thanks. Installs on ubuntu with python 3.6. However, on mac: ``` conda create -n transformers_dev python=3.6 -y conda activate transformers_dev pip install -e ".[dev]" Failed to build tokenizers ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly ``` Mac specs: ``` Python 3.6.10 | packaged by conda-forge | (default, Mar 5 2020, 09:56:10) [GCC Clang 9.0.1 ] on darwin ```<|||||>Hi @shubhamagarwal92 , thanks for that PR and fixing the issues :+1: I just ran the `run_pl.sh` script, training and final testing run without errors now. However, during training precision, recall and F1 are always 0 -> final output on the test set shows: ```bash TEST RESULTS {'val_loss': tensor(7.0679), 'precision': 0.0, 'recall': 0.0, 'f1': 0} ---------------------------------------------------------------------------------------------------- Testing: 200it [00:07, 27.98it/s] ``` Last lines of the prediction output: ```bash der I-OTHderiv Bibliothek I-OTHderiv berufen I-OTHderiv wurde I-OTHderiv , I-OTHderiv verließ I-OTHderiv Gardthausen I-OTHderiv den I-OTHderiv Bibliotheksdienst I-OTHderiv . I-OTHderiv ``` <|||||>> I just ran the `run_pl.sh` script, training and final testing run without errors now. However, during training precision, recall and F1 are always 0 -> final output on the test set shows: > > ```shell > TEST RESULTS > {'val_loss': tensor(7.0679), 'precision': 0.0, 'recall': 0.0, 'f1': 0} > ---------------------------------------------------------------------------------------------------- > Testing: 200it [00:07, 27.98it/s] > ``` Thanks for reporting this. Could you please verify the version of `pytorch-lightning`. For me it is `pytorch-lightning==0.7.1`, `transformers==2.5.1` and the results as reported in the [README](https://github.com/huggingface/transformers/pull/3180/commits/ed39624dd0d0f3bce55352a8c4c9a8f515793e29#diff-eb7fd389de7be266012669eab7db207bR119). Also could you please check if the results in `${OUTPUT_DIR}/test_results.txt` mentioned [here](https://github.com/huggingface/transformers/pull/3180/commits/84ee92d47ee6d659edaf6a61d09b393ebeea4d5b#diff-5a6311e9856e7b0057d9c1b85cd85fadR27) also correspond to 0. 
It works for me as: ![Screen Shot 2020-03-09 at 3 39 26 PM](https://user-images.githubusercontent.com/7984532/76230915-556c7900-621c-11ea-82bc-1289b1933613.png) <|||||>Hi, I'm using the same versions of both `pytorch-lightning` and `transformers` 😂 Output of `test_results` is: ```bash $ cat germeval-model/test_results.txt f1 = 0 precision = 0.0 recall = 0.0 val_loss = tensor(9.4173) ``` But I'm going to test it on another machine :)<|||||> > But I'm going to test it on another machine :) I am also attaching my environment file via `pip freeze > requirements.txt`: [requirements.txt](https://github.com/huggingface/transformers/files/4307693/requirements.txt) Please let me know if this doesn't work. <|||||>@shubhamagarwal92 I think somehow you have the wrong version of our style checks installed. Can you try running under this command? ``` sudo pip install git+git://github.com/timothycrosley/isort.git@e63ae06ec7d70b06df9e528357650281a3d3ec22#egg=isort sudo pip install .[tf,torch,quality] ``` @LysandreJik we have to fix this, it is really confusing... @stefan-it would love to see your log as well. Could you also try `rm cached*` I think maybe your feature cache got messed up? <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3180?src=pr&el=h1) Report > Merging [#3180](https://codecov.io/gh/huggingface/transformers/pull/3180?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e03129ad447ad7670fcc6206e5eb27a5435d4d86?src=pr&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3180/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3180?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3180 +/- ## ========================================== + Coverage 78.15% 78.16% +0.01% ========================================== Files 98 98 Lines 16641 16641 ========================================== + Hits 13006 13008 +2 + Misses 3635 3633 -2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3180?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.72% <0%> (+0.31%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3180?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3180?src=pr&el=footer). Last update [e03129a...9f949d3](https://codecov.io/gh/huggingface/transformers/pull/3180?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>> ``` > sudo pip install git+git://github.com/timothycrosley/isort.git@e63ae06ec7d70b06df9e528357650281a3d3ec22#egg=isort > sudo pip install .[tf,torch,quality] > ``` @srush I reverted the last 3 style related commits, force-pushed and added a small commit to pass all the checks. Please merge if everything is fine. 
Also, for `isort`, I guess you meant using `git+https://` ``` pip install git+https://github.com/timothycrosley/isort.git@e63ae06ec7d70b06df9e528357650281a3d3ec22#egg=isort ``` This link is also wrong at [contributing.md](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests). Could you please also state the python version in the md. This command `pip install .[tf,torch,quality]` is still failing on Mac as mentioned in my previous comment. ``` Failed to build tokenizers ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly ```<|||||>Thanks @shubhamagarwal92. Sorry for the annoyance. @LysandreJik this lgtm. <|||||>@srush Happy to help! :) Thanks for approving the PR!<|||||>@shashwath94 I think I've found the reason for the bad evaluation results: I'm using apex and the `--fp16` parameter in the `run_pl.sh` script! Do you have any idea, why it is not working using half precision 🤔<|||||>Ah, I will check with pytorch-lightning. It is plausible we are not integrating correctly with them <|||||>@srush While you are it, could you please check the status of my PR in pl as well. https://github.com/PyTorchLightning/pytorch-lightning/pull/1094 Basically, I was observing memory leak on GPU0 if other GPU id (eg. [1]) was provided when running the NER example. AFAIK, the solution is `torch.cuda.set_device()`
transformers
3,179
closed
I can not import transformers
# 🐛 Bug ## Information When I execute "from transformers import TFBertModel, BertModel" in IPython, the error "ImportError: cannot import name 'BartConfig' from 'transformers.configuration_auto'" is raised. This error occurred after the TensorFlow update from version 2.0 to 2.1 and the Python update from version 3.6 to 3.7. In addition, after updating Python from 3.6 to 3.7, I installed torch 1.4 and tensorflow 2.1 in the same env; the import always fails, but when I "import transformers" in another env that only includes torch 1.4 and Python 3.7, it succeeds. I want to know how to make it work. Thank you. - `transformers` version: 2.5.1 - Platform: Windows 10 - Python version: 3.7.6 - PyTorch version (GPU?): 1.4 (GPU) - Tensorflow version (GPU?): 2.1 (GPU)
03-08-2020 15:43:08
03-08-2020 15:43:08
I solved it by rebooting my computer and reinstalling some modules that transformers could not find even though they were installed<|||||>This issue still exists. I have Tensorflow 2.1 with GPU and transformers version 2.8.0. I tried rebooting and creating new conda environments multiple times but still had no success :(<|||||>I can import it, and my Python version is 3.7.6, with 2.8.0 for transformers and 2.1 for TF. Maybe you could try to upgrade your Python if its version is under 3.7.<|||||>> This issue still exists. I have Tensorflow 2.1 with GPU and transformers version 2.8.0 > > I tried with rebooting and creating new conda environments multiple times but still no success :( I can import it, and my Python version is 3.7.6, with 2.8.0 for transformers and 2.1 for TF. Maybe you could try to upgrade your Python if its version is under 3.7.<|||||>Hmm, the next day I tried it again with the same environment and it works xD Probably because of rebooting or something. But I also rebooted the first day, idk...
transformers
3,178
closed
Pretraining QA corpora from scratch with sentence pairs
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> I would like to pretrain a non-factoid QA corpora (question and answer passages) from scratch using BERT. I have looked at both: https://huggingface.co/blog/how-to-train and https://gist.github.com/aditya-malte/2d4f896f471be9c38eb4d723a710768b#file-smallberta_pretraining-ipynb I would like to confirm that what I am doing is correct: 1. I concatenated the question and answers with a [SEP] token, so each line in my input data file looks like: question [SEP] answer 2. I am running the script with --line_by_line I am uncertain if this is correct because how would the script know which is sentence A and which is sentence B, or is this not necessary? <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
03-08-2020 12:19:56
03-08-2020 12:19:56
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
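Regarding the sentence A / sentence B question raised above, a minimal sketch of what the tokenizer produces when it is given the pair explicitly (rather than a hand-written `[SEP]`) is shown below; the checkpoint name and example strings are assumptions:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

encoded = tokenizer.encode_plus(
    "What is the capital of France?",     # sentence A (question)
    "Paris is the capital of France.",    # sentence B (answer passage)
)
print(encoded["input_ids"])        # [CLS] A [SEP] B [SEP]
print(encoded["token_type_ids"])   # 0s mark sentence A, 1s mark sentence B
```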
transformers
3,177
closed
Distilgpt2 finetuning and text generation
This ipynb notebook contains a finetuning and text generation tutorial for distilgpt2. The tutorial also uses code from the run_generation.py file to make generation faster than invoking the original file for every iteration.
03-08-2020 11:29:48
03-08-2020 11:29:48
Hi @BlackJack01 - thanks so much for this great contribution! We are still going through the notebook and discussing how to add it :-) Will let you know soon! <|||||>Hi @BlackJack01, sorry for the late answer. We now have community notebooks here: https://github.com/huggingface/transformers/tree/master/notebooks#community-notebooks Feel free to open a PR to add it there :-)
transformers
3,176
closed
GLUE test set predictions
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation The `run_glue` script is super helpful. But it currently doesn't implement producing predictions on the test datasets for the GLUE tasks. I think this would be extremely helpful for a lot of people. I'm sure there are plenty of people who have implemented this functionality themselves, but I haven't found any. Since `transformers` already provides train and dev for GLUE, it would be cool to complete the feature set with providing test set predictions. <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution I'm personally working on a branch that extends the `glue_processors` to support the test sets (which are already downloaded by the recommended `download_glue.py` script. I also update the `run_glue.py` script to produce the `*.tsv` files required by the GLUE online submission interface. I think I'm a couple days out from testing/completing my implementation. I'm also sure plenty of implementations exist of this. If there are no other plans to support this in the works, I'm happy to submit a PR. <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
03-08-2020 09:45:03
03-08-2020 09:45:03
hi @shoarora can you share the script to report performance on the test set?<|||||>@Mahmedturk you can check out the branch in PR #3405. It's diverged pretty heavily from master and I haven't updated it yet, but you should still be able to run `run_glue.py` off that branch with the `--do_test` flag that I added, and it should produce the `.tsv` files required for submission. <|||||>@shoarora I pulled the repo to update run_glue.py as I wanted to use this new feature. However, I now get an error when I run run_glue.py! Please see below the output of the error message. It looks like in previous versions there weren't any keyword arguments named "mode" in GlueDataset() -- possible? `Traceback (most recent call last): File "./transformers/examples/text-classification/run_glue.py", line 228, in <module> main() File "./transformers/examples/text-classification/run_glue.py", line 139, in main test_dataset = GlueDataset(data_args, tokenizer=tokenizer, mode="test") if training_args.do_predict else None TypeError: __init__() got an unexpected keyword argument 'mode'`<|||||>@AMChierici I didn't author #4463, which is what has made it to master to enable this feature. I haven't played with it yet, so sorry I can't be of more help<|||||>@AMChierici make sure you run from master, there's indeed a `mode` kwarg now. @shoarora Thanks for this first PR; I did check yours while merging the other (to make sure that the indices in csv parsing, etc. were correct)<|||||>Thanks, @julien-c. Yes, solved. In fact, I was not running from master. <|||||>I downloaded master just now. File "examples/text-classification/run_glue.py", line 143, in main if training_args.do_eval TypeError: __init__() got an unexpected keyword argument 'mode'
transformers
3,175
closed
Updated `Tokenw ise` in print statement to `Token wise`
03-08-2020 09:41:42
03-08-2020 09:41:42
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3175?src=pr&el=h1) Report > Merging [#3175](https://codecov.io/gh/huggingface/transformers/pull/3175?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e03129ad447ad7670fcc6206e5eb27a5435d4d86?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3175/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3175?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3175 +/- ## ======================================= Coverage 78.15% 78.15% ======================================= Files 98 98 Lines 16641 16641 ======================================= Hits 13006 13006 Misses 3635 3635 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3175?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3175?src=pr&el=footer). Last update [e03129a...70d11c4](https://codecov.io/gh/huggingface/transformers/pull/3175?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,174
closed
How can I assign a specific GPU when using examples/run_language_modeling.py?
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> Hello, I'm wondering if I can assign a specific gpu when using examples/run_language_modeling.py to train a language model? Lots of thanks!
03-08-2020 06:08:46
03-08-2020 06:08:46
This might answer your question: https://stackoverflow.com/questions/39649102/how-do-i-select-which-gpu-to-run-a-job-on<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
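For completeness, the usual approach from the linked answer is to restrict GPU visibility before CUDA is initialised; a minimal sketch is shown below (the device index is an assumption for your machine):

```python
import os

# Must be set before torch initialises CUDA (e.g. at the very top of run_language_modeling.py,
# or exported in the shell before launching the script).
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch
print(torch.cuda.device_count())   # reports 1; physical GPU 1 now appears as cuda:0
```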
transformers
3,173
closed
Get the CNN/Daily Mail Data for BART
@sshleifer In the README for the BART summarization it says: > download both CNN and Daily Mail datasets from Kyunghyun Cho's website `tar -xvf cnn_stories.tgz && tar -xvf dailymail_stories.tgz` > this should make a directory called cnn_dm/ with files like test.source. To use your own data, copy that files format. Each article to be summarized is on its own line. This doesn't produce a cnn_dm directory, it produces two different folders. The contents of the folders are `text.story` files, not `test.source`. Did you use [this repo](https://github.com/artmatsak/cnn-dailymail)? Or did you get the data from somewhere else? Happy to submit a PR either way! Thanks!
03-07-2020 20:10:12
03-07-2020 20:10:12
Its a typo in the docs. I just used cnn.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,172
closed
Quick tour TF 2.0
# 🐛 Bug ## Information I am trying **Quick tour TF 2.0**. The problem arises when using a quick example: **How a TensorFlow 2.0 model can be trained in 12 lines of code**: ```python import tensorflow as tf import tensorflow_datasets from transformers import * # Load dataset, tokenizer, model from pretrained model/vocabulary tokenizer = BertTokenizer.from_pretrained('bert-base-cased') model = TFBertForSequenceClassification.from_pretrained('bert-base-cased') data = tensorflow_datasets.load('glue/mrpc') # Prepare dataset for GLUE as a tf.data.Dataset instance train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='mrpc') valid_dataset = glue_convert_examples_to_features(data['validation'], tokenizer, max_length=128, task='mrpc') train_dataset = train_dataset.shuffle(100).batch(32).repeat(2) valid_dataset = valid_dataset.batch(64) # Prepare training: Compile tf.keras model with optimizer, loss and learning rate schedule optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy') model.compile(optimizer=optimizer, loss=loss, metrics=[metric]) # Train and evaluate using tf.keras.Model.fit() history = model.fit(train_dataset, epochs=2, steps_per_epoch=115, validation_data=valid_dataset, validation_steps=7) ``` which produces the output below: ```python --------------------------------------------------------------------------- NameError Traceback (most recent call last) <ipython-input-4-ea94e17f2f79> in <module>() 1 tokenizer = BertTokenizer.from_pretrained('bert-base-cased') ----> 2 model = TFBertForSequenceClassification.from_pretrained('bert-base-cased') 3 data = tensorflow_datasets.load('glue/mrpc') NameError: name 'TFBertForSequenceClassification' is not defined ``` The above behavior can be reproduced using this [Colab ](https://colab.research.google.com/drive/1aAmOVlvkuP9PLOuGKVx7-k0vsVBoD506)notebook. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Any help will be much appreciated.
03-07-2020 15:19:43
03-07-2020 15:19:43
Our models are TensorFlow 2.x only, while your notebook is in TensorFlow 1.x: ``` The default version of TensorFlow in Colab will soon switch to TensorFlow 2.x. We recommend you upgrade now or ensure your notebook will continue to use TensorFlow 1.x via the %tensorflow_version 1.x magic: more info. ``` You can use the following command at the start of your notebook to use TensorFlow 2.x: ``` %tensorflow_version 2.x ```<|||||>@LysandreJik thank you.
transformers
3,171
closed
Do we have a whole-word-masked version of BERT?
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> Thanks for the excellent work! But I'm wondering: is there a whole-word-masked version of BERT? Moreover, how can I adapt the Tokenizer class to make it support other parsing methods? (E.g. for Chinese, the BertTokenizer simply parses sequences at the character level, while a parser like jieba can parse them into Chinese words. How can I keep the features of the Tokenizer class while using another parsing method?)
03-07-2020 14:38:05
03-07-2020 14:38:05
Yes, there are a few available on the hub; you can search for [whole word masking](https://huggingface.co/models?search=whole-word-masking) or [wwm](https://huggingface.co/models?search=wwm). For your second question, you would get a better answer if you opened an issue on [huggingface/tokenizers](https://github.com/huggingface/tokenizers/issues) instead.
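As a small follow-up sketch, loading one of those whole-word-masking checkpoints works like any other pretrained model; the checkpoint name below is used as an example:

```python
from transformers import BertTokenizer, BertForMaskedLM

name = "bert-large-uncased-whole-word-masking"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForMaskedLM.from_pretrained(name)
```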
transformers
3,170
closed
Semantic Code Retrieval using Transformers
I am entering the world of transformers and would like to use some architectures to create a semantic search engine to retrieve source code (Python, Javascript, Ruby, Go, Java, and PHP code). Currently, the [dataset ](https://github.com/github/CodeSearchNet#data-details)contains 2 million pairs **(code, docstring)**, where code is a list of tokens from a method or function and docstring is a short description of the code in natural language. As a starting point, it would be interesting to construct a model architecture that receives the code and the docstring **([ [code], [docstring] ])** as input example and outputs the code embedding and docstring embedding. Using cosine similarity as loss function the model could be fine-tuned to encode both code and docstring to the same embedding space. As shown in the figure below: <pre> <img src="https://i.stack.imgur.com/4fx3h.png" width="480"> </pre> I started reading and tokenizing the dataset: ```python from transformers import BertTokenizer # reads a list of [[code], [docstring]] reader = CodeDocstringReader(dataset_path) # loads tokenizer model_name = "bert-base-uncased" tokenizer = BertTokenizer.from_pretrained(model_name, do_lower_case=True) # returns a list of tokenized examples # [[code_tokes_ids], [docstring_tokens_ids]] tokenized_features = tokenizer_examples( reader.get_examples(), tokenizer ) ``` The definition and training of the model are still incomplete, but it is outlined as: ```python import tensorflow as tf from transformers import BertModel class JointEncoder(tf.keras.Model): """Encodes the code and docstring into an same space of embeddings.""" def __init__(self, path, name="jointencoder"): super(JointEncoder, self).__init__(name=name) self.bert = BertModel.from_pretrained(path) def call(self, inputs): """Returns code and docstring embeddings""" ... code_embedding = .. docstring_embedding = .. return code_embedding, docstring_embedding ``` However, I'm stuck on how to code this simple architecture. Could you give me some directions? Thanks in advance.
03-07-2020 13:00:42
03-07-2020 13:00:42
You might be interested in: - https://huggingface.co/huggingface/CodeBERTa-small-v1#codeberta and https://huggingface.co/huggingface/CodeBERTa-language-id - more generally, https://huggingface.co/blog/how-to-train<|||||>Wow, it helped a lot @julien-c. Basically, did you train a language model using the CodeSearchNet dataset?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@julien-c, Is there a way to train this tutorial https://huggingface.co/blog/how-to-train on TPU cores?<|||||>maybe @LysandreJik or @sgugger have a link to a notebook?<|||||>I haven't tried training in notebooks on TPU, only with the example scripts.<|||||>Unfortunately I only have notebooks that run the example scripts on TPU, nothing similar to the `how-to-train` blogpost.<|||||>Thanks @julien-c, @sgugger and @LysandreJik, Maybe I can adapt the [Language Modeling example script](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) by applying the [PyTorch Lightning](https://www.pytorchlightning.ai/) approach, which easily supports TPUs.<|||||>If you're using the language modeling script, running it on TPUs is supported, just follow the instructions [here](https://github.com/huggingface/transformers/tree/master/examples#running-on-tpus).<|||||>> If you're using the language modeling script, running it on TPUs is supported, just follow the instructions [here](https://github.com/huggingface/transformers/tree/master/examples#running-on-tpus). Great, but does it run on Google Colab?<|||||>Yes it does run on colab!<|||||>You can find an example running the `run_glue.py` script [here](https://colab.research.google.com/drive/15q6UUzwNugWvVXNfOkWlGCaKIUvLZpxd?usp=sharing). You can do the same with the language modeling script! (Cloning the repository and running the script from there would be cleaner than `wget`ting all the files like it's done in this colab, though)<|||||>> You can find an example running the `run_glue.py` script [here](https://colab.research.google.com/drive/15q6UUzwNugWvVXNfOkWlGCaKIUvLZpxd?usp=sharing). You can do the same with the language modeling script! (Cloning the repository and running the script from there would be cleaner than `wget`ting all the files like it's done in this colab, though) Fantastic! That was of great help.<|||||>@LysandreJik, @sgugger, Unfortunately, even using a small dataset (~400MB), the Colab killed the process due to the use of all available RAM (12.72GB).
transformers
3,169
closed
issues while modifying modeling_roberta.py file
# ❓ Questions & Help ## Details I need the CLS representation from RobertaForMultipleChoice. For this, I changed `outputs = (reshaped_logits,) + outputs[2:]` to `outputs = (reshaped_logits,) + outputs[1:]` in the file https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_roberta.py, since `outputs[1]` gives the CLS representation. Now, to incorporate this change, I need to import RobertaForMultipleChoice from this file, replacing `from transformers import ...` in https://github.com/huggingface/transformers/blob/master/examples/run_multiple_choice.py with `from modeling_roberta import ...`. I am getting import issues: ![image](https://user-images.githubusercontent.com/19836137/76137311-8d937200-5ff8-11ea-98fc-4d361eee0b32.png) Can somebody help in resolving this?
03-07-2020 05:20:36
03-07-2020 05:20:36
That's more of a general Python question related to imports than an issue with the library; you would have more luck trying Stack Overflow. I believe the easiest way to modify files is to clone the repository and install it in your environment as an editable install:
```
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e .
```
Every file modification will be directly reflected in your Python runtime (if you're on Jupyter you would need to restart your kernel for it to take effect).<|||||>Thanks a lot. Actually, the problem was that I was appending the path (i.e. `src/transformers`) at the end of `PYTHONPATH`, because of which it was loading modules from the installed transformers library. I added the source path at index 0 and now it works the way I want. And thanks @LysandreJik for the restart-kernel trick.
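For completeness, a minimal sketch of the `sys.path` workaround mentioned in the last comment (the path below is hypothetical and should point at the modified clone); the editable install above remains the cleaner option:
```python
import sys

# Put the modified clone ahead of the installed package so that
# `import transformers` resolves to the local copy (the path is an example).
sys.path.insert(0, "/path/to/local/transformers/src")

from transformers import RobertaForMultipleChoice  # now loaded from the local clone
```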
transformers
3,168
closed
Can we use GPT-2 sentence embedding for classification tasks?
I am experimenting with the use of transformer embeddings in sentence classification tasks **without finetuning them**. I have used BERT embeddings and those experiments gave me very good results. Now I want to use GPT-2 embeddings (without fine-tuning). So I have a few questions: 1. Can I use GPT-2 embeddings like that (given that GPT-2 is trained left-to-right)? 2. Are there any example uses of GPT-2 in classification tasks, other than generation tasks? 3. If I can use GPT-2 embeddings, how should I do it?
03-07-2020 03:25:17
03-07-2020 03:25:17
GPT-2 and BERT are both transformer networks with very similar architectures. You can use the GPT-2 embeddings the same way you used BERT embeddings. As you said, GPT-2 only handles left context. You can read [the paper](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) where the authors showcase results on several tasks in a zero-shot setting (section 3).<|||||>I recently imported the GPT2Model and built a simple classifier. I think the model is too naive. And could use some improvements. If you notice any mistakes, please correct me. :) ``` class SimpleGPT2SequenceClassifier(nn.Module): def __init__(self, hidden_size: int, num_classes:int ,max_seq_len:int, gpt_model_name:str, cache_dir:str): super(SimpleGPT2SequenceClassifier,self).__init__() self.gpt2model = GPT2Model.from_pretrained( gpt_model_name, cache_dir = cache_dir ) self.fc1 = nn.Linear(hidden_size, num_classes) def forward(self, x_in): """ Args: x_in: encoded inputs ids of sent. """ gpt_out = self.gpt2model(x_in)[0] #returns tuple batch_size = gpt_out.shape[0] prediction_vector = self.fc1(gpt_out.view(batch_size,-1)) #(batch_size , max_len, num_classes) return prediction_vector ``` For preprocessing the text before encoding them with the tokenizer. ``` punkt_sentence_detector = nltk.data.load('tokenizers/punkt/english.pickle') class GPT2Preprocessor: def __init__(self, transformer_tokenizer, sentence_detector): self.transformer_tokenizer = transformer_tokenizer self.sentence_detector = sentence_detector def add_eos_tokens(self, text): eos_token = " " + self.transformer_tokenizer.eos_token + " " sentences = self.sentence_detector.tokenize(text) eos_added_text = ( eos_token.join(sentences) + " " + self.transformer_tokenizer.eos_token ) return eos_added_text ```<|||||>I tried GPT-2 embeddings and compare them with Roberta embeddings for the task of sentiment classification (both networks were frozen during the training). GPT-2 couldn't outperform the results of Roberta.<|||||>@cozek from the code, it isn't obvious whether you've frozen gpt2 layers or not ?<|||||>> @cozek from the code, it isn't obvious whether you've frozen gpt2 layers or not ? Of course, I have not frozen any layers. It is not always necessary to freeze the layers. If required you can easily freeze the layers as necessary. <|||||>> I tried GPT-2 embeddings and compare them with Roberta embeddings for the task of sentiment classification (both networks were frozen during the training). GPT-2 couldn't outperform the results of Roberta. Do you still have the notebooks? I would be interested to see how you implemented a classification head on top of gpt-2. <|||||>> > I tried GPT-2 embeddings and compare them with Roberta embeddings for the task of sentiment classification (both networks were frozen during the training). GPT-2 couldn't outperform the results of Roberta. > > Do you still have the notebooks? I would be interested to see how you implemented a classification head on top of gpt-2. https://github.com/cozek/OffensEval2020-code/blob/master/notebooks/Eng%20Task%20A%20-%20Ensemble%20DistilGPT2.ipynb Here you go. I used it for OffenEval 2020, Hate Speech Detection. I used the distilled version. Feel free to swap it out and take the full GPT-2. We got 0.90 Macro f1 with this model. 
<|||||>You can add a CLS token to the vocabulary `tokenizer.add_special_tokens({'cls_token': '[CLS]'}) model.resize_token_embeddings(len(tokenizer))` Then append this CLS token at the end of your input Then use the representation of this CLS token for classification as done in BERT. cc @cozek <|||||>> > > I tried GPT-2 embeddings and compare them with Roberta embeddings for the task of sentiment classification (both networks were frozen during the training). GPT-2 couldn't outperform the results of Roberta. > > > > > > Do you still have the notebooks? I would be interested to see how you implemented a classification head on top of gpt-2. > > https://github.com/cozek/OffensEval2020-code/blob/master/notebooks/Eng%20Task%20A%20-%20Ensemble%20DistilGPT2.ipynb > > Here you go. I used it for OffenEval 2020, Hate Speech Detection. I used the distilled version. Feel free to swap it out and take the full GPT-2. We got 0.90 Macro f1 with this model. Thanks a lot. Very helpful! <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@cozek I see in your code that you concatenate all the token embeddings together to produce the sentence representation and then pass that through `fc1`: ```python gpt_out = self.gpt2model(x_in)[0] #returns tuple batch_size = gpt_out.shape[0] prediction_vector = self.fc1(gpt_out.view(batch_size,-1)) ``` Instead of concatenating all the token embeddings, did you try: 1. pooling over all the tokens to get the sentence representation? For example, max pooling or mean pooling? 2. using the embedding of the last token? @AsmirMumin <|||||>> @cozek I see in your code that you concatenate all the token embeddings together to produce the sentence representation and then pass that through `fc1`: > > ```python > gpt_out = self.gpt2model(x_in)[0] #returns tuple > batch_size = gpt_out.shape[0] > prediction_vector = self.fc1(gpt_out.view(batch_size,-1)) > ``` > > Instead of concatenating all the token embeddings, did you try: > > 1. pooling over all the tokens to get the sentence representation? For example, max pooling or mean pooling? > 2. using the embedding of the last token? I did not try 1 or 2. Option 1 seems logical as it would reduce the size of the FC layer and increase training speed. I am not familiar with option 2.
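A minimal sketch of options 1 and 2 from the list above (attention-mask-aware mean pooling, or the embedding of the last non-padded token). The classifier head and the pad-token workaround are illustrative assumptions, not the setup used in the linked notebook, and API details may differ slightly between library versions:
```python
import torch
import torch.nn as nn
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2Model.from_pretrained("gpt2")
classifier = nn.Linear(model.config.n_embd, 2)  # 2 classes, purely for illustration

enc = tokenizer.batch_encode_plus(
    ["great movie", "terrible movie, do not watch"],
    pad_to_max_length=True,
    return_tensors="pt",
)
hidden = model(enc["input_ids"], attention_mask=enc["attention_mask"])[0]  # (batch, seq, hidden)

# Option 1: mean pooling over the non-padded positions only.
mask = enc["attention_mask"].unsqueeze(-1).float()
mean_pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Option 2: embedding of the last non-padded token of each sequence.
last_idx = enc["attention_mask"].sum(dim=1) - 1
last_token = hidden[torch.arange(hidden.size(0)), last_idx]

logits = classifier(mean_pooled)  # or classifier(last_token)
```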
transformers
3,167
closed
padding and attention mask does not work as intended in batch input in GPT2 language model
The following code is without batch: ``` from transformers import GPT2LMHeadModel, GPT2Tokenizer import torch tokenizer = GPT2Tokenizer.from_pretrained("gpt2") model = GPT2LMHeadModel.from_pretrained('gpt2') model.eval() context=torch.tensor([tokenizer.encode("This is")]) output, past = model(context) token = torch.argmax(output[..., -1, :]) print(tokenizer.decode(token.item())) output: ' a' ``` Now, I extended this to batch setting: ``` from transformers import GPT2LMHeadModel, GPT2Tokenizer import torch tokenizer = GPT2Tokenizer.from_pretrained("gpt2") model = GPT2LMHeadModel.from_pretrained('gpt2') model.eval() context=[torch.tensor(tokenizer.encode("This is ")),torch.tensor(tokenizer.encode("Hello How are "))] context=pad_sequence(context,batch_first=True) mask=torch.tensor([[1,1,0],[1,1,1]]) output, past = model(context,attention_mask=mask) token = torch.argmax(output[..., -1, :],dim=1) tokenizer.decode(token) output: '\n you' ``` Here `\n` is next token for the first context and `you` is next token for second context of the batch. But The expected next token for the first context is "a", since all the setting are same. Futhermore, if you reduce the second context to 2 token you will get `'a'` in this batch setting. So clearly, model can not understand the padding. Also, **the attention mask does not work**. Because, after padding the next token of sequence "`this is`" is 0 (zero). And according to the attention mask (`[1,1,0]`), this zero should be avoided and only the tokens `this` and `is` should be attended. The proofs that this attention masking is not working are: - Use attention mask [1,1,1], that means attend even on the padding zero, you get the same output which is `\n'. - Use the the string `this is!`. Here `!` has the zero index in the vocabulary matrix. Again you get the same output which is `\n'. Only time, it is possible to get desirable output without the batch settings and attention mask ( now it seems, it does not matter because it has no effect anyway) Then I found [this](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.pad_token), which suggest to use `pad_token`. So I used like following: ``` from transformers import GPT2LMHeadModel, GPT2Tokenizer import torch from torch.nn.utils.rnn import pad_sequence tokenizer = GPT2Tokenizer.from_pretrained("gpt2",pad_token="<PAD>") model = GPT2LMHeadModel.from_pretrained('gpt2') model.eval() context=[torch.tensor(tokenizer.encode("This is <PAD> ")),torch.tensor(tokenizer.encode("Hello How are"))] context=torch.stack(context) print(context) mask=torch.tensor([[1,1,0],[1,1,1]]) output, past = model(context,attention_mask=mask) token = torch.argmax(output[..., -1, :],dim=1) tokenizer.decode(token) output: 'The you' ``` Here `The` is next token for the first context and `you` is next token for second context of the batch. This is also not working. Because `The` is not expected for the first context. How do I use variable length sequence in batch setting in gpt/gpt2 model?
03-07-2020 01:35:20
03-07-2020 01:35:20
Hi mainulquraishi, As mentioned in earlier issues #3031, #3021, #2975, #3069 it is not advised to use GPT2LMHeadModel in inference using a padded batch. To answer your two questions: 1. You don't get the same output in your second code snippet as in your first code snippet because you "argmax" the next token from the logits corresponding to the masked token on position 3. You would get the same output 'a' if you took the "argmax" of the next token from the **last non-padded token**, which is at position 2 in your example. 2. The attention mask works as far as I can see. Using an attention mask means that logits at **other** positions than the masked position input are not influenced by the masked position input. This means that if you mask position 3 you will see that changing the input for position 3 will not change the output for positions 4 - N, but changing the input for position 3 will surely influence the output of position 3 (A token cannot mask its own output). I would advise you to take a good look at Issue #3021 <|||||>Hi @patrickvonplaten I ran into the same issue that you described properly in your first point. Some questions for the record: a) Could you please describe why the position_ids argument is not required here? It's not clear to me why it was needed in https://github.com/huggingface/transformers/issues/3021 and not here. b) Any padded batch will likely have sentences with many different lengths. Is there a way that `GPT2LMHeadModel` is able to identify the last non-padded token for each sentence (maybe via the attention mask) so we get the corresponding logits easily? Any function to do that filtering? If not, I guess we can do it from the client side via some tensor operation to discard the last padded tokens (we can infer the last padded tokens via the attention mask). Is this correct? c) Could we apply what we are discussing here in terms of padding to run the model with Torchscript? Any advice / warning here?<|||||>The answer for (b) can be found in the code snippet that Patrick added in https://github.com/huggingface/transformers/issues/3021 . The following does the trick:
```
last_non_masked_idx = torch.sum(attention_mask, dim=1) - 1
start_idx = last_non_masked_idx.view(-1, 1).repeat(1, tokenizer.vocab_size).unsqueeze(1)
logits = logits.gather(1, start_idx).squeeze(1)
```
<|||||>You just saved me @Damiox! Thank you so much :)
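Putting the pieces from this thread together, a minimal sketch of next-token prediction over a right-padded batch: the point is to read the logits at the last non-padded position rather than at the last position. For multi-step generation you would also need to handle position ids, as discussed in #3021:
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

sentences = ["This is", "Hello How are"]
encoded = [tokenizer.encode(s) for s in sentences]
max_len = max(len(e) for e in encoded)
input_ids = torch.tensor([e + [0] * (max_len - len(e)) for e in encoded])
attention_mask = torch.tensor([[1] * len(e) + [0] * (max_len - len(e)) for e in encoded])

with torch.no_grad():
    logits = model(input_ids, attention_mask=attention_mask)[0]  # (batch, seq, vocab)

# Read the logits at the last non-padded position of each sequence.
last_non_masked_idx = attention_mask.sum(dim=1) - 1
next_token_logits = logits[torch.arange(logits.size(0)), last_non_masked_idx]
next_tokens = next_token_logits.argmax(dim=-1)
print([tokenizer.decode([t]) for t in next_tokens.tolist()])  # first entry should be ' a'
```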
transformers
3,166
closed
[BERT] Implementation of the sliding window for long sequences
I was trying to find references to where the sliding window is implemented to process long sequences. How do we split a long sequence, and then, after getting the embeddings, how do we unpack them? I am unable to find the code segments that handle these operations. Also, is it possible to describe the main trick? I am trying to implement it in plain PyTorch, but I am unable to implement it in batches without running any loops. Any help would be appreciated.
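For illustration, a minimal sketch of one common approach (fixed-size windows with an overlapping stride, each window encoded separately and the results pooled); the window/stride sizes and the mean-pooling step are assumptions, not something prescribed by the library:
```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def sliding_window_embedding(text, window=510, stride=255):
    # 510 content tokens leave room for [CLS] and [SEP] in a 512-token input.
    tokens = tokenizer.encode(text, add_special_tokens=False)
    chunks = []
    for start in range(0, len(tokens), stride):
        chunks.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break

    chunk_embeddings = []
    with torch.no_grad():
        for chunk in chunks:
            input_ids = torch.tensor([[tokenizer.cls_token_id] + chunk + [tokenizer.sep_token_id]])
            hidden = model(input_ids)[0]               # (1, len, hidden)
            chunk_embeddings.append(hidden.mean(dim=1))
    # "Unpacking" here simply averages the window embeddings back into one vector;
    # overlapping token positions could instead be merged token-by-token.
    return torch.cat(chunk_embeddings, dim=0).mean(dim=0)
```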
03-07-2020 01:19:03
03-07-2020 01:19:03
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I have the same issue... does anyone have any input on this?
transformers
3,165
closed
Reason for speedup
I used to develop code based on the pytorch_transformers package (v1.1) and am now transferring to the current version. I see a 2x speedup in BERT's run_glue.py. I'm wondering what the major reason is? Looking at the code, I couldn't find major differences.
03-07-2020 00:13:36
03-07-2020 00:13:36
Depending on which version you are using, it might be that you are using the fast [tokenizers ](https://github.com/huggingface/tokenizers) library which offers a much improved tokenizer interface built on Rust. <|||||>Thanks for the reply. I don't think it is because of the tokenizer. I measured the encoder part and see huge improvement in the encoder speed. Still cannot figure out the difference. comparing pytorch-transformers 1.1 with transformers 2.5.1.<|||||>It is unlikely that there have been architecture changes, but it might be caused by a different torch/tensorflow version? Are you testing both transformers versions on the same framework version? It is likely that this is caused by an optimisation of an activation function. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,164
closed
wrong configuration of ALBERT xlarge
# 🐛 Bug ## Information If you are **reproducing** [ALBERT](https://github.com/google-research/albert), the `num_attention_heads` of ALBERT xlarge should be 32 instead of the 16 currently set in https://github.com/huggingface/transformers/blob/db9279dedbb9c5e7d24569a1ac3f74f9d5c3eb18/src/transformers/configuration_albert.py#L27
03-06-2020 20:37:06
03-06-2020 20:37:06
Where did you get 32 from? The [official file](https://tfhub.dev/google/albert_xlarge/2) says 16.<|||||>Well, the model configuration in the tar file downloaded from TF Hub shows 32, which conflicts with the official definition.<|||||>Indeed, that's right. I'll follow the issue you opened https://github.com/google-research/ALBERT/issues/180 and act accordingly.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
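A quick way to check which value actually ships with the converted configuration (the model id below is the v1 xlarge checkpoint distributed with the library):
```python
from transformers import AlbertConfig

config = AlbertConfig.from_pretrained("albert-xlarge-v1")
print(config.num_attention_heads)  # 16 in the transformers configuration discussed here
```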
transformers
3,163
closed
Inference is slow with
Hi @julien-c, I know that @loretoparisi already talked with you guys about some problems we have with inference using BertTokenizer and BertForSequenceClassification Classes. So for starting to discuss about that, we would like to show you the most important lines of our inference and training code, that probably can have some bugs and slowdowns. Briefly, we are developing a language classifier trained on 93 languages, so we started from multilingual bert ( both tokenizer and model ) and then we want to finetune it on our dataset that has about 8M lines. # INFERENCE CODE ``` # tokenize entire text tokens = tokenizer.encode(original_text.strip()) # remove bos and eos tokens tokens = tokens[1:-1] # get number of slices we have to insert in DL model number_of_slices = len(tokens) // (MAX_SEQUENCE_LENGTH - 2) if len(tokens) % (MAX_SEQUENCE_LENGTH - 2) != 0: number_of_slices +=1 # create slices to be inserted slices = [] for index in range(number_of_slices): slice_ = tokens[ index*(MAX_SEQUENCE_LENGTH - 2) : (index+1)*(MAX_SEQUENCE_LENGTH - 2)] slice_ = [tokenizer.bos_token_id] + slice_ + [tokenizer.eos_token_id] slices.append(slice_) # for every slice, preprocess data creating mask and padding texts = [] masks = [] for text in slices: padding = [tokenizer.pad_token_id] * (MAX_SEQUENCE_LENGTH - len(text)) mask = torch.zeros(MAX_SEQUENCE_LENGTH, dtype=torch.int32).tolist() mask[:len(text)] = [1]*len(text) text = text + padding texts.append(text) masks.append(mask) # texts to tensor pytorch texts = torch.tensor(texts) # masks to tensor pytorch masks = torch.tensor(masks) #inference from DL model logits = model(texts, attention_mask=masks) # stack logits logits = torch.stack(logits).mean(dim=0) #sum logits in order to have a single logits logits = torch.sum(logits, dim=0) ``` # TRAINING CODE ``` tokenizer = BertTokenizer.from_pretrained( os.path.join(data_path, 'model') ) special_tokens_dict = {'eos_token': '[CLS]', 'unk_token' : '[UNK]', 'eos_token' : '[SEP]', 'bos_token' : '[CLS]', 'pad_token' : '[PAD]' } num_added_toks = tokenizer.add_special_tokens(special_tokens_dict) train_loader, validation_loader = load_datasets(data_path, tokenizer, batch_size, max_sequence_length, random_sequence_length, epoch_size, token_dropout, seed) model = BertForSequenceClassification.from_pretrained(os.path.join(data_path, 'model')) print("Num_labels:") print(model.num_labels) if torch.cuda.is_available(): model.cuda() if rank == 0: summary(model) if distributed(): dist.barrier() if world_size > 1: model = DistributedDataParallel(model, [rank], output_device=rank, find_unused_parameters=True) optimizer = Adam(model.parameters(), lr=learning_rate, weight_decay=weight_decay) epoch_loop = count(1) if max_epochs is None else range(1, max_epochs + 1) logdir = os.environ.get("LOGDIR", "logs") os.makedirs(logdir, exist_ok=True) from torch.utils.tensorboard import SummaryWriter writer = SummaryWriter(logdir) if rank == 0 else None best_validation_accuracy = 0 for epoch in epoch_loop: try: if world_size > 1: train_loader.sampler.set_epoch(epoch) validation_loader.sampler.set_epoch(epoch) train_metrics = train(model, optimizer, device, train_loader, f'Epoch {epoch}') validation_metrics = validate(model, device, validation_loader) ``` Do you have some suggestion about that our inference is slow and loading model is very long? We do not use hub actually, we pre download model on local disk and we load with local path. Thanks
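One detail worth illustrating for the inference path above: the forward pass runs with autograd enabled and (apparently) without `model.eval()`, both of which cost time and memory at inference. A minimal sketch, reusing the `model`, `texts` and `masks` names from the snippet above:
```python
import torch

model.eval()  # disable dropout for deterministic, slightly faster inference
with torch.no_grad():  # no autograd bookkeeping -> lower memory use and faster forward passes
    logits = model(texts, attention_mask=masks)[0]  # (num_slices, num_labels)

# aggregate the per-slice logits into a single prediction, as in the original code
logits = logits.sum(dim=0)
```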
03-06-2020 18:18:02
03-06-2020 18:18:02
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,162
closed
Does this project have this function ?
# 🚀 Feature request Can we use this project to calculate the probability that an input text is a real/reasonable sentence, based on the corpus we trained on?
03-06-2020 17:33:11
03-06-2020 17:33:11
https://github.com/huggingface/transformers/issues/2311<|||||>@frankniujc it is helpful, but maybe a better way is to take all the tokens as a whole, not predict the next tokens<|||||>The probability of a sentence P(s0s1s2s3s4...sn) = P(s1|s0) * P(s2|s0s1) * P(s3|s0s1s2) * ... * P(sn|s0s1s2...sn-1) So you can do something like this
```Python
import torch
import torch.nn.functional as F

# `tokenizer` and `model` are assumed to be a GPT-2 tokenizer and a GPT2LMHeadModel on GPU.
def sentence_probability(sent):
    # returns the log-probability of the sentence
    bos = tokenizer.encode('<|endoftext|>')
    tokens = tokenizer.encode(sent)
    tokens = bos + tokens
    input_ids = torch.tensor(tokens).unsqueeze(0).to('cuda')
    sent_probs = []
    for i, next_word in enumerate(tokens[1:]):
        next_word_logits = model(input_ids[:, :i + 1])[0][0, -1].detach()
        next_word_prob = F.log_softmax(next_word_logits, dim=0)[next_word].item()
        sent_probs.append(next_word_prob)
    return sum(sent_probs)
```<|||||>@loveJasmine Have a look at [`lm-scorer`](https://github.com/simonepri/lm-scorer). It is a tiny wrapper around `transformers` I wrote that allows you to get sentence probabilities using models that support it (only GPT2 models are implemented at the time of writing).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,161
closed
urgent - ROBERTA on WSC
Hi, I am looking for how to run RoBERTa on WSC data, similar to the example in fairseq: https://github.com/pytorch/fairseq/tree/master/examples/roberta However, fairseq is hard to use and modify, and I would really appreciate it if you could add this to your great repo. Thanks
03-06-2020 16:51:22
03-06-2020 16:51:22
The conversion script should pretty much work out of the box, so feel free to do it and we welcome a PR (and we can upload the converted weights to our S3)<|||||>Which script do you mean? Sorry, I did not get it. I think it is not currently possible to train this with huggingface, is it?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,160
closed
[Bart] add imports to examples
03-06-2020 16:12:53
03-06-2020 16:12:53
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3160?src=pr&el=h1) Report > Merging [#3160](https://codecov.io/gh/huggingface/transformers/pull/3160?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6ffe03a0a1d472a4e5941793fd361d2b82c8be3f?src=pr&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3160/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3160?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3160 +/- ## ========================================== + Coverage 78.11% 78.12% +0.01% ========================================== Files 98 98 Lines 16651 16651 ========================================== + Hits 13007 13009 +2 + Misses 3644 3642 -2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3160?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3160/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.41% <ø> (ø)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3160/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.72% <0%> (+0.31%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3160?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3160?src=pr&el=footer). Last update [6ffe03a...0f206be](https://codecov.io/gh/huggingface/transformers/pull/3160?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,159
closed
NER: some issues in PyTorch Lightning example
Hi, I wanted to try out the new NER example script (`./ner/run_pl_ner.py`) that uses PyTorch Lightning. Here are some bugs that I've found: The dataset preparation method is not called. Usually, InputBatch batches or input features are written and stored in a file. However, the `prepare_data()` [1] method is not called and no input features are written. I fixed that by adding this method to the `train_dataloader()` [2] function, but I'm not sure if it's the right place. Model training will work then. Evaluation is currently not working correctly. The checkpoint output file names are:
```bash
# ls
'checkpointepoch=0.ckpt'  'checkpointepoch=1.ckpt'  'checkpointepoch=2.ckpt'
```
so the pattern `checkpointepoch=<number_epoch>.ckpt` is used, whereas the main script expects an output checkpoint pattern of `checkpoint_<number_epoch>.ckpt` [3] [1] https://github.com/huggingface/transformers/blob/master/examples/ner/run_pl_ner.py#L56-L80 [2] https://github.com/huggingface/transformers/blob/master/examples/ner/transformer_base.py#L126-L139 [3] https://github.com/huggingface/transformers/blob/master/examples/ner/run_pl_ner.py#L220
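For the file-name mismatch described above, a minimal sketch of globbing the checkpoints that PyTorch Lightning actually writes (the `checkpointepoch=*.ckpt` pattern) instead of the `checkpoint_<number_epoch>.ckpt` pattern the script expects; the output directory name is hypothetical:
```python
import glob
import os

output_dir = "germeval-model"  # hypothetical output directory
# Match the names Lightning actually writes, e.g. "checkpointepoch=2.ckpt".
checkpoints = sorted(glob.glob(os.path.join(output_dir, "checkpointepoch=*.ckpt")))
print(checkpoints)
```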
03-06-2020 15:37:51
03-06-2020 15:37:51
After fixing the these issues the output predictions seems very weird, maybe this is in an label 2 id bug? ```bash Nachdem I-OTH er I-PERpart 1907 B-ORGderiv nicht I-PER zum I-PER Direktor I-PER der I-PERderiv Bibliothek B-ORGpart berufen I-PER wurde I-PER , I-PER verließ B-PERderiv Gardthausen I-PERderiv den I-ORG Bibliotheksdienst B-ORGpart . I-PERderiv ``` 😂<|||||>cc @srush (via #3053)<|||||>Thanks I will take a look. Can you verify you are on the latest pytorch-lightning? `prepare_data` was just added. Also can you post your log. What was the Val accuracy? Are you on single GPU?<|||||>@srush Please see the PR #3180 I have updated the bash script and README to run effortlessly. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,158
closed
[Bart] _prepare_decoder_inputs should use large negative
Also renames some things and adds a nice test. I suspect that this didn't break integration tests because we don't have a serious integration test with decoder_input_ids set (e.g. calculating loss for a summarization example)
03-06-2020 15:36:16
03-06-2020 15:36:16
Thanks @tomhosking for noticing!<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3158?src=pr&el=h1) Report > Merging [#3158](https://codecov.io/gh/huggingface/transformers/pull/3158?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6ffe03a0a1d472a4e5941793fd361d2b82c8be3f?src=pr&el=desc) will **increase** coverage by `<.01%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3158/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3158?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3158 +/- ## ========================================== + Coverage 78.11% 78.12% +<.01% ========================================== Files 98 98 Lines 16651 16651 ========================================== + Hits 13007 13008 +1 + Misses 3644 3643 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3158?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.57% <100%> (+0.16%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3158?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3158?src=pr&el=footer). Last update [6ffe03a...71e626b](https://codecov.io/gh/huggingface/transformers/pull/3158?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,157
closed
[model_cards]Add albert chinese model
Hi, this PR adds the model cards for the albert_chinese models: - albert_chinese_tiny - albert_chinese_small - albert_chinese_base - albert_chinese_large - albert_chinese_xlarge - albert_chinese_xxlarge
03-06-2020 14:26:49
03-06-2020 14:26:49
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3157?src=pr&el=h1) Report > Merging [#3157](https://codecov.io/gh/huggingface/transformers/pull/3157?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6ffe03a0a1d472a4e5941793fd361d2b82c8be3f?src=pr&el=desc) will **decrease** coverage by `0.15%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3157/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3157?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3157 +/- ## ========================================== - Coverage 78.11% 77.96% -0.16% ========================================== Files 98 98 Lines 16651 16651 ========================================== - Hits 13007 12982 -25 - Misses 3644 3669 +25 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3157?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.35% <0%> (-5.23%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.56% <0%> (+0.15%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3157?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3157?src=pr&el=footer). Last update [6ffe03a...491bea5](https://codecov.io/gh/huggingface/transformers/pull/3157?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks for sharing. I'll just switch the language tag to chinese (tags are case-sensitive) Model pages: [`voidful/albert_chinese_tiny`](https://huggingface.co/voidful/albert_chinese_tiny) [`voidful/albert_chinese_small`](https://huggingface.co/voidful/albert_chinese_small) [`voidful/albert_chinese_base`](https://huggingface.co/voidful/albert_chinese_base) [`voidful/albert_chinese_large`](https://huggingface.co/voidful/albert_chinese_large) [`voidful/albert_chinese_xlarge`](https://huggingface.co/voidful/albert_chinese_xlarge) [`voidful/albert_chinese_xxlarge`](https://huggingface.co/voidful/albert_chinese_xxlarge)
transformers
3,156
closed
Partially fix space only input without special tokens added int the output
Original issue #3091. It fixes the issue for all non-BPE-based tokenizers. For BPE ones, the output differs between Python and Rust: GPT2: - Python : `[]` - Rust: `['Ġ']` Roberta: - Python: `[]` - Rust: `['<s>', 'Ġ', '</s>']` Rust seems to be the right one here. I should have a look at why Roberta includes the special_tokens even when not asked to do so. cc @n1t0 cc @LysandreJik fyi. Signed-off-by: Morgan Funtowicz <[email protected]>
03-06-2020 10:30:58
03-06-2020 10:30:58
transformers
3,155
closed
i want to browse and store image in data base in tkinter can u give me suggessions?
# ❓ Questions & Help ## Details **A link to original question on Stack Overflow**:
03-06-2020 08:39:32
03-06-2020 08:39:32
![image](https://user-images.githubusercontent.com/60382059/76066795-4aafab00-5fb4-11ea-8afe-6630aea361ab.png)
transformers
3,154
closed
seed parameter for model generate()
# 🚀 Feature request There should be a `seed` parameter for the `generate()` function of a model. Although a seed can be manually set before calling `generate()` (as tested in #3063), using it as a parameter is more intuitive (and covers all the bases) ## Motivation Generation reproducibility (also good for CI tests) ## Your contribution The implementation `set_seed()` functions around the repo (e.g. https://github.com/huggingface/transformers/blob/6b1ff250842f52136d5159bb67a26b50ba01485d/examples/run_generation.py#L74) should be sufficient.
03-06-2020 06:10:13
03-06-2020 06:10:13
Hi @minimaxir - thanks for the Feature request! Not too sure about this though. Two arguments against it from my side: 1. I feel like it's very easy to set the seed parameter before calling `generate()` without any real drawback. 2. Also we want all our `generate()` arguments to have default values with a lot of them defined in `configuration_utils.py`. Adding a `seed` argument would either break this logic or set a default seed value in the `PretrainedConfig` class in `configuration_utils.py` which I definitely don't want to do.<|||||>Same here, thanks for the proposition @minimaxir but I feel like there are many ways you can/should assign seeds (e.g. [here](https://github.com/huggingface/transformers/blob/master/examples/run_glue.py#L104-L109)) and I don't think we would be comfortable with having this inside the model itself.<|||||>That's fair; for my purposes, it's not too difficult to wrap. (and I inadvertently realized it's better to wrap it for batch generation too) Thanks!
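For reference, a minimal sketch of the kind of wrapper mentioned in the last comment, seeding immediately before calling `generate()`; the extra CUDA seeding line only matters when a GPU is in use:
```python
import random

import numpy as np
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer


def generate_with_seed(model, input_ids, seed, **generate_kwargs):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
    return model.generate(input_ids, **generate_kwargs)


tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
input_ids = torch.tensor([tokenizer.encode("The seed makes this")])
output = generate_with_seed(model, input_ids, seed=42, do_sample=True, max_length=20)
print(tokenizer.decode(output[0].tolist()))
```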
transformers
3,153
closed
Fresh macOS install errors out on import
# 🐛 Bug Fresh Install errors out on import ## Information Following from the README instructions, created a venv, installed torch and tensorflow. Installed transformers. Upon import Model I am using (Bert, XLNet ...): transformers==2.5.1 Language I am using the model on (English, Chinese ...): N/A The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Create a new virtual env ```$ virtualenv -p python3 venv``` 2. Source your new venv ```$ source venv/usr/local/bin/activate``` 3. Pip install packages ``` (venv) $ pip install torch tensorflow transformers``` 4. import package ``` python -m transformers``` ```python >>> import transformers Traceback (most recent call last): File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/runpy.py", line 183, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/runpy.py", line 142, in _get_module_details return _get_module_details(pkg_main_name, error) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/runpy.py", line 109, in _get_module_details __import__(pkg_name) File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/transformers/__init__.py", line 22, in <module> from .configuration_albert import ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, AlbertConfig File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/transformers/configuration_albert.py", line 18, in <module> from .configuration_utils import PretrainedConfig File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/transformers/configuration_utils.py", line 25, in <module> from .file_utils import CONFIG_NAME, cached_path, hf_bucket_url, is_remote_url File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/transformers/file_utils.py", line 53, in <module> import tensorflow as tf File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/tensorflow/__init__.py", line 101, in <module> from tensorflow_core import * File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/tensorflow_core/__init__.py", line 40, in <module> from tensorflow.python.tools import module_util as _module_util File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 959, in _find_and_load_unlocked File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/tensorflow/__init__.py", line 50, in __getattr__ module = self._load() File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/tensorflow/__init__.py", line 44, in _load module = _importlib.import_module(self.__name__) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/tensorflow_core/python/__init__.py", line 64, in <module> from tensorflow.core.framework.graph_pb2 import * File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/tensorflow_core/core/framework/graph_pb2.py", line 7, in <module> from google.protobuf import descriptor as 
_descriptor File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/google/protobuf/__init__.py", line 37, in <module> __import__('pkg_resources').declare_namespace(__name__) File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/pkg_resources/__init__.py", line 84, in <module> __import__('pkg_resources.extern.packaging.requirements') File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/pkg_resources/_vendor/packaging/requirements.py", line 9, in <module> from pkg_resources.extern.pyparsing import stringStart, stringEnd, originalTextFor, ParseException File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 668, in _load_unlocked File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/pkg_resources/extern/__init__.py", line 43, in load_module __import__(extant) File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/pkg_resources/_vendor/pyparsing.py", line 4756, in <module> _escapedPunc = Word( _bslash, r"\[]-*.$+^?()~ ", exact=2 ).setParseAction(lambda s,l,t:t[0][1]) File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/pkg_resources/_vendor/pyparsing.py", line 1284, in setParseAction self.parseAction = list(map(_trim_arity, list(fns))) File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/pkg_resources/_vendor/pyparsing.py", line 1066, in _trim_arity this_line = extract_stack(limit=2)[-1] File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/pkg_resources/_vendor/pyparsing.py", line 1050, in extract_stack frame_summary = traceback.extract_stack(limit=-offset+limit-1)[offset] File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/traceback.py", line 211, in extract_stack stack = StackSummary.extract(walk_stack(f), limit=limit) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/traceback.py", line 363, in extract f.line File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/traceback.py", line 285, in line self._line = linecache.getline(self.filename, self.lineno).strip() File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/linecache.py", line 16, in getline lines = getlines(filename, module_globals) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/linecache.py", line 48, in getlines for mod in sys.modules.values(): RuntimeError: dictionary changed size during iteration ``` ## Expected behavior Importing transformers into the python runtime should import transformers. - `transformers` version: 2.5.1 - Platform: macOS Catalina 10.15.3 - Python version: Python 3.7.3 - PyTorch version (GPU?): N/A - Tensorflow version (GPU?): N/A - Using GPU in script?: N/A - Using distributed or parallel set-up in script?: N/A
03-06-2020 02:43:44
03-06-2020 02:43:44
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,152
closed
BART.generate: possible to reduce time/memory?
# 🐛 Performance issues I did a quick benchmark between HuggingFace's implementation of **BART** and FairSeq's implementation. You can find the benchmark code [here](https://gist.github.com/Colanim/cc418d19e5e107f462bac306f53ba994). --- Here is my results, on a single GPU GTX 1080 (12 GiB of memory) : | FP16 - Batch size 16 | s/batch | s/sample | |----------------------|---------|----------| | FairSeq | 8.8676 | 0.5664 | | HuggingFace | 12.3358 | 0.7879 | | FP16 - Batch size 32 | s/batch | s/sample | |----------------------|---------|----------| | FairSeq | 17.1247 | 0.5469 | | HuggingFace | OOM | OOM | | FP16 - Batch size 1 | s/sample | |---------------------|----------| | FairSeq | 1.6743 | | HuggingFace | 1.8856 | | FP32 - Batch size 1 | s/sample | |---------------------|----------| | FairSeq | 1.7865 | | HuggingFace | 2.0670 | --- **FairSeq is consistently faster than HuggingFace on all my experiments.** --- This sparks a few questions : * Do you have similar results on your side ? Did I mess my benchmark ? * Why HuggingFace's implementation is significantly slower ? * Why HuggingFace's implementation takes more space in memory (illustrated by `OOM` with batch size of 32) ? * Is the release of the `Summarization Pipeline` going to improve this ? @sshleifer
03-06-2020 02:20:13
03-06-2020 02:20:13
1) Identical to my benchmark for speed. Hadn't tested memory but I'm not surprised that their implementation uses less. For both memory and speed, they have a lot of clever tricks that we haven't implemented yet. 4) Summarization Pipeline will not help, but I will take a longer look at this tomorrow and see if we can improve. <|||||>On master, the gap has closed considerably! <16GB GPU RAM for fp16, bs=32, and timings much closer: ![image](https://user-images.githubusercontent.com/6045025/77931576-55d7bd00-727a-11ea-8904-e57070635087.png) My numbers are a bit lower than yours because I am on an NVIDIA RTX GPU. <|||||>I tested again and I have similar results! Thanks for the fix. I now have the exact same GPU memory utilization. --- About the (now) small difference in inference time between implementations, do you know where it comes from?<|||||>Haven't investigated. So far, I just investigated memory and the speed improvements were a happy side effect.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,151
closed
How to train a distilled gpt2
Hi! Thanks for everything. I'm interested in training a distilled version of gpt2, because I want it to be even smaller than the distilled gpt2 model. In https://github.com/huggingface/transformers/tree/master/examples/distillation , there is a tutorial about how to get a distilled bert. Could you give instructions about how to train a distilled gpt2? Thanks!
03-06-2020 01:40:27
03-06-2020 01:40:27
很简单哦。看我的代码: """ Training the distilled model. Supported architectures include: BERT -> DistilBERT, RoBERTa -> DistilRoBERTa, GPT2 -> DistilGPT2. """ import argparse import json import os import pickle import shutil import numpy as np import torch from distiller import Distiller from lm_seqs_dataset import LmSeqsDataset from transformers import ( BertConfig, BertForMaskedLM, BertTokenizer, DistilBertConfig, DistilBertForMaskedLM, DistilBertTokenizer, GPT2Config, GPT2LMHeadModel, GPT2Tokenizer, RobertaConfig, RobertaForMaskedLM, RobertaTokenizer, ) from utils import git_log, init_gpu_params, logger, set_seed MODEL_CLASSES = { "distilbert": (DistilBertConfig, DistilBertForMaskedLM, DistilBertTokenizer), "roberta": (RobertaConfig, RobertaForMaskedLM, RobertaTokenizer), "bert": (BertConfig, BertForMaskedLM, BertTokenizer), "gpt2": (GPT2Config, GPT2LMHeadModel, GPT2Tokenizer), } def sanity_checks(args): """ A bunch of args sanity checks to perform even starting... """ assert (args.mlm and args.alpha_mlm > 0.0) or (not args.mlm and args.alpha_mlm == 0.0) assert (args.alpha_mlm > 0.0 and args.alpha_clm == 0.0) or (args.alpha_mlm == 0.0 and args.alpha_clm > 0.0) if args.mlm: assert os.path.isfile(args.token_counts) assert (args.student_type in ["roberta", "distilbert"]) and (args.teacher_type in ["roberta", "bert"]) else: assert (args.student_type in ["gpt2"]) and (args.teacher_type in ["gpt2"]) assert args.teacher_type == args.student_type or ( args.student_type == "distilbert" and args.teacher_type == "bert" ) assert os.path.isfile(args.student_config) if args.student_pretrained_weights is not None: assert os.path.isfile(args.student_pretrained_weights) if args.freeze_token_type_embds: assert args.student_type in ["roberta"] assert args.alpha_ce >= 0.0 assert args.alpha_mlm >= 0.0 assert args.alpha_clm >= 0.0 assert args.alpha_mse >= 0.0 assert args.alpha_cos >= 0.0 assert args.alpha_ce + args.alpha_mlm + args.alpha_clm + args.alpha_mse + args.alpha_cos > 0.0 def freeze_pos_embeddings(student, args): if args.student_type == "roberta": student.roberta.embeddings.position_embeddings.weight.requires_grad = False elif args.student_type == "gpt2": student.transformer.wpe.weight.requires_grad = False def freeze_token_type_embeddings(student, args): if args.student_type == "roberta": student.roberta.embeddings.token_type_embeddings.weight.requires_grad = False def main(): parser = argparse.ArgumentParser(description="Training") parser.add_argument("--force", action="store_true", default=True, help="Overwrite dump_path if it already exists.") parser.add_argument( "--dump_path", type=str, #required=True, default=r'D:\2020.03.02distillgpt2' , help="The output directory (log, checkpoints, parameters, etc.)" ) parser.add_argument( "--data_file", type=str, #required=True, default=r'scripts\gpt2.pickle' , help="The binarized file (tokenized + tokens_to_ids) and grouped by sequence.", ) parser.add_argument( "--student_type", type=str, choices=["distilbert", "roberta", "gpt2"], #required=True, default='gpt2', help="The student type (DistilBERT, RoBERTa).", ) parser.add_argument("--student_config", type=str, #required=True, default=r'training_configs\distilgpt2.json', help="Path to the student configuration.") parser.add_argument( "--student_pretrained_weights", default=None, type=str, help="Load student initialization checkpoint." ) parser.add_argument( "--teacher_type", choices=["bert", "roberta", "gpt2"], #required=True, default='gpt2', help="Teacher type (BERT, RoBERTa)." 
) parser.add_argument("--teacher_name", type=str, #required=True, default= r'D:\checkpoint-652500', help="The teacher model.") parser.add_argument("--temperature", default=1.5, type=float, help="Temperature for the softmax temperature.") parser.add_argument( "--alpha_ce", default=0.5, type=float, help="Linear weight for the distillation loss. Must be >=0." ) parser.add_argument( "--alpha_mlm", default=0.0, type=float, help="Linear weight for the MLM loss. Must be >=0. Should be used in coonjunction with `mlm` flag.", ) parser.add_argument("--alpha_clm", default=0.5, type=float, help="Linear weight for the CLM loss. Must be >=0.") parser.add_argument("--alpha_mse", default=0.0, type=float, help="Linear weight of the MSE loss. Must be >=0.") parser.add_argument( "--alpha_cos", default=0.0, type=float, help="Linear weight of the cosine embedding loss. Must be >=0." ) parser.add_argument( "--mlm", action="store_true", help="The LM step: MLM or CLM. If `mlm` is True, the MLM is used over CLM." ) parser.add_argument( "--mlm_mask_prop", default=0.15, type=float, help="Proportion of tokens for which we need to make a prediction.", ) parser.add_argument("--word_mask", default=0.8, type=float, help="Proportion of tokens to mask out.") parser.add_argument("--word_keep", default=0.1, type=float, help="Proportion of tokens to keep.") parser.add_argument("--word_rand", default=0.1, type=float, help="Proportion of tokens to randomly replace.") parser.add_argument( "--mlm_smoothing", default=0.7, type=float, help="Smoothing parameter to emphasize more rare tokens (see XLM, similar to word2vec).", ) parser.add_argument("--token_counts", type=str, default=r'scripts\gpt2_token_counts.pickle' , help="The token counts in the data_file for MLM.") parser.add_argument( "--restrict_ce_to_mask", action="store_true", help="If true, compute the distilation loss only the [MLM] prediction distribution.", ) parser.add_argument( "--freeze_pos_embs", action="store_true", help="Freeze positional embeddings during distillation. For student_type in ['roberta', 'gpt2'] only.", ) parser.add_argument( "--freeze_token_type_embds", action="store_true", help="Freeze token type embeddings during distillation if existent. For student_type in ['roberta'] only.", ) parser.add_argument("--n_epoch", type=int, default=3, help="Number of pass on the whole dataset.") parser.add_argument("--batch_size", type=int, default=4, help="Batch size (for each process).") parser.add_argument( "--group_by_size", action="store_false", help="If true, group sequences that have similar length into the same batch. 
Default is true.", ) parser.add_argument( "--gradient_accumulation_steps", type=int, default=50, help="Gradient accumulation for larger training batches.", ) parser.add_argument("--warmup_prop", default=0.05, type=float, help="Linear warmup proportion.") parser.add_argument("--weight_decay", default=0.0, type=float, help="Weight deay if we apply some.") parser.add_argument("--learning_rate", default=5e-4, type=float, help="The initial learning rate for Adam.") parser.add_argument("--adam_epsilon", default=1e-6, type=float, help="Epsilon for Adam optimizer.") parser.add_argument("--max_grad_norm", default=5.0, type=float, help="Max gradient norm.") parser.add_argument("--initializer_range", default=0.02, type=float, help="Random initialization range.") parser.add_argument( "--fp16", action="store_true", help="Whether to use 16-bit (mixed) precision (through NVIDIA apex) instead of 32-bit", ) parser.add_argument( "--fp16_opt_level", type=str, default="O1", help="For fp16: Apex AMP optimization level selected in ['O0', 'O1', 'O2', and 'O3']." "See details at https://nvidia.github.io/apex/amp.html", ) parser.add_argument("--n_gpu", type=int, default=1, help="Number of GPUs in the node.") parser.add_argument("--local_rank", type=int, default=-1, help="Distributed training - Local rank") parser.add_argument("--seed", type=int, default=2020, help="Random seed") parser.add_argument("--log_interval", type=int, default=500, help="Tensorboard logging interval.") parser.add_argument("--checkpoint_interval", type=int, default=1500, help="Checkpoint interval.") args = parser.parse_args([]) sanity_checks(args) # ARGS # init_gpu_params(args) set_seed(args) if args.is_master: if os.path.exists(args.dump_path): if not args.force: raise ValueError( f"Serialization dir {args.dump_path} already exists, but you have not precised wheter to overwrite it" "Use `--force` if you want to overwrite it" ) else: shutil.rmtree(args.dump_path) if not os.path.exists(args.dump_path): os.makedirs(args.dump_path) logger.info(f"Experiment will be dumped and logged in {args.dump_path}") # SAVE PARAMS # logger.info(f"Param: {args}") with open(os.path.join(args.dump_path, "parameters.json"), "w",encoding = 'utf-8') as f: json.dump(vars(args), f, indent=4) #git_log(args.dump_path) student_config_class, student_model_class, _ = MODEL_CLASSES[args.student_type] teacher_config_class, teacher_model_class, teacher_tokenizer_class = MODEL_CLASSES[args.teacher_type] # TOKENIZER # from transformers import BertTokenizer tokenizer = BertTokenizer( vocab_file = r"scripts\vocab.txt", unk_token='<unk>', sep_token='<sep>', pad_token='<pad>', cls_token='</s>', mask_token='<mask>') special_tokens_dict = {"bos_token": "<s>", "eos_token": "</s>"} tokenizer.add_special_tokens(special_tokens_dict) special_tok_ids = {} for tok_name, tok_symbol in tokenizer.special_tokens_map.items(): idx = tokenizer.all_special_tokens.index(tok_symbol) special_tok_ids[tok_name] = tokenizer.all_special_ids[idx] logger.info(f"Special tokens {special_tok_ids}") args.special_tok_ids = special_tok_ids args.max_model_input_size = 512 # DATA LOADER # logger.info(f"Loading data from {args.data_file}") with open(args.data_file, "rb") as fp: data = pickle.load(fp) if args.mlm: logger.info(f"Loading token counts from {args.token_counts} (already pre-computed)") with open(args.token_counts, "rb") as fp: counts = pickle.load(fp) token_probs = np.maximum(counts, 1) ** -args.mlm_smoothing for idx in special_tok_ids.values(): token_probs[idx] = 0.0 # do not predict special tokens 
token_probs = torch.from_numpy(token_probs) else: token_probs = None train_lm_seq_dataset = LmSeqsDataset(params=args, data=data) logger.info(f"Data loader created.") # STUDENT # logger.info(f"Loading student config from {args.student_config}") stu_architecture_config = student_config_class.from_pretrained(args.student_config) stu_architecture_config.output_hidden_states = True if args.student_pretrained_weights is not None: logger.info(f"Loading pretrained weights from {args.student_pretrained_weights}") student = student_model_class.from_pretrained(args.student_pretrained_weights, config=stu_architecture_config) else: student = student_model_class(stu_architecture_config) if args.n_gpu > 0: student.to(f"cuda:{args.local_rank}") logger.info(f"Student loaded.") # TEACHER # teacher = teacher_model_class.from_pretrained(args.teacher_name, output_hidden_states=True) teacher.resize_token_embeddings(len(tokenizer)) # Update the model embeddings with the new vocabulary size teacher.to('cuda') teacher.eval() if args.n_gpu > 0: teacher.to(f"cuda:{args.local_rank}") logger.info(f"Teacher loaded from {args.teacher_name}.") # FREEZING # if args.freeze_pos_embs: freeze_pos_embeddings(student, args) if args.freeze_token_type_embds: freeze_token_type_embeddings(student, args) # SANITY CHECKS # assert student.config.vocab_size == teacher.config.vocab_size assert student.config.hidden_size == teacher.config.hidden_size assert student.config.max_position_embeddings == teacher.config.max_position_embeddings if args.mlm: assert token_probs.size(0) == stu_architecture_config.vocab_size # DISTILLER # torch.cuda.empty_cache() distiller = Distiller( params=args, dataset=train_lm_seq_dataset, token_probs=token_probs, student=student, teacher=teacher ) distiller.train() logger.info("Let's go get some drinks.") if __name__ == "__main__": main() <|||||>Hi, not sure where to put this but there might be some small errors in the training code provided in this repo. 1. `n_gpus` is used when initializing GPUs but the actual name of this variable is `gpus` 2. It seems that now `return_dict` is default to `True`, so the function`step` fails because the results are unpacked to keys rather than values. The easiest fix I guess is to turn off `return_dict` in `train.py` like the following ```python student.config.update(dict(return_dict=False)) teacher.config.update(dict(return_dict=False)) ```<|||||>I followed the guide in README to train a distill gpt2, but the performance of my distilgpt2 is not good as huggingface, actually, the performance is bad. Did you trained a nice performance distilgpt2?
transformers
3,150
closed
Padding changes model outputs (even with attention_mask)
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Bert Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: Consider the following example code which gets a BERT embedding, and compares the BERT embedding on the same input with padding + masking. ``` bertModel=BertModel.from_pretrained('bert-base-uncased') bertModel=bertModel.eval() print('token id:'+str(id)) print('token types:'+str(typeout)) bert_out = torch.mean(bertModel(id, token_type_ids = typeout, attention_mask = mask)[0],1) add_pad = lambda x: torch.cat((x, torch.zeros(1, 10, dtype=x.dtype)),1) print('mask:'+str(mask)) print('padded id:'+str(add_pad(id))) print('padded type:'+str(add_pad(typeout))) print('padded mask:'+str(add_pad(mask))) bert_out_2 = torch.mean(bertModel(add_pad(id), token_type_ids = add_pad(typeout), attention_mask = add_pad(mask))[0],1) print(bert_out[0,0:10]) print(bert_out_2[0,0:10]) ``` The output here is ``` token id:tensor([[ 101, 5292, 3270, 102, 8638, 2060, 102]]) token types:tensor([[0, 0, 0, 0, 1, 1, 1]]) mask:tensor([[1., 1., 1., 1., 1., 1., 1.]]) padded id:tensor([[ 101, 5292, 3270, 102, 8638, 2060, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) padded type:tensor([[0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) padded mask:tensor([[1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]) tensor([ 0.2600, 0.1568, -0.2576, -0.0110, 0.1021, -0.1336, 0.2546, -0.5071, 0.0462, -0.2283], grad_fn=<SliceBackward>) tensor([ 0.0996, 0.0061, -0.3331, -0.0237, -0.1110, -0.0050, 0.2755, -0.3335, -0.0565, -0.2542], grad_fn=<SliceBackward>) ``` The two tensors which were generated by the same input but have different padding have drastically different embeddings. ## Expected behavior The last two lines of the output should be the same ## Environment info - `transformers` version: 2.1.1. - Platform: osx - Python version: 3.7.3 - PyTorch version (GPU?): 1.2.0 - Tensorflow version (GPU?): N/A - Using GPU in script?: no - Using distributed or parallel set-up in script?: no
03-05-2020 23:47:03
03-05-2020 23:47:03
I tried to replicate the input as close as I could given the output you gave: ```py from transformers import BertModel import torch token_id = torch.tensor([[ 101, 5292, 3270, 102, 8638, 2060, 102]]) token_types = torch.tensor([[0, 0, 0, 0, 1, 1, 1]]) mask = torch.tensor([[1., 1., 1., 1., 1., 1., 1.]]) padded_id = torch.tensor([[ 101, 5292, 3270, 102, 8638, 2060, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) padded_type = torch.tensor([[0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) padded_mask = torch.tensor([[1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]) bertModel = BertModel.from_pretrained('bert-base-uncased') bertModel = bertModel.eval() output = bertModel(token_id, token_type_ids=token_types, attention_mask=mask) padded_output = bertModel(padded_id, token_type_ids=padded_type, attention_mask=padded_mask) ``` I then print the maximum difference between the output of the model with the non-padded input and the output with the padded input: ```py print(torch.max(output[0] - padded_output[0][:, :7])) print(torch.max(output[1] - padded_output[1])) ``` Which outputs the following (negligible) difference: ```py tensor(3.6359e-06, grad_fn=<MaxBackward1>) tensor(5.9605e-07, grad_fn=<MaxBackward1>) ``` Would it be possible for you to give a completely reproducible script so that I may see where the issue lies?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,149
closed
fix missed BartForMaskedLM renaming
Quick fix @sshleifer
03-05-2020 23:36:17
03-05-2020 23:36:17
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3149?src=pr&el=h1) Report > Merging [#3149](https://codecov.io/gh/huggingface/transformers/pull/3149?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/857e0a0d3ba39be6259961524a730d3f106cec9c?src=pr&el=desc) will **decrease** coverage by `0.05%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3149/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3149?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3149 +/- ## ========================================== - Coverage 78.03% 77.97% -0.06% ========================================== Files 98 98 Lines 16588 16588 ========================================== - Hits 12944 12935 -9 - Misses 3644 3653 +9 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3149?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3149/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.08% <0%> (-2.13%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3149/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.56% <0%> (+0.15%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3149?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3149?src=pr&el=footer). Last update [857e0a0...58fc8f9](https://codecov.io/gh/huggingface/transformers/pull/3149?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks!
transformers
3,148
closed
refactored beam search according to torch implementation
A 1:1 translation of PR #3135 from PyTorch to TF.
03-05-2020 23:00:23
03-05-2020 23:00:23
good to merge for me<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3148?src=pr&el=h1) Report > Merging [#3148](https://codecov.io/gh/huggingface/transformers/pull/3148?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0001d056861bb1ec7bd6a825006f578629a101fc?src=pr&el=desc) will **decrease** coverage by `1.04%`. > The diff coverage is `95.83%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3148/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3148?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3148 +/- ## ========================================== - Coverage 78.03% 76.98% -1.05% ========================================== Files 98 98 Lines 16573 16583 +10 ========================================== - Hits 12932 12766 -166 - Misses 3641 3817 +176 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3148?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3148/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.47% <95.83%> (-1.95%)` | :arrow_down: | | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3148/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3148/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3148/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3148/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96% <0%> (-2.23%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3148/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3148/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.56% <0%> (-0.16%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3148?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3148?src=pr&el=footer). Last update [0001d05...2861c9d](https://codecov.io/gh/huggingface/transformers/pull/3148?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,147
closed
Pass kwargs to configuration
`**kwargs` were not passed to the pretrained configuration when using `from_pretrained`. Closes #3093
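For illustration, this is the kind of call the fix is meant to support (a minimal sketch; the model name and the `output_hidden_states` kwarg are just examples):

```python
from transformers import BertConfig

# with this fix, extra kwargs passed through `from_pretrained` should end up
# on the configuration object instead of being dropped
config = BertConfig.from_pretrained("bert-base-uncased", output_hidden_states=True)
assert config.output_hidden_states is True
```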
03-05-2020 21:15:10
03-05-2020 21:15:10
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3147?src=pr&el=h1) Report > Merging [#3147](https://codecov.io/gh/huggingface/transformers/pull/3147?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7ac47bfe69f25fc7381be65870b2f4e5cdb8cb6a?src=pr&el=desc) will **increase** coverage by `<.01%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3147/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3147?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3147 +/- ## ========================================== + Coverage 78% 78.01% +<.01% ========================================== Files 98 98 Lines 16561 16569 +8 ========================================== + Hits 12919 12926 +7 - Misses 3642 3643 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3147?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.72% <100%> (+0.23%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.41% <0%> (-0.22%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3147?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3147?src=pr&el=footer). Last update [7ac47bf...1159cff](https://codecov.io/gh/huggingface/transformers/pull/3147?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,146
closed
Create README.md for mrm8488/bert-multi-uncased-finetuned-xquadv1
03-05-2020 18:36:04
03-05-2020 18:36:04
transformers
3,145
closed
[Bart] FP16 Support
03-05-2020 17:58:33
03-05-2020 17:58:33
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3145?src=pr&el=h1) Report > Merging [#3145](https://codecov.io/gh/huggingface/transformers/pull/3145?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7ac47bfe69f25fc7381be65870b2f4e5cdb8cb6a?src=pr&el=desc) will **decrease** coverage by `0.06%`. > The diff coverage is `66.66%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3145/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3145?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3145 +/- ## ========================================== - Coverage 78% 77.94% -0.07% ========================================== Files 98 98 Lines 16561 16560 -1 ========================================== - Hits 12919 12907 -12 - Misses 3642 3653 +11 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3145?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3145/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.37% <66.66%> (-0.02%)` | :arrow_down: | | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3145/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0%> (ø)` | :arrow_up: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3145/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0%> (ø)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3145/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.29% <0%> (-2.34%)` | :arrow_down: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3145/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0%> (ø)` | :arrow_up: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3145/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.22% <0%> (ø)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3145/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0%> (ø)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3145?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3145?src=pr&el=footer). Last update [7ac47bf...1360dac](https://codecov.io/gh/huggingface/transformers/pull/3145?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,144
closed
[Question]: Why does model.__call__ return the loss too?
# ❓ Questions & Help In PyTorch, model.__call__ returns the output tensor and users have to call a loss function to get the loss. I wonder why models in transformers doesn't follow this convention? Any specific reason?
03-05-2020 17:50:09
03-05-2020 17:50:09
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,143
closed
Correct missing keys
closes #3142. The problem came from the fact that `from_pretrained` set the model on which to load the weights to the base model: it detected that the state dict was made for the base model and therefore loaded only onto the base, without ever checking which weights were left unloaded. Here, both state dicts are analyzed and the difference (keys present in the state dict of the model with the head but not in the base state dict) is added to the missing keys. Added a test.
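A minimal sketch of the idea (not the actual implementation in `modeling_utils.py`, just the set-difference logic, using the `"bert."` base-model prefix to align the key names):

```python
from transformers import BertConfig, BertModel, BertForSequenceClassification

config = BertConfig()
base_state_dict = BertModel(config).state_dict()
head_model = BertForSequenceClassification(config)

# keys the head model expects but a base-model checkpoint does not provide
base_keys = {"bert." + k for k in base_state_dict.keys()}
missing_keys = sorted(set(head_model.state_dict().keys()) - base_keys)
print(missing_keys)  # e.g. ['classifier.bias', 'classifier.weight']
```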
03-05-2020 16:58:33
03-05-2020 16:58:33
Very good catch
transformers
3,142
closed
Missing `missing_keys` when loading from saved base model checkpoint
# 🐛 Bug ## Information If a base model (e.g. `BertModel`, `DistilBertModel`, ...) is saved using `save_pretrained` and a model with an additional head (e.g. `BertForSequenceClassification`, `DistilBertForQuestionAnswering`, ...) is loaded from that checkpoint, it will not detect that it is missing layers. ## To reproduce Steps to reproduce the behavior: 1. Instantiate base model from configuration or from `from_pretrained` 2. Save model using `save_pretrained` 3. Load checkpoint in model with head 4. No warning is output. Furthermore, if `output_loading_info=True` in step 3), will output `{'missing_keys': [], 'unexpected_keys': [], 'error_msgs': []}` Here's a reproducible example: ```py from transformers import BertForSequenceClassification, BertModel, BertConfig config = BertConfig() base_model = BertModel(config) base_model.save_pretrained(directory) model, loading_info = BertForSequenceClassification.from_pretrained(directory, output_loading_info=True) print(loading_info) # {'missing_keys': [], 'unexpected_keys': [], 'error_msgs': []} # Should output {'missing_keys': ['classifier.weight', 'classifier.bias'], 'unexpected_keys': [], 'error_msgs': []} ``` ## Expected behavior Should detect the missing keys, as it does when loading from a full checkpoint: ```py from transformers import BertForSequenceClassification model, loading_info = BertForSequenceClassification.from_pretrained("bert-base-cased", output_loading_info=True) print(loading_info) # {'missing_keys': ['classifier.weight', 'classifier.bias'], 'unexpected_keys': ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'], 'error_msgs': []} ``` ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: master branch - Platform: Linux-5.5.7-arch1-1-x86_64-with-arch - Python version: 3.6.10 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): 2.1.0 (True) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
03-05-2020 15:58:05
03-05-2020 15:58:05
transformers
3,141
closed
GPU memory getting out of bound
I am trying to run the pre-trained small GPT-2 model with a language modeling head at batch size 16. The problem is that about 440MB of memory is allocated after each iteration, so the GPU quickly runs out of memory. I am not running the pre-trained model in training mode. In my understanding, from the second iteration onwards only a single token per sequence (16 tokens for batch size 16) goes in as input, the new attention is computed, and the `past` variable is updated and grows by 16 tokens. So a little extra memory usage is expected, but I don't understand why it is almost half a GB. I ran the following code to measure the memory usage in each iteration: ``` before=torch.cuda.max_memory_allocated(device=device) output, past = model(b_train_contexts,past=past) print("memory usage") after=torch.cuda.max_memory_allocated(device=device) print(after-before) ``` Output: ``` memory 0 memory 270742528 memory 442328576 memory 443433472 memory 444525056 memory 445629952 memory 446721536 memory 447826432 memory 448918016 . . . ```
03-05-2020 15:27:23
03-05-2020 15:27:23
Hi, could you provide a reproducible example so that we may test on our side?<|||||>Thank you for your reply. Here is the code and my 32GB GPU memory getting out of bound before 500 iteration. ``` from transformers import GPT2LMHeadModel, GPT2Tokenizer import torch tokenizer = GPT2Tokenizer.from_pretrained("gpt2") model = GPT2LMHeadModel.from_pretrained('gpt2') device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = model.to(device) model.eval() text="The Manhattan Bridge is a suspension bridge that crosses the East River in New York City, connecting Lower Manhattan at Canal Street with Downtown Brooklyn at the Flatbush Avenue Extension. The main span is 1,470 ft (448 m) long, with the suspension cables being 3,224 ft (983 m) long. The bridge's total length is 6,855 ft (2,089 m). It is one of four toll-free vehicular bridges connecting Manhattan Island to Long Island; the nearby Brooklyn Bridge is just slightly further downtown, while the Queensboro and Williamsburg Bridges are to the north." generated1= tokenizer.encode(text) generated2=tokenizer.encode(text) context = torch.tensor([generated1,generated2]) context =context.to(device) print(context.shape) past = None for i in range(500): before=torch.cuda.max_memory_allocated(device=device) output, past = model(context, past=past) after=torch.cuda.max_memory_allocated(device=device) print(after-before) token = torch.argmax(output[..., -1, :],dim=1) context = token.view(2,-1) ``` If I use a small initial context, this can survive. But problem happens when I use a long initial context. Please try with a small initial context and you will see difference in memory allocation in each iteration. <|||||>I guess this is because the past requires a lot of memory to be saved. It speeds up the sequential decoding but requires a lot of memory. Your script crashes for me at iteration 483, but a script that doesn't make use of the past can reach the maximum length of 1024 tokens on my 24GB of VRAM. Dropping the past when it becomes too large may be a good idea, same as you would do if it were to go over the max sequence length.<|||||>Hi, Thanks for the reply. By "script that does not make use of past", you mean in each iteration the input is (previous context+ generated token id)? I did the following code. For batch size=8, it does work. No memory out of bound error. But for batch size=16, the error comes back. ``` from transformers import GPT2LMHeadModel, GPT2Tokenizer import torch tokenizer = GPT2Tokenizer.from_pretrained("gpt2") model = GPT2LMHeadModel.from_pretrained('gpt2') device = torch.device("cuda" if torch.cuda.is_available() else "cpu") n_gpu = torch.cuda.device_count() torch.cuda.get_device_name() model = model.to(device) model.eval() text="Construction began on the bridge in 1901 under the instruction of the New York City Department of Bridges commissioner Gustav Lindenthal and the chief engineer R.S. Buck. Just three years later, however, local politicking was responsible for the pair being replaced with George E. Best and Othniel Foster Nichols, respectively. The bridge design was based on deflection theory, a new concept at the time that was developed by Joseph Melan and applied to the bridge by the chief engineer Leon Moisseiff. This design saved in cost, material, and construction time. The bridge was officially opened to traffic on Dec. 31, 1909. Renovations in 1940 revealed significant wear on the structure, with the subway trains partly responsible for the wear. 
Those trains, upon entering the bridge at the same time from opposite sides, would cause the bridge to shift up to 8 feet (approximately 2.5 metres). Additional renovations were undertaken in 1978. Since then the Manhattan Bridge has been featured in movies, has undergone regular repairs and retrofitting, and remains one of the most graceful bridges in New York City." generated1= tokenizer.encode(text) generated2=tokenizer.encode(text) generated3= tokenizer.encode(text) generated4=tokenizer.encode(text) generated5= tokenizer.encode(text) generated6=tokenizer.encode(text) generated7= tokenizer.encode(text) generated8=tokenizer.encode(text) # generated9= tokenizer.encode(text) # generated10=tokenizer.encode(text) # generated11= tokenizer.encode(text) # generated12=tokenizer.encode(text) # generated13= tokenizer.encode(text) # generated14=tokenizer.encode(text) # generated15= tokenizer.encode(text) # generated16=tokenizer.encode(text) context=torch.tensor([generated1,generated2,generated3,generated4,generated5,generated6,generated7,generated8]) # context =generated # generated =generated.to(device) context =context.to(device) print(context.shape) import time batch_size=8 start_time = time.time() for i in range(500): output, past = model(context) new_tokens = torch.argmax(output[..., -1, :],dim=1) new_tokens = new_tokens.view(batch_size,-1) context=torch.cat([context,new_tokens],dim=1) elapsed_time = time.time() - start_time print("time") print(elapsed_time) ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> What did you mean by dropping the past? Any example?
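Not an official recipe, but one way "dropping the past" could look in the loop above (reusing `model` and `context` from the script): once the cached key/value tensors grow past some budget, truncate them along the sequence dimension. Note that the position ids would then no longer match the true offsets, so treat this purely as a memory-bounding sketch with an assumed budget:

```python
MAX_PAST = 512  # hypothetical budget, in tokens

output, past = model(context, past=past)
# for GPT-2, each element of `past` has shape (2, batch, num_heads, seq_len, head_dim)
if past is not None and past[0].shape[-2] > MAX_PAST:
    # keep only the most recent MAX_PAST cached positions
    past = [p[..., -MAX_PAST:, :] for p in past]
```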
transformers
3,140
closed
Merge bart generate into default generate
This PR is a first version of how the Bart generate() code can be merged into the "default" generation function. I think it's actually much more feasible than we originally thought. Please note that this change also includes all the changes from PR #3135, so the code changes will be much less cluttered after #3135 is merged. This version passes the general random language generation tests found in `test_modeling_common.py` and the easy integration test with the original fairseq model (renamed to `test_cnn_summarization_same_as_fairseq_easy` in `test_modeling_bart`). There are a couple of things we should discuss: 1. In both Bart generate() and default generate(), encoder-decoder models **must** have a BOS token and an EOS token. 2. Two new parameters were added: `min_length` and `no_repeat_ngram_size`. I think these parameters should be added generally, as is done now. 3. There was one hack which initializes the `decoder_input_ids` to the EOS token and then forces the model to generate the BOS token afterwards (see comment in code line). I changed it to simply start with the BOS token (which makes more sense) and it also passed the "easy integration tests". This hack might be needed to pass the hard integration test though. 4. Fairseq forces the last token of all beam hypotheses to be the EOS token (see comment in line). This is probably necessary to pass the integration tests. It's up for debate whether this is the correct way. I would prefer not to do it this way because it would then be impossible to generate unfinished sentences (sentences that end because they hit `max_length`). If one really wants all beam hypotheses to be finished, one could set `max_length` higher than usual and set the parameter `self.early_stopping` in the BeamHypotheses class to `True`. Up for debate how to handle this. 5. In order to also pass the hard integration tests (which have a padded batch as input), we will have to add `attention_masks` to the `generate()` function. Here I see three possibilities: a) add the `attention_mask` as a parameter to the generate() function. b) automatically calculate the `attention_mask` from the `input_ids` **if** the model has a `pad_token_id` **and** there is a `pad_token_id` in the input_ids (see the sketch below). c) not allow padded batches for the moment. I would prefer option b) because some models do have a set `pad_token_id` (such as Bart), so we should be able to allow padded generation.
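A rough illustration of what option b) could look like (the token ids below are made up; `1` is used as the pad id since that is Bart's default):

```python
import torch

pad_token_id = 1  # Bart's pad token id
input_ids = torch.tensor([[0, 31414, 232, 2, 1, 1]])  # an example right-padded batch

if pad_token_id is not None and (input_ids == pad_token_id).any():
    # mask out padding positions
    attention_mask = input_ids.ne(pad_token_id).long()
else:
    attention_mask = torch.ones_like(input_ids)
```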
03-05-2020 15:16:25
03-05-2020 15:16:25
clarification: `bart.generate` doesn't add EOS if `max_length` is hit, or require EOS to pass integration tests. It just "finalizes" a hypothesis when the model predicts EOS for a beam. Example: ```python ARTICLE_TO_SUMMARIZE = "code doesnt generate EOS if max_length is hit" inputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], return_tensors='pt') generated_ids = model.generate(inputs['input_ids'], attention_mask=inputs['attention_mask'], num_beams=4, max_length=5) summary_text = tokenizer.decode(generated_ids[0]) print(generated_ids[0], summary_text) # (tensor([ 0, 2387, 964, 32, 3035]), '<s>My friends are cool') ``` <|||||>> I would thus like to propose the following workflow for the forward pass of all models: > [...] > What do you think especially @LysandreJik and @julien-c Sounds good to me<|||||>By the way, my workflow proposition actually implied that we should use the same workflow and inputs for the `generate()` method as well (I could have been more explicit)<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3140?src=pr&el=h1) Report > Merging [#3140](https://codecov.io/gh/huggingface/transformers/pull/3140?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d6de6423baf02a971d38ee69824104a1f0f85ad2?src=pr&el=desc) will **decrease** coverage by `0.15%`. > The diff coverage is `74.71%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3140/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3140?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3140 +/- ## ========================================== - Coverage 78.14% 77.99% -0.16% ========================================== Files 98 98 Lines 16668 16665 -3 ========================================== - Hits 13026 12998 -28 - Misses 3642 3667 +25 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3140?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3140/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `100% <ø> (ø)` | :arrow_up: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3140/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.82% <100%> (+0.07%)` | :arrow_up: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3140/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.27% <100%> (+2.69%)` | :arrow_up: | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3140/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.55% <100%> (ø)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3140/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.47% <46.96%> (-6.27%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3140/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.84% <90.32%> (-0.57%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3140?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3140?src=pr&el=footer). Last update [d6de642...bc9d5d9](https://codecov.io/gh/huggingface/transformers/pull/3140?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>UPDATE: This version passes all integration tests now. There are two things which I quite hacky we probably should implement in a cleaner way: - the function `prepare_scores_for_generation` is a hacky function to make Bart pass the integration tests. See in code-line comment - initializing all encoder-decoder models with the EOS token is probably not the right thing to do. See in code-line comment. @sshleifer @thomwolf @LysandreJik <|||||>@thomwolf, @sshleifer Thanks for your reviews guys. I think we are on the same page except for two things: 1. Force tokens to be generated. The BOS token is generated at `step=0` and the EOS token is generated at `step=max_length`. This is necessary to reproduce the original fairseq results. Should we keep that as a model specific "prepare" function? 2. Use the EOS token for the Bart models. This is also necessary to reproduce the original fairseq results. Should we keep that? My opinion is: 1. I would remove both statements that force a certain token to be predicted. Even without doing it, Bart produces (in my opinion) good summarization results. 2. Same goes for this point. Would always use BOS token as the starting `decoder_input_ids` for encoder-decoder models and force encoder-decoder models to have a BOS token to be able to do language generation. Doing this would means, we would have to change the integration tests and we won't produce 1:1 the same results as fairseq anymore. @sshleifer you can probably estimate the consequences for the Bart summarization quality much better than me! From the examples in the `test_modeling_bart.py`, I ran the summarization and the output looked good to me, but didn't measure ROUGE scores or anything... What do you guys think? <|||||>I ran eval on the full CNN test set with these simplifications, and Rouge decreases from .21072 to .20285. For context, here are the published Rouge-2 scores of a bunch of different models: ![image](https://user-images.githubusercontent.com/6045025/76256460-5b675780-6226-11ea-8516-28d0427251ca.png) Note: the published bart score is a bit higher (.2128) because there are even [more tricks](https://github.com/pytorch/fairseq/issues/1765#issuecomment-593720522) I didn't implement. <|||||>> core is a bit higher (.2128) because the Awesome! Thanks a lot. I'm not super familiar with Rouge - is that drop in performance very significant? @sshleifer @thomwolf <|||||>> I ran eval on the full CNN test set with these simplifications, and Rouge decreases from .21072 to .20285. > > For context, here are the published Rouge-2 scores of a bunch of different models: > > ![image](https://user-images.githubusercontent.com/6045025/76256460-5b675780-6226-11ea-8516-28d0427251ca.png) > > Note: the published bart score is a bit higher (.2128) because there are even [more tricks](https://github.com/pytorch/fairseq/issues/1765#issuecomment-593720522) I didn't implement. @sshleifer what is exactly the trick you mentioned you didn't implemented? to "force the second token to not be bos"? 
Overall I think I'm fine with having a clean distinction between post-filtering methods that are optionally called for a model - storing all these post-filtering tricks in the specific models - and a generic `generate()` that (for now) will be cleaner. The weirdest trick to me is to initialize with the EOS token (could we maybe use the BOS token twice here, for instance?); the other tricks are less shocking.<|||||>OK, let's merge this to be able to move forward with T5. cc @craffel @julien-c @LysandreJik - we think the self-hosted failing tests are not related to this. Can you have a look later maybe?<|||||>@patrickvonplaten You mentioned a way to perform batch inference with `GPT2LMHeadModel` using an attention mask here: https://github.com/huggingface/transformers/issues/3021#issuecomment-591418233. Does this PR make this possible by calling `model.generate(input_ids=..., attention_mask=...)`?<|||||>> @patrickvonplaten You mentioned a way to perform batch inference with `GPT2LMHeadModel` using an attention mask here: [#3021 (comment)](https://github.com/huggingface/transformers/issues/3021#issuecomment-591418233). > > Does this PR make this possible by calling `model.generate(input_ids=..., attention_mask=...)`? Hi @thesamuel, not yet completely. It's one step towards making that generation possible, as shown in #3021, but there are still two things that are not yet handled in the generate fn: 1. The position embeddings have to be updated - which generate() does not do yet. 2. And this is the hard one: if a padded batch is given as input, sampling should not happen from the last token but from the last non-padded token, and this can be quite hacky. We are currently thinking about how to implement this!<|||||>@patrickvonplaten Got it, thanks!<|||||>(Investigating) This PR may introduce a BOS bug that reduces Rouge to 15.068 from 21.28<|||||>Simple bug caused by `do_sample` (which for some reason defaults to True). Anyway, I'm rerunning Rouge but it will likely be at a reasonable level.<|||||>Possible follow-up steps are explained in PR: https://github.com/huggingface/transformers/pull/3225
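Regarding point 2 above (sampling from the last non-padded token), a minimal sketch of the indexing that would be needed, assuming `logits` of shape (batch, seq_len, vocab) and a 0/1 `attention_mask` with padding on the right:

```python
import torch

batch, seq_len, vocab = 2, 5, 10
logits = torch.randn(batch, seq_len, vocab)
attention_mask = torch.tensor([[1, 1, 1, 1, 1],
                               [1, 1, 1, 0, 0]])

last_positions = attention_mask.sum(dim=1) - 1  # index of the last real token per sequence
next_token_logits = logits[torch.arange(batch), last_positions]  # shape (batch, vocab)
```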
transformers
3,139
closed
Remove excess line breaks in DeepPavlov model cards
03-05-2020 14:59:45
03-05-2020 14:59:45
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3139?src=pr&el=h1) Report > Merging [#3139](https://codecov.io/gh/huggingface/transformers/pull/3139?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8a2d9bc9ef38452e80ce872505a5ad5623c12657?src=pr&el=desc) will **decrease** coverage by `0.51%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3139/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3139?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3139 +/- ## ========================================== - Coverage 78.45% 77.94% -0.52% ========================================== Files 98 98 Lines 16561 16561 ========================================== - Hits 12993 12908 -85 - Misses 3568 3653 +85 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3139?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3139/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0%> (-27.6%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3139?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3139?src=pr&el=footer). Last update [8a2d9bc...129d1ab](https://codecov.io/gh/huggingface/transformers/pull/3139?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,138
closed
Add model cards for DeepPavlov models
Sorry about the Cyrillic `с` yesterday.
03-05-2020 14:19:52
03-05-2020 14:19:52
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3138?src=pr&el=h1) Report > Merging [#3138](https://codecov.io/gh/huggingface/transformers/pull/3138?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e9e6efdc452b74947d40a5a2e8af2fc444c63b5b?src=pr&el=desc) will **decrease** coverage by `0.52%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3138/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3138?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3138 +/- ## ========================================== - Coverage 78.35% 77.83% -0.53% ========================================== Files 98 98 Lines 16422 16422 ========================================== - Hits 12868 12782 -86 - Misses 3554 3640 +86 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3138?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0%> (-27.6%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.29% <0%> (-0.16%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3138?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3138?src=pr&el=footer). Last update [e9e6efd...fe1854d](https://codecov.io/gh/huggingface/transformers/pull/3138?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>You can also add a link to a thumbnail image from the `thumbnail:` attribute of the YAML front matter metadata, if you want. Thank you!
transformers
3,137
closed
Refactor BartModel so that input checks are handled within enc/dec
Implements #3133. I've left the code that creates dummy inputs and checks/filters the outputs; this could potentially also be moved to `BartEncoder` and `BartDecoder`.
03-05-2020 13:50:02
03-05-2020 13:50:02
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3137?src=pr&el=h1) Report > Merging [#3137](https://codecov.io/gh/huggingface/transformers/pull/3137?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/30624f7056ae3b607ba1d02f474f2c7986e87dff?src=pr&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3137/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3137?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3137 +/- ## ========================================== + Coverage 77.94% 77.95% +0.01% ========================================== Files 98 98 Lines 16561 16565 +4 ========================================== + Hits 12908 12913 +5 + Misses 3653 3652 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3137?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3137/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.43% <100%> (+0.04%)` | :arrow_up: | | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3137/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0%> (ø)` | :arrow_up: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3137/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0%> (ø)` | :arrow_up: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3137/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0%> (ø)` | :arrow_up: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3137/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.22% <0%> (ø)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3137/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0%> (ø)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3137/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.45% <0%> (+0.15%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3137?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3137?src=pr&el=footer). Last update [30624f7...31acb8d](https://codecov.io/gh/huggingface/transformers/pull/3137?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>have you checked whether this breaks `RUN_SLOW=1 pytest tests/test_modeling_bart.py`? There is some subtletly with caching+remaking the attention mask everytime.<|||||>Looks good I think? 
``` ========================================================================== test session starts =========================================================================== platform darwin -- Python 3.7.6, pytest-5.3.5, py-1.8.1, pluggy-0.13.1 rootdir: /Users/tom/dev/transformers plugins: xdist-1.31.0, forked-1.1.3 collected 30 items tests/test_modeling_bart.py ...........s.................s [100%] ============================================================================ warnings summary ============================================================================ .venv/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15 /Users/tom/dev/transformers/.venv/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses import imp -- Docs: https://docs.pytest.org/en/latest/warnings.html ========================================================== 28 passed, 2 skipped, 1 warning in 393.65s (0:06:33) ========================================================== ```<|||||>Awesome, LGTM. Will wait for @thomwolf
transformers
3,136
closed
links of the model's pretrained weights
Where can I find all the links to the models' pretrained weights, please? Downloading them through the IDE is too slow.
03-05-2020 13:16:14
03-05-2020 13:16:14
We have a `HfApi.model_list()` method on this PR https://github.com/huggingface/transformers/pull/3132 that might be of interest to you. Do let us know if it solves your use case or not.<|||||>I use the following script in an interactive environment: ```python >> from transformers.hf_api import HfApi >> HfApi.model_list() ``` But I got an error: `AttributeError: type object 'HfApi' has no attribute 'model_list'` And my `transformers.__version__` is 2.5.1. Did I miss something?<|||||>> We have a `HfApi.model_list()` method on this PR #3132 that might be of interest to you. > > Do let us know if it solves your use case or not. Same problem as @sevenights - it remains unsolved. Did we miss something?<|||||>#3132 was just merged on master, you should be able to try it on master now.<|||||>I get it, thank you :)
transformers
3,135
closed
Refactoring and bug fixing beam search generate
This PR cleanes the beam_search decoding part of language generation. It simplifies the code and fixes a small bug for do_sample=True (see comments in code). It was also tested on all language generation slow tests. ### Future PR - [x] Do the same change for TF 2.0 if ok -> #3148
03-05-2020 12:13:36
03-05-2020 12:13:36
Good to merge for me
transformers
3,134
closed
Cant import my pretrained bert model from NVIDIA/DeepLearningExamples/
Trained with **NVIDIA/DeepLearningExamples/** and got my_bert_model/ckpt_1000.pt. **.pt to .bin**: renamed my_bert_model/ckpt_1000.pt to my_bert_model/pytorch_model.bin. **My code** ```configuration = BertConfig(3258, 768, 12, 12, 3072) model = BertModel(configuration) model = model.from_pretrained('/home/my_bert_model', state_dict) for key, weight in model.state_dict().items(): print (weight) ``` **Output** Every execution of this code produces a different output
03-05-2020 11:52:25
03-05-2020 11:52:25
Try: ``` model.eval() ``` before you access the weights.<|||||>Its seems not work. ``` embeddings.word_embeddings.weight tensor([[ 0.0073, 0.0080, 0.0307, ..., -0.0172, 0.0148, -0.0401], [-0.0271, 0.0110, 0.0011, ..., -0.0079, 0.0236, -0.0037], [ 0.0005, 0.0066, -0.0009, ..., -0.0065, 0.0167, 0.0301], ..., [ 0.0062, -0.0385, -0.0091, ..., -0.0022, 0.0043, 0.0018], [-0.0188, 0.0154, -0.0023, ..., -0.0049, -0.0108, 0.0393], [-0.0257, -0.0056, 0.0155, ..., -0.0198, 0.0280, -0.0143]])``` ``` ``` embeddings.word_embeddings.weight tensor([[-0.0011, -0.0345, 0.0094, ..., 0.0024, 0.0229, 0.0194], [-0.0246, 0.0329, 0.0231, ..., 0.0436, 0.0246, -0.0012], [ 0.0069, 0.0050, -0.0020, ..., -0.0002, 0.0043, 0.0208], ..., [-0.0139, -0.0091, 0.0110, ..., -0.0128, -0.0015, -0.0027], [ 0.0297, 0.0063, -0.0066, ..., 0.0070, 0.0157, 0.0417], [-0.0341, 0.0458, 0.0054, ..., -0.0525, 0.0003, -0.0122]]) ```<|||||>Solved! I find the layer_name of NVIDIA/DeepLearningExamples/ is dismatch huggingface/transformers<|||||>@Limtle This seems like it is important information. What exactly do you mean? The layer names are not the same in NVIDIA's examples and the layers here in the HuggingFace repo?<|||||>I print the weights_name of bertmodel trained with NVIDIA/DeepLearningExamples/ ``` bert.embeddings.word_embeddings.weight bert.embeddings.position_embeddings.weight bert.embeddings.token_type_embeddings.weight ... bert.encoder.layer.11.output.dense.weight bert.encoder.layer.11.output.dense.bias bert.encoder.layer.11.output.LayerNorm.weight bert.encoder.layer.11.output.LayerNorm.bias bert.pooler.dense_act.weight bert.pooler.dense_act.bias cls.predictions.bias cls.predictions.transform.dense_act.weight cls.predictions.transform.dense_act.bias cls.predictions.transform.LayerNorm.weight cls.predictions.transform.LayerNorm.bias cls.predictions.decoder.weight cls.seq_relationship.weight cls.seq_relationship.bias ``` And the weights_name of huggingface/transformers/BertModel() ``` embeddings.word_embeddings.weight embeddings.position_embeddings.weight embeddings.token_type_embeddings.weight ... encoder.layer.11.output.dense.weight encoder.layer.11.output.dense.bias encoder.layer.11.output.LayerNorm.weight encoder.layer.11.output.LayerNorm.bias pooler.dense.weight pooler.dense.bias ``` This is my code load model from NVIDIA/DeepLearningExamples/ to huggingface/transformers ``` configuration = BertConfig.from_json_file(config_path) tokenizer = BertTokenizer.from_pretrained(vocab_path) model = BertModel(configuration) state_dict = {k.replace('bert.','').replace('.dense_act','.dense'):v for k,v in torch.load(os.path.join(pytorch_pretrained_model_path, 'pytorch_model.bin'))['model'].items()} model.load_state_dict(state_dict, strict= False) #model = model.from_pretrained(pytorch_pretrained_model_path, state_dict= state_dict) ``` > Is it the right way to solve this problem? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,133
closed
BART: move boilerplate code inside encoder/decoder
# 🚀 Feature request Could we move the boilerplate code in the `BartModel` `forward()` method inside the `forward()` methods for the encoder and decoder? That way, the encoder and decoder can be called separately as independent modules more easily. ## Motivation Currently, there's some cleanup work that is done before calling `BartEncoder.forward()` and `BartDecoder.forward()` (looks like attention mask flipping and preparing dummy outputs). If we want to call the encoder and decoder separately from our own code (eg to use Bart as an encoder, or limit which parts are fine tuned) we currently have to re-implement this code. If we move this logic inside the encoder and decoder, such that `BartModel.forward()` is a thin wrapper around the encoder+decoder, this kind of work would be much easier. Example usage: ``` model = BartModel.from_pretrained(...) inputs = tokenizer.encode('I want to classify this text.') encoding = model.encoder(inputs) predictions = my_classifier(encoding) ``` ## Your contribution I could put together a PR for this if you agree? (cc @sshleifer )
03-05-2020 11:45:20
03-05-2020 11:45:20
Sounds good to me
transformers
3,132
closed
[hf_api] Get the public list of all the models on huggingface
03-05-2020 04:52:32
03-05-2020 04:52:32
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3132?src=pr&el=h1) Report > Merging [#3132](https://codecov.io/gh/huggingface/transformers/pull/3132?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ff9e79ba3a3dd35c1a7edbd669cf78e082b2f7dc?src=pr&el=desc) will **decrease** coverage by `0.03%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3132/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3132?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3132 +/- ## ========================================== - Coverage 78% 77.97% -0.04% ========================================== Files 98 98 Lines 16561 16581 +20 ========================================== + Hits 12919 12929 +10 - Misses 3642 3652 +10 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3132?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/hf\_api.py](https://codecov.io/gh/huggingface/transformers/pull/3132/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcGkucHk=) | `98% <100%> (+0.5%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3132/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.29% <0%> (-2.34%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3132/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.45% <0%> (+0.15%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3132?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3132?src=pr&el=footer). Last update [ff9e79b...3f067f4](https://codecov.io/gh/huggingface/transformers/pull/3132?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,131
closed
Converting tf weights: AttributeError: 'GPT2Model' object has no attribute 'zeLoss'
# 🐛 Bug ## Information Model I am using: gpt-2 I am trying to convert a fine-tuned gpt-2 model on TensorFlow to a PyTorch state_dict. I used the nice script [here](https://github.com/huggingface/transformers/blob/ce50305e5b8c8748b81b0c8f5539a337b6a995b9/src/transformers/convert_gpt2_original_tf_checkpoint_to_pytorch.py). ## To reproduce Steps to reproduce the behavior: 1. I run the following command: ``` python convert_gpt2_original_tf_checkpoint_to_pytorch.py --gpt2_checkpoint_path /home/finetune/model_best.ckpt --pytorch_dump_folder_path /home/gpt2-fine-tf2torch/ ``` after running this command, I see the logs showing that the tf weights are being loaded, but suddenly I got the following error: ``` INFO:transformers.modeling_gpt2:Loading TF weight transformer_decoder/layer_9/self_attention/multihead_attention/value/bias with shape [1024] INFO:transformers.modeling_gpt2:Loading TF weight transformer_decoder/layer_9/self_attention/multihead_attention/value/kernel with shape [1024, 1024] INFO:transformers.modeling_gpt2:Loading TF weight word_embedder/w with shape [50257, 1024] INFO:transformers.modeling_gpt2:Loading TF weight word_embedder_1/w with shape [50257, 1024] Traceback (most recent call last): File "convert_gpt2_original_tf_checkpoint_to_pytorch.py", line 67, in <module> convert_gpt2_checkpoint_to_pytorch(args.gpt2_checkpoint_path, args.gpt2_config_file, args.pytorch_dump_folder_path) File "convert_gpt2_original_tf_checkpoint_to_pytorch.py", line 38, in convert_gpt2_checkpoint_to_pytorch load_tf_weights_in_gpt2(model, config, gpt2_checkpoint_path) File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 84, in load_tf_weights_in_gpt2 pointer = getattr(pointer, scope_names[0]) File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/nn/modules/module.py", line 585, in __getattr__ type(self).__name__, name)) AttributeError: 'GPT2Model' object has no attribute 'zeLoss' ``` I tried to load the tf checkpoint and inspect the variables; there was no 'zeLoss', only 'OptimizeLoss', from which I guess the script mistakenly matched only the 'zeLoss' part: ``` >>> import tensorflow as tf >>> import os >>> tf_path = os.path.abspath('model_best.ckpt') >>> tf_vars = tf.train.list_variables(tf_path) ``` here is part of the ```tf_vars```: ``` [('OptimizeLoss/beta1_power', []), ('OptimizeLoss/beta2_power', []), ('OptimizeLoss/position_embedder/w/Adam', [1024, 1024]), ('OptimizeLoss/position_embedder/w/Adam_1', [1024, 1024]), ('OptimizeLoss/transformer_decoder/beta/Adam', [1024]), ('OptimizeLoss/transformer_decoder/beta/Adam_1', [1024]), ('OptimizeLoss/transformer_decoder/gamma/Adam', [1024]), ('OptimizeLoss/transformer_decoder/gamma/Adam_1', [1024]), ('OptimizeLoss/transformer_decoder/layer_0/beta/Adam', [1024]), ('OptimizeLoss/transformer_decoder/layer_0/beta/Adam_1', [1024]), ('OptimizeLoss/transformer_decoder/layer_0/ffn/conv1/bias/Adam', [4096]), ('OptimizeLoss/transformer_decoder/layer_0/ffn/conv1/bias/Adam_1', [4096]), ('OptimizeLoss/transformer_decoder/layer_0/ffn/conv1/kernel/Adam', [1024, 4096]), ``` I would appreciate it if you could help me fix this. Thanks
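The immediate crash comes from the loader tripping over the `OptimizeLoss/*` optimizer slots: `load_tf_weights_in_gpt2` strips the first six characters of every variable name (it expects a `model/` prefix), so `OptimizeLoss/...` turns into `zeLoss/...`. As a rough, untested sketch of a possible workaround, one could re-save the checkpoint without the optimizer variables before running the conversion script. Note this assumes the remaining variable names actually match what the loader expects, which is not obvious for a checkpoint produced outside the original GPT-2 code (`transformer_decoder/...` vs. `model/h.../...`):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # sketch targets the TF1-style checkpoint API

src_ckpt = "model_best.ckpt"
dst_ckpt = "model_best_stripped.ckpt"

# keep only the model weights; drop every OptimizeLoss/* slot
names = [name for name, _ in tf.train.list_variables(src_ckpt)
         if not name.startswith("OptimizeLoss")]

with tf.compat.v1.Session() as sess:
    new_vars = [tf.Variable(tf.train.load_variable(src_ckpt, name), name=name)
                for name in names]
    sess.run(tf.compat.v1.global_variables_initializer())
    tf.compat.v1.train.Saver(new_vars).save(sess, dst_ckpt)
```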
03-05-2020 02:29:25
03-05-2020 02:29:25
@LysandreJik Any update on this? Thanks<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,130
closed
Performance Issue about pretrained bert migration from tensorflow to pytorch
# ❓ Questions & Help Hi, I have a question about migrating from TensorFlow to PyTorch. If I convert a BERT model pretrained with Google's TensorFlow 1.x code to a PyTorch transformers BERT through transformers-cli, performance drops a lot. I wonder why. Has anyone seen a similar case?
03-05-2020 01:50:35
03-05-2020 01:50:35
Are you using the latest version? `transformers` and not `pytorch-transformers`. It works with both TF2 and PT. You can test both and compare.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,129
closed
Load pretrained roberta model from fairseq?
# ❓ Questions & Help Would it be possible to load a roberta-base model that I pretrained myself using fairseq, following the instructions in https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.pretraining.md? Since the current transformers library is not suitable for pretraining from scratch, I think it would be nice to be able to load a model pretrained with fairseq. I think it might be possible, but I am not sure how the current transformers RoBERTa pretrained model is translated/loaded. Thanks!
03-05-2020 01:19:07
03-05-2020 01:19:07
Try https://github.com/huggingface/transformers/blob/master/src/transformers/convert_roberta_original_pytorch_checkpoint_to_pytorch.py You might have to make some modifications, I have never tried this. Good luck! <|||||>huggingface/transformers can be used to train from scratch. See [how to train a LM from scratch](https://huggingface.co/blog/how-to-train).<|||||>Feel free to open another issue if more specific <|||||>related official blog: "Porting fairseq wmt19 translation system to transformers" by @stas00 https://huggingface.co/blog/porting-fsmt Might be able to convert fairseq language models following similar steps.<|||||>> Try https://github.com/huggingface/transformers/blob/master/src/transformers/convert_roberta_original_pytorch_checkpoint_to_pytorch.py > > You might have to make some modifications, I have never tried this. Good luck! Hey :) I'm interested in this but the link doesn't seem to work.<|||||>when pasting links to a repo one needs to hit `y` to get a fixed link to the current revision which would never break (as long as the repo is in place), here you go: https://github.com/huggingface/transformers/blob/7c6d63298f27b4a386f4603262a4603a2a6bf057/src/transformers/models/roberta/convert_roberta_original_pytorch_checkpoint_to_pytorch.py
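If the conversion succeeds, the output folder can be loaded like any other local checkpoint. A small sketch, where `./converted-roberta` is a hypothetical dump directory and the tokenizer (the vocab/merges used for fairseq pretraining) still has to be set up separately:

```python
from transformers import RobertaModel

# "./converted-roberta" is assumed to contain the pytorch_model.bin + config.json
# written by the conversion script
model = RobertaModel.from_pretrained("./converted-roberta")
```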
transformers
3,128
closed
Add Summarization to Pipelines
Choices: 1) This is not TextGenerationPipeline, so it only supports bart-large-cnn. 2) It doesn't return the input back to the caller because it is annoyingly long.
03-04-2020 23:21:23
03-04-2020 23:21:23
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3128?src=pr&el=h1) Report > Merging [#3128](https://codecov.io/gh/huggingface/transformers/pull/3128?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3814e167d99c4b2e135b250d73deaa3f63ebef0c&el=desc) will **increase** coverage by `0.05%`. > The diff coverage is `95.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3128/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3128?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3128 +/- ## ========================================== + Coverage 78.02% 78.08% +0.05% ========================================== Files 98 98 Lines 16670 16689 +19 ========================================== + Hits 13007 13031 +24 + Misses 3663 3658 -5 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3128?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/3128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.91% <ø> (ø)` | | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/3128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `72.53% <95.00%> (+1.57%)` | :arrow_up: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.99% <0.00%> (+0.14%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.37% <0.00%> (+0.17%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.14% <0.00%> (+0.27%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3128?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3128?src=pr&el=footer). Last update [3814e16...a123599](https://codecov.io/gh/huggingface/transformers/pull/3128?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I addressed all comments, and am ready for review @julien-c @thomwolf. <|||||>Is this pipeline ready to go? When I tried to run an example it said that the summarization pipeline is not one of the options.<|||||>Hey, @Weilin37 . Could you send a snippet of code so that I can reproduce your error? Thanks!<|||||>@Weilin37 are you running from master?<|||||>> @Weilin37 are you running from master? Hi, yes it is resolved now. I thought I upgraded but it didn't
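For anyone landing here later, a minimal usage sketch (the default checkpoint and the exact set of forwarded `generate()` arguments may differ between versions):

```python
from transformers import pipeline

summarizer = pipeline("summarization")  # backed by a BART CNN/DailyMail checkpoint

article = "My friends are cool but they eat too many carbs. " * 20
# extra keyword arguments are forwarded to generate(), e.g. max_length
print(summarizer(article, max_length=60))
```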
transformers
3,127
closed
Create README.md
03-04-2020 17:13:01
03-04-2020 17:13:01
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3127?src=pr&el=h1) Report > Merging [#3127](https://codecov.io/gh/huggingface/transformers/pull/3127?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ec60e0ae7a88e46ac2bfbf6234d14381a01be06a?src=pr&el=desc) will **increase** coverage by `0.04%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3127/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3127?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3127 +/- ## ========================================== + Coverage 77.79% 77.84% +0.04% ========================================== Files 98 98 Lines 16422 16422 ========================================== + Hits 12776 12783 +7 + Misses 3646 3639 -7 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3127?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3127/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.45% <0%> (+0.15%)` | :arrow_up: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3127/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.72% <0%> (+0.85%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3127?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3127?src=pr&el=footer). Last update [ec60e0a...5e9f364](https://codecov.io/gh/huggingface/transformers/pull/3127?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,126
closed
BartTokenizer and 'bart-large-cnn' out of sync
tok.encode('<mask>') -> 52064, but BartForMaskedLM.from_pretrained('bart-large-cnn') does not support a mask token.
03-04-2020 16:46:26
03-04-2020 16:46:26
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,125
closed
NER tutorial: run_tf_ner.py reports an entity number not matching the one in the .txt files
Hi everyone, I'm following this tutorial https://github.com/huggingface/transformers/tree/master/examples/ner . The generated results are the same, but I noticed that the reported entity numbers ("support" column) do not match the B-entity counts I get when counting through Notepad++ in the *.txt files used as source for training, validation or test. What is my mistake? Thank you. UPDATE: run_ner.py works correctly. I suppose there is a bug in run_tf_ner.py. I also tag @stefan-it, who is the only one mentioned in the tutorial. Thank you Stefan.
03-04-2020 16:40:19
03-04-2020 16:40:19
I'm currently not able to run the TF ner training script - I'm using TF in version 2.0.0b1 and the following error message is thrown: ```bash ... File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops_v2.py", line 769, in _assert_float_dtype raise ValueError("Expected floating point type, got %s." % dtype) ValueError: Expected floating point type, got <dtype: 'int32'>. ``` 🤔<|||||>I met the same problem. The script "run_tf_ner.py" cannot reproduce the same report results; have you solved it?<|||||>I have solved this issue. The cause was that I evaluated the NER model with train.txt, and when the mode is "train" the dataset is repeated and shuffled, so the support column of the metrics report does not match the counts in the file. When I copy train.txt as dev.txt and change the mode from "train" to "dev" during evaluation in load_and_cache_examples, the dataset is not repeated or shuffled and the report is reproducible.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,124
closed
[Broken Proposal] CircleCI runs tests with torch=1.0.0
Goal: maintain backwards compatibility!
03-04-2020 16:37:23
03-04-2020 16:37:23
need to avoid the torchscript tests<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,123
closed
fix sklearn release circle ci [temporary]
The new sklearn release (https://github.com/scikit-learn/scikit-learn/releases) seems to be broken or leads to errors for PRs since this morning - go back to the previous version for now to avoid CircleCI errors
03-04-2020 16:09:26
03-04-2020 16:09:26
@julien-c this should fix the sklearn problem for the moment<|||||>👍 <|||||>reverted on master as they pushed a fixed release right after that one.
transformers
3,122
closed
include tf gpt2 tests for attn mask and past variable
Test TF GPT2 for correct behavior regarding the past and attn mask variable. Translated code from torch to TF 2.0.
03-04-2020 13:36:04
03-04-2020 13:36:04
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3122?src=pr&el=h1) Report > Merging [#3122](https://codecov.io/gh/huggingface/transformers/pull/3122?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/34de670dbe70a9ead31d0692ad9dc726d3ea4edb?src=pr&el=desc) will **decrease** coverage by `0.04%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3122/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3122?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3122 +/- ## ========================================= - Coverage 77.84% 77.8% -0.05% ========================================= Files 98 98 Lines 16422 16422 ========================================= - Hits 12784 12777 -7 - Misses 3638 3645 +7 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3122?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.87% <0%> (-0.86%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.45% <0%> (-0.16%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3122?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3122?src=pr&el=footer). Last update [34de670...1305f35](https://codecov.io/gh/huggingface/transformers/pull/3122?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,121
closed
A better way to process extended_attention_mask in BertModel.forward()
In the `forward()` method of the `BertModel` (https://huggingface.co/transformers/_modules/transformers/modeling_bert.html#BertModel.forward), the `extended_attention_mask` is processed in the following way: > extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0 I know this is to make an additive mask so that the unmasked positions will be unchanged (by adding 0) and the masked positions will become very small (by subtracting 10000). But I think it would be better to achieve this goal in the following way: > extended_attention_mask = torch.log(extended_attention_mask)
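A quick sketch comparing the two formulations. Note that `torch.log` yields `-inf` for masked positions instead of `-10000`, which behaves differently under fp16 and when an entire row is masked:

```python
import torch

attention_mask = torch.tensor([[1.0, 1.0, 1.0, 0.0, 0.0]])

# current approach: additive mask, 0 for visible tokens, -10000 for padding
additive_mask = (1.0 - attention_mask) * -10000.0

# proposed approach: log maps 1 -> 0 and 0 -> -inf
log_mask = torch.log(attention_mask)

print(additive_mask)  # tensor([[    -0.,     -0.,     -0., -10000., -10000.]])
print(log_mask)       # tensor([[0., 0., 0., -inf, -inf]])
```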
03-04-2020 12:04:04
03-04-2020 12:04:04
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,120
closed
Making past and mems variables have batch size as their first output dimension.
# 🚀 Feature request At the moment, the variables **past** / **mems** have the shapes `(2, batch_size, num_heads, sequence_length, embed_size_per_head)` and `(mem_len, batch_size, embed_size)`, respectively, meaning that the `batch_size` dim is not in the first position. Change the structure of these variables so that `batch_size` is the first dimension. ## Motivation This can be confusing, as all other variables have the `batch_size` dim in the first position. Being certain that the first dimension is always the `batch_size` would be very helpful for the user. Normally the `mems` and `past` variables are just used to speed up decoding and are rarely changed (or even looked at) by the user, I think, but consistency would still be good. ## Your contribution Changing this for GPT2/CTRL is very straightforward (changing three lines of code), but for XLNet and Transfo-XL it would probably take a slightly bigger change.
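For reference, a hypothetical helper (not part of the library) that rearranges a GPT-2/CTRL-style `past` tuple so that `batch_size` comes first, roughly the layout this request asks the models to return directly:

```python
import torch


def past_batch_first(past):
    """Hypothetical: (2, batch, heads, seq, head_dim) -> (batch, 2, heads, seq, head_dim)."""
    return tuple(layer_past.permute(1, 0, 2, 3, 4) for layer_past in past)


# smoke test with a fake one-layer past, batch_size=3
fake_past = (torch.zeros(2, 3, 12, 5, 64),)
print(past_batch_first(fake_past)[0].shape)  # torch.Size([3, 2, 12, 5, 64])
```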
03-04-2020 11:12:26
03-04-2020 11:12:26
Up for discussion @LysandreJik @thomwolf @julien-c <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,119
closed
rename variables named 'word' to 'token' in generate fn
Rename `word` to `token` in generate() function.
03-04-2020 09:43:44
03-04-2020 09:43:44
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3119?src=pr&el=h1) Report > Merging [#3119](https://codecov.io/gh/huggingface/transformers/pull/3119?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/34de670dbe70a9ead31d0692ad9dc726d3ea4edb?src=pr&el=desc) will **decrease** coverage by `1.01%`. > The diff coverage is `90%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3119/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3119?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3119 +/- ## ========================================== - Coverage 77.84% 76.83% -1.02% ========================================== Files 98 98 Lines 16422 16422 ========================================== - Hits 12784 12618 -166 - Misses 3638 3804 +166 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3119?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.29% <90%> (-0.32%)` | :arrow_down: | | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96% <0%> (-2.23%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.86% <0%> (+0.3%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3119?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3119?src=pr&el=footer). Last update [34de670...2caa33f](https://codecov.io/gh/huggingface/transformers/pull/3119?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Good to merge for me
transformers
3,118
closed
Add beam search to generation tf 2 0
Add beam search to the TF generate function as it is done for torch at the moment. Use the same TF syntax that was used in PR #3063. EDIT: Also included a quick fix so that `TF GPT2 past.shape == PT GPT2 past.shape`
03-04-2020 09:42:39
03-04-2020 09:42:39
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3118?src=pr&el=h1) Report > Merging [#3118](https://codecov.io/gh/huggingface/transformers/pull/3118?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/34de670dbe70a9ead31d0692ad9dc726d3ea4edb?src=pr&el=desc) will **increase** coverage by `0.1%`. > The diff coverage is `84.51%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3118/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3118?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3118 +/- ## ========================================= + Coverage 77.84% 77.95% +0.1% ========================================= Files 98 98 Lines 16422 16561 +139 ========================================= + Hits 12784 12910 +126 - Misses 3638 3651 +13 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3118?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3118/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `96.14% <100%> (ø)` | :arrow_up: | | [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3118/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `99.57% <100%> (ø)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3118/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.29% <84.1%> (-0.28%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3118?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3118?src=pr&el=footer). Last update [34de670...7a89a3e](https://codecov.io/gh/huggingface/transformers/pull/3118?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Good to merge for me! @LysandreJik @thomwolf <|||||>This is really cool, thanks a lot @patrickvonplaten Cc @minimaxir
transformers
3,117
closed
BART FP16
# 🚀 Feature request I would like to use BART in FP16 mode, but it seems impossible for now: ``` from transformers import AutoModelWithLMHead, AutoTokenizer, BartConfig config = BartConfig(vocab_size=50264, output_past=True) model = AutoModelWithLMHead.from_pretrained('bart-large-cnn', config=config).cuda().half() tokenizer = AutoTokenizer.from_pretrained('bart-large-cnn') ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs." inputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt') generated_ids = model.generate(inputs['input_ids'].cuda(), attention_mask=inputs['attention_mask'].cuda(), num_beams=4, max_length=5) ``` > File "/data/user/.venv/bartqg/lib/python3.6/site-packages/transformers/modeling_bart.py", line 647, in forward attn_output = torch.bmm(attn_probs, v) RuntimeError: Expected object of scalar type Float but got scalar type Half for argument #2 'mat2' in call to _th_bmm @sshleifer Do you plan to implement an FP16-friendly version of BART?
03-04-2020 02:52:27
03-04-2020 02:52:27
Not on my roadmap just yet, but I would definitely consider it if there were lots of demand. Since we only have inference code right now, the benefit seems marginal. <|||||>@BramVanroy Should this issue be closed ? FP16 is not implemented yet. And the `wontfix` label is clear. Keeping the issue open may make it easier for people to find it and show their potential interest in FP16.<|||||>This should not be closed indeed. @sshleifer, we intend all the models to be compatible with FP16, this is the direction the field is going and with the Volta-level GPU being widespread now, there is less and less reason not to use mixed-precision fine-tuning (half memory and significantly faster).<|||||>This can probably be fixed by changing the `torch.float32` casting [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bart.py#L643) to a cast to the type of `attn_weights` like it's done in the original fairseq code [here](https://github.com/pytorch/fairseq/blob/cb2dc414c692d7de283bec4e4f9c923a66205792/fairseq/modules/multihead_attention.py#L335). Do you mind fixing this and testing the failing script posted in the issue @sshleifer?<|||||>Yep, on it!<|||||>Hi, @sshleifer. Thank you so much for your effort on BART. I encountered the same fp16 issues today. The current BART code can be trained (without fp16) using the run_glue script in: https://github.com/huggingface/transformers/blob/master/examples/run_glue.py So, it will be really nice if the fp16 training can also work out.<|||||>My bad, I thought @sshleifer's labeling was a note that he isn't planning to change anything `wontfix`, so no future updates would be possible and then I closed it. Will keep that in mind for the future.<|||||>No bad @sshleifer for the moment, please ping me with DM before adding "wontfix" labels to issues, thanks.
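For reference, the fairseq-style fix mentioned above boils down to doing the softmax in float32 but casting the result back to the weights' dtype instead of hard-coding float32. A rough sketch of the pattern, not the actual transformers code:

```python
import torch
import torch.nn.functional as F


def fp16_safe_attn_probs(attn_weights: torch.Tensor) -> torch.Tensor:
    # softmax in float32 for numerical stability, then cast back so that the
    # following torch.bmm(attn_probs, v) sees matching (half) dtypes
    return F.softmax(attn_weights, dim=-1, dtype=torch.float32).type_as(attn_weights)
```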
transformers
3,116
closed
Skipping outputs
Currently, `encode_plus` and `batch_encode_plus` return the same outputs for different models. This is sub-optimal as we can't do the following for each model: ```py inputs = tokenizer.encode_plus(sequence, return_tensors="pt") model(**inputs) ``` This will crash for DistilBERT as the tokenizer would return `token_type_ids` which can't be handled by the model. In order to fix this, each tokenizer has to return model-specific arguments. Usually there are the same default arguments, and some models handle less (e.g. DistilBERT, RoBERTa). This is a mock PR offering a solution using a ~`skip_outputs`~ `return_outputs` argument to tokenizers. ```py from transformers import DistilBertTokenizer tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-cased") print(tokenizer.encode_plus("Hey, how are you?")) ``` Returns a dictionary without the token type ids: ```py {'input_ids': [101, 4403, 117, 1293, 1132, 1128, 136, 102], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]} ``` Specifying a custom ~`skip_outputs`~ `return_outputs` at initialisation works as expected: ```py from transformers import DistilBertTokenizer tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-cased", return_outputs=["attention_mask", "token_type_ids"]) print(tokenizer.encode_plus("Hey, how are you?")) ``` ```py {'input_ids': [101, 4403, 117, 1293, 1132, 1128, 136, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]} ``` or with a custom ~skipped~ output: ```py from transformers import DistilBertTokenizer tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-cased", return_outputs=["token_type_ids"]) print(tokenizer.encode_plus("Hey, how are you?")) ``` ```py {'input_ids': [101, 4403, 117, 1293, 1132, 1128, 136, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0]} ``` This also works with saving/reloading: ```py from transformers import DistilBertTokenizer tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-cased", return_outputs=["token_type_ids"]) print(tokenizer.encode_plus("Hey, how are you?")) tokenizer.save_pretrained("xxx") tokenizer = DistilBertTokenizer.from_pretrained("xxx") print(tokenizer.encode_plus("Hey, how are you?")) ``` Returns the following: ```py {'input_ids': [101, 4403, 117, 1293, 1132, 1128, 136, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0]} {'input_ids': [101, 4403, 117, 1293, 1132, 1128, 136, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0]} ```
03-03-2020 23:27:53
03-03-2020 23:27:53
Nice. One question: do we want to have a `skip_output` flag or to have a `keep_output` flag. `skip_output` seems to me as introducing a dependency to be maintained between all the models (if we add a model with additional output that are processed by encode_plus later, we would have to update all the models to avoid this output) `keep_output` is longer to write right now (we have to add it for all the models) but once it's added, all the models are independent from each others.<|||||>I'm ok with both solutions (by the way, in general terms, a lot of software can accept a combination of whitelist and/or blacklist. When both are present, it's usually "include the whitelist, and remove the blacklist") If we do `keep_output`, maybe we name the attribute `return_outputs: List[str]` for consistency with `encode_xxx()` params?<|||||>I agree with both of you. Furthermore, this approach (deleting from the dict `encode_plus` generated) is not compatible with the `return_xxx` in the `encode_plus` arguments. I'm implementing both your proposed changes, looking into fixing the above and into fast tokenizers. I'll then move on to the tests. - [x] replace the blacklist by a whitelist - [x] rename to `return_outputs` for consistency with `encode_plus arguments` - [x] compatibility with all of `encode_plus`'s arguments - [x] fast tokenizers - [x] tests<|||||>I like the solution, 👍 . One question: It requires the user to know / look at the names of the parameters handled by `__call__()` / `forward()`, should we expose a property on PreTrainedModel to give the list of parameter supported by the model ? This one will be overrided in Roberta and Distil. ```python model = SomeModel(...) tokenizer = AutoTokenizer.from_pretrained(..., return_outputs=model.input_names) ``` <|||||>Indeed, such an attribute would be helpful! I'll add it and move on to the tests.<|||||>Regarding the suggestion of @mfuntowicz, in the end, this should be in a common configuration for model and tokenizers I guess, so maybe we could actually have this attribute as input to `PretrainedTokenizer.__init__()` already (instead of class attribute) to prepare for the future.<|||||>That value is currently managed by the `__init__` method, see the examples above It still needs to be a class attribute in my opinion, as it should be overridden by children of `PreTrainedTokenizer` and it should be known by `encode_plus`/`encode`/`batch_encode_plus`.<|||||>Should be good for review. I reverted the `docs` commit because it made the review harder. I'll recommit the docs at merge time.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3116?src=pr&el=h1) Report > Merging [#3116](https://codecov.io/gh/huggingface/transformers/pull/3116?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/49debe62fdc96e161f866dd8914d5915477bb742?src=pr&el=desc) will **increase** coverage by `<.01%`. > The diff coverage is `100%`. 
[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3116/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3116?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3116 +/- ## ========================================== + Coverage 77.98% 77.99% +<.01% ========================================== Files 98 98 Lines 16645 16660 +15 ========================================== + Hits 12981 12994 +13 - Misses 3664 3666 +2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3116?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3116/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.85% <100%> (+0.12%)` | :arrow_up: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3116/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `100% <100%> (ø)` | :arrow_up: | | [src/transformers/tokenization\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/3116/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZGlzdGlsYmVydC5weQ==) | `100% <100%> (ø)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3116/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68% <0%> (-0.41%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3116/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.4% <0%> (-0.16%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3116?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3116?src=pr&el=footer). Last update [49debe6...96b2fa1](https://codecov.io/gh/huggingface/transformers/pull/3116?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Merged after offline review from @thomwolf and @julien-c
transformers
3,115
closed
fix: passing config as Layer trainable param
Lurking bugs discovered while working on other stuff.
03-03-2020 23:08:24
03-03-2020 23:08:24
That's great, thanks a lot @gthb
transformers
3,114
closed
Rename BartForMaskedLM -> BartForConditionalGeneration
03-03-2020 22:42:13
03-03-2020 22:42:13
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3114?src=pr&el=h1) Report > Merging [#3114](https://codecov.io/gh/huggingface/transformers/pull/3114?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f631e01d2c78614416655a85955f326636f69825?src=pr&el=desc) will **decrease** coverage by `1%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3114/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3114?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3114 +/- ## ========================================== - Coverage 77.82% 76.82% -1.01% ========================================== Files 98 98 Lines 16422 16425 +3 ========================================== - Hits 12781 12618 -163 - Misses 3641 3807 +166 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3114?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/3114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.91% <100%> (ø)` | :arrow_up: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.09% <100%> (+0.03%)` | :arrow_up: | | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96% <0%> (-2.23%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.56% <0%> (-0.31%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3114?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3114?src=pr&el=footer). Last update [f631e01...b3e0a1c](https://codecov.io/gh/huggingface/transformers/pull/3114?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Docs: ![image](https://user-images.githubusercontent.com/6045025/75921740-a3a20680-5e2f-11ea-983c-af02dbe1f84f.png)
transformers
3,113
closed
model cards for both aubmindlab/bert-base-arabert models
03-03-2020 22:34:12
03-03-2020 22:34:12
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3113?src=pr&el=h1) Report > Merging [#3113](https://codecov.io/gh/huggingface/transformers/pull/3113?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e9e6efdc452b74947d40a5a2e8af2fc444c63b5b?src=pr&el=desc) will **decrease** coverage by `0.51%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3113/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3113?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3113 +/- ## ========================================== - Coverage 78.35% 77.84% -0.52% ========================================== Files 98 98 Lines 16422 16422 ========================================== - Hits 12868 12784 -84 - Misses 3554 3638 +84 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3113?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3113/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0%> (-27.6%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3113/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.61% <0%> (+0.15%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3113?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3113?src=pr&el=footer). Last update [e9e6efd...07a4c8b](https://codecov.io/gh/huggingface/transformers/pull/3113?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,112
closed
Adds failing tests for the fast tokenizers
This ports over some of the tests that started failing on the AllenNLP side when the new fast tokenizers came out. Note: these tests are failing right now. They will need updates to the fast tokenizers before this can be merged. Maybe it would be better to merge this branch into the branch where the tokenizers are being fixed?
03-03-2020 21:43:06
03-03-2020 21:43:06
I can't assign reviewers, but you asked me in #3058 to ping @LysandreJik and @mfuntowicz when I do this.<|||||>I added a test for the issue from #3088.<|||||>Added another test for #3091.<|||||>@dirkgr thanks for taking the time to include your tests into ours. It will definitively help making sure everything is working as expected on your side 👍 <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This can be closed now, right?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,111
closed
Create README.md
- Thumbnail is not set!
03-03-2020 21:38:37
03-03-2020 21:38:37
You have to add the thumbnail<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3111?src=pr&el=h1) Report > Merging [#3111](https://codecov.io/gh/huggingface/transformers/pull/3111?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e9e6efdc452b74947d40a5a2e8af2fc444c63b5b?src=pr&el=desc) will **decrease** coverage by `0.52%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3111/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3111?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3111 +/- ## ========================================== - Coverage 78.35% 77.83% -0.53% ========================================== Files 98 98 Lines 16422 16422 ========================================== - Hits 12868 12782 -86 - Misses 3554 3640 +86 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3111?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0%> (-27.6%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.29% <0%> (-0.16%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3111?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3111?src=pr&el=footer). Last update [e9e6efd...3060b8c](https://codecov.io/gh/huggingface/transformers/pull/3111?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hi, Julien. Can you add the default thumbnail you usually add to models? (name of the model + Huggingface logo)
transformers
3,110
closed
BartForSequenceClassification: fix num_labels, add test
03-03-2020 20:24:41
03-03-2020 20:24:41
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3110?src=pr&el=h1) Report > Merging [#3110](https://codecov.io/gh/huggingface/transformers/pull/3110?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5c5af879b6d45c879c987154f66d4ea978925fb2?src=pr&el=desc) will **increase** coverage by `<.01%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3110/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3110?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3110 +/- ## ========================================== + Coverage 77.83% 77.84% +<.01% ========================================== Files 98 98 Lines 16422 16422 ========================================== + Hits 12782 12783 +1 + Misses 3640 3639 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3110?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.38% <100%> (+0.33%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.45% <0%> (-0.16%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3110?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3110?src=pr&el=footer). Last update [5c5af87...0893268](https://codecov.io/gh/huggingface/transformers/pull/3110?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,109
closed
[WIP] Add tests that ensure that copied functions remain in sync
Adding some tests that use ast to check that functions that were originally copied stay in sync.
03-03-2020 19:57:01
03-03-2020 19:57:01
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,108
closed
BART: <mask> token ID is outside vocab bounds
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): BART Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ``` from transformers import BartForMaskedLM, BartTokenizer from transformers.configuration_bart import BartConfig config = BartConfig(vocab_size=50264, output_past=True) model = AutoModelWithLMHead.from_pretrained('bart-large-cnn', config=config) tokenizer = AutoTokenizer.from_pretrained('bart-large-cnn') ARTICLE_TO_SUMMARIZE = "My friends are <mask> but they eat too many carbs." inputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt') generated_ids = model.generate(inputs['input_ids'], attention_mask=inputs['attention_mask'], num_return_sequences=4) print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in generated_ids]) ``` ## Expected behavior I'd expect some sort of infilling to occur, but instead I see the error: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-13-bad65359ada6> in <module> 10 inputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt') 11 ---> 12 generated_ids = model.generate(inputs['input_ids'], attention_mask=inputs['attention_mask'], num_return_sequences=4) 13 print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in generated_ids]) ~/.local/lib/python3.6/site-packages/torch/autograd/grad_mode.py in decorate_no_grad(*args, **kwargs) 47 def decorate_no_grad(*args, **kwargs): 48 with self: ---> 49 return func(*args, **kwargs) 50 return decorate_no_grad 51 ~/.local/lib/python3.6/site-packages/transformers/modeling_bart.py in generate(self, input_ids, attention_mask, max_length, num_beams, repetition_penalty, length_penalty, num_return_sequences, min_len, no_repeat_ngram_size) 1106 input_ids, decoder_cache, decoder_input_ids, attention_mask, 1107 ) -> 1108 outputs = self(**model_inputs) 1109 lprobs = F.log_softmax(outputs[0][:, -1, :], dim=-1) 1110 ~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) ~/.local/lib/python3.6/site-packages/transformers/modeling_bart.py in forward(self, input_ids, attention_mask, encoder_outputs, decoder_input_ids, decoder_attention_mask, decoder_cached_states, lm_labels, **unused) 932 encoder_outputs=encoder_outputs, 933 decoder_attention_mask=decoder_attention_mask, --> 934 decoder_cached_states=decoder_cached_states, 935 ) 936 lm_logits = self.lm_head.forward(outputs[0]) ~/.local/lib/python3.6/site-packages/transformers/modeling_bart.py in forward(self, input_ids, attention_mask, decoder_input_ids, encoder_outputs, decoder_attention_mask, decoder_cached_states) 837 assert decoder_input_ids is not None 838 if encoder_outputs is None: --> 839 encoder_outputs = self.encoder.forward(input_ids=input_ids, attention_mask=attention_mask) 840 assert isinstance(encoder_outputs, tuple) 841 # decoder outputs 
consists of (dec_features, layer_state, dec_hidden, dec_attn) ~/.local/lib/python3.6/site-packages/transformers/modeling_bart.py in forward(self, input_ids, attention_mask) 272 During training might not be of length n_layers because of layer dropout. 273 """ --> 274 inputs_embeds = self.embed_tokens(input_ids) 275 embed_pos = self.embed_positions(input_ids) 276 x = inputs_embeds + embed_pos ~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) ~/.local/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input) 112 return F.embedding( 113 input, self.weight, self.padding_idx, self.max_norm, --> 114 self.norm_type, self.scale_grad_by_freq, self.sparse) 115 116 def extra_repr(self): ~/.local/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1482 # remove once script supports set_grad_enabled 1483 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 1484 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1485 1486 RuntimeError: index out of range: Tried to access index 50264 out of table with 50263 rows. at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418 ``` Looks to me like the `<mask>` token ID (50264) is out of bounds? ## Environment info - `transformers` version: a088d75e510d5641808ccd72f5dca4df36d95b8e - Platform: Ubuntu 18.04 - Python version: 3.6.9 - PyTorch version (GPU?): 1.3.1 (Y) - Tensorflow version (GPU?): N/A - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
03-03-2020 17:02:14
03-03-2020 17:02:14
if you install from master it seems to work on 'bart-large'. Seems like it's only an issue on 'bart-large-cnn' ``` tokenizer = BartTokenizer.from_pretrained('bart-large') model = BartForMaskedLM.from_pretrained('bart-large',output_past=True) ARTICLE_TO_SUMMARIZE = "My friends are <mask> but they eat too many carbs." inputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt') generated_ids = model.generate(inputs['input_ids'], attention_mask=inputs['attention_mask'], num_return_sequences=4) print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in generated_ids]) ``` output: ``` ['My kids are good, but they eat too many carbs. My friends are good.', 'My kids are good, but they eat too many carbs. My friends are good.', 'My kids are good, but they eat too many carbs. My friends are good.', 'My kids are good, but they eat too many carbs. My friends are good.'] ```<|||||>Bart-large-cnn doesn't have a mask_token_id, which is admittedly confusing. this is how I would do mask filling ```python model = BartForMaskedLM.from_pretrained('bart-large') tokenizer = AutoTokenizer.from_pretrained('bart-large') ARTICLE_TO_SUMMARIZE = "My friends are <mask> but they eat too many carbs." inputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], return_tensors='pt') input_ids = inputs['input_ids'] #generated_ids = model(, attention_mask=inputs['attention_mask'])[0] logits = model(input_ids)[0] masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item() probs = logits[0, masked_index].softmax(dim=0) values, predictions = probs.topk(10) tokenizer.decode(predictions).split() # ['good', 'great', 'all', 'really', 'very', 'healthy', 'also', 'not', 'the', 'doing'] ``` <|||||>One liner courtesy of @julien-c ``` from transformers import pipeline nlp = pipeline('fill-mask', 'bart-large') nlp("My friends are <mask> but they eat too many carbs.") ```<|||||>Thanks @sshleifer, that will do the trick! The following does work: ``` tokenizer = AutoTokenizer.from_pretrained('bart-large-cnn') tokenizer.mask_token_id >>> 50264 ``` ...which is a bit counterintuitive as it implies that `<mask>` _is_ available. It's also not clear from the docs that `bart-large` can be used successfully with `BartForMaskedLM`.
transformers
3,107
closed
BertTokenizer.save_pretrained() ignores do_lower_case
# 🐛 Bug ## Information When saving a tokenizer with the purpose of sharing, `init` arguments are not saved to a config. ## To reproduce Steps to reproduce the behavior: Initialize a tokenizer with `do_lower_case=False`, save pretrained, initialize from pretrained. The default `do_lower_case=True` will not be overwritten and further tokenization will be incorrect. ```python3 In[1]: import transformers In[2]: tokenizer = transformers.BertTokenizer('my/model/vocab.txt', do_lower_case=False) In[3]: tokenizer.basic_tokenizer.do_lower_case Out[3]: False In[4]: tokenizer.save_pretrained('dumped/model/') Out[4]: ('dumped/model/vocab.txt', 'dumped/model/special_tokens_map.json', 'dumped/model/added_tokens.json') In[5]: tokenizer = transformers.BertTokenizer.from_pretrained('dumped/model/') In[6]: tokenizer.basic_tokenizer.do_lower_case Out[6]: True ``` ## Expected behavior ```python3 In[1]: import transformers In[2]: tokenizer = transformers.BertTokenizer('my/model/vocab.txt', do_lower_case=False) In[3]: tokenizer.basic_tokenizer.do_lower_case Out[3]: False In[4]: tokenizer.save_pretrained('dumped/model/') Out[4]: ('dumped/model/vocab.txt', 'dumped/model/special_tokens_map.json', 'dumped/model/added_tokens.json') In[5]: tokenizer = transformers.BertTokenizer.from_pretrained('dumped/model/') In[6]: tokenizer.basic_tokenizer.do_lower_case Out[6]: False ``` ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.5.0
03-03-2020 16:16:21
03-03-2020 16:16:21
When I copy/paste your code, I get this in `dumped/model/`: `vocab.txt`, `tokenizer_config.json` and `special_tokens_map.json` The tokenizer config does save the lowercase, and the output of the code is two `False`. Below the code snippet I had to actually run it: ```py import transformers import os tokenizer = transformers.BertTokenizer.from_pretrained('bert-base-cased', do_lower_case=False) print(tokenizer.basic_tokenizer.do_lower_case) os.makedirs("~/dumped/model", exist_ok=True) tokenizer.save_pretrained('~/dumped/model') tokenizer = transformers.BertTokenizer.from_pretrained('~/dumped/model') print(tokenizer.basic_tokenizer.do_lower_case) ```<|||||>Hi @LysandreJik, It does work when you initialize the model `.from_pretrained('bert-base-cased')`, because the pretrained tokenizer already has a configuration that is saved afterwards. I was talking about a case when you do not have a config and load only the local `vocab.txt` file: ```python3 import os from transformers import BertTokenizer # Download the `bert-base-cased` tokenizer and initialized from pretrained tokenizer = BertTokenizer.from_pretrained('bert-base-cased') # A configuration is already there, `do_lower_case` is `False` print(tokenizer.basic_tokenizer.do_lower_case) os.makedirs("~/dumped/model", exist_ok=True) # Save it locally tokenizer.save_pretrained('~/dumped/model') # We can see that the config file has data with open("~/dumped/model/tokenizer_config.json") as f: print(f.read()) # Initialize as if we only have a local `vocab.txt` which is my case tokenizer = BertTokenizer('~/dumped/model/vocab.txt', do_lower_case=False) print(tokenizer.basic_tokenizer.do_lower_case) tokenizer.save_pretrained('~/dumped/model') # After saving the config is empty with open("~/dumped/model/tokenizer_config.json") as f: print(f.read()) # And after initializing from pretrained `do_lower_case` is `True` tokenizer = BertTokenizer.from_pretrained('~/dumped/model') print(tokenizer.basic_tokenizer.do_lower_case) ```<|||||>Hmm, that makes sense. Indeed, that seems problematic. Thanks for opening this issue, we're looking into it!<|||||>@LysandreJik Looked a bit into it quickly, and here is the deal: `.save_pretrained` from `BertTokenizer` is inherited from `PreTrainedTokenizer`, and save config based on the `self.init_kwargs` dict. However, `do_lower_case` [is not passed to the `super().__init__()`](https://github.com/huggingface/transformers/blob/e9e6efdc452b74947d40a5a2e8af2fc444c63b5b/src/transformers/tokenization_bert.py#L177) in the Bert tokenizer. Adding it to the `kwargs` passed to `super()` should do the trick. I can send a PR if that helps! <|||||>@RaphaelMeudec, it would not help: the config is empty even though `unt_token`, `sep_token`, etc., are passed to `super().__init__()`. Also, I've tried it :slightly_smiling_face: <|||||>@yoptar Indeed! What I can't explain is that `self.init_kwargs` is initialized [here](https://github.com/huggingface/transformers/blob/e9e6efdc452b74947d40a5a2e8af2fc444c63b5b/src/transformers/tokenization_utils.py#L327) but remains untouched in the rest of the code? 
I've been able to make your code work as expected by initializing `self.init_kwargs = kwargs` (instead of `{}`) and passing `do_lower_case=do_lower_case` in BertTokenizer `super()` resulting in: ``` False {"do_lower_case": false, "max_len": 512} False {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "do_lower_case": false} False ``` @LysandreJik Do you have more insights on how `self.init_kwargs` is modified in the current code?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
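Until a fix lands, a practical workaround is to restate the flag when reloading, or to write the `tokenizer_config.json` that the save step should have produced. A minimal sketch (the `dumped/model/` paths follow the example above):

```python
import json
from transformers import BertTokenizer

# Option 1: pass the flag again when loading, so the missing config entry does not matter.
tokenizer = BertTokenizer.from_pretrained("dumped/model/", do_lower_case=False)

# Option 2: write the config file by hand before reloading.
with open("dumped/model/tokenizer_config.json", "w") as f:
    json.dump({"do_lower_case": False}, f)
tokenizer = BertTokenizer.from_pretrained("dumped/model/")
print(tokenizer.basic_tokenizer.do_lower_case)  # False
```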
transformers
3,106
closed
fix beam_search behavior when sampling
This PR aims to fix the beam search behavior when sampling for language generation. For once, when doing beam_search decoding for language generation, one would usually do greedy decoding (do_sample=False), so this case should not be used very often, but it should be logical nevertheless. It's kind of hard to see what happens when doing beam_search decoding with sampling=True, so here a quick example. Running this code: ``` from transformers import GPT2LMHeadModel, GPT2Tokenizer model = GPT2LMHeadModel.from_pretrained('gpt2') tokenizer = GPT2Tokenizer.from_pretrained('gpt2') inputs_dict = tokenizer.encode_plus('The dog', return_tensors='pt') outputs = model.generate(inputs_dict['input_ids'], num_beams=3, max_length=10) ``` and putting the following print statement: `print("Sorted hyps: {}".format([x[1] for x in sorted_hyps]))` after line: https://github.com/huggingface/transformers/blob/a088d75e510d5641808ccd72f5dca4df36d95b8e/src/transformers/modeling_utils.py#L1087 would print the following beam hypothesis before this PR: ``` # printed sorted_hyps from line: 1088 # Beam_idx: 1 - tensor([ 464, 3290, 635, 468, 257, 2041, 3895, 326, 481, 1037]) # Beam_idx: 2 - tensor([ 464, 3290, 635, 468, 257, 2041, 3895, 326, 481, 1037]) # Beam_idx: 3 - tensor([ 464, 3290, 635, 468, 257, 2041, 3895, 326, 481, 1037]) => Result: best beam hypothesis: "The dog, named T.H., was recently" ``` It can be seen that they are all equal even the last word. And they will always be equal. The reason for this is that currently we sample only word_idx in the interval [0, vocab_size] (see https://github.com/huggingface/transformers/blob/a088d75e510d5641808ccd72f5dca4df36d95b8e/src/transformers/modeling_utils.py#L975) which forces that all beam_idx computed in this line: https://github.com/huggingface/transformers/blob/a088d75e510d5641808ccd72f5dca4df36d95b8e/src/transformers/modeling_utils.py#L1023 always all equal 0. This means that we only consider the first (0-idx) beam and disregard all other beams no matter what. After this PR: we sample from `[0, num_beams * vocab_size]` (as it's done in greedy decoding so that the beam_idx can be in the range `[0, num_beams]` - as it should be). Same print statement would produce: ``` # printed sorted_hyps from line: 1088 # Beam_idx: 1 - tensor([ 464, 3290, 373, 788, 3888, 284, 257, 6716, 2156, 351]) # Beam_idx: 2 - tensor([ 464, 3290, 373, 788, 3888, 284, 257, 6716, 2156, 11]) # Beam_idx: 3 - tensor([ 464, 3290, 373, 788, 3888, 284, 257, 6716, 2156, 1566]) => Result: best beam hypothesis: "The dog was then moved to a nearby house until" ``` I discussed with @thomwolf and think this is the best solution for beam_search sampling for language generation.
03-03-2020 15:56:19
03-03-2020 15:56:19
> is do_sample=True tested anywhere? Yes, for the randomly initialized models with dummy input, and also for some integration tests.<|||||>Good to merge for me
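For readers following the diff, the key point is that sampling now happens over the flattened `num_beams * vocab_size` scores, so the sampled index encodes both which beam to continue and which token to append. A simplified, standalone sketch of that bookkeeping (illustrative only, not the actual library code):

```python
import torch

num_beams, vocab_size = 3, 50257
# hypothetical next-token scores for each beam, shape (num_beams, vocab_size)
scores = torch.randn(num_beams, vocab_size)

# sample 2 * num_beams candidates from the flattened distribution over all beams and tokens
probs = torch.softmax(scores.view(-1), dim=-1)
sampled = torch.multinomial(probs, num_samples=2 * num_beams)

beam_idx = sampled // vocab_size   # which beam each draw continues (no longer forced to 0)
token_idx = sampled % vocab_size   # which token it appends to that beam
print(beam_idx, token_idx)
```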
transformers
3,105
closed
Change back pipeline signatures
As discussed with @julien-c in the merged #3055.
03-03-2020 15:31:24
03-03-2020 15:31:24
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3105?src=pr&el=h1) Report > Merging [#3105](https://codecov.io/gh/huggingface/transformers/pull/3105?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b31f7150190cdf13950607f8ee1efe11b352c909?src=pr&el=desc) will **increase** coverage by `0.02%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3105/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3105?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3105 +/- ## ========================================== + Coverage 77.6% 77.62% +0.02% ========================================== Files 98 98 Lines 16250 16230 -20 ========================================== - Hits 12611 12599 -12 + Misses 3639 3631 -8 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3105?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/3105/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `70.95% <ø> (+0.52%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3105?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3105?src=pr&el=footer). Last update [b31f715...da52b39](https://codecov.io/gh/huggingface/transformers/pull/3105?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,104
closed
BART -- RuntimeError: expected device cuda:0 but got device cpu
# 🐛 Bug @sshleifer I'm using the BART model (bart-large), and when I try to use BartForMaskedLM I'm getting the error in the title. The reason is that _combine_masks (line 146 in modeling_bart) creates a tensor without specifying the device, so by default it ends up on the CPU. To reproduce: simply use the BartForMaskedLM model with a GPU. Can you help? Am I missing anything? Additional details: ---------------------- - `transformers` version: 2.5.1 - Python version: 3.7.4 - PyTorch version (GPU?): 1.3.0 (with GPU) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no Stack trace: File "/specific/netapp5_2/gamir/adi/git/BERTese/lama/training.py", line 151, in train_and_eval outputs = model(b_in_tensor, lm_labels=b_label_tensor) File "/specific/netapp5_2/gamir/adi/miniconda3/envs/trans_py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/specific/netapp5_2/gamir/adi/miniconda3/envs/trans_py37/lib/python3.7/site-packages/transformers/modeling_bart.py", line 925, in forward decoder_cached_states=decoder_cached_states, File "/specific/netapp5_2/gamir/adi/miniconda3/envs/trans_py37/lib/python3.7/site-packages/transformers/modeling_bart.py", line 844, in forward decoder_cached_states=decoder_cached_states, File "/specific/netapp5_2/gamir/adi/miniconda3/envs/trans_py37/lib/python3.7/site-packages/transformers/modeling_bart.py", line 499, in forward need_attn_weights=self.output_attentions, File "/specific/netapp5_2/gamir/adi/miniconda3/envs/trans_py37/lib/python3.7/site-packages/transformers/modeling_bart.py", line 372, in forward attn_mask=attention_mask, File "/specific/netapp5_2/gamir/adi/miniconda3/envs/trans_py37/lib/python3.7/site-packages/transformers/modeling_bart.py", line 629, in forward attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attn_mask RuntimeError: expected device cuda:0 but got device cpu Thanks, Adi.
03-03-2020 14:47:49
03-03-2020 14:47:49
Can you reproduce with the code on master? ``` git clone https://github.com/huggingface/transformers cd transformers pip install . ``` (duplicate of https://github.com/huggingface/transformers/issues/3079) <|||||>Closing for now, reopen if this is broken on the latest code. Otherwise, it will be in the next pip release. Thanks!
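For anyone hitting this before the next release, a quick way to confirm that a source install resolves it is a small forward pass with everything placed on the GPU explicitly. A sketch (downloads `bart-large`):

```python
import torch
from transformers import BartTokenizer, BartForMaskedLM

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = BartTokenizer.from_pretrained("bart-large")
model = BartForMaskedLM.from_pretrained("bart-large").to(device)

inputs = tokenizer.batch_encode_plus(
    ["My friends are <mask> but they eat too many carbs."], return_tensors="pt"
)
inputs = {k: v.to(device) for k, v in inputs.items()}

# On a fixed install this runs without the "expected device cuda:0 but got device cpu" error.
logits = model(inputs["input_ids"], attention_mask=inputs["attention_mask"])[0]
print(logits.shape)
```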
transformers
3,103
closed
Support keras JSON/HDF5 serialization of main layers
Fixes #3101
03-03-2020 14:14:22
03-03-2020 14:14:22
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3103?src=pr&el=h1) Report > Merging [#3103](https://codecov.io/gh/huggingface/transformers/pull/3103?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a088d75e510d5641808ccd72f5dca4df36d95b8e?src=pr&el=desc) will **increase** coverage by `0.05%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3103/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3103?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3103 +/- ## ========================================== + Coverage 77.82% 77.88% +0.05% ========================================== Files 98 98 Lines 16422 16461 +39 ========================================== + Hits 12781 12821 +40 + Misses 3641 3640 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3103?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/3103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `85.53% <100%> (+0.08%)` | :arrow_up: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `96.16% <100%> (+0.02%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.84% <100%> (ø)` | :arrow_up: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `89.03% <100%> (+0.04%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.2% <100%> (+0.63%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.07% <100%> (+0.01%)` | :arrow_up: | | [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `91.18% <100%> (+0.04%)` | :arrow_up: | | [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `99.57% <100%> (ø)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.61% <0%> (+0.15%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3103?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3103?src=pr&el=footer). Last update [a088d75...4c91a3a](https://codecov.io/gh/huggingface/transformers/pull/3103?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Crud, as the test shows, of course load doesn't work in this first-stab implementation, because the layer classes get instantiated with a `dict` instead of a `PretrainedConfig` instance. So needs more work.<|||||>With this change, seven of the 11 `TF*MainLayer` classes pass the test (saving, loading and producing same output for same input after the Keras save/load roundtrip). So I'm just not marking the other four `@keras_serializable` as I haven't gotten it working for those. Specifically: * TFT5MainLayer does not accept the same model inputs directly, as produced by `self.model_tester.prepare_config_and_inputs_for_common()`, so calling it fails * `TFXLMMainLayer`, `TFOpenAIGPTMainLayer`, and `TFDistilBertMainLayer` all fail the test (if I add the `@keras_serializable` attribute to them) in the same way: by outputting a tensor of shape `(7, 32)` after the save/load round-trip, which is just the first row of the `(13, 7, 32)` tensor that's output before. I haven't figured out the cause of this.<|||||>Ok this looks good to me. Do you want to make `make style` and `make quality` to pass the code quality checks (see our [contributor guide](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) if needed), and I guess we can merge it.<|||||>Fixed quality thing, added @functools.wraps (really shouldn't wrap without that as it causes confusing function metadata), added a docstring to the `keras_serializable` decorator, and changed to use a name specific to this library rather than the general name `config`, to be clearer where `transformers` is used along with other things in a Keras model. @thomwolf OK with all that?<|||||>Ok this is good to me. Thanks a lot for the awesome work on this. Merging @LysandreJik @julien-c <|||||>This landed in 2.6.0 but is missing in the release notes there https://github.com/huggingface/transformers/releases/tag/v2.6.0<|||||>Indeed, that is my fault, I'm sorry for missing it. I've added it to the v2.7.0 release notes of this morning: https://github.com/huggingface/transformers/releases/tag/v2.7.0 Thanks again for your contribution @gthb !
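For readers who just want the gist of the approach: the decorator essentially wraps each main layer's `__init__` so it also accepts the plain dict that Keras hands back at load time, and adds a `get_config` that stores the transformers config under a dedicated key. A heavily simplified, hypothetical sketch of that pattern (not the library's actual implementation):

```python
import functools
from transformers import PretrainedConfig

def keras_serializable_sketch(cls):
    original_init = cls.__init__

    @functools.wraps(original_init)
    def wrapped_init(self, config, *args, **kwargs):
        # Keras passes a plain dict back when deserializing, so rebuild the config object.
        if isinstance(config, dict):
            config = PretrainedConfig.from_dict(config)
        self._transformers_config = config
        original_init(self, config, *args, **kwargs)

    def get_config(self):
        base = super(cls, self).get_config()
        base["config"] = self._transformers_config.to_dict()
        return base

    cls.__init__ = wrapped_init
    cls.get_config = get_config
    return cls

# usage (hypothetical): decorate a main layer class, e.g.
# @keras_serializable_sketch
# class TFBertMainLayer(tf.keras.layers.Layer): ...
```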
transformers
3,102
closed
bert-base-arabic model card
Please add a model card file for the newly added asafaya/bert-base-arabic model
03-03-2020 13:31:39
03-03-2020 13:31:39
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3102?src=pr&el=h1) Report > Merging [#3102](https://codecov.io/gh/huggingface/transformers/pull/3102?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eec5ec807135ae61fa2266f3c7ad947cc207abda?src=pr&el=desc) will **decrease** coverage by `<.01%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3102/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3102?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3102 +/- ## ========================================== - Coverage 77.59% 77.59% -0.01% ========================================== Files 98 98 Lines 16250 16250 ========================================== - Hits 12610 12609 -1 - Misses 3640 3641 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3102?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3102/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.86% <0%> (-0.16%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3102?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3102?src=pr&el=footer). Last update [eec5ec8...2538e35](https://codecov.io/gh/huggingface/transformers/pull/3102?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks for sharing this is awesome!
transformers
3,101
closed
Keras layers should override get_config to be JSON-serializable
# 🚀 Feature request Support JSON serialization of Keras layers by overriding `get_config`, so that they can be sent to Tensorboard to display a conceptual graph of the model. ## Motivation ### 1. Without this, can't write model graph to Tensorboard From https://github.com/tensorflow/tensorflow/blob/d1786ea19eb41922c0d433d71ca13b123b69b4be/tensorflow/python/ops/summary_ops_v2.py#L1004-L1009 > Writing the Keras model configuration allows the TensorBoard graph plugin to render a conceptual graph, as opposed to graph of ops. In case the model fails to serialze as JSON, it ignores and returns False. ### 2. Without this, can't save model with Keras `model.save` The base class `get_config` method actually refuses to run if the subclass initializer has positional arguments; from `tensorflow/python/keras/engine/base_layer.py`: ```python @base_layer_utils.default def get_config(self): [...] if len(extra_args) > 1 and hasattr(self.get_config, '_is_default'): raise NotImplementedError('Layer %s has arguments in `__init__` and ' 'therefore must override `get_config`.' % self.__class__.__name__) ``` and all the `TF*MainLayer` classes have a `config` positional argument, so this says they “must” all override `get_config`. And sure enough, if I make a simple Keras model using a TFBertMainLayer inside: ```python import tensorflow as tf from transformers import TFBertMainLayer, BertConfig def create_model(max_sequence_len: int) -> tf.keras.Model: cfg = BertConfig.from_pretrained('bert-base-cased') bert = TFBertMainLayer(cfg) input_ids = tf.keras.Input(shape=(max_sequence_len,), dtype=tf.int32, name='wp_input_token_ids') input_mask = tf.keras.Input(shape=(max_sequence_len,), dtype=tf.bool, name='wp_input_mask') pooled = bert(input_ids, input_mask)[1] out = tf.keras.layers.Dense(units=3, activation='softmax', kernel_initializer=tf.keras.initializers.glorot_uniform(), use_bias=False, name='classification' )(pooled) return tf.keras.Model(inputs=[input_ids, input_mask], outputs=[out]) model = create_model(40) model.save(filepath="tf_model.h5") ``` ... then `model.save` fails: ``` Traceback (most recent call last): File "trysave.py", line 32, in <module> model.save(filepath="tf_model.h5") File ".../tensorflow_core/python/keras/engine/network.py", line 1008, in save signatures, options) File ".../tensorflow_core/python/keras/saving/save.py", line 112, in save_model model, filepath, overwrite, include_optimizer) File ".../tensorflow_core/python/keras/saving/hdf5_format.py", line 99, in save_model_to_hdf5 model_metadata = saving_utils.model_metadata(model, include_optimizer) File ".../tensorflow_core/python/keras/saving/saving_utils.py", line 172, in model_metadata raise e File ".../tensorflow_core/python/keras/saving/saving_utils.py", line 169, in model_metadata model_config['config'] = model.get_config() File ".../tensorflow_core/python/keras/engine/network.py", line 918, in get_config return copy.deepcopy(get_network_config(self)) File ".../tensorflow_core/python/keras/engine/network.py", line 1993, in get_network_config layer_config = serialize_layer_fn(layer) File ".../tensorflow_core/python/keras/utils/generic_utils.py", line 198, in serialize_keras_object config = instance.get_config() File ".../tensorflow_core/python/keras/engine/base_layer.py", line 499, in get_config raise NotImplementedError('Layers with arguments in `__init__` must ' NotImplementedError: Layers with arguments in `__init__` must override `get_config`. 
``` ## Your contribution I got this working for the one layer I was experimenting with, like this: ```patch diff --git a/src/transformers/modeling_tf_bert.py b/src/transformers/modeling_tf_bert.py index 19046235..74ad621c 100644 --- a/src/transformers/modeling_tf_bert.py +++ b/src/transformers/modeling_tf_bert.py @@ -21,6 +21,7 @@ import logging import numpy as np import tensorflow as tf +from . import PretrainedConfig from .configuration_bert import BertConfig from .file_utils import MULTIPLE_CHOICE_DUMMY_INPUTS, add_start_docstrings, add_start_docstrings_to_callable from .modeling_tf_utils import TFPreTrainedModel, get_initializer, shape_list @@ -474,12 +475,20 @@ class TFBertNSPHead(tf.keras.layers.Layer): class TFBertMainLayer(tf.keras.layers.Layer): def __init__(self, config, **kwargs): super().__init__(**kwargs) + if isinstance(config, dict): + config = PretrainedConfig.from_dict(config) + self.config = config self.num_hidden_layers = config.num_hidden_layers self.embeddings = TFBertEmbeddings(config, name="embeddings") self.encoder = TFBertEncoder(config, name="encoder") self.pooler = TFBertPooler(config, name="pooler") + def get_config(self): + cfg = super().get_config() + cfg['config'] = self.config.to_dict() + return cfg + def get_input_embeddings(self): return self.embeddings ``` and I didn't need to modify any other layer classes, just the main layer. So maybe it's enough to do this for all the `MainLayer` classes: ``` $ rg 'class .*MainLayer\(tf.keras.layers.Layer\)' src | cat src/transformers/modeling_tf_openai.py:class TFOpenAIGPTMainLayer(tf.keras.layers.Layer): src/transformers/modeling_tf_transfo_xl.py:class TFTransfoXLMainLayer(tf.keras.layers.Layer): src/transformers/modeling_tf_xlm.py:class TFXLMMainLayer(tf.keras.layers.Layer): src/transformers/modeling_tf_xlnet.py:class TFXLNetMainLayer(tf.keras.layers.Layer): src/transformers/modeling_tf_distilbert.py:class TFDistilBertMainLayer(tf.keras.layers.Layer): src/transformers/modeling_tf_bert.py:class TFBertMainLayer(tf.keras.layers.Layer): src/transformers/modeling_tf_albert.py:class TFAlbertMainLayer(tf.keras.layers.Layer): src/transformers/modeling_tf_ctrl.py:class TFCTRLMainLayer(tf.keras.layers.Layer): src/transformers/modeling_tf_t5.py:class TFT5MainLayer(tf.keras.layers.Layer): src/transformers/modeling_tf_gpt2.py:class TFGPT2MainLayer(tf.keras.layers.Layer): ``` ... or, neater, to extract a single `TFMainLayer(tf.keras.layers.Layer)` superclass for all of them, to do this in one place.
03-03-2020 13:30:02
03-03-2020 13:30:02
Thanks a lot for investigating this and submitting a fix, it's awesome. I responded in the PR itself
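With a `get_config` override like the patch above in place, the Keras round-trip should work end to end; the remaining detail is registering the custom layer class when loading. A sketch of the intended usage (assuming the patched layer and the `create_model` helper defined in the issue body above):

```python
import tensorflow as tf
from transformers import TFBertMainLayer

model = create_model(40)          # the builder from the reproduction snippet above
model.save("tf_model.h5")

# Custom layers have to be registered when loading back.
restored = tf.keras.models.load_model(
    "tf_model.h5", custom_objects={"TFBertMainLayer": TFBertMainLayer}
)
restored.summary()
```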
transformers
3,100
closed
Fix QA models binding for Flaubert, XLNet and XLM.
Signed-off-by: Morgan Funtowicz <[email protected]> Fix #2893
03-03-2020 13:05:20
03-03-2020 13:05:20
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3100?src=pr&el=h1) Report > Merging [#3100](https://codecov.io/gh/huggingface/transformers/pull/3100?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eec5ec807135ae61fa2266f3c7ad947cc207abda?src=pr&el=desc) will **decrease** coverage by `<.01%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3100/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3100?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3100 +/- ## ========================================== - Coverage 77.59% 77.59% -0.01% ========================================== Files 98 98 Lines 16250 16250 ========================================== - Hits 12610 12609 -1 - Misses 3640 3641 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3100?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3100/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `75.47% <ø> (ø)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3100/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.86% <0%> (-0.16%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3100?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3100?src=pr&el=footer). Last update [eec5ec8...44215a4](https://codecov.io/gh/huggingface/transformers/pull/3100?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,099
closed
Don't crash if fine-tuned model doesn't end with a number
That's the same fix as applied in https://github.com/huggingface/transformers/issues/2258, but for the GLUE example
03-03-2020 12:13:03
03-03-2020 12:13:03
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3099?src=pr&el=h1) Report > Merging [#3099](https://codecov.io/gh/huggingface/transformers/pull/3099?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eec5ec807135ae61fa2266f3c7ad947cc207abda?src=pr&el=desc) will **decrease** coverage by `<.01%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3099/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3099?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3099 +/- ## ========================================== - Coverage 77.59% 77.59% -0.01% ========================================== Files 98 98 Lines 16250 16250 ========================================== - Hits 12610 12609 -1 - Misses 3640 3641 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3099?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.86% <0%> (-0.16%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3099?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3099?src=pr&el=footer). Last update [eec5ec8...de4dc15](https://codecov.io/gh/huggingface/transformers/pull/3099?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,098
closed
Why 'BertModel' object has no attribute 'bias'
```py from torch.nn.functional import softmax from transformers import BertForNextSentencePrediction, BertTokenizer,BertConfig,BertModel seq_A = 'I like cookies !' seq_B = 'Do you like them ?' ''' model = BertForNextSentencePrediction.from_pretrained('bert-base-cased') ''' tokenizer = BertTokenizer.from_pretrained('bert-base-cased') config = BertConfig.from_json_file('E:\\work\\pycharm\\transformers-master\\tf_model\\bert_config.json') model = BertModel.from_pretrained('E:\\work\\pycharm\\transformers-master\\tf_model\\model.ckpt.index', from_tf=True, config=config) encoded = tokenizer.encode_plus(seq_A, text_pair=seq_B, return_tensors='pt') print(encoded) seq_relationship_logits = model(**encoded)[0] probs = softmax(seq_relationship_logits, dim=1) print(seq_relationship_logits) print(probs) the above demo can be used for next sentence prediction provided by BramVanroy,thank you again, Now I want to use my own bert pre-training model or google's model in this task ,so I find the examples in modeling_utils.py LINE 366 # Loading from a TF checkpoint file instead of a PyTorch model (slower) config = BertConfig.from_json_file('./tf_model/my_tf_model_config.json') model = BertModel.from_pretrained('./tf_model/my_tf_checkpoint.ckpt.index', from_tf=True, config=config) ``` bert_config.json : ``` { "attention_probs_dropout_prob": 0.1, "directionality": "bidi", "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "max_position_embeddings": 512, "num_attention_heads": 12, "num_hidden_layers": 12, "pooler_fc_size": 768, "pooler_num_attention_heads": 12, "pooler_num_fc_layers": 3, "pooler_size_per_head": 128, "pooler_type": "first_token_transform", "type_vocab_size": 2, "vocab_size": 21128 } ``` But when I run this demo,some errors happen, ``` AttributeError: 'BertModel' object has no attribute 'bias' Traceback (most recent call last): File "E:/work/pycharm/transformers-master/src/transformers/test.py", line 18, in <module> model = BertModel.from_pretrained('E:\\work\\pycharm\\transformers-master\\tf_model\\model.ckpt.index', from_tf=True, config=config) File "E:\work\pycharm\transformers-master\src\transformers\modeling_utils.py", line 485, in from_pretrained model = cls.load_tf_weights(model, config, resolved_archive_file[:-6]) # Remove the '.index' File "E:\work\pycharm\transformers-master\src\transformers\modeling_bert.py", line 106, in load_tf_weights_in_bert pointer = getattr(pointer, "bias") File "E:\Users\Administrator\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 585, in __getattr__ type(self).__name__, name)) AttributeError: 'BertModel' object has no attribute 'bias' ``` It took a long time to fix it, but failed ,can you help me to solve it.thanks a lot!!
03-03-2020 09:20:15
03-03-2020 09:20:15
Could you try with `BertForPreTraining` instead of `BertModel` ?<|||||>> Could you try with `BertForPreTraining` instead of `BertModel` ? @LysandreJik the error not occurs,thank you very much, but in the previous,when I use ```py model = BertForNextSentencePrediction.from_pretrained('bert-base-cased') ``` I can get seq_relationship_logits like ``` tensor([[ 4.6285, -4.9732]], grad_fn=<AddmmBackward>) ``` after softmax,I can get probs between seq_A and seq_B,whether seq_B is a continuation of seq_A like ``` tensor([[9.9993e-01, 6.7607e-05]], grad_fn=<SoftmaxBackward>) ``` but when I use BertForPreTraining , ```py model = BertForPreTraining.from_pretrained(.....) ``` I get seq_relationship_logits like : ``` tensor([[[ -7.3790, -7.2666, -7.4841, ..., -6.1682, -5.8256, -6.2910], [ -7.9165, -8.1490, -7.9572, ..., -6.5870, -6.3568, -6.8383], [-14.1834, -13.2084, -13.7673, ..., -9.0377, -9.7575, -9.4470], ..., [-12.9208, -12.8706, -13.0834, ..., -10.2187, -8.6429, -11.6360], [-13.2808, -13.3348, -13.2491, ..., -10.7655, -8.8089, -11.0420], [-15.5444, -15.2074, -15.8938, ..., -11.9712, -12.5488, -14.7295]]], grad_fn=<AddBackward0>) ``` how can I get probs between seq_A and seq_B ? thanks. <|||||>Hi, `BertForNextSentencePrediction` is a model that can only perform the NSP objective. `BertForPreTraining` is a model that can perform both NSP and traditional MLM. It, therefore, outputs tensors for both those tasks, the NSP result is the second value in the output tuple of `BertForPreTraining`: ```py from transformers import BertForNextSentencePrediction, BertForPreTraining, BertTokenizer nsp = BertForNextSentencePrediction.from_pretrained("bert-base-cased") bpt = BertForPreTraining.from_pretrained("bert-base-cased") tokenizer = BertTokenizer.from_pretrained("bert-base-cased") inputs = tokenizer.encode_plus("I like cats.", "I like dogs too.", return_tensors="pt") nsp_output = nsp(**inputs) bpt_outupt = bpt(**inputs)[1] print(nsp_output) print(bpt_outupt) ``` returns ``` (tensor([[ 5.1543, -5.8147]], grad_fn=<AddmmBackward>),) tensor([[ 5.1543, -5.8147]], grad_fn=<AddmmBackward>) ``` <|||||>> Hi, `BertForNextSentencePrediction` is a model that can only perform the NSP objective. `BertForPreTraining` is a model that can perform both NSP and traditional MLM. > > It, therefore, outputs tensors for both those tasks, the NSP result is the second value in the output tuple of `BertForPreTraining`: > > ```python > from transformers import BertForNextSentencePrediction, BertForPreTraining, BertTokenizer > > nsp = BertForNextSentencePrediction.from_pretrained("bert-base-cased") > bpt = BertForPreTraining.from_pretrained("bert-base-cased") > tokenizer = BertTokenizer.from_pretrained("bert-base-cased") > > inputs = tokenizer.encode_plus("I like cats.", "I like dogs too.", return_tensors="pt") > nsp_output = nsp(**inputs) > bpt_outupt = bpt(**inputs)[1] > > print(nsp_output) > print(bpt_outupt) > ``` > > returns > > ``` > (tensor([[ 5.1543, -5.8147]], grad_fn=<AddmmBackward>),) > tensor([[ 5.1543, -5.8147]], grad_fn=<AddmmBackward>) > ``` OK, I got it ,thanks a lot
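Putting the two answers together for the original TF-checkpoint use case: load the checkpoint with `BertForPreTraining`, take the second output (the NSP head), and apply softmax. A sketch; the `tf_model/...` paths are placeholders for your own checkpoint files, and the tokenizer should be built from the matching `vocab.txt`:

```python
from torch.nn.functional import softmax
from transformers import BertConfig, BertForPreTraining, BertTokenizer

config = BertConfig.from_json_file("tf_model/bert_config.json")
model = BertForPreTraining.from_pretrained("tf_model/model.ckpt.index", from_tf=True, config=config)
tokenizer = BertTokenizer("tf_model/vocab.txt")

encoded = tokenizer.encode_plus("I like cookies !", text_pair="Do you like them ?", return_tensors="pt")
seq_relationship_logits = model(**encoded)[1]  # output [0] is the MLM head, [1] is the NSP head
probs = softmax(seq_relationship_logits, dim=1)
print(probs)  # first column ~ probability that sentence B follows sentence A
```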
transformers
3,097
closed
Some questions about training BERT after changing the vocab.txt size
The original bert-base-chinese-vocab.txt has a vocabulary size of 21128, while my own vocab.txt has 44900 entries. When I try to fine-tune a BERT model using BertForMaskedLM, I get a size-mismatch problem. I have tried changing `self.word_embeddings` in BertEmbeddings to use a vocab size of 44900, like this: `self.word_embeddings = nn.Embedding(44900, config.hidden_size, padding_idx=0)`, but it still fails like this: ``` RuntimeError: Error(s) in loading state_dict for BertForMaskedLM: size mismatch for bert.embeddings.word_embeddings.weight: copying a param with shape torch.Size([21128, 768]) from checkpoint, the shape in current model is torch.Size([44900, 768]). size mismatch for cls.predictions.bias: copying a param with shape torch.Size([21128]) from checkpoint, the shape in current model is torch.Size([44900]). size mismatch for cls.predictions.decoder.weight: copying a param with shape torch.Size([21128, 768]) from checkpoint, the shape in current model is torch.Size([44900, 768]). ``` I am not sure how to fix it. Do I also need to change the pretrained BERT config.json, i.e. its vocab_size, from 21128 to 44900?
03-03-2020 07:16:38
03-03-2020 07:16:38
You can't really change the vocabulary without re-training the whole model. Is there some overlap between the BERT vocabulary and your custom vocabulary? If so, you can add the 20k+ tokens using `add_tokens` (which will probably slow down things, as that's a lot of added tokens).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>you can now do this. Keras (tf-nightly version) has added a new util `keras.utils.warmstart_embedding_matrix`. Using this you can continuously train your model with changing vocabulary. https://www.tensorflow.org/api_docs/python/tf/keras/utils/warmstart_embedding_matrix
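If there is substantial overlap between the two vocabularies, the `add_tokens` route mentioned above looks roughly like this; the key step is resizing the embedding matrix afterwards (a sketch, where `new_tokens` is a placeholder for whichever of your 44900 entries are missing from the original 21128):

```python
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForMaskedLM.from_pretrained("bert-base-chinese")

new_tokens = ["token_a", "token_b"]  # placeholder: your additional vocabulary entries
num_added = tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))

# newly added embedding rows are randomly initialized and still need training
print(num_added, model.config.vocab_size)
```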
transformers
3,096
closed
BART BartForSequenceClassification example
# 🐛 Bug I'm trying the run the code on the documentation. https://huggingface.co/transformers/model_doc/bart.html#bartforsequenceclassification ``` from transformers import BartTokenizer, BartForSequenceClassification import torch tokenizer = BartTokenizer.from_pretrained('bart-large') model = BartForSequenceClassification.from_pretrained('bart-large') input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1 labels = torch.tensor([1]).unsqueeze(0) # Batch size 1 outputs = model(input_ids, labels=labels) loss, logits = outputs[:2] ``` output: ``` Traceback (most recent call last): File "/ssd-playpen/yixin1/projects/cleartheuncertainty/utest/utest_transformer/utest_bart.py", line 11, in <module> outputs = model(input_ids, labels=labels) File "/ssd-playpen/yixin1/projects/cleartheuncertainty/ENV/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/ssd-playpen/yixin1/projects/cleartheuncertainty/ENV/lib/python3.7/site-packages/transformers/modeling_bart.py", line 1327, in forward loss = F.cross_entropy(logits.view(-1, self.num_labels), labels.view(-1)) File "/ssd-playpen/yixin1/projects/cleartheuncertainty/ENV/lib/python3.7/site-packages/torch/nn/modules/module.py", line 576, in __getattr__ type(self).__name__, name)) AttributeError: 'BartForSequenceClassification' object has no attribute 'num_labels' ``` I guess this should be a quick fix or so.
03-03-2020 05:01:57
03-03-2020 05:01:57
Thanks for reporting this!
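Until the fix is in a release, one workaround is to keep `labels` out of the forward call and compute the loss outside the model, which never touches the missing `num_labels` attribute. A sketch mirroring the documentation example:

```python
import torch
import torch.nn.functional as F
from transformers import BartTokenizer, BartForSequenceClassification

tokenizer = BartTokenizer.from_pretrained("bart-large")
model = BartForSequenceClassification.from_pretrained("bart-large")

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)
labels = torch.tensor([1])

logits = model(input_ids)[0]           # no labels passed, so the failing branch is never reached
loss = F.cross_entropy(logits, labels)
print(loss, logits)
```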
transformers
3,095
closed
Getting different topk results when using past + attention mask for more than 1 sentence
# ❓ Questions & Help Hi. I'm having some issues when using the past + attention mask functionality. Results are not the ones I was expecting to get... I'm using latest master as the latest release 2.5.1 is failing (`RuntimeError: The size of tensor a (6) must match the size of tensor b (3) at non-singleton dimension 3` at modeling_gpt2.py:150). With master there's no error event being thrown. My code works with a single sentence, but it doesn't work with more sentences because I get different predictions as I'm expecting to get the same topk results by using past vs no using past at all. Code snippet below: ``` from transformers.tokenization_gpt2 import GPT2Tokenizer from transformers.modeling_gpt2 import GPT2LMHeadModel import torch tokenizer = GPT2Tokenizer.from_pretrained('gpt2', pad_token='<|endoftext|>') model = GPT2LMHeadModel.from_pretrained('gpt2') # Complete phrases are: "I like to drink soda and" and "Please help me with this" docs = ["I like to", "Please help me"] # note: comment the above line and uncomment the following line to make it work with 1 document #docs = ["I like to"] docs_tensors = tokenizer.batch_encode_plus( [d for d in docs], pad_to_max_length=True, return_tensors='pt') docs_next = ["soda and ", "with this"] # note: comment the above line and uncomment the following line to make it work with 1 document #docs_next = ["soda and "] docs_next_tensors = tokenizer.batch_encode_plus( [d for d in docs_next], pad_to_max_length=True, return_tensors='pt') # predicting the first part of each phrase _, past = model(docs_tensors['input_ids'], attention_mask=docs_tensors['attention_mask']) # predicting the rest of the phrase attn_mask = torch.cat([docs_tensors['attention_mask'], docs_next_tensors['attention_mask']], dim=-1) logits, _ = model(docs_next_tensors['input_ids'], attention_mask=attn_mask, past=past) logits = logits[:, -1] _, top_indices_results = logits.topk(50) words = [tokenizer.decode([idx.item()]) for tir in top_indices_results for idx in tir] print("Results with past:", words) ##################### docs_full_tensors = tokenizer.batch_encode_plus( [d + n for d, n in zip(docs, docs_next)], pad_to_max_length=True, return_tensors='pt') logits, _ = model(docs_full_tensors['input_ids'], attention_mask=docs_full_tensors['attention_mask']) logits = logits[:, -1] _, top_indices_results = logits.topk(50) words = [tokenizer.decode([idx.item()]) for tir in top_indices_results for idx in tir] print("Results without past:", words) ``` Output results (please note the inconsistence between both results - I'm expecting them to match like the next test): ``` Results with past: [' I', ' the', ' a', ' t', ' s', ' k', ' it', ' c', ' my', ' other', ' to', ' b', ' d', ' we', ' make', ' then', ' o', ' m', ' have', ' you', ' do', ' all', ' l', ' some', ' so', ' can', ' i', ' j', ' p', ' that', ' be', ' get', ' he', ' take', ' st', ' this', ' also', ' n', ' ch', ' is', ' use', ' h', ' they', ' f', ' put', ' go', ' g', ' w', ' not', ' just', 'The', 'A', 'I', '"', 'This', 'In', 'It', 'B', 'As', 'S', 'We', 'M', 'P', 'C', 'There', 'If', 'T', '1', 'By', 'F', 'You', 'D', 'Image', 'An', 'When', '(', 'On', 'What', 'For', 'L', 'H', 'R', 'About', '[', 'From', 'G', 'After', 'E', 'One', 'K', 'With', 'Still', 'So', 'W', 'by', 'N', 'My', 'Please', 'How', 'O'] Results without past: [' I', ' the', ' a', ' t', ' s', ' k', ' it', ' c', ' my', ' other', ' to', ' b', ' d', ' we', ' make', ' then', ' o', ' m', ' have', ' you', ' do', ' all', ' l', ' some', ' so', ' can', ' i', ' j', ' p', ' that', ' be', ' 
get', ' he', ' take', ' st', ' this', ' also', ' n', ' ch', ' is', ' use', ' h', ' they', ' f', ' put', ' go', ' g', ' w', ' not', ' just', '.', ' site', ' page', '!', ' project', ',', ' website', ' article', ' thread', ':', ' story', ' one', ' and', ' message', ' post', ' issue', ' great', ' blog', ' amazing', ' thing', ' little', ' problem', '\n', '?', ' book', ' game', ' by', ' to', ' wonderful', ' in', ' awesome', ' guy', ' community', ' new', ' mod', ' information', ' web', ' beautiful', '...', ' man', ' stuff', ' work', ' place', ' video', '."', ' app', ' kind', ' piece', '!"', ' world'] ``` If I were uncommenting the lines stated in the code I would get same results (...but for only 1 sentence): ``` Results with past: [' I', ' the', ' a', ' t', ' s', ' k', ' it', ' c', ' my', ' other', ' to', ' b', ' d', ' we', ' make', ' then', ' o', ' m', ' have', ' you', ' do', ' all', ' l', ' some', ' so', ' can', ' i', ' j', ' p', ' that', ' be', ' get', ' he', ' take', ' st', ' this', ' also', ' n', ' ch', ' is', ' use', ' h', ' they', ' f', ' put', ' go', ' g', ' w', ' not', ' just'] Results without past: [' I', ' the', ' a', ' t', ' s', ' k', ' it', ' c', ' my', ' other', ' to', ' b', ' d', ' we', ' make', ' then', ' o', ' m', ' have', ' you', ' do', ' all', ' l', ' some', ' so', ' can', ' i', ' j', ' p', ' that', ' be', ' get', ' he', ' take', ' st', ' this', ' also', ' n', ' ch', ' is', ' use', ' h', ' they', ' f', ' put', ' go', ' g', ' w', ' not', ' just'] ``` ## Details <!-- Description of your issue --> Original stackoverflow question that @patrickvonplaten had answered initially: https://stackoverflow.com/questions/60459292/using-past-and-attention-mask-at-the-same-time-for-gpt2/
03-03-2020 04:09:59
03-03-2020 04:09:59
Hi @Damiox, First, you have to be careful if you do LM inference with batches of different lengths. Besides using an attention mask, you also have to change the position_ids when using GPT2 (see #3021 ). Considering batch language generation with GPT2 I highly recommend making sure that the input_ids batch is does not need padding (which is actually the case in your example above). If you do have to pad certain batch_idxs then make sure you change the input_position_ids as well and use an attention_mask, but even then it's not promised that GPT2 will generate good results. Second, if your prefix input_ids are padded, let's say: `[ [ I like cats ], [ I like ] ] -> encode_plus -> [ [0, 1, 2], [0, 1, <PAD> ] ]` and you then append something `-> [ [ that are ], [ dogs] ]`, you essentially put in `[ [0, 1, 2, 4, 5], [0, 1, <PAD>, 3, <PAD>] ]` into the model, which is different from what would happen if you `encode_plus` everything directly (`[ [ 0, 1, 2, 4, 5], [0, 1, 3, <PAD>, <PAD> ] ]`). That's why you a) have to make sure `input_postion_ids` are correct and also b) that you sample from the last not padded token for the next word, which is not that simple (again see #3021 ). Again, I recommend to only work with `input_ids` that are not padded or just do `batch_size=1` LM inference. Three, when appending words of lists make sure that you have a space `" "` between them. In your example above printing the list `[d + n for d, n in zip(docs, docs_next)]` actually gives: `['I like tosoda and ', 'Please help mewith this']`, which is probably not what you want. If you would change `docs_next = ["soda and ", "with this"]` to `docs_next = [" soda and ", " with this"] `both outputs actually produce the same results, but this is only because `docs_tensors['input_ids']` is not padded (same lengths for all batch_idxs) <|||||>Hey @patrickvonplaten thanks for all your help. Indeed, there was an issue in the code snippet I shared. Thanks! I see now that pads add complexity because it needs the position_ids to be accommodated, but if it's correctly implemented from my side... shouldn't it just work? I'm confused about your phrase in `it's not promised that GPT2 will generate good results`. <|||||>GPT2 was never trained on padding tokens so I'm not 100% sure whether you will get good results. But it's for sure worth trying out, if you absolutely have to use batch_size > 1<|||||>Alright. I think it should be fine by manipulating the tensors in the "past" as long as I keep consistency across the 24 items in the "past". Right? For instance... I see that if I generate a past for 1 input with 256 tokens, then I'm getting back a "past" which is a list of 24 tensors of 5 dimensions (e.g. (2, 1, 16, 256, 64)). On the other hand, if I generate a past for 2 inputs with 540 tokens, I'm getting back a "past" which is a list of 24 tensors of 5 dimensions too (e.g. (2, 2, 16, 540, 64)). So I think that if I wanted to exclude the last sentence from the last "past" I can simply manipulate the 2nd dimension in all the 24 items and drop the corresponding value from it. I guess... ? <|||||>> So I think that if I wanted to exclude the last sentence from the last "past" I can simply manipulate the 2nd dimension in all the 24 items and drop the corresponding value from it. I guess... ? I don't really understand this sentence. If you are concerned about sampling from a padded past, you don't have to manipulate the past variable I think. 
Let's say we forwarded:` ['I', 'live', 'in', <PAD>, <PAD>]` then the past variable will be of dimension `(2, 5, 16, num_tokens, 64)`. If you now want to sample from `past = key, values of ['I', 'live', 'in', <PAD>, <PAD>]` and `input_id = ['London'] ` then you just have to sample from the outputted token because the operation is the same as sampling from the last token of `['I', 'live', 'in', <PAD>, <PAD>, 'London']` and the last token is not a `<PAD>` token from which output logits it can be sampled from. <|||||>@patrickvonplaten I think from your above example, the past variable will be `(2, 1, 16, num_tokens, 64)` instead of `(2, 5, 16, num_tokens, 64`. Right? > So I think that if I wanted to exclude the last sentence from the last "past" I can simply manipulate the 2nd dimension in all the 24 items and drop the corresponding value from it. I guess... ? What I meant here is to manipulate the past tensors in case the tensor dimension I want to predict for the subsequent batches should exclude some sentence. I found removing information from the past tensors may be complicated. So please see my example above with another approach to use "past" I am experimenting now. For now to simplify this a bit, I'm not focusing on padding the past. Thus, it'll be generated from a batch of N docs. For instance below N=3: ``` ['I', 'live', 'in', 'NYC'] ['I', 'live', 'in', 'Barcelona'] ['I', 'live', 'in', 'Moscow'] ``` Note: I'm going to make sure to group docs by tokens length. I'm going to get a `past` from that. Past here will be a list of 24 elements with dimensions `(2, 3, 16, 4, 64)` Then I'm planning to use that past variable along with M suffix phrases. Those suffix phrases may have different lengths and below to different sentences that were calculated above, so I'm planning to add padding here first. Also another characteristic is that M will be always equal to or greater than N. For example, M=6 (it's a coincide that num_tokens is also 6 here): ``` ['and', 'I', 'speak', 'english', 'fluently', '<PAD>'] ['and', 'I', 'live', 'in', 'North', 'Manhattan'] ['and', 'I', 'like', 'football', '<PAD>', '<PAD>'] ['and', 'I', 'don\'t', 'speak', 'catalan', '<PAD>'] ['and', 'I', 'take', 'the', 'bus', '<PAD>'] ['and', 'I', 'drink', 'vodka', 'sometimes', 'you?'] ``` Note: I'm also building an attention mask properly as you indicated by concatenating a tensor full with 1s with length = 4 to make this work. For example: ``` ['1', '1', '1', '1', '1', '1', '1', '1', '1', '0'] ['1', '1', '1', '1', '1', '1', '1', '1', '1', '1'] ['1', '1', '1', '1', '1', '1', '1', '1', '0', '0'] ['1', '1', '1', '1', '1', '1', '1', '1', '1', '0'] ['1', '1', '1', '1', '1', '1', '1', '1', '1', '0'] ['1', '1', '1', '1', '1', '1', '1', '1', '1', '1'] ``` To make "past" fit here, I'm planning to expand the past variable. So for each tensor in the past array, expand the tensor results as if I had sent to gpt2 initially the following first batch to get the "past" for: ``` ['I', 'live', 'in', 'NYC'] ['I', 'live', 'in', 'NYC'] ['I', 'live', 'in', 'NYC'] ['I', 'live', 'in', 'Barcelona'] ['I', 'live', 'in', 'Barcelona'] ['I', 'live', 'in', 'Moscow'] ``` In this case, I'll build a past with dimensions: `(2, 6, 16, 4, 64)` instead of the original dimensions: `(2, 3, 16, 4, 64)`. I found https://pytorch.org/docs/stable/torch.html#torch.repeat_interleave very useful for this... Do you think this make sense? Any warning about this? Thanks<|||||>@patrickvonplaten can I not manipulate the `past` variable as explained above? 
Do the other dimensions contain some kind of aggregated data that makes the past immutable?<|||||>@patrickvonplaten just to confirm I'm on the right track: can I manipulate the dimensions of the layer tensors in the past array? It's working for me with the latest release (2.6.0), but I just wanted to make sure before I go ahead with this. Basically, I want to confirm that I can expand dimension 1 of every tensor in the past array from N to M, so that I can reuse that past for different predictions. I hope my question makes sense.
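For the record, the `repeat_interleave` expansion described above can be done per layer along dim 1 only, leaving every other dimension of each `past` tensor untouched. A small sketch with made-up shapes matching the discussion (N=3 prefixes expanded to M=6 continuations):

```python
import torch

# hypothetical past from a batch of N=3 prefixes: 24 layers of shape (2, 3, 16, 4, 64)
past = [torch.randn(2, 3, 16, 4, 64) for _ in range(24)]

# sentence 0 gets 3 continuations, sentence 1 gets 2, sentence 2 gets 1  ->  M = 6
repeats = torch.tensor([3, 2, 1])
expanded_past = [layer.repeat_interleave(repeats, dim=1) for layer in past]

print(expanded_past[0].shape)  # torch.Size([2, 6, 16, 4, 64])
```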
transformers
3,094
closed
[Bart] don't call .forward
03-03-2020 03:42:35
03-03-2020 03:42:35
@julien-c any idea why this would cause ``` FAILED tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_feature_extraction ```<|||||>I think this one might sometimes fail randomly
transformers
3,093
closed
wrong 'label2id' and 'id2label' in config when loading from pretrained
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Bert Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. ```python from transformers import BertConfig config = BertConfig.from_pretrained('bert-base-cased', num_labels=3) print(config.id2label) ``` 2. Prints: {0: 'LABEL_0', 1: 'LABEL_1'} <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> prints {0: 'LABEL_0', 1: 'LABEL_1', 2: 'LABEL_2'} ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.5.1 - Platform: Ubuntu 16.04 - Python version: 3.7.6 - PyTorch version (GPU?): 1.4.0 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no
03-03-2020 01:11:21
03-03-2020 01:11:21