| column | type | range |
|---|---|---|
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
4,092
closed
Fine-tuning T5 model
Hi, I want to fine-tune T5 for a seq2seq task and I'm using the T5ForConditionalGeneration as it seems to have an LM decoder on top. As there's no code example for this, I have lots of questions: 1. Am I doing the right thing? 2. I'm using the Adam optimizer. Is it ok? 3. I'm a bit confused about the `forward` inputs in the training phase. I read [this](https://huggingface.co/transformers/model_doc/t5.html#training) explanation over and over again and I don't understand whether I should just use `input_ids` and `lm_labels` for the training or not. Also somewhere in [this issue ](https://github.com/huggingface/transformers/issues/2213#issuecomment-567090553) someone's mentioned that: > T5 input sequence should be formatted with [CLS] and [SEP] tokens So which one is right? I'm super confused.
04-30-2020 22:55:28
04-30-2020 22:55:28
+1. I'm also confused on how to structure the lm_labels and the decoder_input_ids.<|||||>Given T5's universal text-to-text objective, I'm under the impression that the [T5 summarization](https://github.com/huggingface/transformers/tree/master/examples/summarization/t5) example should be applicable for all T5 tasks, as long as the input and target sequences are correctly structured for the specified task. Hope this can be confirmed! Sample input and target structures for specific tasks can be found at Appendix D in the T5 [paper](https://arxiv.org/pdf/1910.10683.pdf).<|||||>To correctly train T5 one should follow the instructions at https://huggingface.co/transformers/model_doc/t5.html#training . For training, there is no need to provide the `decoder_input_ids` - they are created automatically. One only has to provide the `lm_labels`. As @enzoampil, Appendix D of the paper gives good input/output examples. <|||||>@patrickvonplaten What exactly would be the `lm_labels` for something like summarization? **Example Usecase** Text: "ABC" with maximum length 500 Summary: "XYZ" with maximum length 50 I understand that we can prepare `input_ids` and `attention_mask` like this for the document. ```python x = tokenizer.encode_plus(sentence, max_length=500, pad_to_max_length=True, return_tensors='pt') ``` Now for the lm_labels i.e. summary, **is simply doing this enough**? ```python lm_labels = tokenizer.encode(summary, return_tensors='pt', max_length=50, pad_to_max_length=True) ``` And the model as ```python model = T5ForConditionalGeneration.from_pretrained('t5-small') model(input_ids=..., lm_labels=lm_labels, attention_mask=...) ``` In your examples folder for summarization, I've seen some preprocessing like this for lm_labels. I didn't understand why this is being done. ```python y_ids = y[:, :-1].contiguous() lm_labels = y[:, 1:].clone() lm_labels[y[:, 1:] == tokenizer.pad_token_id] = -100 ```<|||||>Hi @amitness, For T5 summarization you will have to append the prefix "summarize: " to every input data. But you are more or less right. All you have to do is: 1. Prepare input data ```python x = tokenizer.encode_plus("summarize: " + sentence, max_length=500, pad_to_max_length=True, return_tensors='pt') ``` 2. Prepare labels ```python lm_labels = tokenizer.encode_plus(summary, return_tensors='pt', max_length=50, pad_to_max_length=True) ``` 3. For tokens that are padded (which is only relevant if you train with batch_size > 1) you need to make sure that no loss is calculated on those tokens, so ```python lm_labels[lm_labels == tokenizer.pad_token_id] = -100 ``` There is no need to shift the tokens as you show at the end of your comment because T5 does that automatically - see https://github.com/huggingface/transformers/blob/6af3306a1da0322f58861b1fbb62ce5223d97b8a/src/transformers/modeling_t5.py#L1063. This is also explained in https://huggingface.co/transformers/model_doc/t5.html#training .<|||||>Thanks for this clarification @patrickvonplaten ! Finally got it to work from my side 😄 Gotcha for me was that the `decoder_input_ids` at inference should be prepended by the padding token as stated in the [docs](https://huggingface.co/transformers/model_doc/t5.html#t5forconditionalgeneration) for `T5ForConditionalGeneration`.<|||||>@enzoampil Can you give an example code of what you meant by prepending padding token at inference time?<|||||>@patrickvonplaten Thank you. Besides the inbuilt prefix like summarize:, translate: etc, can I train with my own prefix? 
Let's say there is a prefix called "simplify:" and I have pairs of datasets. Is adding the prefix and preparing data in the format you mentioned above enough?<|||||>@amitness E.g. in your summarization case, it would look something like: ``` from transformers import T5Tokenizer, T5Model tokenizer = T5Tokenizer.from_pretrained('t5-small') model = T5Model.from_pretrained('t5-small') input_ids = tokenizer.encode("summarize: Hello, my dog is cute", return_tensors="pt") decoder_input_ids = tokenizer.encode("<pad>", return_tensors="pt") outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids) outputs[0] ``` Do note that `T5ForConditionalGeneration` already prepends the padding by default. Above is only necessary if you're doing a forward pass straight from `T5Model`. Regarding your question about making your own prefix, yes, you should be able to train on your own prefix. This is the whole point of T5's text-to-text approach. You should be able to specify any problem through this kind of approach (e.g. Appendix D in the T5 paper).<|||||>@enzoampil Makes sense. Thank you so much.<|||||>> @patrickvonplaten Thank you. > > Besides the inbuilt prefix like summarize:, translate: etc, can I train with my own prefix? Let's say there is a prefix called "simplify:" and I have pairs of datasets. Is adding the prefix and preparing data in the format you mentioned above enough? Sure, you can train with your own prefix.<|||||>> Thanks for this clarification @patrickvonplaten ! Finally got it to work from my side > > Gotcha for me was that the `decoder_input_ids` at inference should be prepended by the padding token as stated in the [docs](https://huggingface.co/transformers/model_doc/t5.html#t5forconditionalgeneration) for `T5ForConditionalGeneration`. Yeah that's actually a bit hidden in the code. So to clarify: During training, there is no need to prepend the padding token since this is done automatically in T5 when `lm_labels` is provided. During evaluation, one has to prepend the PAD token as you stated in your example. After training, the mode can be used with the `generate()` method (which actually powers the `summarization`, `translation` and `text-generation` pipeline). In the `generate()` method, the padding token is automatically prepended.<|||||>@patrickvonplaten One thing I've noticed is the discrepancy between huggingface's and the original google-research tokenization. In the official colab by the paper authors, they seem to add `</s>` when tokenizing to the end of each text. But, when we use tokenizers from hugging face, it is not added. Not sure if it is a problem or not. Here is an excerpt from their official [colab](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/master/notebooks/t5-trivia.ipynb#scrollTo=I64TqHGxbOJ2) ```python 'inputs_plaintext': b'trivia question: what is the population of fayetteville north carolina?', 'inputs': array([22377, 822, 10, 125, 19, 8, 2074, 13, 3, 89, 9, 63, 1954, 1420, 3457, 443, 12057, 9, 58, 1]) ``` You can see 1 added at the end of the token_ids. But if we tokenize this same sentence with huggingface tokenizer, we don't get 1 at end. ```python tokenizer.encode('trivia question: what is the population of fayetteville north carolina?') # [22377, 822, 10, 125, 19, 8, 2074, 13, 3, 89, 9, 63, 1954, 1420, 3457, 443, 12057, 9, 58] ``` When I was prototyping with the models, I tried preparing data like this to solve it. This adds 1 to the end. Not sure if we need to do this or not. 
``` tokenizer.encode("summarize: Hello world</s>", return_tensors="pt") ``` <|||||>Yes you are right, you should add the `</s>` token to the end of a sentence. I think this is also shown in the docs: https://huggingface.co/transformers/model_doc/t5.html#training. <|||||>Thanks to @patrickvonplaten for all clarification and others for their further questions that led to more details on the subject. <|||||>Hello everyone, I am currently working on finetuning the TFT5ForConditionalGeneration model on a parallel dataset. Questions: 1. Can I call model.fit like this - `model.fit([x,y])` where x is input_ids and y is lm_labels? If not, how do I pass in lm_labels and train with the model. Thanks. <|||||>@patrickvonplaten <|||||>For the tensorflow version you have to input `input_ids`, `decoder_input_ids` and `lm_labels` yourself. The model should work fine with the `keras` framework!<|||||>I will soon add more documentation for T5 for tensorflow. It's true that there is not enough documentation for TF at the moment.<|||||>Okay, I would appreciate that. So, do I add the `input_ids`, `decoder_input_ids `and `lm_labels` as keywords when calling `model.fit`(which I doubt) or when do I do that? <|||||>I have not looked trained tensorflow using keras `model.fit` function yet. The forward pass in tensorflow's T5 implementation needs both `input_ids` and `decoder_input_ids` as you can see when going through this function: https://github.com/huggingface/transformers/blob/fd2174664c8879c747ada3e6e0a2486858808421/src/transformers/modeling_tf_t5.py#L980 So, depending on your code you will have to create `input_ids`, `decoder_input_ids` and `lm_labels` yourself. Feel free to share your code here if you have a working training pipeline for TFT5 :-) <|||||>Hi Patrick. Got it to work with Pytorch. However, I have a question: Is it possible to use a different vocab size with this pretrained model? I have a trained sentence piece model and it only works with this pretrained t5 when I use a beam size of 1. I have manually changed the vocab size by setting `model.config.vocab_size = tokenizer.vocab_size` . However, the beam size problem still persists and it returns a shape mismatch error. Please let me know if this is possible, thanks. <|||||>@patrickvonplaten <|||||>I think it will work in case the targets pieces from the new vocab is the same in the old one. Besides, what is the benefit from the pretrained T5 if the sentence piece targets changed ?!!<|||||>Created a little repo for NMT finetuning https://github.com/keleog/finetune_huggingace_t5<|||||>> I have not looked trained tensorflow using keras `model.fit` function yet. The forward pass in tensorflow's T5 implementation needs both `input_ids` and `decoder_input_ids` as you can see when going through this function: > https://github.com/huggingface/transformers/blob/fd2174664c8879c747ada3e6e0a2486858808421/src/transformers/modeling_tf_t5.py#L980 > > So, depending on your code you will have to create `input_ids`, `decoder_input_ids` and `lm_labels` yourself. Feel free to share your code here if you have a working training pipeline for TFT5 :-) Hi @patrickvonplaten, I was able to create a data source with the input data and labels as you described. Now I'm trying to use that data for keras fit with the loss function `tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)` The shape of the labels is (batch_size, seq_len), and I would expect that the model `TFT5ForConditionalGeneration` would return the logits of shape (batch_size, seq_len, vocab_size). 
However its call method returns this: ` return decoder_outputs + encoder_outputs ` so I get an error: `ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 27 array(s), for inputs ['output_1', 'output_2', 'output_3', 'output_4', 'output_5', 'output_6', 'output_7', 'output_8', 'output_9', 'output_10', 'output_11', 'output_12', 'output_13', 'output_14', 'output_15', 'output_16', 'output_17', 'output_18', 'output_19', 'output_20', 'output_21', 'output_22', 'output_23', 'output_24', 'output_25', 'output_26', 'output_27'] but instead got the following list of 1 arrays: [<tf.Tensor 'args_4:0' shape=(32, 128) dtype=int32>]...` I can think of two solutions, neither sounds good: 1. override `call` method in a subclass and return only the decoder outputs 2. use a custom loss function that extracts the decoder outputs from the model output What would you advice?<|||||>Hi @patrickvonplaten , I am working on one question answering task using TFT5. I have done a text encoding step. My raw input is question and target is the answer shown in the below image ![image](https://user-images.githubusercontent.com/26653468/90598217-43055b00-e210-11ea-9dd3-ac0c8548322b.png) How should I configure input so that I can pass it in model.fit() method like this way!! I am able to get input_id and input mask. ``` model = TFT5ForConditionalGeneration.from_pretrained("t5-small") optimizer = keras.optimizers.Adam(lr=5e-5) model.compile(optimizer=optimizer) model.fit( x_train, y_train, epochs=1, verbose=2, batch_size=2, ) ``` Here is the [Colab Notebook](https://github.com/bhadreshpsavani/EfficientQAExperiments/blob/master/NaturalQAT5TF.ipynb)<|||||>I'll start working on a TFT5 notebook this week. Related issues: https://discuss.huggingface.co/t/how-to-train-tft5forconditionalgeneration-model/888 https://discuss.huggingface.co/t/how-to-train-t5-with-tensorflow/641/6 https://github.com/huggingface/transformers/issues/6876<|||||>Hello @patrickvonplaten I am working with T5 for paraphrase generation, but I wanted to know if there is a way to use my own custom defined loss function for training? <|||||>Sure! You can just output the language head logits with T5 and build your own loss with it :-) <|||||>Hello @patrickvonplaten, I am also new to implement T5 fine-tuning in summarization tasks. I read online tutorial from the website: https://shivanandroy.com/fine-tune-t5-transformer-with-pytorch/ I do not know why for the decoder_input_ids and labels, the author removed last and first token ID separately. See the following codes ``` for _, data in enumerate(loader, 0): y = data["target_ids"].to(device, dtype=torch.long) y_ids = y[:, :-1].contiguous() lm_labels = y[:, 1:].clone().detach() lm_labels[y[:, 1:] == tokenizer.pad_token_id] = -100 ids = data["source_ids"].to(device, dtype=torch.long) mask = data["source_mask"].to(device, dtype=torch.long) outputs = model( input_ids=ids, attention_mask=mask, decoder_input_ids=y_ids, labels=lm_labels, ) ``` Would it be possible to ask you? I googled it and read the original paper and hugging-face documents but I still do not understand.<|||||>Hey @tom192180, Could you maybe try to use the forum for this: https://discuss.huggingface.co/ - thanks! <|||||>> Hey @tom192180, > > Could you maybe try to use the forum for this: https://discuss.huggingface.co/ - thanks! Hey @patrickvonplaten! I posted to the forum. Thank you for the directing!<|||||>Hello ! 
@patrickvonplaten, i am new to NLP and want to use T5-base model for Casual Language Modeling. My goal is to using a specific corpus to fine-tune t5-base model with a casual language modeling, I find this [document](https://huggingface.co/docs/transformers/main/en/tasks/language_modeling#causal-language-modeling) and it use `AutoModelForCasualLM`, but this liabrary just not include series of t5 models. So my question is: 1. How should I do to finetune t5 model for CLM object? In my understanding, CLM is a process of predicting `token_2` from `token_1` , `token_3` from `token_1, token_2` until the end of input sequence, so i am confused how to finish this process myself. 2. I try to spilt one my train data into something like this (ti == token_i, 1 == eos_token): input_ids&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;labels - `[t1, 1, 1, 1, 1, 1, ...]`&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`[t1, t2, 1, 1, 1, 1, ...]` - `[t1, t2, 1, 1, 1, 1, ...]`&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`[t1, t2, t3, 1, 1, 1, ...]` - `[t1, t2, t3, 1, 1, 1, ...]`&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`[t1, t2, t3, t4, 1, 1, ...]` - `[t1, t2, t3, t4, 1, 1, ...]`&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`[t1, t2, t3, t4, t5, 1, ...]` The first problem is obvious, the expanded dataset is too large and requires more time to fine-tune; The second problem is that this seems strange, and I don't know if this fulfills the CLM's mission requirements. This is the only idea that i can catch up to solve this problem, does it work? I posted it to forum but reveived nothing, would it be possible to ask you?
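For readers following this thread, here is a minimal, self-contained sketch of a single fine-tuning step that puts the recipe above together (task prefix, padded label positions masked with -100). It assumes a recent `transformers` version where the argument is called `labels` rather than `lm_labels`, the tokenizer appends `</s>` automatically, and the output exposes `.loss`; the source/target strings and hyperparameters are placeholders.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

# Placeholder training pair -- replace with your own dataset
source = "The quick brown fox jumps over the lazy dog. " * 10
target = "A fox jumps over a dog."

enc = tokenizer("summarize: " + source, max_length=512, padding="max_length",
                truncation=True, return_tensors="pt")
labels = tokenizer(target, max_length=64, padding="max_length",
                   truncation=True, return_tensors="pt").input_ids
labels[labels == tokenizer.pad_token_id] = -100  # no loss on padded positions

model.train()
outputs = model(input_ids=enc.input_ids, attention_mask=enc.attention_mask, labels=labels)
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```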
transformers
4,091
closed
Add ForMultipleChoice for Electra and Albert [WIP]
Accidentally hit enter while reading the guide, sorry
04-30-2020 21:05:27
04-30-2020 21:05:27
transformers
4,090
closed
Character level models?
Hi, are any character-level language models available? The Transformer-XL paper mentions both word-level and character-level experiments, yet here it seems only the word-level model is available. Is that correct?
04-30-2020 20:05:20
04-30-2020 20:05:20
I don't think we have any character-level language models available on [huggingface.co/models](https://huggingface.co/models), but they should be pretty straightforward to train. BTW I'm sure you saw it already but [PyTorch's nn.Transformer tuto](https://pytorch.org/tutorials/beginner/transformer_tutorial.html) is a char-level language model.<|||||>Hi, that's actually is word-level model. Do you know of any pretrained bidirectional transformer-based character level language models that I can use?<|||||>No I'm not aware of any pretrained one.<|||||>We will soon have a pretrained ReformerLM model on character level <|||||>@patrickvonplaten just for my curiosity would this be the model trained on the "Crime and Punishment" from their Colab (but the vocab is not char-only), or do you have an own trained model 🤔 <|||||>Yeah that was my bad definition of char-only I guess :D. The vocab has 320 tokens, so it's more like on "very" small word units level. Example: ```python tok = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment") tokens = tok.encode("This is a test sentence") # [108, 265, 24, 111, 4, 3, 249, 7, 76, 25, 69] print([tok.decode(token) for token in tokens]) # ['T', 'h', 'is', 'is', 'a', 't', 'est', 's', 'ent', 'en', 'ce'] ```<|||||>"True" char-level is cool because obviously the tokenizer is then pretty trivial :)<|||||>Btw, we now have a "True" char-level reformer model here: https://huggingface.co/google/reformer-enwik8 :-) <|||||>Any chance of having a TF version of the Reformer model?<|||||>Yes in ~2 month I would guess<|||||>@patrickvonplaten Is there any notebook/doc on how to fine tune the char level model using reformer-enwiki8 model? Having a doc showing how to pass training data to fine tune it would be helpful.<|||||>You should be able to leverage the code shown on the model card of reformer-enwik8 here: https://huggingface.co/google/reformer-enwik8#reformer-language-model-on-character-level-and-trained-on-enwik8 . It shows how data is passed to the model.<|||||>I don't understand how to use that code in place of a Tokenizer object. For example, to train a masked language model in [this example script](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py) the tokenizer is used in the data collator to do a lot more than just mapping to `input_ids`. I've tried loading tokenizers from character level models like CANINE but that doesn't work either. I could try instantiating an untrained tokenizer and passing all the characters in the corpus as special characters [like this person did](https://discuss.huggingface.co/t/character-level-tokenizer/12450) but it seems like I shouldn't have to do that to achieve something so basic.
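As a follow-up to the last question: the model card linked above works without a tokenizer object by mapping raw characters/bytes to ids directly. Below is a generic sketch of that idea; the offset of 2 and the padding id are assumptions for illustration and do not necessarily match the mapping used by `google/reformer-enwik8`, so check the model card before reusing it.

```python
import torch

def char_encode(texts, pad_id=0, offset=2):
    """Map raw bytes to ids, right-padding every sequence to the longest one."""
    encoded = [[b + offset for b in text.encode("utf-8")] for text in texts]
    max_len = max(len(ids) for ids in encoded)
    input_ids = torch.full((len(encoded), max_len), pad_id, dtype=torch.long)
    attention_mask = torch.zeros((len(encoded), max_len), dtype=torch.long)
    for i, ids in enumerate(encoded):
        input_ids[i, : len(ids)] = torch.tensor(ids)
        attention_mask[i, : len(ids)] = 1
    return input_ids, attention_mask

def char_decode(ids, offset=2):
    """Inverse of char_encode for a single 1-D tensor of ids."""
    return bytes(i - offset for i in ids.tolist() if i >= offset).decode("utf-8", errors="ignore")

input_ids, attention_mask = char_encode(["a character-level example", "short"])
print(char_decode(input_ids[0]))
```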
transformers
4,089
closed
GePpeTto!
# 🌟 New model addition ## Model description GePpeTto is a GPT-2 model for italian. For further details checkout the paper: https://arxiv.org/abs/2004.14253 ## Open source status * [X ] the model implementation is available: we used huggingface's GPT-2 implementation. * [X ] the model weights are available: you can find the model here: https://github.com/LoreDema/GePpeTto * [X ] who are the authors: @LoreDema @michelecafagna26 @malvinanissim @Falice1977 @marcoguerini
04-30-2020 16:27:37
04-30-2020 16:27:37
Awesome! These instructions might help: https://github.com/huggingface/transformers#Quick-tour-of-model-sharing<|||||>Hi @LoreDema, thanks for sharing GePpeTto, it's really cool! Yes you can juste create a user account and an organization on huggingface.co and share the weights there. (and add a model card here if you can)<|||||>Awesome, just pushed the model and created a pull request for the card https://github.com/huggingface/transformers/pull/4099
transformers
4,088
closed
InvalidArgumentError: Incompatible shapes: [5,20] vs. [5,18] [Op:Less]
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): T5 - TFT5ForConditionalGeneration Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: Question-Answering ## To reproduce Steps to reproduce the behavior: 1. Download pre-trained T5 model & T5 tokenizer 2. Encode this sentence: `question: What is coronavirus? context: Coronavirus disease 2019 (COVID-19) is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The disease was first identified in December 2019 in Wuhan, the capital of China's Hubei province, and has since spread globally, resulting in the ongoing 2019–20 coronavirus pandemic. As of 30 April 2020,[update] more than 3.19 million cases have been reported across 185 countries and territories, resulting in more than 227,000 deaths. More than 972,000 people have recovered.` 3. Generate N answer (the number doesn't matter, in my case it was 5/7/10) Code: ``` from transformers import T5Tokenizer, TFT5ForConditionalGeneration model_str = "t5-base" hyperparams = dict( top_k = 50, top_p = 0.95, max_length = None, temperature = 0.7, num_return_sequences = 5, do_sample=True, use_cache=False) tokenizer = T5Tokenizer.from_pretrained(model_str) model = TFT5ForConditionalGeneration.from_pretrained(model_str) def generate(input_ids): outputs = model.generate(input_ids, **hyperparams) all_outputs = [] if outputs is not None and outputs.shape[0] == 1: outputs = tokenizer.decode(tf.squeeze(outputs), skip_special_tokens=True) all_outputs.append(outputs) elif outputs is not None: all_outputs.extend([tokenizer.decode(o, skip_special_tokens=True) for o in outputs]) return all_outputs sentence = """question: What is coronavirus? context: Coronavirus disease 2019 (COVID-19) is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The disease was first identified in December 2019 in Wuhan, the capital of China's Hubei province, and has since spread globally, resulting in the ongoing 2019–20 coronavirus pandemic. As of 30 April 2020,[update] more than 3.19 million cases have been reported across 185 countries and territories, resulting in more than 227,000 deaths. More than 972,000 people have recovered. """.replace("\n"," ") input_ids = tokenizer.encode(sentence,return_tensors="tf") generate(input_ids) ``` ## Expected behavior This error happens for some questions only. If you remove the question mark from the question you'll get an output. First I've thought that the question mark is the problem, but on other examples both with and without question mark resulted in the same error. ## Environment info - `transformers` version: 2.8.0 - Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic (Google Colab) - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.0+cu101 (False) - Tensorflow version (GPU?): 2.1.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
04-30-2020 14:42:37
04-30-2020 14:42:37
Hey @zirlman, Thanks a lot for catching the error and the detailed error description. The PR that will fix the error is linked to the issue :-) <|||||>@patrickvonplaten that's great. Thank you 😁
transformers
4,087
closed
Added huseinzol05/gpt2-117M-bahasa-cased README.md
04-30-2020 12:42:17
04-30-2020 12:42:17
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4087?src=pr&el=h1) Report > Merging [#4087](https://codecov.io/gh/huggingface/transformers/pull/4087?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e73595bd649a149879816a59b56f57c8b37c73d0&el=desc) will **increase** coverage by `0.92%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4087/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4087?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4087 +/- ## ========================================== + Coverage 77.99% 78.92% +0.92% ========================================== Files 114 114 Lines 18667 18667 ========================================== + Hits 14559 14732 +173 + Misses 4108 3935 -173 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4087?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.90% <0.00%> (+0.69%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.61% <0.00%> (+0.82%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <0.00%> (+1.37%)` | :arrow_up: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.73% <0.00%> (+2.29%)` | :arrow_up: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.81% <0.00%> (+2.62%)` | :arrow_up: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: | | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4087?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4087?src=pr&el=footer). Last update [e73595b...a4a673a](https://codecov.io/gh/huggingface/transformers/pull/4087?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>You should rebase instead of merging `upstream/master`, this would make merging this easier :) Anyways, cherrypicked in 8829ace4aac6b2ebef0eb8753e6b58d7eafc7734, thanks!
transformers
4,086
closed
Not able to reproduce the same CoLA result as the huggingface default
# 🐛 Bug When I run your official `run_glue.py` script on CoLA, the performance is significantly lower than the ones in your website. Mine: ~10, Yours: ~50 ## Information Model I am using (Bert, XLNet ...): bert-base-uncased Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run my provided command line 2. 3. ``` export GLUE_DIR=downstream_datasets/glue export TASK_NAME=CoLA SEED=43 export CUDA_VISIBLE_DEVICES=0 python run_glue.py \ --model_type bert \ --model_name_or_path bert-base-uncased \ --task_name $TASK_NAME \ --do_train \ --save_steps 200 \ --do_eval \ --data_dir $GLUE_DIR/$TASK_NAME \ --max_seq_length 128 \ --per_gpu_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 30 \ --seed $SEED \ --output_dir tmp/"$TASK_NAME"_seed"$SEED"/ ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 1.17.2 - Platform: CentOS 7.7 64-bit - Python version: python 3.7 - PyTorch version (GPU?): 1.5.0+cuda9.2 - Tensorflow version (GPU?): - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No, single machine
04-30-2020 10:28:13
04-30-2020 10:28:13
Do you have a link to a TensorBoard or any other experiment tracking for your training? You can also add the `--evaluate_during_training` flag to see the evolution of the eval metric during training.<|||||>Have you tried with fewer epochs? (e.g ```--num_train_epochs 3.0``` as stated in the repo's example) Also what you need to consider is seed hyper-parameter. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,085
closed
Use roberta-med or scibert for fillmask
Hi I have been trying to use roberta-med for fillmask ` nlp = pipeline("fill-mask", model="allenai/biomed_roberta_base", tokenizer="allenai/biomed_roberta_base")` However , I get this **Model name 'allenai/biomed_roberta_base' was not found in model name list** (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-japanese, bert-base-japanese-whole-word-masking, bert-base-japanese-char, bert-base-japanese-char-whole-word-masking, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased, openai-gpt, transfo-xl-wt103, gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2, ctrl, xlnet-base-cased, xlnet-large-cased, xlm-mlm-en-2048, xlm-mlm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-enro-1024, xlm-mlm-tlm-xnli15-1024, xlm-mlm-xnli15-1024, xlm-clm-enfr-1024, xlm-clm-ende-1024, xlm-mlm-17-1280, xlm-mlm-100-1280, roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector, distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased, distilbert-base-uncased-finetuned-sst-2-english, albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2, camembert-base, umberto-commoncrawl-cased-v1, umberto-wikipedia-uncased-v1, t5-small, t5-base, t5-large, t5-3b, t5-11b, xlm-roberta-base, xlm-roberta-large, xlm-roberta-large-finetuned-conll02-dutch, xlm-roberta-large-finetuned-conll02-spanish, xlm-roberta-large-finetuned-conll03-english, xlm-roberta-large-finetuned-conll03-german, flaubert-small-cased, flaubert-base-uncased, flaubert-base-cased, flaubert-large-cased). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/allenai/biomed_roberta_base/modelcard.json' was a path or url to a model card file named modelcard.json or a directory containing such a file but couldn't find any such file at this path or url. Anyway I could use it for fillmask ? Facing the same for scibert too
04-30-2020 09:36:17
04-30-2020 09:36:17
@patrickvonplaten <|||||>Can you install transformers from source?<|||||>Thanks worked for biomed-roberta However ``` nlp1 = pipeline("fill-mask", model="allenai/scibert_scivocab_cased", tokenizer="allenai/scibert_scivocab_cased") nlp1("coinfection by multiple quasispecies is not uncommon in human <mask>, and that passage to Vero cells may either generate new mutations at a low rate, or titrates out one quasispecies in the transition.") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/nas/home/tuhinc/miniconda3/envs/robertamed/lib/python3.7/site-packages/transformers/pipelines.py", line 743, in __call__ masked_index = (input_ids == self.tokenizer.mask_token_id).nonzero().item() ValueError: only one element tensors can be converted to Python scalars ```<|||||>It’s not using the same mask token :) Try [MASK] here
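To make the last suggestion concrete, here is a small sketch that avoids hard-coding the mask string by reading it from the pipeline's tokenizer, so it works for both RoBERTa-style `<mask>` and BERT-style `[MASK]` vocabularies:

```python
from transformers import pipeline

nlp = pipeline("fill-mask",
               model="allenai/scibert_scivocab_cased",
               tokenizer="allenai/scibert_scivocab_cased")

# Build the input with the tokenizer's own mask token instead of a hard-coded "<mask>"
mask = nlp.tokenizer.mask_token
text = f"coinfection by multiple quasispecies is not uncommon in human {mask}."
print(nlp(text))
```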
transformers
4,084
closed
feature-extraction pipeline: is the second-to-last layer a better representation than the last hidden layer?
# 🐛 Bug I see that for feature-extraction pipeline, you output the last hidden layer of the transformer. Another approach I have seen is people using second to last layer as the output (Bert-as-a-service). The rationale being that the last layer is too close to the model task such as masked language model objective. I am a bit confused as to what would be an ideal representation layer for sentence embedding. Model I am using (Bert, XLNet ...): Bert ## Environment info - `transformers` version: 2.8.0 - Platform: CPU - Python version: 3.7 - PyTorch version (GPU?): 1.5.0 - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?:
04-30-2020 07:16:32
04-30-2020 07:16:32
This is quite a general question, and I fear there is no one right answer. You can see the "best layer to use" as another hyperparameter to tune in your task: the right value will depend on your task, dataset, and query. Oftentimes papers do some kind of evaluation of this. For instance, in the original [BERT paper](https://arxiv.org/pdf/1810.04805.pdf) they use BERT for feature extraction and evaluate which layer combinations worked best (Table 7). They find that the second-to-last layer performs better than the last layer, but that a concatenation of the last four layers is best. But again, that is using BERT, on a specific task, with a specific dataset. Your results may differ, so it's best to test and tune to your problem.<|||||>@BramVanroy @kaushaltrivedi How do you customize what layers go into the feature representation (via the pipeline)?
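Regarding the last question: the pipeline itself does not expose a layer-selection knob, but the underlying model can return all hidden states, which makes it straightforward to pick the second-to-last layer or concatenate the last four (as in Table 7 of the BERT paper). A sketch, assuming a version where `output_hidden_states=True` is supported and outputs are returned as a dict-like object (older versions return the hidden states as the last element of a tuple):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

inputs = tokenizer("This is a test sentence", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

hidden_states = outputs.hidden_states               # embeddings output + one tensor per layer
second_to_last = hidden_states[-2]                  # (batch, seq_len, hidden_size)
last_four = torch.cat(hidden_states[-4:], dim=-1)   # BERT-paper-style concatenation
sentence_embedding = second_to_last.mean(dim=1)     # e.g. mean pooling over tokens
```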
transformers
4,083
closed
Why is masking ID optional?
I see many models have masking ID as an optional input. To my understanding, the models need to know which tokens are padded to avoid doing attention with those tokens. How can the models figure this out if we do not provide masking ID? Do these models consider tokens with id 0 to be the masked tokens?
04-30-2020 01:43:58
04-30-2020 01:43:58
In my past experience, I noticed that masked tokens would get very low attention scores (that's what self-attention is supposed to do) and would not contribute to the model's accuracy. I wonder how it would be in a transformer architecture.<|||||>Not all models require this feature, mainly Roberta, so it includes XLM-R, Camembert, etc.<|||||>I think your question is why are the `attention_mask` model inputs optional? This is because you need attention masks only when padding. If you're not batching your inputs, you do not need to pad and therefore do not need to specify an attention mask.<|||||>@LysandreJik I wonder if there is padding and the user does not provide `attention_mask`, does the model think there is no padding or is there a mechanism to figure it out under the hood (maybe treating token ID 0 as the padding token)? I am just trying to fully understand the model's behavior. Thanks<|||||>No, if you don't provide any `attention_mask` then the model will attend to every token the same way. You absolutely should use the `attention_mask` as soon as you're padding, or the model will output flawed results!<|||||>Got it, thanks a lot.
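To illustrate the point above, a short sketch that pads a batch and passes the resulting `attention_mask` along (the tokenizer builds the mask automatically when asked to pad); this assumes a version where the tokenizer's `__call__` accepts `padding=True`:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

batch = tokenizer(["a short sentence",
                   "a noticeably longer sentence that forces padding of the first one"],
                  padding=True, return_tensors="pt")
print(batch["attention_mask"])  # 1 for real tokens, 0 for padding

with torch.no_grad():
    outputs = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"])
```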
transformers
4,082
closed
Zero-shot PPLs vary wildly across gpt2 model sizes
# ❓ Questions & Help ## Details We've been experimenting with multiple datasets for fine-tuning both OpenAI GPT and GPT-2. What we've noticed is that the zero-shot PPL of GPT-2 small and medium are in the order of `3e+40`. However, the zero-shot PPL of GPT-2 large and XL on the same datasets are much more reasonable, like ~30. The zero-shot PPL of OpenAI GPT on the same datasets are also reasonable, on the order of 200-300. We're trying to understand why the pre-trained GPT-2 small and medium models display such vastly different zero-shot PPLs in comparison to large, XL and OpenAI GPT. Has anyone else also noticed the same in their experiments? What might the cause for this be?
04-29-2020 23:50:59
04-29-2020 23:50:59
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
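The thread never pins down the cause, but for reference here is a minimal sketch of how zero-shot perplexity is usually computed with these models (a single short text, no sliding window); comparing such a simple baseline across model sizes can help rule out evaluation-script issues. It assumes a version where passing `labels` returns a cross-entropy loss via `.loss`.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "Some held-out evaluation text goes here."
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    loss = model(**enc, labels=enc["input_ids"]).loss  # mean token-level cross-entropy
print("perplexity:", torch.exp(loss).item())
```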
transformers
4,081
closed
Examples link to the master branch which seems to be ahead of the pip install version?
Documentation for training a language model points to https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py The import `from transformers import ( CONFIG_MAPPING, MODEL_WITH_LM_HEAD_MAPPING, AutoConfig, AutoModelWithLMHead, AutoTokenizer, DataCollatorForLanguageModeling, HfArgumentParser, LineByLineTextDataset, PreTrainedTokenizer, TextDataset, Trainer, TrainingArguments, set_seed, )` leads to "No name ___ in module transformers" errors. I installed transformers using pip. `pip install -U transformers` The import problems seem to be gone when I install from source.
04-29-2020 23:12:33
04-29-2020 23:12:33
I would use the version installed from source but training a tokenizer is substantially slower than the pip install version. When I run the code snippet below, the pip install version finishesin ~5 minutes but the install-from-source version takes 20+ (I didn't wait for it to finish). For some reason there's a new "Reading files" stage which takes forever to finish. ```import os from tokenizers import ByteLevelBPETokenizer, BertWordPieceTokenizer tokenizer = ByteLevelBPETokenizer(lowercase=False) paths = ["./data.txt"] tokenizer.train(files=paths, vocab_size=52000, min_frequency=2, special_tokens=[ "<s>", "<pad>", "</s>", "<unk>", "<mask>", ]) out_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "token") os.makedirs(out_path, exist_ok=True) tokenizer.save(out_path)```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,080
closed
tokenizer.encode_plus stopped returning `attention_mask`
I have a codebase which was working fine but today when I was trying to run, I observed that `tokenizer.encode_plus` stopped returning `attention_mask`. Is it removed in the latest release? Or, I need to do something? The following piece of code was working for me. ``` encoded_dict = tokenizer.encode_plus( truncated_query, span_doc_tokens, max_length=max_seq_length, return_overflowing_tokens=True, pad_to_max_length=True, stride=max_seq_length - doc_stride - len(truncated_query) - sequence_pair_added_tokens, truncation_strategy="only_second", return_token_type_ids=True, return_attention_mask=True ) ``` But now, I get only `dict_keys(['input_ids', 'token_type_ids'])` from `encode_plus`. Also, I realized that the returned `input_ids` are not padded to `max_length`.
04-29-2020 22:25:24
04-29-2020 22:25:24
How did you resolve this? I am facing this issue as well.<|||||>@shikharsingla, what is your exact bug? Which tokenizer are you using, with which code?<|||||>Just saw your issue https://github.com/huggingface/transformers/issues/4868, looking into it.
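For anyone hitting the same thing, a minimal check along these lines (without the stride/overflow options) can narrow down whether it is a version issue; note that recent versions spell the padding and truncation arguments `padding="max_length"` and `truncation="only_second"` instead of `pad_to_max_length`/`truncation_strategy`:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

encoded = tokenizer.encode_plus(
    "what is a panda?",                      # query
    "the panda is a bear native to china.",  # context
    max_length=64,
    padding="max_length",
    truncation="only_second",
    return_token_type_ids=True,
    return_attention_mask=True,
)
print(encoded.keys())              # expect input_ids, token_type_ids, attention_mask
print(len(encoded["input_ids"]))   # expect 64 when padding to max_length
```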
transformers
4,079
closed
Loading a pre-trained T5 model from TF into HuggingFace
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> Hi everybody, I have the following problem: I want to convert my custom pre-trained t5 model(trained by following the example provided by Raffel et al on t5 GitHub page), however whenever I try to translate the pre-trained model from tf to pytorch with convert_t5_original_tf_checkpoint_to_pytorch.py script, after the instantiation of it I got the following error: INFO:transformers.modeling_t5:Loading TF weight encoder/block_011/layer_001/layer_norm/scale_slot_v with shape [768] INFO:transformers.modeling_t5:Loading TF weight encoder/final_layer_norm/scale with shape [768] INFO:transformers.modeling_t5:Loading TF weight encoder/final_layer_norm/scale_slot_v with shape [768] INFO:transformers.modeling_t5:Loading TF weight global_step with shape [] <b>File "/Users/antonio/opt/anaconda3/lib/python3.7/site-packages/tensorflow-2.2.0rc2-py3.7-macosx-10.9-x86_64.egg/tensorflow/python/training/py_checkpoint_reader.py", line 70, in get_tensor self, compat.as_bytes(tensor_str)) RuntimeError: /Users/antonio/Downloads/tf_path/model.ckpt-306400.data-00000-of-00002; No such file or directory</b> I can assume that the error is on global_step One more thing, since I have trained a t5 base model, Is fine to use your json config t5-base with it? or I should create one from scratch for my specific case? Thanks in advance and have a good day!
04-29-2020 21:53:03
04-29-2020 21:53:03
Hi @antoniomastro1996, Can you post your environment information here as written in the "Issue" template. Also are you sure that the path: `/Users/antonio/Downloads/tf_path/model.ckpt-306400.data-00000-of-00002` exist? Also can you add the link the Raffel example script? <|||||>Hi @patrickvonplaten this is my environment: <b> - `transformers` version: 2.8.0<br> - Platform: Darwin-19.4.0-x86_64-i386-64bit<br> - Python version: 3.7.6<br> - PyTorch version (GPU?): 1.4.0 (False)<br> - Tensorflow version (GPU?): 2.2.0-rc2 (False)<br> - Using GPU in script?: no<br> - Using distributed or parallel set-up in script?: no<br> </b> <br> About the training process, I have adapted the following Colab link: https://tiny.cc/t5-colab This is the one provided by Raffel et all on their GitHub page. About the path, yes is fine, it exists. To me seem strange the error manifest itself on this global_step param. Furthermore I have seen the t5_modeling code and this global_step alongside with other variables is in a sort of black list if we perform fine-tuning.<|||||>Hi @patrickvonplaten I've just solved, I changed the pre-trained model and worked fine. However, if I try to load the model with the following command: model = T5ForConditionalGeneration.from_pretrained("/content/drive/My Drive/t5_model.bin") I get the following error: <b>UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte</b> Like I said in the previous comment, I've used this config file https://s3.amazonaws.com/models.huggingface.co/bert/t5-small-config.json where I have just removed the part related to the task in such a way the final output appears like this: { "architectures": [ "T5WithLMHeadModel" ], "d_ff": 2048, "d_kv": 64, "d_model": 512, "decoder_start_token_id": 0, "dropout_rate": 0.1, "eos_token_id": 1, "initializer_factor": 1.0, "is_encoder_decoder": true, "layer_norm_epsilon": 1e-06, "model_type": "t5", "n_positions": 512, "num_heads": 8, "num_layers": 6, "output_past": true, "pad_token_id": 0, "relative_attention_num_buckets": 32, "vocab_size": 32128 } Where am I going wrong?<|||||>Sorry to answer this late! Loading the model via: ```python model = T5ForConditionalGeneration.from_pretrained("/content/drive/My Drive/") ``` should work :-)
transformers
4,078
closed
Using 'ner' task in pipeline with a non default model gives me entities as "LABEL-6" , "LABEL-8" instead of "I-ORG" and "I-LOC"
model - dbmdz/bert-base-cased-finetuned-conll03-english Language - english The problem arises when using: * [x] my own modified scripts The tasks I am working on is: * [x] an official GLUE/SQUaD task: ner ## To reproduce ``` tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-cased-finetuned-conll03-english") model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-base-cased-finetuned-conll03-english") ner_task = pipeline(task='ner', model=model, tokenizer=tokenizer) ner_task('Hugging Face is a French company based in New-York.') ```
04-29-2020 20:03:43
04-29-2020 20:03:43
Oh no, there's something wrong with the label alignment (can also be seen in the `config.json`): ```json "id2label": { "0": "LABEL_0", "1": "LABEL_1", "2": "LABEL_2", "3": "LABEL_3", "4": "LABEL_4", "5": "LABEL_5", "6": "LABEL_6", "7": "LABEL_7", "8": "LABEL_8" }, "initializer_range": 0.02, "intermediate_size": 3072, "label2id": { "LABEL_0": 0, "LABEL_1": 1, "LABEL_2": 2, "LABEL_3": 3, "LABEL_4": 4, "LABEL_5": 5, "LABEL_6": 6, "LABEL_7": 7, "LABEL_8": 8 }, ``` Thanks for reporting, I'll try to fix it :) In the meantime, you could use our large model `dbmdz/bert-large-cased-finetuned-conll03-english` 😅<|||||>Also I forgot to add, I see the [CLS], [SEP] tokens and also words like "is" , "a" in the NER results which are not present when I use the default dbmdz/bert-large-cased-finetuned-conll03-english.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/bert-base-cased-finetuned-conll03-english/config.json This file needs fixing.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I'm having the same problem with the model: dbmdz/bert-base-multilingual-cased-finetuned-conll03-spanish<|||||>> I'm having the same problem with the model: dbmdz/bert-base-multilingual-cased-finetuned-conll03-spanish One fix is to download the model and config.json to your local and then just manually edit the "config.json" file. This is what it should actually look like :- https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/bert-large-cased-finetuned-conll03-english/config.json<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
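Until the hosted `config.json` is fixed, a local workaround along the lines suggested above is to overwrite the label mapping before building the pipeline. The label order below is copied from the large model's config that the thread links to; verify it against that file before relying on it.

```python
from transformers import AutoConfig, AutoModelForTokenClassification, AutoTokenizer, pipeline

model_name = "dbmdz/bert-base-cased-finetuned-conll03-english"
# Label order taken from the large model's config.json linked above -- check before reusing
labels = ["O", "B-MISC", "I-MISC", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

config = AutoConfig.from_pretrained(model_name)
config.id2label = {i: label for i, label in enumerate(labels)}
config.label2id = {label: i for i, label in enumerate(labels)}

model = AutoModelForTokenClassification.from_pretrained(model_name, config=config)
tokenizer = AutoTokenizer.from_pretrained(model_name)
ner = pipeline("ner", model=model, tokenizer=tokenizer)
print(ner("Hugging Face is a French company based in New-York."))
```

Restoring the real label names also lets the pipeline filter out `O` predictions again, which is why stray tokens like `[CLS]` and "is" show up when the labels are only `LABEL_0` … `LABEL_8`.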
transformers
4,077
closed
[docs] bad RST in PretrainedModel.generate docstring
![image](https://user-images.githubusercontent.com/6045025/80641266-96218b00-8a32-11ea-93d6-36484b3aad92.png) https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L857
04-29-2020 19:59:59
04-29-2020 19:59:59
fixed in Marian PR
transformers
4,076
closed
Remove double bias in TF Albert
closes #3386
04-29-2020 18:58:16
04-29-2020 18:58:16
Also remove the now-unused bias variable on https://github.com/huggingface/transformers/blob/50c9c3e98235d447a7a2d0efebc9ff4e426fce2d/src/transformers/modeling_tf_albert.py#L467<|||||>You're absolutely right, thanks @jarednielsen !
transformers
4,075
closed
How to use a vocab for another language in run_ner.py?
Help me, thank you!!!
04-29-2020 17:47:43
04-29-2020 17:47:43
Which language do you want to use? You can load a model that was pre-trained on a certain language or you can use mBERT / XLM instead.<|||||>> Which language do you want to use? > You can load a model that was pre-trained on a certain language or you can use mBERT / XLM instead. Thanks for the reply, I want to use BERT for the Vietnamese language.<|||||>Unfortunately, there is no BERT for Vietnamese available for now. You will have to wait for somebody to pre-train a Vietnamese model, or you can use Multilingual-BERT (mBERT) instead: ```bert-base-multilingual-cased``` or ```bert-base-multilingual-uncased``` These models were pre-trained on the [top 100 languages in Wikipedia](https://meta.wikimedia.org/wiki/List_of_Wikipedias).
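To make that concrete, mBERT can be loaded for token classification from Python as sketched below; `num_labels=9` is only an assumption matching a CoNLL-style label set and should be set to the size of your own `labels.txt`.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels must match the number of entries in your labels.txt
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=9)
```

With `run_ner.py`, the equivalent is simply passing `--model_name_or_path bert-base-multilingual-cased` together with your Vietnamese training files and `--labels labels.txt`.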
transformers
4,074
closed
Issue with non-text files and bertabs example
Hello team, I've been running the bertabs example on some plain txt files using the latest version available from the repository (pulled on 4/28/2020). The folder where I placed the txt files for processing contains the configuration file automatically generated by the filesystem (".DS_store", as I'm running macos, but I suspect Windows would have similar issues). Because the configuration file is binary, the following piece of code in util_summarization.py (lines 56-59) fails with a UnicodeDecodeError exception: with open(document_path, encoding="utf-8") as source: raw_story = source.read() story_lines, summary_lines = process_story(raw_story) return document_name, story_lines, summary_lines I added a "try ... except" block to handle the UnicodeDecodeError exception and return an empty raw_story string if the exception is raised. This fixed the issue for me.
04-29-2020 15:26:41
04-29-2020 15:26:41
So the issue is fixed for you then?
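For reference, a sketch of the guard described in the report (names follow the snippet quoted above; `process_story` is assumed to be the existing helper in the summarization utils module, which is not reproduced here):

```python
def load_story(document_path, document_name):
    try:
        with open(document_path, encoding="utf-8") as source:
            raw_story = source.read()
    except UnicodeDecodeError:
        # Binary artifacts such as .DS_Store cannot be decoded as UTF-8 -- treat them as empty
        raw_story = ""
    story_lines, summary_lines = process_story(raw_story)
    return document_name, story_lines, summary_lines
```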
transformers
4,073
closed
Save T5 model as H5 and convert it to model.json to be used in TensorflowJS
# ❓ Questions & Help How can I save a T5 model as HDF5 file? In the end, I want to load it in the browser via `tensorflow-converter `and `tensorflowjs` Code: ``` model_str = "t5-small" tokenizer = T5Tokenizer.from_pretrained(model_str) model = TFT5ForConditionalGeneration.from_pretrained(model_str) tokenizer_path = "/content/t5_tokenizer" model_path = "/content/t5_base_model" tfjs_path = "/content/t5_base_model_js" model.save(model_path,save_format="h5") ``` When I run that code I receive the following error: ``` --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) <ipython-input-53-8343a8512f91> in <module>() ----> 1 model.save(model_path,save_format="h5") 1 frames /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options) 103 not isinstance(model, sequential.Sequential)): 104 raise NotImplementedError( --> 105 'Saving the model to HDF5 format requires the model to be a ' 106 'Functional model or a Sequential model. It does not work for ' 107 'subclassed models, because such models are defined via the body of ' NotImplementedError: Saving the model to HDF5 format requires the model to be a Functional model or a Sequential model. It does not work for subclassed models, because such models are defined via the body of a Python method, which isn't safely serializable. Consider saving to the Tensorflow SavedModel format (by setting save_format="tf") or using `save_weights`. ``` The error says that the model needs to be a Functional/Sequential model, but as far as I know, TF_Transformers are based on TF 2.0 so this shouldn't be a problem. Also when I run this code `isinstance(model, tf.keras.models.Model)` it returns True
04-29-2020 13:37:12
04-29-2020 13:37:12
Hi, our API has its own saving function: `model.save_pretrained("directory")`, which saves the model alongside its configuration file. Can you let me know if this method fits your use case?<|||||>Hi, using that method I've managed to save the model. Unfortunately, when I convert the model into `model.json` load it later in my JS file I receive the following error: ``` TypeError: Cannot read property 'model_config' of null at models.ts:287 at common.ts:14 at Object.next (common.ts:14) at a (common.ts:14) ``` Bellow you can steps I've followed (to try) to load the model into the JS file Model saving: ``` model_path = "/content/t5_base_model" tfjs_path = "/content/t5_base_model_js" model.save_pretrained(model_path) model_path += "/tf_model.h5" ``` Conversion: ``` !tensorflowjs_converter --input_format keras \ $model_path \ $tfjs_path ``` JS file: ``` var model; const model_path = "../libraries/models/t5_base_model_js/model.json"; document.querySelector("#msg").innerText = "[LOADING TF MODEL]"; tf.loadLayersModel(model_path) .then((loadedModel) => { model = loadedModel; document.querySelector("#msg").innerText = "[MODEL LOADED]"; }).catch(err => console.log(err)); ``` I assume the problem occurs because `model.save_pretrained(dir)` saves the config in a separate file. I've tried to figure out if maybe `tf-converter` has a parameter for additional files to bundle into model.json, but with no success. @LysandreJik any idea what I could do to fix this error? P.S. I've tried to load mobilenet from TF-Hub to make sure my script works fine and it successfully loaded it. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Any updates on this? Stuck on saving mobilebert as a tf layers model
transformers
4,072
closed
How can I continue finetuning from checkpoint using the NER script?
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> I'm trying to execute [this script](https://github.com/huggingface/transformers/tree/master/examples/ner) using `run_ner.py` but everything I tried to continue fine tuning from checkpoint failed. Any ideas? I run it using Google Colab. Hereafter the cell content I run: %cd "/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27" %pip install . %pip install --upgrade . %pip install seqeval from fastai import * from transformers import * %cd "/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner" !python "/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/run_ner.py" --data_dir ./ \ --model_type bert \ --labels ./labels.txt \ --model_name_or_path "/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000" \ --output_dir "/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/check" \ --max_seq_length "256" \ --num_train_epochs "5" \ --per_gpu_train_batch_size "4" \ --save_steps "10000" \ --seed "1" \ --do_train --do_eval --do_predict As you can see, I already tried to substitute model_name_or_path parameter value (that was "bert-base-cased") with checkpoint directory but several errors occurred, asking for the right model name and missing files. 04/28/2020 15:16:36 - INFO - transformers.tokenization_utils - Model name '/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). Assuming '/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000' is a path, a model identifier, or url to a directory containing tokenizer files. 04/28/2020 15:16:36 - INFO - transformers.tokenization_utils - Didn't find file /content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000/vocab.txt. We won't load it. 
04/28/2020 15:16:36 - INFO - transformers.tokenization_utils - Didn't find file /content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000/added_tokens.json. We won't load it. 04/28/2020 15:16:36 - INFO - transformers.tokenization_utils - Didn't find file /content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000/special_tokens_map.json. We won't load it. 04/28/2020 15:16:36 - INFO - transformers.tokenization_utils - Didn't find file /content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000/tokenizer_config.json. We won't load it. Traceback (most recent call last): File "/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/run_ner.py", line 290, in <module> main() File "/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/run_ner.py", line 149, in main use_fast=model_args.use_fast, File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_auto.py", line 197, in from_pretrained return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 868, in from_pretrained return cls._from_pretrained(*inputs, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 971, in _from_pretrained list(cls.vocab_files_names.values()), OSError: Model name '/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). We assumed '/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url. Thank you in advance. <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**: https://stackoverflow.com/questions/61482518/how-can-i-continue-finetuning-from-checkpoint-using-the-ner-script
04-29-2020 13:25:12
04-29-2020 13:25:12
Hello! You're pointing the script to a file or folder `bert-base-256/checkpoint-10000`. Is that a folder containing model file, or is that a file in itself?<|||||>> Hello! You're pointing the script to a file or folder `bert-base-256/checkpoint-10000`. Is that a folder containing model file, or is that a file in itself? It is the folder created by the script containing these files: - optimizer.pt and scheduler.pt - pytorch_model.bin and training_args.bin - config.json <|||||>Ah, I think I know where the issue stems from. We've changed the paradigm for our examples, which now rely on a `Trainer` abstraction. Until c811526 five days ago, the tokenizer was unfortunately not saved, which is fixed now. I would recommend you use the lastest script so that it doesn't happen anymore. To fix this error, you could manually save your tokenizer in that folder as so: ```py from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained("bert-base-cased") tokenizer.save_pretrained("/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000") ``` This will save the tokenizer file in the appropriate folder. Please keep in mind that this is to reload the original `bert-base-cased` tokenizer. If you have modified your tokenizer in any way, you should save that tokenizer in the aforementioned folder. Please let me know if I can be of further help!<|||||>> Ah, I think I know where the issue stems from. We've changed the paradigm for our examples, which now rely on a `Trainer` abstraction. Until [c811526](https://github.com/huggingface/transformers/commit/c81152600452ad1bec4ab705356788d29a3573ee) five days ago, the tokenizer was unfortunately not saved, which is fixed now. > > I would recommend you use the lastest script so that it doesn't happen anymore. > > To fix this error, you could manually save your tokenizer in that folder as so: > > ```python > from transformers import BertTokenizer > > tokenizer = BertTokenizer.from_pretrained("bert-base-cased") > tokenizer.save_pretrained("/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000") > ``` > > This will save the tokenizer file in the appropriate folder. Please keep in mind that this is to reload the original `bert-base-cased` tokenizer. If you have modified your tokenizer in any way, you should save that tokenizer in the aforementioned folder. Please let me know if I can be of further help! I downloaded transformers library on 27 April, two days ago, after [c811526](https://github.com/huggingface/transformers/commit/c81152600452ad1bec4ab705356788d29a3573ee). So, does this problem still persist?<|||||>@ChessMateK This might sound a bit silly but did you check if the drive was mounted in colab when you ran that command ?<|||||>@patil-suraj Yes, I checked. Furthermore, if I insert as output path the folder where all the checkpoints are saved, even if they are not loaded, the "startup" time of the script increases considerably.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,071
closed
Native integration with pytorch/serve
First off, many thanks to @julien-c for sharing my [blog post](https://medium.com/analytics-vidhya/deploy-huggingface-s-bert-to-production-with-pytorch-serve-27b068026d18) for an example custom handler integration of a BERT sequence classifier model with [pytorch/serve](https://github.com/pytorch/serve/).

# 🚀 Feature request

Provide scripts to automatically generate a [pytorch/serve](https://github.com/pytorch/serve/) custom handler for transformers models, so that models can be served after training. Example usage could be:
```
generate_serving_handler --output-filename transformers_ner_handler.py --handler-type "ner" /path/to/checkpoint_directory
```

## Motivation

There is already an increasing need to serve transformers models with pytorch/serve, see for example:
https://github.com/pytorch/serve/issues/249
https://github.com/pytorch/serve/issues/267

My understanding of the goal of transformers is to support NLP users from model prototyping, through model training and evaluation, all the way into their production systems. Native integration with a serving framework seems like a logical step given this understanding.

There is ongoing work in `pytorch/serve` to showcase how users can write custom handlers for their transformers models. However, handler code will have to differ quite significantly for different transformers applications (NER, QA, summarization, classification), which puts a heavy burden on the user.

Technically, this feature could also be integrated on the `pytorch/serve` side, but I'm wondering if it would not be easier to maintain here, as one could more easily test that newly integrated models can still be served with the handler code.

## Your contribution

I would be happy to support and submit a PR - for simple sentence classification I have handler code ready; other applications would have to be designed and integrated.
04-29-2020 13:12:43
04-29-2020 13:12:43
That would be awesome – and might be of interest to @mfuntowicz among others. We'll help you if needed.<|||||>Awesome! Thanks for the upvotes and interest, very appreciated @julien-c, thanks for your support of the idea. I will create a PR and get started on a basic first version and cross-reference it here. @all: Feel free to chime in on the PR to help shape the interface :)<|||||>@MFreidank I was curious if you had an update here! I've implemented a version of a PyTorch serve handler that operates on the https://huggingface.co/transformers/main_classes/pipelines.html abstraction for a currently unreleased project, as opposed to a distinct model or tokenizer. I think it's a nice abstraction for this use-case, what do you think? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
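For illustration, here is a rough sketch of what such a handler could look like when built on the `pipeline` abstraction mentioned above. This is not the proposed implementation: it assumes pytorch/serve's module-level `handle(data, context)` entry point and a model archive directory containing a `save_pretrained()` export of a fine-tuned classifier; all names and keys are assumptions.
```python
# handler_sentiment.py -- illustrative sketch only, not the proposed generate_serving_handler output
from transformers import pipeline

_nlp = None  # loaded lazily on the first request

def handle(data, context):
    """Module-level entry point assumed for a pytorch/serve custom handler (assumption)."""
    global _nlp
    if _nlp is None:
        # model_dir is assumed to contain a save_pretrained() export of a fine-tuned classifier
        model_dir = context.system_properties.get("model_dir")
        _nlp = pipeline("sentiment-analysis", model=model_dir, tokenizer=model_dir)
    if data is None:
        return None
    texts = []
    for row in data:
        text = row.get("data") or row.get("body")
        texts.append(text.decode("utf-8") if isinstance(text, (bytes, bytearray)) else text)
    return _nlp(texts)
```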
transformers
4,070
closed
No CORS Policy for Write With Transformer Endpoints
# 🐛 Bug ## Information Write with Transformer is a very good place for trying available models. But, there is no CORS policy to secure it. This could potentially be used by an unauthorized third-parties or attackers to DDoS. ## To reproduce Steps to reproduce the behavior: POST request to `/autocomplete/distilgpt2/small` ``` curl -d '{"context":"See how a modern neural network auto-completes your text 🤗","model_size":"distilgpt2/small","top_p":0.9,"temperature":1,"max_time":1}' -H 'Content-Type: application/json' https://transformer.huggingface.co/autocomplete/distilgpt2/small ``` Potential endpoints: 1. `/autocomplete/distilgpt2/small` 1. `/autocomplete/gpt2/small` 1. `/autocomplete/gpt2/medium` 1. `/autocomplete/gpt2/large` 1. `/autocomplete/gpt2/xl` <- 504 error 1. `/autocomplete/pplm` <- 504 error <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The endpoints could be used by unauthorized third-parties and risk of DDoS .
04-29-2020 09:22:09
04-29-2020 09:22:09
Hi @ivokun – this was actually on purpose as someone was using it as an API We'll secure it down the line if it gets abused.
transformers
4,069
closed
Delete batch = tuple(t.to(args.device) for t in batch) for it perform…
…s two times
04-29-2020 07:13:41
04-29-2020 07:13:41
This seems reasonable, what do you think @suvrat96?<|||||>(never mind the failing CI test)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,068
closed
Is there pre-train bert or xlnet from scratch code ?
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
04-29-2020 07:13:04
04-29-2020 07:13:04
Yes, you can use the `run_language_modeling.py` script for pre-training e.g. BERT from scratch: https://huggingface.co/transformers/examples.html#language-model-training (Just leave the `model_name_or_path` parameter empty for pre-training from scratch)<|||||>@stefan-it thank u.<|||||>@xealml Curious whether you managed to train XLNet? If so, any pointers you could share? <|||||>@jbmaxwell @xealml I am getting lost on this as well. How do you train XLNet from scratch? I can't find any field for inputting raw data to it
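For reference, a minimal sketch of the from-scratch setup that the script relies on when `--model_name_or_path` is left empty: a fresh config and a randomly initialized model (the hyper-parameters and tokenizer below are illustrative, not recommendations):
```python
from transformers import BertConfig, BertForMaskedLM, BertTokenizerFast

# the tokenizer can be an existing one or one you trained yourself on your corpus
tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")

config = BertConfig(
    vocab_size=tokenizer.vocab_size,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
)
model = BertForMaskedLM(config)  # random weights: no pre-trained checkpoint is loaded
print(model.num_parameters())
```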
transformers
4,067
closed
Why does GPT2LMHeadModel's loss not match its accuracy?
# ❓ Questions & Help

Hello, I'm pretraining GPT2LMHeadModel on my own data from scratch. However, the loss decreases sharply and the evaluation no longer matches the loss.

## Details

I used two different ways to evaluate the LM model:

1. Input `[1,2,3,...]`, label `[2,3,4,...]`: compare the output for the whole sentence with the label in a single pass.
2. Input `[1]`, label `[2]`; input `[1,2]`, label `[2,3]`, but compare only the output at position 2 with label 3; input `[1,2,3]`, label `[2,3,4]`, but compare only the output at position 3 with label 4; and so on (each step counts only the last position's accuracy), then sum up the results.

In my opinion, these two ways should give the same result; however, once the loss decreases sharply, the two results no longer match. This happens only when the learning rate is high, like 0.001 or even bigger. Could you help me explain why this happens?
![loss](https://user-images.githubusercontent.com/62126666/80560370-4089b600-8a13-11ea-9b7f-76ff158da59a.jpg)
04-29-2020 04:05:56
04-29-2020 04:05:56
Hi @lx-kika, Can you post a code snippet that calculates the different loss types you mention? Also, I think it should be fine to just use the first method to calculate the loss - I don't really see a reason why you would use the second method, no?
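For reference, a minimal sketch of the first evaluation method (a single forward pass with teacher forcing); for a causal LM in eval mode it should give the same per-position predictions as the incremental second method:
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

input_ids = tokenizer.encode("hello world , how are you today", return_tensors="pt")
with torch.no_grad():
    logits = model(input_ids)[0]            # (batch, seq_len, vocab_size)

# the logit at position i predicts the token at position i + 1
preds = logits[:, :-1, :].argmax(dim=-1)
targets = input_ids[:, 1:]
accuracy = (preds == targets).float().mean().item()
print(accuracy)
```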
transformers
4,066
closed
create model_card camembert-base-wikipedia-4gb
04-29-2020 03:45:52
04-29-2020 03:45:52
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4066?src=pr&el=h1) Report > Merging [#4066](https://codecov.io/gh/huggingface/transformers/pull/4066?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6faca88ee0c472de8207e648b0999a1ee01ff127&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4066/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4066?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4066 +/- ## ======================================= Coverage 78.91% 78.92% ======================================= Files 114 114 Lines 18670 18670 ======================================= + Hits 14734 14735 +1 + Misses 3936 3935 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4066?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4066/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (+0.16%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4066?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4066?src=pr&el=footer). Last update [6faca88...2af7f81](https://codecov.io/gh/huggingface/transformers/pull/4066?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,065
closed
Create model_card camembert-base-ccnet
04-29-2020 03:44:43
04-29-2020 03:44:43
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4065?src=pr&el=h1) Report > Merging [#4065](https://codecov.io/gh/huggingface/transformers/pull/4065?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6faca88ee0c472de8207e648b0999a1ee01ff127&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4065/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4065?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4065 +/- ## ======================================= Coverage 78.91% 78.91% ======================================= Files 114 114 Lines 18670 18670 ======================================= Hits 14734 14734 Misses 3936 3936 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4065?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4065?src=pr&el=footer). Last update [6faca88...5f06af6](https://codecov.io/gh/huggingface/transformers/pull/4065?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,064
closed
Create model_card camembert-base-ccnet-4gb
04-29-2020 03:43:07
04-29-2020 03:43:07
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4064?src=pr&el=h1) Report > Merging [#4064](https://codecov.io/gh/huggingface/transformers/pull/4064?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6faca88ee0c472de8207e648b0999a1ee01ff127&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4064/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4064?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4064 +/- ## ======================================= Coverage 78.91% 78.92% ======================================= Files 114 114 Lines 18670 18670 ======================================= + Hits 14734 14735 +1 + Misses 3936 3935 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4064?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.55% <0.00%> (+0.41%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4064?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4064?src=pr&el=footer). Last update [6faca88...0c16603](https://codecov.io/gh/huggingface/transformers/pull/4064?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,063
closed
Create README.md model_card camembert/camembert-base-oscar-4gb
04-29-2020 03:40:49
04-29-2020 03:40:49
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4063?src=pr&el=h1) Report > Merging [#4063](https://codecov.io/gh/huggingface/transformers/pull/4063?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6faca88ee0c472de8207e648b0999a1ee01ff127&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4063/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4063?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4063 +/- ## ======================================= Coverage 78.91% 78.91% ======================================= Files 114 114 Lines 18670 18670 ======================================= Hits 14734 14734 Misses 3936 3936 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4063?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4063?src=pr&el=footer). Last update [6faca88...4030892](https://codecov.io/gh/huggingface/transformers/pull/4063?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,062
closed
Create README.md
04-29-2020 03:31:56
04-29-2020 03:31:56
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4062?src=pr&el=h1) Report > Merging [#4062](https://codecov.io/gh/huggingface/transformers/pull/4062?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6faca88ee0c472de8207e648b0999a1ee01ff127&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4062/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4062?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4062 +/- ## ======================================= Coverage 78.91% 78.91% ======================================= Files 114 114 Lines 18670 18670 ======================================= Hits 14734 14734 Misses 3936 3936 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4062?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4062/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.61% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4062/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.04% <0.00%> (+0.12%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4062?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4062?src=pr&el=footer). Last update [6faca88...6f20b16](https://codecov.io/gh/huggingface/transformers/pull/4062?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,061
closed
KeyError ' '- run_ner.py - Transformers 2.8.0
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): BERT Language I am using the model on (English, Chinese ...): PT The problem arises when using: * [ X ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ X ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: I'm having trouble running a dataset in Portuguese. It is in the conll pattern Traceback (most recent call last): File "/home/lucasrodrigues/code/transformers-2.8.0/examples/ner/run_ner.py", line 292, in <module> main() File "/home/lucasrodrigues/code/transformers-2.8.0/examples/ner/run_ner.py", line 170, in main if training_args.do_train File "/home/lucasrodrigues/code/transformers-2.8.0/examples/ner/utils_ner.py", line 124, in __init__ pad_token_label_id=self.pad_token_label_id, File "/home/lucasrodrigues/code/transformers-2.8.0/examples/ner/utils_ner.py", line 207, in convert_examples_to_features print([label_map[label]] + [pad_token_label_id] * (len(word_tokens) - 1)) KeyError: '' ## Expected behavior Execution of the NER. ## Environment info - `transformers` version: 2.8.0 - Platform: Linux-4.15.0-76-generic-x86_64-with-debian-buster-sid - Python version: 3.6.10 - PyTorch version (GPU?): 1.3.1 (True) - Tensorflow version (GPU?): 2.0.0 (True) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in>
04-29-2020 03:10:37
04-29-2020 03:10:37
Have you got two blank lines consecutive? Look at the end of the file, maybe you got an extra blank line or an empty line at the end of the file.<|||||>Additionally, the dataset format must be in a `Token Label` (delimiter is a space) format. The error message shows, that an unexpected label ( `"`) was found, so I could imagine, that the dataset format is not consistent. To check this, just do the following on your training, development and test set: ```bash cat train.txt dev.txt test.txt | grep -v "^$" | cut -d " " -f 2 | sort | uniq ``` This should give you all labels. If you see other tokens, then there's something wrong in the dataset. <|||||>This problem also happened to me when I trained with my dataset. Possible causes: 1. There is an extra character in labels 1. The word and label is not separated by space but tab space `\t` <- my main culprit<|||||>Hi can I look into this issue. I am very interested to start contributing in this project<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
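A small validation sketch along the same lines as the checks above (the file name and the single-space delimiter are assumptions about your data layout):
```python
# rough validation sketch for a CoNLL-style file ("token label" per line, blank line between sentences)
labels = set()
with open("train.txt", encoding="utf-8") as f:
    prev_blank = False
    for i, line in enumerate(f, start=1):
        line = line.rstrip("\n")
        if not line.strip():
            if prev_blank:
                print(f"consecutive blank lines around line {i}")
            prev_blank = True
            continue
        prev_blank = False
        if "\t" in line:
            print(f"tab separator at line {i}; a single space is expected here (assumption)")
        parts = line.split(" ")
        if len(parts) < 2 or not parts[-1]:
            print(f"missing label at line {i}: {line!r}")
        else:
            labels.add(parts[-1])
print(sorted(labels))
```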
transformers
4,060
closed
Cannot Load bert-base-japanese tokenizer
# 🐛 Bug ## Information Model I am using BertJapaneseTokenizer: Language I am using the model on Japanese: The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: Just to load ## To reproduce ``` >>> from transformers import BertJapaneseTokenizer >>> tokenizer = BertJapaneseTokenizer.from_pretrained('bert-base-japanese') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/bayartsogtyadamsuren/DDAM-Projects/isid/myenv/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 393, in from_pretrained return cls._from_pretrained(*inputs, **kwargs) File "/Users/bayartsogtyadamsuren/DDAM-Projects/isid/myenv/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 496, in _from_pretrained list(cls.vocab_files_names.values()), OSError: Model name 'bert-base-japanese' was not found in tokenizers model name list (bert-base-japanese, bert-base-japanese-whole-word-masking, bert-base-japanese-char, bert-base-japanese-char-whole-word-masking). We assumed 'bert-base-japanese' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url. ``` ## Expected behavior To load ## Environment info - `transformers` version: 2.7.0 - Platform: Darwin-18.7.0-x86_64-i386-64bit - Python version: 3.7.4 - PyTorch version (GPU?): 1.3.1 - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no
04-29-2020 03:00:13
04-29-2020 03:00:13
Hi, I had the same issue and I solved it by downloading the required files locally with the steps below. 1. Download [vocab.txt](https://github.com/huggingface/transformers/blob/455c6390938a5c737fa63e78396cedae41e4e87e/src/transformers/tokenization_bert_japanese.py#L33), [config.json](https://github.com/huggingface/transformers/blob/455c6390938a5c737fa63e78396cedae41e4e87e/src/transformers/configuration_bert.py#L42), [pytorch_model.bin](https://github.com/huggingface/transformers/blob/455c6390938a5c737fa63e78396cedae41e4e87e/src/transformers/modeling_bert.py#L51) from the source URL 2. Enter the folder containing the three files in the from_pretrained method e.g. ``` model = BertModel.from_pretrained ('./models/bert-base-japanese/') config = BertConfig('./models/bert-base-japanese/') tokenizer = BertJapaneseTokenizer.from_pretrained('./models/bert-base-japanese/') ``` where ``` ─ models └- bert-base-japanese ├- vocab.txt       ├- config.json       └- pytorch_model.bin ``` I think this is probably an obstacle caused by a change in the path on S3 due to this commit. The version of transformers installed by pip is old and you may be pointing to the wrong path. https://github.com/huggingface/transformers/commit/455c6390938a5c737fa63e78396cedae41e4e87e Reinstall with the latest version of transformers and it should work. ``` git clone [email protected]: huggingface/transformers.git pip install ./transformers ```<|||||>I apologize, it's my fault. I `mv`ed files around instead of `copy`ing them as we do usually, so I broke backward compatibility for the `bert-base-japanese` models. As @reo11 said, you'll need to install from source for now. You can also do: `pip install git+git://github.com/huggingface/transformers.git` Sorry about that.<|||||>@reo11 Thank you so much! @julien-c Thank you for your response. Since a lot of us trying to use transformers in production too, please consider having stable workflow. (Anyways you guys doing great!)
transformers
4,059
closed
Dropout training
# ❓ Questions & Help

```python
outputs = self.bert(inputs, **kwargs)
pooled_output = outputs[1]
pooled_output = self.dropout(pooled_output, training=kwargs.get("training", False))
logits = self.classifier(pooled_output)
outputs = (logits,) + outputs[2:]  # add hidden states and attention if they are here
```

I don't see any place in the examples where the `training` argument of dropout is actually set, which puzzled me. Can someone explain this?
04-29-2020 02:34:17
04-29-2020 02:34:17
Hi @weiarqq, where does this code come from, and what is the application? What do you mean by "training of dropout"? The `training=kwargs.get("training", False)` argument is a flag for the Keras dropout layer that switches it on and off depending on whether you are in training mode (`training=True`) or not.
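To make the flag concrete, a minimal sketch of how the Keras-style `training` argument is typically passed when calling a TF model (the model name is just an example):
```python
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased")

inputs = tokenizer.encode("hello world", return_tensors="tf")
logits_train = model(inputs, training=True)[0]   # dropout layers active
logits_eval = model(inputs, training=False)[0]   # dropout layers disabled (inference default)
```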
transformers
4,058
closed
How to run SQuAD on a Chinese dataset?
I have already changed the original token processing in `SquadExample`, but it still does not work for Chinese; I get really low results.
04-29-2020 02:33:16
04-29-2020 02:33:16
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,057
closed
Add AlbertForPreTraining and TFAlbertForPreTraining models.
I mirrored the structure of `BertForPreTraining` and `TFBertForPreTraining`. A few differences between the PyTorch and TensorFlow implementations of Bert I noticed along the way: - TFBertForSequenceClassification has dropout in the classifier head, but TFBertForPreTraining does not. I included dropout in the TFAlbertForPreTraining classifier head. - TFBertForPreTraining does not include an option to calculate the loss, but BertForPreTraining does. Fixing this will require more wrangling of the dataset and preprocessor, so I'll save that for a later PR. Mirrored the structure here. - BertForPreTraining has attributes: `self.bert`, `self.cls.predictions`, `self.cls.seq_relationship`. TFBertForPreTraining has attributes: `self.bert`, `self.dropout`, `self.classifier`. I chose to unify these attributes in AlbertForPreTraining and TFAlbertForPreTraining under `self.albert`, `self.cls.predictions`, and `self.cls.sop_classifier`. - TFAlbertMLMHead adds two biases to the final output: `self.decoder_bias` and `self.bias`. TFBertMLMHead only adds one bias: `self.bias`.
04-28-2020 20:45:45
04-28-2020 20:45:45
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4057?src=pr&el=h1) Report > Merging [#4057](https://codecov.io/gh/huggingface/transformers/pull/4057?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c99fe0386be118bceaab1c85cdb8309eb8cb8208&el=desc) will **increase** coverage by `0.06%`. > The diff coverage is `96.87%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4057/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4057?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4057 +/- ## ========================================== + Coverage 78.31% 78.37% +0.06% ========================================== Files 120 120 Lines 19854 19916 +62 ========================================== + Hits 15549 15610 +61 - Misses 4305 4306 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4057?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4057/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.10% <ø> (ø)` | | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4057/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `77.23% <ø> (ø)` | | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4057/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `68.62% <ø> (ø)` | | | [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4057/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `77.20% <94.73%> (+1.89%)` | :arrow_up: | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4057/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `87.19% <100.00%> (+0.93%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4057?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4057?src=pr&el=footer). Last update [c99fe03...0171b91](https://codecov.io/gh/huggingface/transformers/pull/4057?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hi! This is really cool, thanks for taking the time to do that! A few notes on your remarks: > TFBertForPreTraining does not include an option to calculate the loss, but BertForPreTraining does. This is not a bug, it's done on purpose. We try to respect as much as possible each library's paradigm. With PyTorch you can compute the loss both inside and outside the model, so we make it easy for the user to supply the labels and compute a standard loss. For TensorFlow however, the loss is usually computed outside the model, especially with Keras. None of our TensorFlow models computes the loss for that very reason. > TFAlbertMLMHead adds two biases to the final output: self.decoder_bias and self.bias. TFBertMLMHead only adds one bias: self.bias. That's, unfortunately, a mistake on my side. 
https://github.com/huggingface/transformers/pull/4076 I'm taking a look at your PR, and will see if we can upload weights on S3 containing the SOP weights as well.<|||||>Hi @LysandreJik, any updates on your side? I know last week was ICLR. We're prepping a code release and blog post, so it would be great to have this merged in soon :)<|||||>Hi @jarednielsen, I took a look today, and it's in good shape! There's a few things to update, however, as this is an opportunity to update our checkpoints to include the SOP. In order to do this, I had to do a few changes, especially removing the `AlbertPreTrainingHeads` class so that the masked lm layer (previously `AlbertForPreTraining.cls.predictions`) could be at the same level than the `AlbertForMaskedLM` class: `AlbertForMaskedLM.predictions`. There's a few other changes that need to be done in the conversion utility. I'll look into doing TensorFlow tomorrow and will try merge tomorrow evening + upload the updated checkpoints on the S3; is this timeline okay with you? Do you mind if I push directly on your fork, so that all the changes are visible in this PR?<|||||>Thanks for the prompt response! Yes, that timeline sounds great; feel free to push directly onto my fork.<|||||>Cool, will do that. We're releasing transformers `v2.9` today so it won't be in it, but it will be in the next version which should be released sometimes next week.<|||||>Thanks a lot for your contribution! Will upload the weights on S3 in the coming days.
transformers
4,056
closed
Minor Readme Fixes
Added contact info and fixed typos.
04-28-2020 19:58:26
04-28-2020 19:58:26
transformers
4,055
closed
[Naming] lm_labels -> labels ; masked_lm_labels -> masked_labels
This PR is a simple find and replace of the regex "lm_labels" into "labels" over all files. This way we can use the Trainer class for all models. It does break backward compatibility a bit though, since some people will have to rename the `lm_labels` argument in their code.
04-28-2020 19:19:42
04-28-2020 19:19:42
Yeah that's quite a big breaking change 🤣 we may want to be a little bit careful on backward compatibility<|||||>I think this would be a very welcome update. Maybe adding an alias to the method signature would be sufficient to maintain compatibility? We could deprecate the old ones as well, with a warning and remove them in a future version.<|||||>Yes adding an alias would be good for me. One possible option to deprecate progressively (that Keras use for instance) is to move the old parameters in `**kwargs` so they are not advocated anymore and add a deprecation warning later on.
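For illustration, one way to keep backward compatibility as discussed above is a deprecation alias that routes the old keyword through `**kwargs`; a rough sketch (not the actual implementation, names are hypothetical):
```python
import warnings

def resolve_labels(labels=None, **kwargs):
    """Hypothetical shim: accept the old `lm_labels` keyword for a few releases, then drop it."""
    if "lm_labels" in kwargs:
        warnings.warn("`lm_labels` is deprecated, please use `labels` instead.", DeprecationWarning)
        labels = kwargs.pop("lm_labels")
    return labels

print(resolve_labels(lm_labels=[1, 2, 3]))  # [1, 2, 3], with a deprecation warning
```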
transformers
4,054
closed
Changes to fix working for transformer-xl
Update arguments (`lm_labels` to `labels`) Use labels as data; labels are shifted inside the model (https://github.com/bajajahsaas/transformers/blob/master/src/transformers/modeling_transfo_xl.py#L855)
04-28-2020 18:43:50
04-28-2020 18:43:50
Sorry, it's been a while, but this has since been fixed in #3716.
transformers
4,053
closed
[isort] add known 3rd party to setup.cfg
Resolve local/circleci isort discrepancy by adding sacrebleu, rouge_score to `known_third_party`. Previously, if you had either package installed on your system, they would be `known_third_party`. Now they are `known_third_party` on everybody's system.
04-28-2020 17:40:03
04-28-2020 17:40:03
transformers
4,052
closed
Checking new isort
04-28-2020 17:12:15
04-28-2020 17:12:15
transformers
4,051
closed
Fix TF input docstrings to refer to tf.Tensor rather than torch.Float…
…Tensor.
04-28-2020 16:52:11
04-28-2020 16:52:11
Hi @jarednielsen - thanks a lot for the clean-up :-)
transformers
4,050
closed
Remove jitted method so that our models are pickable.
closes https://github.com/huggingface/transformers/issues/4038
04-28-2020 16:36:52
04-28-2020 16:36:52
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4050?src=pr&el=h1) Report > Merging [#4050](https://codecov.io/gh/huggingface/transformers/pull/4050?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9a0a8c1c6f4f2f0c80ff07d36713a3ada785eec5&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4050/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4050?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4050 +/- ## ======================================= Coverage 78.87% 78.88% ======================================= Files 111 111 Lines 18536 18533 -3 ======================================= - Hits 14621 14620 -1 + Misses 3915 3913 -2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4050?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/4050/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `88.88% <ø> (+7.93%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4050/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.02% <0.00%> (ø)` | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4050?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4050?src=pr&el=footer). Last update [9a0a8c1...16e9e66](https://codecov.io/gh/huggingface/transformers/pull/4050?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,049
closed
p_mask in SQuAD pre-processing
In question answering, the `p_mask` is a mask with 1 for tokens than cannot be in the answer (0 for token which can be in an answer). Currently the p_mask construction fails for RoBERTa, and is overall not very robust as it depends directly on the `token_type_ids`, while some models do not make use of them and therefore do not generate them (best example being RoBERTa). This PR changes the way the p_mask is built, and slightly changes it as well. Up to now, here's what was done with the `p_mask`, I believe with the original implementation being google-research BERT's SQuAD script: - Consider every token that is not the question to be a possible answer. This includes special tokens, and padding tokens. This PR changes this by removing those special tokens and padding tokens from the possible answer pool. It does still keep the classification token as a possible answer, given that it's sometimes used as the `impossible` token. closes https://github.com/huggingface/transformers/issues/2788
04-28-2020 16:16:35
04-28-2020 16:16:35
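Purely as an illustration of the p_mask semantics described above (all index variables below are hypothetical, not the actual preprocessing code), the resulting mask looks roughly like this:
```python
# illustrative only: 1 = cannot be part of the answer, 0 = can be part of the answer
def build_p_mask(num_tokens, context_span, cls_index, special_or_pad_indices):
    p_mask = [1] * num_tokens                  # question tokens excluded by default
    for i in range(*context_span):
        p_mask[i] = 0                          # context tokens stay eligible
    p_mask[cls_index] = 0                      # CLS kept for "no answer" predictions
    for i in special_or_pad_indices:
        if i != cls_index:
            p_mask[i] = 1                      # special and padding tokens excluded
    return p_mask

print(build_p_mask(10, (5, 9), 0, [4, 9]))
```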
transformers
4,048
closed
Why is the pooler output used for sequence classification (if it does not represent the input semantic well)?
# ❓ Questions & Help ## Details In the [documentation](https://huggingface.co/transformers/model_doc/bert.html#tfbertmodel) of `TFBertModel`, it is stated that the `pooler_output` is not a good semantic representation of input (emphasis mine): > `pooler_output (tf.Tensor of shape (batch_size, hidden_size))`: > Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during Bert pretraining. **This output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence.** However, if we look at the [source code][1] of `TFBertForSequenceClassification`, we see that it's the pooler output which is used for classification of input sequence: ```python outputs = self.bert(inputs, **kwargs) pooled_output = outputs[1] pooled_output = self.dropout(pooled_output, training=kwargs.get("training", False)) logits = self.classifier(pooled_output) ``` **I was wondering why this is the case, and doesn't this contradict what is stated in the documentation?** FYI, to try the approach suggested in the documentation (i.e. averaging or pooling hidden state), I also created two custom classifiers where one of them is using only the `CLS` token representation, i.e. `outputs[0][:,0]`, and the other is using the concatenation of `CLS` token representation and the average of last hidden state, i.e. `mean(outputs[0], axis=1)`. However, in the experiments on my dataset, both performed poorly (around 73% validation accuracy, with 99% train accuracy) compared to using the built-in `TFBertForSequenceClassification` (which achieved around 79% validation accuracy, with 99% train accuracy). Even I tried the concatenation of `CLS` token representation for all the last four layers (as suggested in BERT paper); however, it also performed poorly (around 70% validation accuracy, with 99% train accuracy). 
I am (almost) certain that my implementation is correct; however, I include it here for reference: ```python config = BertConfig.from_pretrained("dbmdz/bert-base-german-cased", num_labels=len(le.classes_), output_hidden_states=True) tokenizer = BertTokenizer.from_pretrained("dbmdz/bert-base-german-cased") bert_model = TFBertModel.from_pretrained("dbmdz/bert-base-german-cased", config=config, trainable=True) seq_max_len = 128 input_ids_layer = tf.keras.layers.Input(shape=(seq_max_len,), dtype='int32') attention_mask_layer = tf.keras.layers.Input(shape=(seq_max_len,), dtype='int32') inputs = [input_ids_layer, attention_mask_layer] outputs = bert_model(inputs) hidden_states = outputs[2] last_four_hidden_states = tf.keras.layers.concatenate(list(hidden_states[-4:])) cls_vector = tf.keras.layers.Lambda(lambda x: x[:,0])(last_four_hidden_states) sentence_vector = tf.keras.layers.Dropout(0.1)(cls_vector) out = tf.keras.layers.Dense(len(le.classes_), name="classifier")(sentence_vector) model = tf.keras.models.Model(inputs, out) opt = tf.keras.optimizers.Adam(learning_rate=1e-5) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) metric = tf.keras.metrics.SparseCategoricalAccuracy("accuracy") model.compile(loss=loss, optimizer=opt, metrics=[metric]) history = model.fit([train_input_ids, train_attention_mask], train_labels, epochs=30, batch_size=16, validation_data=([test_input_ids, test_attention_maks], test_labels)) ``` **A link to original question on Stack Overflow**: This question is more related to theoretical aspects of machine learning and therefore is considered as off-topic on SO. [1]: https://github.com/huggingface/transformers/blob/fa49b9afeab5545f14b3661b35195b829fcf8ef5/src/transformers/modeling_tf_bert.py#L923-L926
04-28-2020 15:51:16
04-28-2020 15:51:16
@mkaze hello, what's your tensorflow version and transformers version?<|||||>@etveritas When I was working on this, I was using the latest versions at the time, i.e. TF 2.1 and Transformers 2.8. <|||||>@mkaze oh, thank you~<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> the concatenation of CLS token representation for all the last four layers (as suggested in BERT paper); Why are you using this 4-layer implementation suggestion from the paper? The paper clearly recommends this logic only for feature-based version of the classifier (which is mentioned only for comparison purposes), and not for the primary fine-tuning version of the model. I don't really get it.
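For completeness, a small sketch of masked mean pooling over the last hidden state (as opposed to a plain `mean(outputs[0], axis=1)`, which also averages padding positions); tensor shapes follow the TF 2.x models discussed above:
```python
import tensorflow as tf

def masked_mean_pool(sequence_output, attention_mask):
    # sequence_output: (batch, seq_len, hidden), attention_mask: (batch, seq_len)
    mask = tf.cast(attention_mask[:, :, tf.newaxis], sequence_output.dtype)
    summed = tf.reduce_sum(sequence_output * mask, axis=1)
    counts = tf.maximum(tf.reduce_sum(mask, axis=1), 1.0)
    return summed / counts

# usage (names from the snippet above): sentence_vector = masked_mean_pool(outputs[0], attention_mask_layer)
```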
transformers
4,047
closed
GPT2LMHeadModel Documentation Mismatch for labels
# 🐛 Bug ## Information This is a very small discrepancy between the docs and code for the `GPT2LMHeadModel`. Docs mention setting `lm_labels = input_ids` https://github.com/huggingface/transformers/blob/520e7f211926e07b2059bc8e21b668db4372e4db/src/transformers/modeling_gpt2.py#L547-L552 Should this read `labels = input_ids` instead ? Since this is the single LM Model head it only has one `labels` param unlike the double head model?
04-28-2020 14:32:08
04-28-2020 14:32:08
I guess you are right. The same is true for transformer-xl as well. Code: https://github.com/bajajahsaas/transformers/blob/master/src/transformers/modeling_transfo_xl.py#L855<|||||>Yes, I think you guys are right! Thanks for pointing this out. I think we should do a general name cleaning here. See linked PR.<|||||>This was fixed by #4711
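In other words, with the current signature the usage would be (a minimal sketch):
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("Hello, my dog is cute", return_tensors="pt")
outputs = model(input_ids, labels=input_ids)   # `labels`, not `lm_labels`
loss, logits = outputs[:2]
print(loss.item())
```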
transformers
4,046
closed
[EncoderDecoder Tests] Improve tests
Currently when executing: ``pytest tests/test_modeling_encoder_decoder.py`` all BERT tests are run as well due to problems with the import of BertModelTester (see PR #3383). This PR takes the PR: #4027 and does some minimal changes in the encoder decoder test file. I think @sshleifer found a great solution to circumvent that by moving the `BertModelTester` class out of the BertModelTest class. What do you think @julien-c , @LysandreJik ?
04-28-2020 14:25:34
04-28-2020 14:25:34
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4046?src=pr&el=h1) Report > Merging [#4046](https://codecov.io/gh/huggingface/transformers/pull/4046?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6af3306a1da0322f58861b1fbb62ce5223d97b8a&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4046/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4046?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4046 +/- ## ========================================== + Coverage 78.84% 78.85% +0.01% ========================================== Files 114 114 Lines 18689 18689 ========================================== + Hits 14735 14737 +2 + Misses 3954 3952 -2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4046?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4046/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (+0.16%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4046/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.55% <0.00%> (+0.41%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4046?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4046?src=pr&el=footer). Last update [6af3306...0ed64db](https://codecov.io/gh/huggingface/transformers/pull/4046?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>By the way, do you guys want to do this for all other ModelTester objects @patrickvonplaten and @sshleifer? And maybe (if relevant) define a common abstract class they inherit from?<|||||>> By the way, do you guys want to do this for all other ModelTester objects @patrickvonplaten and @sshleifer? And maybe (if relevant) define a common abstract class they inherit from? I think this would be a good idea. We can probably clean up the tests a lot this way! I will note it down on my To-Do-List<|||||>LMK if you want to split it up @patrickvonplaten <|||||>> LMK if you want to split it up @patrickvonplaten I tihnk I won't find time in the next 2 weeks to do this - feel free to start working on it if you want :-)
transformers
4,045
closed
[EncoderDecoder] Add working examples to docstring
04-28-2020 14:21:18
04-28-2020 14:21:18
transformers
4,044
closed
Email Alerts: Run failed: Torch hub integration
Possible to disable @julien-c?
04-28-2020 14:19:29
04-28-2020 14:19:29
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,043
closed
daigo/bert-base-japanese-sentiment
I created a Japanese binary (sentiment) classification model.
04-28-2020 12:59:36
04-28-2020 12:59:36
transformers
4,042
closed
info about loading file None is not informative
When `file_path` is None, do not log "loading file None".
04-28-2020 12:35:37
04-28-2020 12:35:37
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4042?src=pr&el=h1) Report > Merging [#4042](https://codecov.io/gh/huggingface/transformers/pull/4042?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/180585741cf3cdd6890cb99610923a8ae9691220&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4042/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4042?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4042 +/- ## ======================================= Coverage 78.50% 78.50% ======================================= Files 111 111 Lines 18492 18492 ======================================= + Hits 14517 14518 +1 + Misses 3975 3974 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4042?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4042/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.72% <100.00%> (ø)` | | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4042/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `72.61% <0.00%> (-0.42%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4042/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4042/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.90% <0.00%> (+0.34%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4042?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4042?src=pr&el=footer). Last update [1805857...789019b](https://codecov.io/gh/huggingface/transformers/pull/4042?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,041
closed
[wip] more fp16 test coverage
04-28-2020 12:20:52
04-28-2020 12:20:52
transformers
4,040
closed
Small cosmetic changes to CamemBERT model card
Very minor cosmetic changes to the CamemBERT model card. @benjamin-mlr will soon merge model cards for the other models :)
04-28-2020 12:12:14
04-28-2020 12:12:14
Thanks @louismartin :)
transformers
4,039
closed
Add BPE dropout to tokenizers
# 🚀 Feature request

Hi! Thank you for the great library! Several days ago I was trying to fine-tune the GPT-2 model on a small dataset. It was a bit hard because the model overfitted every time. I then found a great paper on a simple sub-word regularization method (BPE-dropout) that outperforms classic BPE on text generation tasks (like NMT): [original paper](https://arxiv.org/pdf/1910.13267.pdf). Unfortunately, I can't find any Python implementation, only C++ [here](https://github.com/VKCOM/YouTokenToMe/blob/master/youtokentome/cpp/bpe.cpp).

Metrics from the paper:
![pic](https://drive.google.com/uc?export=view&id=1DPbGxppTBj0GG7nfrxmCx2CZxtyHap0v)
04-28-2020 10:55:27
04-28-2020 10:55:27
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
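As a note, newer versions of the `tokenizers` library expose a `dropout` option on their BPE implementations; the exact signature below is an assumption and may differ between versions, so treat this only as a rough sketch:
```python
# sketch, assuming a tokenizers version whose ByteLevelBPETokenizer accepts a `dropout` argument
from tokenizers import ByteLevelBPETokenizer

# vocab.json / merges.txt from a previously trained byte-level BPE (placeholder paths)
tokenizer = ByteLevelBPETokenizer("vocab.json", "merges.txt", dropout=0.1)

# with dropout > 0, repeated encodings of the same text can produce different segmentations,
# which is the regularization effect described in the BPE-dropout paper
for _ in range(3):
    print(tokenizer.encode("unbelievable achievements").tokens)
```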
transformers
4,038
closed
GPT-2 models are unpicklable
# 🐛 Bug

## Information

Model I am using (Bert, XLNet ...): GPT-2

Language I am using the model on (English, Chinese ...): English

The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: Hi, I'm trying to train a GPT-2 Double Heads Model (based on your transfer-learning-conv-ai guide) using PyTorch Lightning. However, I have a problem when trying to train the model with the ddp distributed backend: the `GPT2DoubleHeadsModel` class seems to be unpicklable and my training script fails with the following error: `TypeError: can't pickle torch._C.ScriptFunction objects`

## To reproduce

Run:
```
from transformers import GPT2DoubleHeadsModel
import pickle

model = GPT2DoubleHeadsModel.from_pretrained("gpt2-medium")
pickle.dump(model, open("test.bin", "wb"))
```

The problem does not occur when using `bert-base-uncased`, for example. I tried to find which part of the GPT-2 class contains `torch._C.ScriptFunction` objects, without success. Do you have an idea how to avoid this error? Thanks in advance.

- `transformers` version: 2.8
- Platform: Ubuntu 18.04
- Python version: 3.7
- PyTorch version (GPU?): 1.5 Cuda 10.2
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes, ddp
04-28-2020 09:34:18
04-28-2020 09:34:18
Hi! I think this may come from the [activation functions](https://github.com/huggingface/transformers/blob/master/src/transformers/activations.py), some of which are jitted. This allows better performance. In what situation would you use `pickle` rather than our API `save_pretrained` ? ```py from transformers import GPT2DoubleHeadsModel import pickle model = GPT2DoubleHeadsModel.from_pretrained("gpt2-medium") model.save_pretrained("test") ```<|||||>Hi Lysandre, Thanks for your answer ! I use [Pytorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning) which requires the model to be picklable (https://pytorch-lightning.readthedocs.io/en/stable/multi_gpu.html#make-model-picklable). I see from your link that if the PyTorch version is <1.4, the activation function is not jitted : https://github.com/huggingface/transformers/blob/fa49b9afeab5545f14b3661b35195b829fcf8ef5/src/transformers/activations.py#L32-L44 I think i'll test again downgrading my PyTorch version to 1.3.1.<|||||>I understand, this is indeed an issue. We've had other several issues due to this, I think it would be better to revert to picklable option. Will open a PR with this objective in a bit.<|||||>Thank you for your responsiveness !<|||||>It should be fixed on master now. Could you let me know if running the code from the master branch solves your issue?<|||||>Yes, it works, thank you 👌<|||||>I am getting the same error trying to save AlbertForTokenClassification() model using `torch.save(model, 'temp.pt')` But `torch.save(model.state_dict(), 'temp.pt')` works.<|||||>@paramansh Do you mind opening an issue with your specific problem, your software versions and code sample so that we may debug on our side? Thanks.
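For anyone hitting the same wall before the fix lands, saving the `state_dict` (as noted above) is a workaround that avoids pickling the jitted activation functions. A minimal sketch with placeholder file names:

```python
import torch
from transformers import GPT2DoubleHeadsModel

model = GPT2DoubleHeadsModel.from_pretrained("gpt2-medium")

# torch.save(model, ...) pickles the whole module and can fail on jitted parts;
# saving only the weights sidesteps that.
torch.save(model.state_dict(), "gpt2_double_heads.pt")  # placeholder filename

# Reload: rebuild the architecture, then load the weights back in.
model = GPT2DoubleHeadsModel.from_pretrained("gpt2-medium")
model.load_state_dict(torch.load("gpt2_double_heads.pt"))
```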
transformers
4,037
closed
torch num_samples=0 error on XLNet @ run_language_modeling.py
# 🐛 Bug While finetuning a language model on an XLNET: ``` python transformers/examples/run_language_modeling.py \ --output_dir=lms/xlnet_custom \ --model_type=xlnet \ --model_name_or_path=xlnet-base-cased \ --do_train \ --train_data_file=$TRAIN_FILE \ --per_gpu_train_batch_size 1 \ ``` at the step after the features are produced, I get the error: ``` Traceback (most recent call last): File "transformers/examples/run_language_modeling.py", line 284, in <module> main() File "transformers/examples/run_language_modeling.py", line 254, in main trainer.train(model_path=model_path) File "/home/ysig/miniconda3/envs/hug/lib/python3.7/site-packages/transformers/trainer.py", line 218, in train train_dataloader = self.get_train_dataloader() File "/home/ysig/miniconda3/envs/hug/lib/python3.7/site-packages/transformers/trainer.py", line 160, in get_train_dataloader RandomSampler(self.train_dataset) if self.args.local_rank == -1 else DistributedSampler(self.train_dataset) File "/home/ysig/miniconda3/envs/hug/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 94, in __init__ "value, but got num_samples={}".format(self.num_samples)) ValueError: num_samples should be a positive integer value, but got num_samples=0 ``` My text is a raw text of the size of MBs. I am running this on a CUDA 10.2 in a python 3.7.6 conda environment with a torch version of 1.5.0. Is there something I am possibly doing wrong?
04-28-2020 08:23:41
04-28-2020 08:23:41
Hi! Are you using a multi-GPU setup ?<|||||>No I am running it on a single GPU.<|||||>Several other threads discuss this issue: - Either your datasets are too small: https://github.com/kaushaltrivedi/fast-bert/issues/181#issuecomment-596462172 - Or you haven't set `--block_size`: https://github.com/huggingface/transformers/issues/2380#issuecomment-572762207 <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
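A quick way to check which of the two causes applies: a minimal sketch, assuming a source install where `TextDataset` is importable from `transformers`; the file name and block size are placeholders.

```python
from transformers import AutoTokenizer, TextDataset

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")

# TextDataset cuts the tokenized file into chunks of block_size tokens.
# If the file is shorter than block_size, it yields zero examples, and
# RandomSampler then raises "num_samples should be a positive integer ... got num_samples=0".
dataset = TextDataset(tokenizer=tokenizer, file_path="train.txt", block_size=128)
print(len(dataset))  # should be > 0 before handing the dataset to the Trainer
```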
transformers
4,036
closed
BertForSequenceClassification producing same output during evaluation
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details I'm doing mutli-class classification over 50 classes on the Reuters 5050 dataset. The task is to identify the author that wrote the text. The texts can get up to 1700 tokens long but I'm truncating them down to 512. I've noticed that during evaluation, the model 99% of the time outputs the same label regardless of the input. I say 99% because once in a while it outputs something different. This **does not** happen during training. Things I've tried: - Removed `~/.cache/torch/transformers/` directory - Tested learning rates of `[2e-5, 1e-5, 5e-6, 1e-6, 1e-7, 1e-8]` - Reducing batch size to 1 - Setting: ``` hidden_dropout_prob=0.5 'use_cached_eval_features': False, 'no_cache': True, 'overwrite_output_dir': True, 'reprocess_input_data': True, ``` **Tokenising and encoding:** ``` MAX_LEN = 512 def get_encodings(texts): token_ids = [] attention_masks = [] for text in texts: token_id = tokenizer.encode(text, add_special_tokens=True, max_length=MAX_LEN, pad_to_max_length=True) token_ids.append(token_id) return token_ids def get_attention_masks(padded_encodings): attention_masks = [] for encoding in padded_encodings: attention_mask = [int(token_id > 0) for token_id in encoding] attention_masks.append(attention_mask) return attention_masks train_encodings = get_encodings(train_df.text.values) train_attention_masks = get_attention_masks(train_encodings) test_encodings = get_encodings(test_df.text.values) test_attention_masks = get_attention_masks(test_encodings) ``` **Packing into datasets and dataloaders:** ``` batch_size = 1 train_data = TensorDataset(X_train, train_masks, y_train) train_sampler = RandomSampler(train_data) train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size) validation_data = TensorDataset(X_test, test_masks, y_test) validation_sampler = SequentialSampler(validation_data) validation_dataloader = DataLoader(validation_data, sampler=validation_sampler, batch_size=batch_size) ``` **Model setup:** ``` config = BertConfig.from_pretrained( 'bert-base-uncased', num_labels = 50, output_attentions = False, output_hidden_states = False, max_position_embeddings=MAX_LEN, hidden_dropout_prob=0.5, args={ 'use_cached_eval_features': False, 'no_cache': True, 'overwrite_output_dir': True, 'reprocess_input_data': True, } ) model = BertForSequenceClassification(config) model.to(device) optimizer = AdamW(model.parameters(), lr = 1e-5 eps = 1e-8 ) epochs = 4 total_steps = len(train_dataloader) * epochs scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps = 0, num_training_steps = total_steps) ``` **Training:** ``` loss_values = [] print("Training...") for epoch_i in range(0, epochs): print("") print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs)) print('Training...') t0 = time.time() total_loss = 0 model.train() for step, batch in enumerate(train_dataloader): if step % 40 == 0 
and not step == 0: elapsed = format_time(time.time() - t0) print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step, len(train_dataloader), elapsed)) b_texts = batch[0].to(device) b_attention_masks = batch[1].to(device) b_authors = batch[2].to(device) model.zero_grad() outputs = model(b_texts, token_type_ids=None, attention_mask=b_attention_masks, labels=b_authors) loss = outputs[0] total_loss += loss.item() loss.backward() # Clip the norm of the gradients to 1.0. # This is to help prevent the "exploding gradients" problem. torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) optimizer.step() scheduler.step() avg_train_loss = total_loss / len(train_dataloader) # Store the loss value for plotting the learning curve. loss_values.append(avg_train_loss) print("") print(" Average training loss: {0:.2f}".format(avg_train_loss)) print(" Training epcoh took: {:}".format(format_time(time.time() - t0))) ``` **Evaluation:** ``` print("") print("Running Validation...") t0 = time.time() # Put the model in evaluation mode--the dropout layers behave differently # during evaluation. model.eval() # Tracking variables eval_loss, eval_accuracy = 0, 0 nb_eval_steps, nb_eval_examples = 0, 0 for batch in validation_dataloader: b_texts = batch[0].to(device) b_attention_masks = batch[1].to(device) b_authors = batch[2].to(device) # Telling the model not to compute or store gradients, saving memory and # speeding up validation with torch.no_grad(): # Forward pass, calculate logit predictions. # This will return the logits rather than the loss because we have # not provided labels. # token_type_ids is the same as the "segment ids", which # differentiates sentence 1 and 2 in 2-sentence tasks. # The documentation for this `model` function is here: # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification outputs = model(b_texts, token_type_ids=None, attention_mask=b_attention_masks) # Get the "logits" output by the model. The "logits" are the output # values prior to applying an activation function like the softmax. logits = outputs[0] # Move logits and labels to CPU logits = logits.detach().cpu().numpy() author_ids = b_authors.to('cpu').numpy() print("Pred: {}, label: {}".format(np.argmax(logits).flatten(), author_ids.flatten())) # Calculate the accuracy for this batch of test sentences. tmp_eval_accuracy = flat_accuracy(logits, author_ids) # Accumulate the total accuracy. eval_accuracy += tmp_eval_accuracy # Track the number of batches nb_eval_steps += 1 # Report the final accuracy for this validation run. print(" Accuracy: {0:.2f}".format(eval_accuracy/nb_eval_steps)) print(" Validation took: {:}".format(format_time(time.time() - t0))) ``` <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
04-28-2020 07:53:26
04-28-2020 07:53:26
Hi, have you solved this problem? I have a similar problem. I'm doing a multi-class classification task over 8 classes. My task is to classify the entity type of short texts (usually < 32, much shorter than yours). These classes are something like school name, company name, job title, etc. Besides, I'm using the TF2 model `TFBertForSequenceClassification` for my task. However, most of the time (99%, just as you describe), my fine-tuned model gives the same output distribution over my 8 classes whatever text I feed it.<|||||>Yes, it turns out I was not constructing the model properly. All I had to do was call `model = BertForSequenceClassification.from_pretrained(config)` instead.
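For readers landing here, a minimal sketch of the difference that fixed it (the checkpoint name and the 50-label setup mirror the original question; treat the snippet as illustrative rather than the exact code used in the thread):

```python
from transformers import BertConfig, BertForSequenceClassification

config = BertConfig.from_pretrained("bert-base-uncased", num_labels=50)

# BertForSequenceClassification(config) builds a randomly initialised network,
# which is why evaluation kept predicting (almost) the same label.
# from_pretrained loads the pretrained encoder weights; only the new
# classification head starts from random initialisation.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", config=config)
```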
transformers
4,035
closed
Model card for roberta-base-squad2-covid
04-28-2020 07:51:33
04-28-2020 07:51:33
Great [dataset](https://github.com/deepset-ai/COVID-QA), thanks for sharing a [model card](https://huggingface.co/deepset/roberta-base-squad2-covid)
transformers
4,034
closed
🐛 Saving TF model : Expected Operation, Variable, or Tensor, got None
# 🐛 Bug

I'm training TFElectraModel. Training goes well, but when I try to save the model, I get the following error: `TypeError: Expected Operation, Variable, or Tensor, got None`

## Minimal reproducible example

```
!pip install transformers
from transformers import TFElectraModel

model = TFElectraModel.from_pretrained("google/electra-base-discriminator")
!mkdir swag
model.save("swag")
```

## Related

#2336
https://github.com/tensorflow/tensorflow/issues/35432
04-28-2020 07:26:33
04-28-2020 07:26:33
Hi! We prefer using our own API for this, using `from_pretrained` which also takes care of the configuration: ```py from transformers import TFElectraModel model = TFElectraModel.from_pretrained("google/electra-base-discriminator") !mkdir swag model.save_pretrained("swag") ``` Is there a reason why you would rather use `.save` rather than `.save_pretrained`?<|||||>I was using `.save()` because I made a custom model, with additional layers on top of Electra, and I want to save these layers as well. Finally, I simply sub-classed `TFElectraPreTrainedModel` and add my custom layers there, so I can use `.save_pretrained`. Thanks for the help !<|||||>I am also facing similar issue. The reason for using save method is i want to host the saved model using tensorflow model server<|||||>@nirajkale did you manage to solve this? I'm facing the same issue with other pre-trained.
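Since a couple of people asked about keeping custom layers while still using `save_pretrained`, here is a minimal sketch of the sub-classing approach described above. The class name, the head layer, and the module-path import are assumptions for illustration, not code from the thread, and the exact layer APIs may differ between versions:

```python
import tensorflow as tf
from transformers import TFElectraPreTrainedModel
from transformers.modeling_tf_electra import TFElectraMainLayer  # module path as of v2.8; may differ in later releases

class TFElectraWithHead(TFElectraPreTrainedModel):
    """Illustrative only: Electra encoder plus a small task-specific head."""

    def __init__(self, config, *inputs, **kwargs):
        super().__init__(config, *inputs, **kwargs)
        self.electra = TFElectraMainLayer(config, name="electra")
        self.classifier = tf.keras.layers.Dense(2, name="classifier")

    def call(self, inputs, **kwargs):
        sequence_output = self.electra(inputs, **kwargs)[0]
        return self.classifier(sequence_output[:, 0, :])  # classify from the first token

model = TFElectraWithHead.from_pretrained("google/electra-base-discriminator")
model.save_pretrained("swag")  # the custom head is saved alongside the Electra weights
```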
transformers
4,033
closed
Model Card: gaochangkuan README.md
04-28-2020 07:06:22
04-28-2020 07:06:22
As discussed over email, can you move this file to https://huggingface.co/gaochangkuan/model_dir ? Thanks!<|||||>> As discussed over email, can you move this file to https://huggingface.co/gaochangkuan/model_dir ? > > Thanks! Ok, I will do it later~<|||||>I meant the file path of the README.md file :) I've done it on your branch directly. Thank you for sharing: [model page](https://huggingface.co/gaochangkuan/model_dir).
transformers
4,032
closed
[experimental] rename torch_device -> default_device
04-28-2020 02:18:24
04-28-2020 02:18:24
transformers
4,031
closed
[gitignore] fixtures created by unit tests
04-28-2020 01:48:52
04-28-2020 01:48:52
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4031?src=pr&el=h1) Report > Merging [#4031](https://codecov.io/gh/huggingface/transformers/pull/4031?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4031/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4031?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4031 +/- ## ======================================= Coverage 78.44% 78.44% ======================================= Files 111 111 Lines 18518 18518 ======================================= Hits 14527 14527 Misses 3991 3991 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4031?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4031/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.49% <0.00%> (-0.37%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4031/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4031?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4031?src=pr&el=footer). Last update [4e817ff...361724e](https://codecov.io/gh/huggingface/transformers/pull/4031?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Which tests yield those fixtures? I just ran all the tests and I have none of those<|||||>It comes from adding a failing test so we don't need it. My bad.
transformers
4,030
closed
CDN urls
Use CDN urls for weights (not for tokenizer or config files)
04-28-2020 00:51:40
04-28-2020 00:51:40
This also gives us a nice speedup on CI which is always nice
transformers
4,029
closed
TF2 - how to access intermediate layers of pre-trained bert model?
(I'm following [this](https://mccormickml.com/2019/05/14/BERT-word-embeddings-tutorial/) PyTorch tutorial about BERT word embeddings, and in the tutorial the author accesses the intermediate layers of the BERT model.)

What I want is to access the last, let's say, four layers of a single token input to the BERT model in TensorFlow 2 code. Each layer outputs a vector of length 768, so the last 4 layers will have a shape of 4*768=3072 (for each token). How can I implement this in TF2/Keras to get the intermediate layers of the pretrained model for a token input? (Later I will try to get the vectors for each token in a sentence, but for now one token is enough.)

I'm using the huggingface BERT model:

```
!pip install transformers
from transformers import (TFBertModel, BertTokenizer)

bert_model = TFBertModel.from_pretrained("bert-base-uncased")  # Automatically loads the config
bert_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

sentence_marked = "hello"
tokenized_text = bert_tokenizer.tokenize(sentence_marked)
indexed_tokens = bert_tokenizer.convert_tokens_to_ids(tokenized_text)
print(indexed_tokens)  # prints [7592]
```

The output is a token id ([7592]), which should be the input to the BERT model.
04-27-2020 23:12:28
04-27-2020 23:12:28
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
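For future readers: one way to do this is to ask the model for its hidden states and concatenate the last four. A minimal sketch, following the single-token setup in the question; exact output ordering may vary slightly across library versions:

```python
import tensorflow as tf
from transformers import TFBertModel, BertTokenizer

bert_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# output_hidden_states=True makes the model return the embedding output plus every layer's output.
bert_model = TFBertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

indexed_tokens = bert_tokenizer.convert_tokens_to_ids(bert_tokenizer.tokenize("hello"))
input_ids = tf.constant([indexed_tokens])           # shape (1, 1) for the single token "hello"

outputs = bert_model(input_ids)
hidden_states = outputs[-1]                         # tuple: embedding output + 12 layer outputs, each (1, seq_len, 768)
last_four = tf.concat(hidden_states[-4:], axis=-1)  # shape (1, seq_len, 4 * 768) = (1, 1, 3072)
print(last_four.shape)
```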
transformers
4,028
closed
CamembertForSequenceClassification not initialized from pretrained model
Model I am using **CamembertForSequenceClassification** for a relation classification task. I'm trying to adapt the model following the implementation proposed by [wang](https://github.com/wang-h/bert-relation-classification) for this [paper](https://arxiv.org/pdf/1905.08284.pdf) When I try to load the pre trained model `loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/camembert-base-pytorch_model.bin ` to fine-tune it on my task, I get the following INFO message. I find it weird because it seems that a lot of model's params are not initialized (contrary to what I see for BERT model) : `04/28/2020 00:56:31 - INFO - transformers.modeling_utils - Weights of CamembertForSequenceClassification not initialized from pretrained model: ['latent_type', 'classifier.weight', 'classifier.bias', 'bert.embeddings.word_embeddings.weight', 'bert.embeddings.position_embeddings.weight', 'bert.embeddings.token_type_embeddings.weight', 'bert.embeddings.LayerNorm.weight', 'bert.embeddings.LayerNorm.bias', 'bert.encoder.layer.0.attention.self.query.weight', 'bert.encoder.layer.0.attention.self.query.bias', 'bert.encoder.layer.0.attention.self.key.weight', 'bert.encoder.layer.0.attention.self.key.bias', 'bert.encoder.layer.0.attention.self.value.weight', 'bert.encoder.layer.0.attention.self.value.bias', 'bert.encoder.layer.0.attention.output.dense.weight', 'bert.encoder.layer.0.attention.output.dense.bias', 'bert.encoder.layer.0.attention.output.LayerNorm.weight', 'bert.encoder.layer.0.attention.output.LayerNorm.bias', 'bert.encoder.layer.0.intermediate.dense.weight', 'bert.encoder.layer.0.intermediate.dense.bias', 'bert.encoder.layer.0.output.dense.weight', 'bert.encoder.layer.0.output.dense.bias', 'bert.encoder.layer.0.output.LayerNorm.weight', 'bert.encoder.layer.0.output.LayerNorm.bias', 'bert.encoder.layer.1.attention.self.query.weight', 'bert.encoder.layer.1.attention.self.query.bias', 'bert.encoder.layer.1.attention.self.key.weight', 'bert.encoder.layer.1.attention.self.key.bias', 'bert.encoder.layer.1.attention.self.value.weight', 'bert.encoder.layer.1.attention.self.value.bias', 'bert.encoder.layer.1.attention.output.dense.weight', 'bert.encoder.layer.1.attention.output.dense.bias', 'bert.encoder.layer.1.attention.output.LayerNorm.weight', 'bert.encoder.layer.1.attention.output.LayerNorm.bias', 'bert.encoder.layer.1.intermediate.dense.weight', 'bert.encoder.layer.1.intermediate.dense.bias', 'bert.encoder.layer.1.output.dense.weight', 'bert.encoder.layer.1.output.dense.bias', 'bert.encoder.layer.1.output.LayerNorm.weight', 'bert.encoder.layer.1.output.LayerNorm.bias', 'bert.encoder.layer.2.attention.self.query.weight', 'bert.encoder.layer.2.attention.self.query.bias', 'bert.encoder.layer.2.attention.self.key.weight', 'bert.encoder.layer.2.attention.self.key.bias', 'bert.encoder.layer.2.attention.self.value.weight', 'bert.encoder.layer.2.attention.self.value.bias', 'bert.encoder.layer.2.attention.output.dense.weight', 'bert.encoder.layer.2.attention.output.dense.bias', 'bert.encoder.layer.2.attention.output.LayerNorm.weight', 'bert.encoder.layer.2.attention.output.LayerNorm.bias', 'bert.encoder.layer.2.intermediate.dense.weight', 'bert.encoder.layer.2.intermediate.dense.bias', 'bert.encoder.layer.2.output.dense.weight', 'bert.encoder.layer.2.output.dense.bias', 'bert.encoder.layer.2.output.LayerNorm.weight', 'bert.encoder.layer.2.output.LayerNorm.bias', 'bert.encoder.layer.3.attention.self.query.weight', 'bert.encoder.layer.3.attention.self.query.bias', 
'bert.encoder.layer.3.attention.self.key.weight', 'bert.encoder.layer.3.attention.self.key.bias', 'bert.encoder.layer.3.attention.self.value.weight', 'bert.encoder.layer.3.attention.self.value.bias', 'bert.encoder.layer.3.attention.output.dense.weight', 'bert.encoder.layer.3.attention.output.dense.bias', 'bert.encoder.layer.3.attention.output.LayerNorm.weight', 'bert.encoder.layer.3.attention.output.LayerNorm.bias', 'bert.encoder.layer.3.intermediate.dense.weight', 'bert.encoder.layer.3.intermediate.dense.bias', 'bert.encoder.layer.3.output.dense.weight', 'bert.encoder.layer.3.output.dense.bias', 'bert.encoder.layer.3.output.LayerNorm.weight', 'bert.encoder.layer.3.output.LayerNorm.bias', 'bert.encoder.layer.4.attention.self.query.weight', 'bert.encoder.layer.4.attention.self.query.bias', 'bert.encoder.layer.4.attention.self.key.weight', 'bert.encoder.layer.4.attention.self.key.bias', 'bert.encoder.layer.4.attention.self.value.weight', 'bert.encoder.layer.4.attention.self.value.bias', 'bert.encoder.layer.4.attention.output.dense.weight', 'bert.encoder.layer.4.attention.output.dense.bias', 'bert.encoder.layer.4.attention.output.LayerNorm.weight', 'bert.encoder.layer.4.attention.output.LayerNorm.bias', 'bert.encoder.layer.4.intermediate.dense.weight', 'bert.encoder.layer.4.intermediate.dense.bias', 'bert.encoder.layer.4.output.dense.weight', 'bert.encoder.layer.4.output.dense.bias', 'bert.encoder.layer.4.output.LayerNorm.weight', 'bert.encoder.layer.4.output.LayerNorm.bias', 'bert.encoder.layer.5.attention.self.query.weight', 'bert.encoder.layer.5.attention.self.query.bias', 'bert.encoder.layer.5.attention.self.key.weight', 'bert.encoder.layer.5.attention.self.key.bias', 'bert.encoder.layer.5.attention.self.value.weight', 'bert.encoder.layer.5.attention.self.value.bias', 'bert.encoder.layer.5.attention.output.dense.weight', 'bert.encoder.layer.5.attention.output.dense.bias', 'bert.encoder.layer.5.attention.output.LayerNorm.weight', 'bert.encoder.layer.5.attention.output.LayerNorm.bias', 'bert.encoder.layer.5.intermediate.dense.weight', 'bert.encoder.layer.5.intermediate.dense.bias', 'bert.encoder.layer.5.output.dense.weight', 'bert.encoder.layer.5.output.dense.bias', 'bert.encoder.layer.5.output.LayerNorm.weight', 'bert.encoder.layer.5.output.LayerNorm.bias', 'bert.encoder.layer.6.attention.self.query.weight', 'bert.encoder.layer.6.attention.self.query.bias', 'bert.encoder.layer.6.attention.self.key.weight', 'bert.encoder.layer.6.attention.self.key.bias', 'bert.encoder.layer.6.attention.self.value.weight', 'bert.encoder.layer.6.attention.self.value.bias', 'bert.encoder.layer.6.attention.output.dense.weight', 'bert.encoder.layer.6.attention.output.dense.bias', 'bert.encoder.layer.6.attention.output.LayerNorm.weight', 'bert.encoder.layer.6.attention.output.LayerNorm.bias', 'bert.encoder.layer.6.intermediate.dense.weight', 'bert.encoder.layer.6.intermediate.dense.bias', 'bert.encoder.layer.6.output.dense.weight', 'bert.encoder.layer.6.output.dense.bias', 'bert.encoder.layer.6.output.LayerNorm.weight', 'bert.encoder.layer.6.output.LayerNorm.bias', 'bert.encoder.layer.7.attention.self.query.weight', 'bert.encoder.layer.7.attention.self.query.bias', 'bert.encoder.layer.7.attention.self.key.weight', 'bert.encoder.layer.7.attention.self.key.bias', 'bert.encoder.layer.7.attention.self.value.weight', 'bert.encoder.layer.7.attention.self.value.bias', 'bert.encoder.layer.7.attention.output.dense.weight', 'bert.encoder.layer.7.attention.output.dense.bias', 
'bert.encoder.layer.7.attention.output.LayerNorm.weight', 'bert.encoder.layer.7.attention.output.LayerNorm.bias', 'bert.encoder.layer.7.intermediate.dense.weight', 'bert.encoder.layer.7.intermediate.dense.bias', 'bert.encoder.layer.7.output.dense.weight', 'bert.encoder.layer.7.output.dense.bias', 'bert.encoder.layer.7.output.LayerNorm.weight', 'bert.encoder.layer.7.output.LayerNorm.bias', 'bert.encoder.layer.8.attention.self.query.weight', 'bert.encoder.layer.8.attention.self.query.bias', 'bert.encoder.layer.8.attention.self.key.weight', 'bert.encoder.layer.8.attention.self.key.bias', 'bert.encoder.layer.8.attention.self.value.weight', 'bert.encoder.layer.8.attention.self.value.bias', 'bert.encoder.layer.8.attention.output.dense.weight', 'bert.encoder.layer.8.attention.output.dense.bias', 'bert.encoder.layer.8.attention.output.LayerNorm.weight', 'bert.encoder.layer.8.attention.output.LayerNorm.bias', 'bert.encoder.layer.8.intermediate.dense.weight', 'bert.encoder.layer.8.intermediate.dense.bias', 'bert.encoder.layer.8.output.dense.weight', 'bert.encoder.layer.8.output.dense.bias', 'bert.encoder.layer.8.output.LayerNorm.weight', 'bert.encoder.layer.8.output.LayerNorm.bias', 'bert.encoder.layer.9.attention.self.query.weight', 'bert.encoder.layer.9.attention.self.query.bias', 'bert.encoder.layer.9.attention.self.key.weight', 'bert.encoder.layer.9.attention.self.key.bias', 'bert.encoder.layer.9.attention.self.value.weight', 'bert.encoder.layer.9.attention.self.value.bias', 'bert.encoder.layer.9.attention.output.dense.weight', 'bert.encoder.layer.9.attention.output.dense.bias', 'bert.encoder.layer.9.attention.output.LayerNorm.weight', 'bert.encoder.layer.9.attention.output.LayerNorm.bias', 'bert.encoder.layer.9.intermediate.dense.weight', 'bert.encoder.layer.9.intermediate.dense.bias', 'bert.encoder.layer.9.output.dense.weight', 'bert.encoder.layer.9.output.dense.bias', 'bert.encoder.layer.9.output.LayerNorm.weight', 'bert.encoder.layer.9.output.LayerNorm.bias', 'bert.encoder.layer.10.attention.self.query.weight', 'bert.encoder.layer.10.attention.self.query.bias', 'bert.encoder.layer.10.attention.self.key.weight', 'bert.encoder.layer.10.attention.self.key.bias', 'bert.encoder.layer.10.attention.self.value.weight', 'bert.encoder.layer.10.attention.self.value.bias', 'bert.encoder.layer.10.attention.output.dense.weight', 'bert.encoder.layer.10.attention.output.dense.bias', 'bert.encoder.layer.10.attention.output.LayerNorm.weight', 'bert.encoder.layer.10.attention.output.LayerNorm.bias', 'bert.encoder.layer.10.intermediate.dense.weight', 'bert.encoder.layer.10.intermediate.dense.bias', 'bert.encoder.layer.10.output.dense.weight', 'bert.encoder.layer.10.output.dense.bias', 'bert.encoder.layer.10.output.LayerNorm.weight', 'bert.encoder.layer.10.output.LayerNorm.bias', 'bert.encoder.layer.11.attention.self.query.weight', 'bert.encoder.layer.11.attention.self.query.bias', 'bert.encoder.layer.11.attention.self.key.weight', 'bert.encoder.layer.11.attention.self.key.bias', 'bert.encoder.layer.11.attention.self.value.weight', 'bert.encoder.layer.11.attention.self.value.bias', 'bert.encoder.layer.11.attention.output.dense.weight', 'bert.encoder.layer.11.attention.output.dense.bias', 'bert.encoder.layer.11.attention.output.LayerNorm.weight', 'bert.encoder.layer.11.attention.output.LayerNorm.bias', 'bert.encoder.layer.11.intermediate.dense.weight', 'bert.encoder.layer.11.intermediate.dense.bias', 'bert.encoder.layer.11.output.dense.weight', 'bert.encoder.layer.11.output.dense.bias', 
'bert.encoder.layer.11.output.LayerN` Any thoughts about that ? Thanks in advance
04-27-2020 23:12:12
04-27-2020 23:12:12
Hi, do you mind showing how you're loading the model?<|||||>Hello, Here is how I'm loading the model : `model = CamembertForSequenceClassification.from_pretrained("https://s3.amazonaws.com/models.huggingface.co/bert/camembert-base-pytorch_model.bin", config=bertconfig)` I'm giving the link to the model because when using the name 'camembert-base' it triggers an error saying it can't find the model at the following link : `https://s3.amazonaws.com/models.huggingface.co/bert/camembert-base/pytorch_model.bin` I just would like to point out that the class `CamembertForSequenceClassification` , I'm using, is a re-implementation for the HuggingFace one following [wang's](https://github.com/wang-h/bert-relation-classification) logic, to re-adapt if for relation extraction, as follow : ``` class CamembertForSequenceClassification(RobertaForSequenceClassification): def __init__(self, config): super(CamembertForSequenceClassification, self).__init__(config) self.num_labels = config.num_labels self.l2_reg_lambda = config.l2_reg_lambda self.bert = CamembertModel(config) self.latent_entity_typing = config.latent_entity_typing self.latent_size = config.hidden_size self.latent_type = nn.Parameter(torch.FloatTensor( 3, config.hidden_size), requires_grad=True) if self.latent_entity_typing: classifier_size += config.hidden_size*2 self.dropout = nn.Dropout(config.hidden_dropout_prob) classifier_size = config.hidden_size*3 self.classifier = nn.Linear( classifier_size, self.config.num_labels) self.init_weights() def forward(self, input_ids, token_type_ids=None, attention_mask=None, e1_mask=None, e2_mask=None, labels=None, position_ids=None, head_mask=None): outputs = self.bert(input_ids, attention_mask=attention_mask,token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask) pooled_output = outputs[1] sequence_output = outputs[0] def extract_entity(sequence_output, e_mask): extended_e_mask = e_mask.unsqueeze(1) extended_e_mask = torch.bmm( extended_e_mask.float(), sequence_output).squeeze(1) return extended_e_mask.float() e1_h = extract_entity(sequence_output, e1_mask) e2_h = extract_entity(sequence_output, e2_mask) context = self.dropout(pooled_output) pooled_output = torch.cat([context, e1_h, e2_h], dim=-1) logits = self.classifier(pooled_output) outputs = (logits,) + outputs[2:] device = logits.get_device() l2 = l2_loss(self.parameters()) # print(l2) if device >= 0: l2 = l2.to(device) loss = l2 * self.l2_reg_lambda if labels is not None: if self.num_labels == 1: # We are doing regression loss_fct = MSELoss() loss += loss_fct(logits.view(-1), labels.view(-1)) else: probabilities = F.softmax(logits, dim=-1) log_probs = F.log_softmax(logits, dim=-1) one_hot_labels = F.one_hot(labels, num_classes=self.num_labels) if device >= 0: one_hot_labels = one_hot_labels.to(device) dist = one_hot_labels[:, 1:].float() * log_probs[:, 1:] example_loss_except_other, _ = dist.min(dim=-1) per_example_loss = - example_loss_except_other.mean() rc_probabilities = probabilities - probabilities * one_hot_labels.float() second_pre, _ = rc_probabilities[:, 1:].max(dim=-1) rc_loss = - (1 - second_pre).log().mean() #print(loss, per_example_loss, rc_loss) loss += per_example_loss + 5 * rc_loss outputs = (loss,) + outputs return outputs ````<|||||>any solution for my problem please ? am I loading the pre-trained model the wrong way ? <|||||>That's because your `CamembertForSequenceClassification` does not conform to our model, regarding naming. 
`CamembertForSequenceClassification` inherits from `RobertaForSequenceClassification` and therefore the transformer model should be named `self.roberta`, and not `self.bert`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
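To make the fix concrete, a short sketch of the relevant part of the class; the class name is illustrative and everything except the attribute naming is elided:

```python
from transformers import CamembertModel, RobertaForSequenceClassification

class CamembertForRelationClassification(RobertaForSequenceClassification):
    """Illustrative subclass: the attribute name below is the only point being made."""

    def __init__(self, config):
        super().__init__(config)
        # Must be `self.roberta`, not `self.bert`, so that the camembert-base
        # weights map onto the encoder when calling from_pretrained.
        self.roberta = CamembertModel(config)
        self.init_weights()
```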
transformers
4,027
closed
Hoist bert model tester for patrick
04-27-2020 22:53:22
04-27-2020 22:53:22
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4027?src=pr&el=h1) Report > Merging [#4027](https://codecov.io/gh/huggingface/transformers/pull/4027?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **decrease** coverage by `0.07%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4027/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4027?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4027 +/- ## ========================================== - Coverage 78.44% 78.37% -0.08% ========================================== Files 111 111 Lines 18518 18518 ========================================== - Hits 14527 14513 -14 - Misses 3991 4005 +14 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4027?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4027/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `90.46% <0.00%> (-2.31%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4027?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4027?src=pr&el=footer). Last update [4e817ff...7189fff](https://codecov.io/gh/huggingface/transformers/pull/4027?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,026
closed
Training a new language model with custom loss and input representation
# ❓ Questions & Help I'm following https://huggingface.co/blog/how-to-train which describe an overview of training a new language model. However, the file the guide points to run_language_modeling.py abstracts away a lot of things. It's not clear if it's possible to train with a custom loss/input representation. For example, what if I want to train using 3 sequences concatenated together instead of 2 as in the original Bert paper? (e.g. [context, context, question] or [sent1, sent2, sent3] where the task is whether sentences sent1, sent2, sent3 are 3 consecutive sentences or not.) Do I need to modify the source code to achieve this? Is there any documentation to modify the underlying model or loss functions?
04-27-2020 22:47:14
04-27-2020 22:47:14
We recently implemented a new `Trainer` which should allow to easily change the training loop. We don't have example scripts showing how to override this yet. Here's the [trainer](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py) file. If I wanted to modify the way the loss is handled, what I would do is create a specific trainer that inherits from `Trainer`. I would then simply override the `_training_step` method to handle the loss/losses. In order to modify the input representation to build inputs made up of three sequences, the best would be for you to create a Dataset similar to [`TextDataset`](https://github.com/huggingface/transformers/blob/fa49b9afeab5545f14b3661b35195b829fcf8ef5/src/transformers/data/datasets/language_modeling.py#L16), which builds your inputs as you wish. You can then modify the `run_language_modeling.py` file to use your dataset. Let me know if I can help further!<|||||>Hi, this is super useful advice for getting me started! After looking at the files you pointed out, it seems like in order for me to implement the input representation and custom loss function, I need to modify transformers.modeling_bert.py. I have 2 questions. 1. If I implement my own local version of modeling_bert.py, how should I instantiate the BertForMaskedLM class? The way the example does it is with AutoModelWithLMHead.from_pretrained - this obscures how to actually instantiate a particular model class. 2. For concatenating the 3 sequences in the input, how would I make sure a [SEP] token is inserted between each sequence? My line_by_line data file looks as follows: sequence 1 \t sequence 2 \t sequence 3 \n sequence 1 \t sequence 2 \t sequence 3 \n sequence 1 \t sequence 2 \t sequence 3 \n sequence 1 \t sequence 2 \t sequence 3 \n . . . I think my desired input looks like this: [sequence 1's tokens] [sep] [sequence 2's tokens] [sep] [sequence 3's tokens] and I'd like to apply position embedding to each sequence 1, 2, 3.<|||||>I don't think you would have to modify the `modeling_bert.py` file. You may be thinking that because the models like `BertForMaskedLM` can [compute the losses](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L947-L958) if you give them labels. However, all those classes only compute the losses if you hand them the `labels`, and the loss function cannot be changed. I don't know what is your specific use-case, but if you want to use a custom loss function you could retrieve the model's hidden states, and compute your loss then (outside the model). The tensor containing the model's last hidden states is the first value in the model's output tuple if you don't specify labels. Seeing as the way you construct your dataset is different from the "traditional" way of handling sequences, I think you would have to build your own encoding method relying on `encode`, `encode_plus` or `batch_encode_plus`. Unfortunately, the logic we have for building inputs is very specific to the model's pre-trainings and sequence classification, so these work only for two sequences. We did discuss at one point handling more than 2 sequences when building inputs but never got to it. Our models handle creating position embeddings on their own so except if you use a different type of positional embeddings than the model's, you shouldn't have to do anything to handle those.<|||||>Thanks for the pointers. I'll have an attempt at implementing encode/encode_plus/batch_encode_plus. One question before I do so. 
It seems like a lot of changes have been made in the previous 3 weeks since 2.8.0 came out. These changes seem to be affecting the files I want to manipulate. Specifically, I don't think trainer.py was even used by run_language_modeling.py 3 weeks ago. Do you recommend moving forward with my project using the latest code changes on the master branch, or using the March 02 2020 snapshot (which I'm guessing is the 2.8.0 release snapshot)? The files you referred me to were all on master. It seems like you can't run them unless transformer is installed from source (pip install version isn't compatible). I'm a bit concerned with using master - I tried training a tokenizer on it and it seemed slower which impleis the latest changes don't seem to have gone through thorough testing.<|||||>Hi, I've thought over your advice a bit more and I think there's an easier solution. Suppose the 3 sequences of my input have disjoint vocabulary (I think this is a decent assumption for my particular dataset/usecase). E.g. each line is (seq1, seq2, seq3). seq1 is english, seq2 is french, seq3 is spanish. Could I just train 3 different tokenizers and tell BertForMaskedLM the total vocab size is the sum of the 3 tokenizer's vocab sizes? I realized there's the token_type_ids parameter in the BertEmbeddings which has been implemented for an arbitrary value of config.type_vocab_size. It seems like I can then just set config.type_vocab_size=3 and pass in token_type_ids=[0, 0, ... 1, 1, ... 2, 2]. Does this seem reasonable? Thanks so much for your help!<|||||>Indeed, there has been a lot of changes in the last few weeks! Since the last release, we've moved to a new `Trainer` class, which abstracts most of the code that isn't especially important for users (fp16, multi-GPU, gradient accumulation etc). We've also completely moved to using `tokenizers`, which uses rust as a backend for tokenizing. Until now the support was ok, now it's fully supported. You would use the previous snapshot only if you want to have a script that does not rely on any abstraction. The previous scripts were mostly standalone, which means that it would be a good learning experience to understand every small detail regarding training. However, it may be lengthy to adapt to your particular usecase. That's what we're trying to fix with the new `Trainer`, where re-implementing a few methods is simple. I find it weird that you tried to train a tokenizer on `master` and it was slower. We **do** test thoroughly, and the new tokenizers have undergone a lot of trial and error and a lot of tests to be in the position they are now. I think training three different tokenizers would work, but keep in mind that this would require a lot of memory. The embeddings take up a big portion of your model's memory, so beware of the total vocabulary size. You would also need to think about how you want to separate the three tokenizers, and especially how to make sure you have no overlap. Using separate tokenizers for each sequence, and then shifting the token indices would probably be the most robust (cc @n1t0).<|||||>Yup, that's the implementation I had in mind for separating the tokenizers! As a new user, I think the new abstractions that were introduced makes calling the API/running the script a lot easier but it obscures some of the underlying code - especially someone who doesn't have experience with the abstraction libraries you are using. I think I will go with the previous snapshot and probably switch over to master if I get stuck. 
Please disregard the comment on training the tokenizer is slower, I did something wrong on my end.<|||||>Indeed, we plan to add examples showing how to use the `Trainer` to custom tasks down the road!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
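Following up on the three-segment idea discussed above, here is a minimal sketch of building [CLS] seq1 [SEP] seq2 [SEP] seq3 [SEP] inputs with token_type_ids 0/1/2. This assumes a model config with type_vocab_size=3 and is not code from the thread:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def encode_triple(seq1, seq2, seq3):
    """Concatenate three sequences with [SEP] tokens and per-segment token_type_ids."""
    input_ids = [tokenizer.cls_token_id]
    token_type_ids = [0]
    for segment_id, text in enumerate((seq1, seq2, seq3)):
        ids = tokenizer.encode(text, add_special_tokens=False)
        input_ids += ids + [tokenizer.sep_token_id]
        token_type_ids += [segment_id] * (len(ids) + 1)
    return input_ids, token_type_ids

input_ids, token_type_ids = encode_triple("sequence 1", "sequence 2", "sequence 3")
print(input_ids)
print(token_type_ids)  # [0, 0, ..., 1, 1, ..., 2, 2, ...]
```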
transformers
4,025
closed
Minor fix in Transformers Notebook
Minor fix for the German example sentence in notebook :)
04-27-2020 22:47:04
04-27-2020 22:47:04
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4025?src=pr&el=h1) Report > Merging [#4025](https://codecov.io/gh/huggingface/transformers/pull/4025?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2fade302ac55a289f00c61b0947a8e00dadf41fe&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4025/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4025?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4025 +/- ## ======================================= Coverage 78.51% 78.51% ======================================= Files 111 111 Lines 18486 18486 ======================================= + Hits 14514 14515 +1 + Misses 3972 3971 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4025?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4025/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.59% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4025/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4025/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.02% <0.00%> (+0.41%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4025?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4025?src=pr&el=footer). Last update [2fade30...3a3c460](https://codecov.io/gh/huggingface/transformers/pull/4025?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Perfekt, vielen Dank haha :-)
transformers
4,024
closed
Fix for #3846
Fix for #3846. `PreTrainedTokenizer` mapped " do not" to " don't" when `.decode(...)` was called. Removed the " do not" --> " don't" mapping from `clean_up_tokenization(...)`.
04-27-2020 22:37:53
04-27-2020 22:37:53
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4024?src=pr&el=h1) Report > Merging [#4024](https://codecov.io/gh/huggingface/transformers/pull/4024?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ab90353f1abfd15f8d21f99395658d060679a08c&el=desc) will **decrease** coverage by `0.08%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4024/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4024?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4024 +/- ## ========================================== - Coverage 78.01% 77.93% -0.09% ========================================== Files 114 114 Lines 18671 18671 ========================================== - Hits 14566 14551 -15 - Misses 4105 4120 +15 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4024?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.36% <ø> (ø)` | | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `90.47% <0.00%> (-2.47%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4024?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4024?src=pr&el=footer). Last update [ab90353...333778c](https://codecov.io/gh/huggingface/transformers/pull/4024?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This looks good to me :-) Thanks for the PR! What do you think @thomwolf @n1t0 ?<|||||>Hi @lukovnikov, was the last commit by accident? I don't think that this one is needed to fix the issue no?<|||||>Thanks for pointing that out, should be good now.
transformers
4,023
closed
TFBert: Out of memory error when acting on a strided slice of input.
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): TF Bert & TFBertForSequenceEncoding Language I am using the model on (English, Chinese ...): EN The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## Code To reproduce ``` batch_size = 10 window = 50 # seq_len = 512 # not good to override unless the sequence truncatting is disabled. from transformers import TFBertModel, TFBertForSequenceClassification, BertTokenizer # configuration = BertConfig() def build_model(num_labels = 10, max_seq_len=seq_len, use_logits=True, batch_size = batch_size, label_size = None): print(f"Number of labels: {num_labels}") optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-06, clipnorm=1.0) loss = tf.keras.losses.CategoricalCrossentropy(from_logits=use_logits) macc = tf.keras.metrics.CategoricalAccuracy('accuracy') print(f"Optimizer used: {optimizer}") print(f"Loss used: {loss}") print(f"Acc used: {macc}") num_labels = len(unique_classes) # 9 unique classes bert_config = BertConfig.from_pretrained("bert-base-cased", num_labels=len(unique_classes), output_hidden_states=False, output_attentions=True) bert_model = TFBertForSequenceClassification.from_pretrained("bert-base-cased", config=bert_config) # [ ENCODINGS, ATTN_MASK] agent_input = Input(shape=[2, seq_len], batch_size=batch_size, name='agent_input', dtype='int32') print("Agent Input:",agent_input) print('Stacking Inputs') agent_encodings = agent_input[...,0,:] agent_attn = agent_input[...,1,:] print('Agent Encodings:', agent_encodings) print('Agent Attn: ', agent_attn) print("Building Bert Model") agent_outputs = bert_model([agent_encodings, agent_attn]) agent_predictions = agent_outputs[0] agent_attn = agent_outputs[1] print('agent_outputs', agent_outputs) model = Model(inputs=agent_input, outputs=agent_predictions) model.compile(optimizer=optimizer, loss=loss, metrics=[macc]) return model #_________________________ print('shape for train data', agent_encodings_train.shape) batch_size = 100 bert_model = build_model(len(unique_classes), seq_len, True, batch_size) bert_model.summary() #_________________________ bert_model.evaluate(agent_encodings_train[:100], agent_class_train[:100], batch_size=batch_size) ``` ![image](https://user-images.githubusercontent.com/14914/80419128-2aa6b480-889e-11ea-8a91-a7025a07cedc.png) ![image](https://user-images.githubusercontent.com/14914/80419229-5cb81680-889e-11ea-89b1-96f051fa58d6.png) agent_encodings_train contains a (batch_size, 2, sequence_len) shape of tensors containing seq_len encoded bert tokens and seq_len bert attention tokens. I am slicing them in the model into each token part and passing them to bert. You will notice in the summary. there are 2 strided slices being passed to BERT. The model compiles, and evaluates, but when calling model.fit I constantly run into memory problems as seen below. 
``` bert_model.summary() history = bert_model.fit(agent_encodings_train[:100], agent_class_train[:100], epochs=1, batch_size=batch_size, class_weights=class_weights, shuffle=False) ``` ``` --------------------------------------------------------------------------- ResourceExhaustedError Traceback (most recent call last) <ipython-input-115-712b9a078eba> in <module> 10 # validation_data=(agent_encodings_test, agent_class_test), 11 class_weights=class_weights, ---> 12 shuffle=False) ~\Anaconda3\envs\cat2_nightly\lib\site-packages\tensorflow\python\keras\engine\training.py in _method_wrapper(self, *args, **kwargs) 63 def _method_wrapper(self, *args, **kwargs): 64 if not self._in_multi_worker_mode(): # pylint: disable=protected-access ---> 65 return method(self, *args, **kwargs) 66 67 # Running inside `run_distribute_coordinator` already. ~\Anaconda3\envs\cat2_nightly\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs) 781 batch_size=batch_size): 782 callbacks.on_train_batch_begin(step) --> 783 tmp_logs = train_function(iterator) 784 # Catch OutOfRangeError for Datasets of unknown size. 785 # This blocks until the batch has finished executing. ~\Anaconda3\envs\cat2_nightly\lib\site-packages\tensorflow\python\eager\def_function.py in __call__(self, *args, **kwds) 577 xla_context.Exit() 578 else: --> 579 result = self._call(*args, **kwds) 580 581 if tracing_count == self._get_tracing_count(): ~\Anaconda3\envs\cat2_nightly\lib\site-packages\tensorflow\python\eager\def_function.py in _call(self, *args, **kwds) 641 # Lifting succeeded, so variables are initialized and we can run the 642 # stateless function. --> 643 return self._stateless_fn(*args, **kwds) 644 else: 645 canon_args, canon_kwds = \ ~\Anaconda3\envs\cat2_nightly\lib\site-packages\tensorflow\python\eager\function.py in __call__(self, *args, **kwargs) 2418 with self._lock: 2419 graph_function, args, kwargs = self._maybe_define_function(args, kwargs) -> 2420 return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access 2421 2422 @property ~\Anaconda3\envs\cat2_nightly\lib\site-packages\tensorflow\python\eager\function.py in _filtered_call(self, args, kwargs) 1663 if isinstance(t, (ops.Tensor, 1664 resource_variable_ops.BaseResourceVariable))), -> 1665 self.captured_inputs) 1666 1667 def _call_flat(self, args, captured_inputs, cancellation_manager=None): ~\Anaconda3\envs\cat2_nightly\lib\site-packages\tensorflow\python\eager\function.py in _call_flat(self, args, captured_inputs, cancellation_manager) 1744 # No tape is watching; skip to running the function. 
1745 return self._build_call_outputs(self._inference_function.call( -> 1746 ctx, args, cancellation_manager=cancellation_manager)) 1747 forward_backward = self._select_forward_and_backward_functions( 1748 args, ~\Anaconda3\envs\cat2_nightly\lib\site-packages\tensorflow\python\eager\function.py in call(self, ctx, args, cancellation_manager) 596 inputs=args, 597 attrs=attrs, --> 598 ctx=ctx) 599 else: 600 outputs = execute.execute_with_cancellation( ~\Anaconda3\envs\cat2_nightly\lib\site-packages\tensorflow\python\eager\execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name) 58 ctx.ensure_initialized() 59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, ---> 60 inputs, attrs, num_outputs) 61 except core._NotOkStatusException as e: 62 if name is not None: ResourceExhaustedError: 2 root error(s) found. (0) Resource exhausted: OOM when allocating tensor with shape[100,86,3072] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[node model_11/tf_bert_for_sequence_classification_31/bert/encoder/layer_._0/intermediate/activation/truediv (defined at C:\Users\codeninja\Anaconda3\envs\cat2_nightly\lib\site-packages\transformers\modeling_tf_bert.py:63) ]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. [[clip_by_norm_2/truediv/_30]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. (1) Resource exhausted: OOM when allocating tensor with shape[100,86,3072] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[node model_11/tf_bert_for_sequence_classification_31/bert/encoder/layer_._0/intermediate/activation/truediv (defined at C:\Users\codeninja\Anaconda3\envs\cat2_nightly\lib\site-packages\transformers\modeling_tf_bert.py:63) ]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. 0 successful operations. 0 derived errors ignored. [Op:__inference_train_function_253267] Errors may have originated from an input operation. Input Source operations connected to node model_11/tf_bert_for_sequence_classification_31/bert/encoder/layer_._0/intermediate/activation/truediv: model_11/tf_bert_for_sequence_classification_31/bert/encoder/layer_._0/intermediate/dense/BiasAdd (defined at C:\Users\codeninja\Anaconda3\envs\cat2_nightly\lib\site-packages\transformers\modeling_tf_bert.py:319) Input Source operations connected to node model_11/tf_bert_for_sequence_classification_31/bert/encoder/layer_._0/intermediate/activation/truediv: model_11/tf_bert_for_sequence_classification_31/bert/encoder/layer_._0/intermediate/dense/BiasAdd (defined at C:\Users\codeninja\Anaconda3\envs\cat2_nightly\lib\site-packages\transformers\modeling_tf_bert.py:319) Function call stack: train_function -> train_function ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. 
--> ## Environment info <!-- You can run the command `transformers-cli env ` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> [env.txt](https://github.com/huggingface/transformers/files/4542124/env.txt) - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?:
04-27-2020 21:20:28
04-27-2020 21:20:28
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,022
closed
NER Pipeline with CamemBERT not showing entities
Hi, Using NER pipeline with CamemBERT, I am only getting "LABEL_O" and "LABEL_I" as entities. ``` from transformers import pipeline import torch nlp = pipeline(task='ner', model="camembert-base", tokenizer="camembert-base", framework='pt', device=0) sequence = "Lundi, David ira au magasin Printemps de Lille pour acheter du vin." print(nlp(sequence)) ``` output: `[{'word': '<s>', 'score': 0.5307311415672302, 'entity': 'LABEL_0'}, {'word': 'Lundi', 'score': 0.537407636642456, 'entity': 'LABEL_1'}, {'word': ',', 'score': 0.5144394040107727, 'entity': 'LABEL_1'}, {'word': 'David', 'score': 0.5270906090736389, 'entity': 'LABEL_1'}, {'word': 'ira', 'score': 0.5355848073959351, 'entity': 'LABEL_1'}, {'word': 'au', 'score': 0.5498790740966797, 'entity': 'LABEL_1'}, {'word': 'magasin', 'score': 0.5076472163200378, 'entity': 'LABEL_1'}, {'word': 'Printemps', 'score': 0.530289351940155, 'entity': 'LABEL_1'}, {'word': 'de', 'score': 0.5026782155036926, 'entity': 'LABEL_0'}, {'word': 'Lille', 'score': 0.5144190192222595, 'entity': 'LABEL_1'}, {'word': 'pour', 'score': 0.5344067215919495, 'entity': 'LABEL_1'}, {'word': 'acheter', 'score': 0.550661563873291, 'entity': 'LABEL_1'}, {'word': 'du', 'score': 0.5307605266571045, 'entity': 'LABEL_1'}, {'word': 'vin', 'score': 0.5279666781425476, 'entity': 'LABEL_1'}, {'word': '.', 'score': 0.5378196835517883, 'entity': 'LABEL_0'}, {'word': '</s>', 'score': 0.523155927658081, 'entity': 'LABEL_0'}] ` Entity only takes the LABEL_0 and LABEL_1 values. I would have expected entity to be: "Person", "Location", "Organisation" etc or something along those lines. set up: transformers: 2.5.0 (similar outcomes with versions 2.7 and 2.8, but for some reason the word tagging is cleaner in 2.5, a "_" is added in front of the words in 2.7 and 2.8) Python: 3.6.9 torch: 1.5.0 Linux Ubuntu 18.04 Many thanks for your help!
04-27-2020 20:35:35
04-27-2020 20:35:35
I think this is because CamemBERT has not been fine-tuned on a NER task yet. In the documentation they specify: "The models that this pipeline (ie. ner) can use are models that have been fine-tuned on a token classification task. See the up-to-date list of available models on huggingface.co/models." If you need to use CamemBERT for a named entity recognition task you must wait for a model to be uploaded or you can fine-tune it on your own/open data. <|||||>I had missed that point. Thanks a lot!<|||||>@ColleterVi Do you have any resources on how to fine-tune CamemBERT for a NER task on a specific domain dataset?<|||||>@MouadBH I don't have a link, but you can just follow these three steps (a short usage sketch follows below). ## Step 1: Create config object according to dataset labels from transformers import AutoConfig, AutoTokenizer, AutoModelForTokenClassification config = AutoConfig.from_pretrained("Jean-Baptiste/camembert-base") label2id = { "I-LOC": 1, "I-MISC": 3, "I-ORG": 4, "I-PER": 2, "O": 0, } id2label = {y:x for x,y in label2id.items()} config.__setattr__("id2label", id2label) config.__setattr__("label2id", label2id) config.__setattr__("num_labels", 5) ## Step 2: Change the tokenizer config attribute tokenizer = AutoTokenizer.from_pretrained("camembert-base") tokenizer.config = config ## Step 3: Change the model config attribute model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/camembert-base", num_labels=5) model.config = config
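Once the fine-tuned weights and a config with real `id2label` entries are saved, the NER pipeline should return named entity labels instead of `LABEL_0`/`LABEL_1`. A minimal sketch (illustrative, not from the original thread), assuming a hypothetical local checkpoint directory `./my-camembert-ner` produced by the steps above:

```python
from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification

# "./my-camembert-ner" is a placeholder for wherever the fine-tuned model was saved
tokenizer = AutoTokenizer.from_pretrained("./my-camembert-ner")
model = AutoModelForTokenClassification.from_pretrained("./my-camembert-ner")

nlp = pipeline("ner", model=model, tokenizer=tokenizer)
print(nlp("Lundi, David ira au magasin Printemps de Lille pour acheter du vin."))
```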
transformers
4,021
closed
T5 Tokenization of unique masked tokens (<extra_id_1>) is incorrect
Hi! Thanks for the awesome library! I am trying to tokenize the following text using the T5Tokenizer `tokenizer = T5Tokenizer.from_pretrained("t5-base")` `text = "The dog <extra_id_1> in the park"` `tokenized_text = tokenizer.tokenize(text)` `print (tokenized_text)` And get the following output: `['▁The', '▁dog', '▁', '<', 'extra', '_', 'i', 'd', '_', '1', '>', '▁in', '▁the', '▁park']` The `<extra_id_1>` is tokenized incorrectly. Any idea on how to solve this issue? I am using transformers version 2.8.0 and tokenizers version 0.7.0
04-27-2020 19:43:16
04-27-2020 19:43:16
I tried the following code: ```python tokenizer = T5Tokenizer.from_pretrained("t5-base") text = "The dog <extra_id_1> in the park" tokenized_text = tokenizer.tokenize(text) print (tokenized_text) ``` With the same settings, it works fine with the following output: ```python ['▁The', '▁dog', '<extra_id_1>', '▁in', '▁the', '▁park'] ``` FYI, I just installed transformers on Google Colab using `pip install transformers`.<|||||>@girishponkiya thanks for comment! which version of tokenizers and transformers are you using ?<|||||>torch: 1.5.0+cu101 transformers: 2.8.0 tokenizers: 0.7.0<|||||>I tried with the following environment as well: - torch: 1.5.0+cu101 - transformers: 2.8.0 - tokenizers: **0.5.2** ..and got the following output: ``` ['▁The', '▁dog', '<extra_id_1>', '▁in', '▁the', '▁park'] ```<|||||>We found the same problem. By gitting bisect, the following commit raises this ? ``` * commit 96ab75b8dd48a9384a74ba4307a4ebfb197343cd | Author: Funtowicz Morgan <[email protected]> | Date: Mon Apr 6 22:29:15 2020 +0000 (Pull Request # 3185) ``` In src/transformers/tokenization_utils.py # 272 (at the current master, Line 500), https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py#L500 Did you forget "setattr(self, key, value)" after assert. See: https://github.com/huggingface/transformers/commit/96ab75b8dd48a9384a74ba4307a4ebfb197343cd#diff-66606105b55d0ec62ae34112ea3e3d20R272 (please scroll down tokenization_utils.py and "Load diff") Best, P.S.) bisect results ``` $ git bisect log git bisect start # bad: [b1ff0b2ae7d368b7db3a8a8472a29cc195d278d8] Fix bug in examples: double wrap into DataParallel during eval git bisect bad b1ff0b2ae7d368b7db3a8a8472a29cc195d278d8 # good: [e52d1258e010b88d3507a4f527c6201616c119ad] Fix RoBERTa/XLNet Pad Token in run_multiple_choice.py (#3631) git bisect good e52d1258e010b88d3507a4f527c6201616c119ad # bad: [c59b1e682d6ebaf7295c63418d4570228904e690] [examples] unit test for run_bart_sum (#3544) git bisect bad c59b1e682d6ebaf7295c63418d4570228904e690 # bad: [31baeed614bf7f65aafde545f20a95e84cd293b4] Update quotes git bisect bad 31baeed614bf7f65aafde545f20a95e84cd293b4 # bad: [e344e3d4021421ec0d631d076daf17f8a4e82e69] [examples] SummarizationDataset cleanup (#3451) git bisect bad e344e3d4021421ec0d631d076daf17f8a4e82e69 # bad: [5aa8a278a3f13b8f83a0deb9b6d743f159cea23c] Fix roberta checkpoint conversion script (#3642) git bisect bad 5aa8a278a3f13b8f83a0deb9b6d743f159cea23c # bad: [0a9d09b42a9c7c1ccc00da48486a1188078e8594] fixed TransfoXLLMHeadModel documentation (#3661) git bisect bad 0a9d09b42a9c7c1ccc00da48486a1188078e8594 # bad: [96ab75b8dd48a9384a74ba4307a4ebfb197343cd] Tokenizers v3.0.0 (#3185) git bisect bad 96ab75b8dd48a9384a74ba4307a4ebfb197343cd # first bad commit: [96ab75b8dd48a9384a74ba4307a4ebfb197343cd] Tokenizers v3.0.0 (#3185) ```<|||||>@takahiro971 Here is the output of me running `git bisect log` ``` git bisect start # bad: [b1ff0b2ae7d368b7db3a8a8472a29cc195d278d8] Fix bug in examples: double wrap into DataParallel during eval git bisect bad b1ff0b2ae7d368b7db3a8a8472a29cc195d278d8 # good: [e52d1258e010b88d3507a4f527c6201616c119ad] Fix RoBERTa/XLNet Pad Token in run_multiple_choice.py (#3631) git bisect good e52d1258e010b88d3507a4f527c6201616c119ad ``` Can you point out how git bisect solves this issue? Thanks!<|||||>@mansimov Hi! The 'git bisect' is a tool for determine when (which commit) contains bugs. This issue is not still solved. Old version (before at Apr 6) is right. 
But the commit 96ab75b8dd48a9384a74ba4307a4ebfb197343cd has a bug and causes this issue. Also, the current master (latest source) has the same problem. BTW, how did you install transformers? Did you clone it from GitHub and 'pip install --editable .', or 'pip install transformers==2.8.0'? If the first one (with a newer version), it may have the bug. We have three choices: 1) Use an older version (before the above commit on Apr 6); 'pip install transformers==2.8.0' is also OK, because it seems older than the above commit. 2) Clone the repo locally and hack the transformers code. 3) Wait until somebody fixes this bug and releases the next version. Best, <|||||>FYI, my hack looks like this: ``` $ g log -p (snip) Date: Fri May 8 10:52:13 2020 +0900 Bug fix diff --git a/src/transformers/tokenization_utils.py b/src/transformers/tokenization_utils.py index a2d258a..91f345b 100644 --- a/src/transformers/tokenization_utils.py +++ b/src/transformers/tokenization_utils.py @@ -498,6 +498,7 @@ class SpecialTokensMixin: if key in self.SPECIAL_TOKENS_ATTRIBUTES: if key == "additional_special_tokens": assert isinstance(value, (list, tuple)) and all(isinstance(t, str) for t in value) + setattr(self, key, value) elif isinstance(value, AddedTokenFast): setattr(self, key, str(value)) elif isinstance(value, str): ``` Be careful: the line number may differ in other revisions.<|||||>Thanks @takahiro971, this solved my issue!<|||||>I think this issue is solved with #4353
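A quick way to verify whether an installed version is affected (an illustrative check, not from the original thread) is to confirm the sentinel survives as a single piece:

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
pieces = tokenizer.tokenize("The dog <extra_id_1> in the park")

# On an unaffected version the sentinel stays intact as one piece
print(pieces)
assert "<extra_id_1>" in pieces, f"sentinel was split: {pieces}"
```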
transformers
4,020
closed
Pass existing tensorboard SummaryWriter to Trainer PR (#4019)
Implements the feature request from issue #4019. You should be able to pass in your own `SummaryWriter`s to `Trainer` via the `tb_writer` parameter of the `__init__` function: ``` tb_writer = SummaryWriter(log_dir="my_log_dir") tb_writer.add_hparams(my_hparams_dict, my_metrics_dict) trainer = Trainer( model = model, args = training_args, train_dataset = train_dataset, tb_writer = tb_writer ) trainer.train() ```
04-27-2020 19:33:12
04-27-2020 19:33:12
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4020?src=pr&el=h1) Report > Merging [#4020](https://codecov.io/gh/huggingface/transformers/pull/4020?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `66.66%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4020/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4020?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4020 +/- ## ======================================= Coverage 78.44% 78.45% ======================================= Files 111 111 Lines 18518 18520 +2 ======================================= + Hits 14527 14529 +2 Misses 3991 3991 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4020?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4020/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.94% <66.66%> (+0.04%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4020/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.76% <0.00%> (ø)` | | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4020/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4020?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4020?src=pr&el=footer). Last update [4e817ff...62dbe2d](https://codecov.io/gh/huggingface/transformers/pull/4020?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Another consideration is that the user might want to add extra details to the log after the training has finished. Right now, the end of the training loop closes the `tb_writer`, preventing any further logging. Not sure what a reasonable workaround for this is, maybe add a param for the train function that if set to True, will close `tb_writer` and if False, will keep it open (default: True).<|||||>Looks good, thanks
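Regarding logging after training: one workaround (a sketch of mine, not part of the PR) is simply to open a fresh writer on the same `log_dir` once `train()` has closed the Trainer-owned one, since TensorBoard merges all event files found in a directory:

```python
from torch.utils.tensorboard import SummaryWriter

# After trainer.train() has finished and closed its writer,
# a new writer on the same directory appends additional events.
post_writer = SummaryWriter(log_dir="my_log_dir")
post_writer.add_text("notes", "extra details logged after training")
post_writer.close()
```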
transformers
4,019
closed
Pass existing tensorboard SummaryWriter to Trainer.
# Feature request The [`Trainer`](https://github.com/huggingface/transformers/blob/41750a6cff55e401364568868d619747de3db037/src/transformers/trainer.py#L102) class should let us pass our own `tb_writer` (currently, a new `tb_writer` is created using the `args.logging_dir` argument if it exists). ## Motivation This lets us have more detailed logs since we can add things like hyper-params, histograms, and whatnot if we choose to.
04-27-2020 19:29:12
04-27-2020 19:29:12
transformers
4,018
closed
String Format should be more pythonic
Instead of using `%s` and `%d`, I think that we should use f-strings or `str.format`.
04-27-2020 16:58:07
04-27-2020 16:58:07
Until very recently, we were supporting Python >= 3.5, which is why we were not using f-strings. We're slowly but surely replacing them now!
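For illustration (an example of mine, not from the issue), the three styles side by side:

```python
model_name, num_layers = "bert-base-cased", 12

print("Loading %s with %d layers" % (model_name, num_layers))      # printf-style
print("Loading {} with {} layers".format(model_name, num_layers))  # str.format
print(f"Loading {model_name} with {num_layers} layers")            # f-string, Python >= 3.6
```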
transformers
4,017
closed
TF version of the trainer
TensorFlow version of the PyTorch Trainer found [here](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py). It follows the same API, usage and internal method names as much as possible. An example can be found [here](https://github.com/jplu/transformers/blob/tf-trainer/examples/ner/run_tf_ner.py). The code contains some hacks, mostly for handling datasets; see for example the [`TFDataset`](https://github.com/jplu/transformers/blob/tf-trainer/examples/ner/utils_ner.py#L141) that tries to imitate the `Dataset` type from PyTorch.
04-27-2020 16:00:41
04-27-2020 16:00:41
Thanks a lot for these reviews :) I'm currently running the trainer over all the GLUE tasks and few NER datasets. I still have few bugs solved that will be pushed soon, that will include your reviews. I do have some logs in Tensorboard yes, I can share them but I suggest you do some test on your side as well to be sure it is ok.<|||||>@julien-c As asked here few Tensorboards: - Sequence Classification example with [GLUE mrpc](https://tensorboard.dev/experiment/xJt1nogwRDO1RkebX6Qspg/#scalars) - Regression example with [GLUE sts-b](https://tensorboard.dev/experiment/szdpv9BzTVq8euJKBZpuAQ/#scalars) - Token Classification example with [Germeval](https://tensorboard.dev/experiment/JMNV2qqbRRamq9EbPLo3Pg/#scalars) The final commit will arrive during the weekend 😄 <|||||>@julien-c I'm now ok with this PR, the trainer has been tested over all the GLUE tasks + Germeval + CoNLL2002 and 2003. There is stil a non blocking minor bug for regression tasks (a raised exception that cannot be catched each time the graph is computed) that I do not really understand, so I will wrap it up into a small piece of code and open an issue in the TF repo. Do you want to do a last check?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4017?src=pr&el=h1) Report > Merging [#4017](https://codecov.io/gh/huggingface/transformers/pull/4017?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1cdd2ad2afb73f6af185aafecb7dd7941a90c4d1&el=desc) will **decrease** coverage by `0.70%`. > The diff coverage is `34.36%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4017/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4017?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4017 +/- ## ========================================== - Coverage 78.85% 78.15% -0.71% ========================================== Files 114 117 +3 Lines 18688 18938 +250 ========================================== + Hits 14737 14801 +64 - Misses 3951 4137 +186 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4017?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `18.26% <18.26%> (ø)` | | | [src/transformers/training\_args\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `58.53% <58.53%> (ø)` | | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `79.24% <83.33%> (-0.04%)` | :arrow_down: | | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.08% <100.00%> (+0.02%)` | :arrow_up: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `41.87% <100.00%> (-2.03%)` | :arrow_down: | | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `100.00% <100.00%> (ø)` | | | 
[src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.61% <0.00%> (-0.17%)` | :arrow_down: | | ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/4017/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4017?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4017?src=pr&el=footer). Last update [1cdd2ad...456a024](https://codecov.io/gh/huggingface/transformers/pull/4017?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Awesome work @jplu! 💪💪💪
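To make the intended usage concrete, here is a rough sketch of the TF trainer API described above; class and argument names are assumptions based on this PR and may differ slightly from the merged version:

```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification, TFTrainer, TFTrainingArguments

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
texts, labels = ["great movie", "terrible movie"], [1, 0]
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="tf")

# the trainer expects an unbatched tf.data.Dataset of (features, label) pairs
train_dataset = tf.data.Dataset.from_tensor_slices((dict(enc), labels))

training_args = TFTrainingArguments(output_dir="./tf_out", num_train_epochs=1)
model = TFBertForSequenceClassification.from_pretrained("bert-base-cased")

trainer = TFTrainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```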
transformers
4,016
closed
Text Generation generated <|endoftext|>
The generated text includes a lot of repeated `<|endoftext|>` tokens. Model I am using: GPT2. Language I am using the model on: English. The problem arises when using: `python run_generation.py --model_type=gpt2 --model_name_or_path=gpt2-large --length=500 --num_return_sequences=1` I am using the latest transformers built from the GitHub source, with PyTorch. Also, is there any way to generate different text on each run?
04-27-2020 15:37:37
04-27-2020 15:37:37
Hi @chuanhhoang, Running the same command you did actually gives me quite decent results, which are different each time I run the script (`do_sample=True` in the script which means that the models samples every time differently). An output I got was: ``` F=== GENERATED SEQUENCE 1 === <|endoftext|>NOW REBOOTING ONCE EVERY 5 MINUTES Before posting a rant, feel free to read the rest of this article It's been on this strange path for a long time now: from an ostensibly well meaning project to an out of control piece of crap. When the whole thing started back in 2012, I started down the path I'm currently on with calling it Icarus. Being a fan of NGE, I felt like I should do something with the FEMINIST archive I'd be putting online, so I took the ideas I was getting from NGE and combined them into a short movie. When I was finished, I needed a way to get it on to some sort of an internet scene to spread the word, so I wrote it up as a 30-minute film. That became Icarus, basically. It's probably the easiest project I've done so far, the easiest to get some sales for, and the easiest to get friends involved with. Thing is, with IGE and all its tropes, you can already see the script is being shown at cons and getting more and more ridiculous with each iteration. But like, two days after I made the video, I was doing 25-hour and 30-hour runs at Cineplex in Halifax that I was writing the screenplay for. This isn't easy to do; we've all seen it, can recognize it, and just kind of go back to your normal routine for a little while. It's a marathon, but it's a marathon. For those who've read the script, there's a lot of filler story "I don't care about you", "You'll get what's coming to you", "Listen to this I don't care if you're scared", and more exposition like that. It's not pretty at all. Of course, theres always the fic awards, well, that guy. The one that gives the light at the end of the tunnel. It's fine; the fan base on here and elsewhere know this, so the fans are going to let the judges decide. But yeah, for now, I'm playing that game. Any other reviews are written and will be under embargo. I've removed most and all of my reviews because all I've done is label each review "a movie review". That ``` Note that we now also have generation pipeline, so it shoud be easy to play around with the parameters there. Also here is a blog that might be useful to see how you can influence the generation: https://huggingface.co/blog/how-to-generate<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
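For completeness, a small sketch of sampling-based generation (illustrative; the parameter values are just examples). Sampling gives a different continuation on every run, and `skip_special_tokens=True` drops the `<|endoftext|>` tokens from the decoded output:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("The weather today is", return_tensors="pt")
outputs = model.generate(
    input_ids,
    do_sample=True,   # sample instead of greedy/beam search
    max_length=100,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```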
transformers
4,015
closed
Fast Tokenizers: `batch_encode_plus` error
Hi, I just want report the following error using fast tokenizers for better tracking. The combination of: * `batch_encode_plus` method * `is_pretokenized=True` and * `return_tensors=True` returns the following error message: ```bash ~/.venvs/flair-2/lib/python3.8/site-packages/transformers/tokenization_utils.py in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, max_length, stride, truncation_strategy, pad_to_max_length, is_pretokenized, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_lengths, **kwargs) 2501 stack = tf.stack(stack, axis=0) 2502 elif return_tensors == "pt": -> 2503 stack = torch.stack(stack, dim=0) 2504 # elif not return_tensors and len(stack) == 1: 2505 # stack = stack[0] RuntimeError: stack expects each tensor to be equal size, but got [34] at entry 0 and [8] at entry 1 ``` The code for re-producing this error is: ```python from transformers import BertTokenizerFast model_name = "bert-base-cased" tokenizer = BertTokenizerFast.from_pretrained(model_name) sentences = ["Schloß Nymphenburg is a nice castle in Munich".split(), "Berlin and Munich are cool .".split()] output = tokenizer.batch_encode_plus(batch_text_or_text_pairs=sentences, is_pretokenized=True, return_tensors="pt", ) ``` Notice: This bug only occurs, when input sentences have different lenghts and the `return_tensors="pt"` is used! ---- Versions, that I've installed: * Latest Transformers version from `master`, 4e817ff41885063e08bb3bcd63e5adfd835b9911 * Tokenizers in version *0.7.0* * PyTorch *1.5.0*
04-27-2020 14:41:46
04-27-2020 14:41:46
This issue should be submitted in another repo. https://github.com/huggingface/tokenizers.
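Until the underlying issue is resolved, one workaround sketch (illustrative, not from the thread; whether it is sufficient may depend on the installed versions) is to pad the batch to a common length so the per-sentence tensors can be stacked:

```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
sentences = ["Schloß Nymphenburg is a nice castle in Munich".split(),
             "Berlin and Munich are cool .".split()]

output = tokenizer.batch_encode_plus(
    batch_text_or_text_pairs=sentences,
    is_pretokenized=True,
    pad_to_max_length=True,   # equal-length rows can be stacked into one tensor
    max_length=32,
    return_tensors="pt",
)
print(output["input_ids"].shape)
```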
transformers
4,014
closed
[Fix common tests on GPU] send model, ids to torch_device
04-27-2020 14:09:33
04-27-2020 14:09:33
~@patrickvonplaten any idea why tf tests dont run? https://app.circleci.com/pipelines/github/huggingface/transformers/5734/workflows/7986a344-3a5f-4f4f-a5e7-f8fd82ef7a2d/jobs/33381/steps seems like test_modeling_common.py only used by torch.~ resolved: had a `PretrainedConfig` type hint 😲 <|||||>Great, thanks for fixing this :-)
transformers
4,013
closed
test_lm_head_model_random_*_generate fail on GPU
04-27-2020 14:06:55
04-27-2020 14:06:55
transformers
4,012
closed
Attribute error while using run_language_modeling.py with Transformer-XL
python3 /home/username/.local/lib/python3.7/site-packages/transformers/transformers/examples/run_language_modeling.py \ --output_dir=_outpath_ --model_type=Transformer-XL\ --model_name_or_path=transfo-xl-wt103 \ --do_train \ --train_data_file=_pathtodataset_ --per_gpu_train_batch_size=1\ --gradient_accumulation_steps=30\ --train_epochs=50 Every time I run the above script I get the following error: Traceback (most recent call last): File "/content/Essay/run_language_modeling.py", line 284, in <module> main() File "/content/Essay/run_language_modeling.py", line 208, in main model.resize_token_embeddings(len(tokenizer)) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py", line 336, in resize_token_embeddings model_embeds = base_model._resize_token_embeddings(new_num_tokens) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py", line 351, in _resize_token_embeddings new_embeddings = self._get_resized_embeddings(old_embeddings, new_num_tokens) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py", line 372, in _get_resized_embeddings old_num_tokens, old_embedding_dim = old_embeddings.weight.size() File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 576, in __getattr__ type(self).__name__, name)) **AttributeError: 'AdaptiveEmbedding' object has no attribute 'weight'** | | | |----|---| |OS |Fedora 29/Google Colab| |Python|3.6| |Pytorch|1.4.0/1.5.0| The problem arises in each 4 cases with the XL model but not with other models.
04-27-2020 12:30:49
04-27-2020 12:30:49
Hey, sorry, it's been a while, but this was fixed in #4759 since.
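For anyone stuck on an older release, a hypothetical workaround sketch (not the official fix) is to skip the embedding resize for Transformer-XL, whose `AdaptiveEmbedding` has no single `weight` matrix to resize; this is only reasonable if no new tokens were added to the tokenizer:

```python
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("transfo-xl-wt103")
model = AutoModelWithLMHead.from_pretrained("transfo-xl-wt103")

# Guard the call that crashes in run_language_modeling.py for this architecture
if type(model).__name__ != "TransfoXLLMHeadModel":
    model.resize_token_embeddings(len(tokenizer))
```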
transformers
4,011
closed
Report minimum system requirements for each architecture.
Hi, It would be nice if you listed some minimum requirements for all of your models, such as the GPU memory size (or the number of GPUs) needed for a batch size of 1 (or n_gpu). This would make it really helpful for trying new stuff. For example, what are the requirements for re-training or generating with CTRL? Thanks in advance!
04-27-2020 11:22:35
04-27-2020 11:22:35
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
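In the meantime, a rough first-order estimate can be made from the parameter count (an illustrative sketch; actual training memory is several times larger once gradients, optimizer state and activations are included):

```python
from transformers import AutoModelWithLMHead

model = AutoModelWithLMHead.from_pretrained("gpt2")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters, ~{n_params * 4 / 1e9:.2f} GB just for fp32 weights")
```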
transformers
4,010
closed
fairseq-preprocess, Why are the number of rows in src and trg different?
# ❓ Questions & Help ## Details After I applied BPE with subword_nmt, fairseq-preprocess reports different line counts for src and trg. I checked the data after BPE: src and trg have the same number of lines, although trg contains dozens of blank lines. Even so, the gap between the src and trg counts reported here is much larger than the number of blank lines. ![image](https://user-images.githubusercontent.com/35246891/80357931-431ecb00-88ae-11ea-8cce-5fd4cb9b75f9.png) ![image](https://user-images.githubusercontent.com/35246891/80358019-60ec3000-88ae-11ea-9f0f-5ae6181292c3.png)
04-27-2020 09:42:16
04-27-2020 09:42:16
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>have you fixed this problem?
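A quick, tool-agnostic sanity check (an illustrative sketch; the file names are placeholders) is to count total and blank lines in both BPE'd files before running fairseq-preprocess, to see whether the mismatch is already present on disk:

```python
def count_lines(path):
    total, empty = 0, 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            total += 1
            if not line.strip():
                empty += 1
    return total, empty

for path in ("train.bpe.src", "train.bpe.trg"):  # placeholder file names
    print(path, count_lines(path))
```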
transformers
4,009
closed
Implemented lazy line-by-line text data set loading for LM example script
See PR #3388. Master changed substantially, requiring relocation of code into previously untouched files etc. Instead, here is a new PR using the same code but refactored to fit into the new, more modular structure of the scripts in `examples`.
04-27-2020 08:38:46
04-27-2020 08:38:46
Used this for training a model, worked great! Would love to see this integrated<|||||>@GCHQResearcher92457 @BramVanroy Does it work for you if we tweak the PR on your fork's branch so that we can remove the force_pad_token option and update a few things? PS: Sorry about the super long review time:)<|||||>> @GCHQResearcher92457 @BramVanroy Does it work for you if we tweak the PR on your fork's branch so that we can remove the force_pad_token option and update a few things? > > PS: Sorry about the super long review time:) Sure. I think the GPT thing was a bit of rabbit hole. I added the hacks with pad tokens because I thought I'd introduced a problem with lazy loading, without realising that the problem was in fact already there with line-by-line.<|||||>> @GCHQResearcher92457 @BramVanroy Does it work for you if we tweak the PR on your fork's branch so that we can remove the force_pad_token option and update a few things? > > PS: Sorry about the super long review time:) Yes, definitely seems lik a good way to go!<|||||>Hello everyone, I think this PR will be a huge addition to Transformers. Is there any plans to finish it soon? Thanks!<|||||>> Hello everyone, I think this PR will be a huge addition to Transformers. > Is there any plans to finish it soon? > Thanks! This is in the hands of @julien-c now, but I think he's on holiday at the moment.<|||||>Isn't this superseded by `huggingface/nlp` now? I'll let others chime in.<|||||>> Isn't this superseded by `huggingface/nlp` now? I'll let others chime in. Are all examples now fully using `nlp`? If so, then yes and this can be closed. But if the examples are still using the trainer/dataset of `transformers`, then this seems a separate issue.<|||||>I have no objection to merge this temporarily, if remarks from the comments are taken into accounts, merge conflicts handled and deprecated API (the data collator should implement `__call__` and `tokenizer.batch_encode_plus` should not be used, just the tokenizer `__call__`) replaced. That may be a lot of work for something that will eventually be handled by nlp though. 
Moving the examples to nlp is on my TODO for the near-future @BramVanroy, and I think @thomwolf is also planning on working on this.<|||||>When I try to run this code following the example [here ](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb#scrollTo=ltXgXyCbAJLY)I get the below error: ``` Traceback (most recent call last): File "bla.py", line 209, in <module> trainer.train() File "/home/edraff/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 492, in train for step, inputs in enumerate(epoch_iterator): File "/home/edraff/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1107, in __iter__ for obj in iterable: File "/home/edraff/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__ data = self._next_data() File "/home/edraff/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/edraff/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch return self.collate_fn(data) File "/home/edraff/anaconda3/lib/python3.7/site-packages/transformers/data/data_collator.py", line 83, in __call__ inputs, labels = self.mask_tokens(batch) File "/home/edraff/anaconda3/lib/python3.7/site-packages/transformers/data/data_collator.py", line 113, in mask_tokens labels = inputs.clone() AttributeError: 'tuple' object has no attribute 'clone' Epoch: 0%| | 0/1 [00:23<?, ?it/s]Iteration: 0%| | 0/976243 [00:23<?, ?it/s] ``` <|||||>> When I try to run this code following the example [here ](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb#scrollTo=ltXgXyCbAJLY)I get the below error: > > ``` > Traceback (most recent call last): > File "bla.py", line 209, in <module> > trainer.train() > File "/home/edraff/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 492, in train > for step, inputs in enumerate(epoch_iterator): > File "/home/edraff/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1107, in __iter__ > for obj in iterable: > File "/home/edraff/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__ > data = self._next_data() > File "/home/edraff/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data > data = self._dataset_fetcher.fetch(index) # may raise StopIteration > File "/home/edraff/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch > return self.collate_fn(data) > File "/home/edraff/anaconda3/lib/python3.7/site-packages/transformers/data/data_collator.py", line 83, in __call__ > inputs, labels = self.mask_tokens(batch) > File "/home/edraff/anaconda3/lib/python3.7/site-packages/transformers/data/data_collator.py", line 113, in mask_tokens > labels = inputs.clone() > AttributeError: 'tuple' object has no attribute 'clone' > Epoch: 0%| | 0/1 [00:23<?, ?it/s]Iteration: 0%| | 0/976243 [00:23<?, ?it/s] > ``` Not sure but I think this PR hasn't been updated to reflect recent changes.<|||||>Hi @GCHQResearcher92457 , Thanks for your great work. I am trying to use this lazy loading pre-training script to train a RoBERTa from scratch. I tested it many times. It works well when the training data less than 100 million lines. But the script is always killed at `linecache.getline(...)`, if my training set is more than 100M lines (e.g., 1 billion). 
Error is: ``` died with <Signals.SIGKILL: 9>. ``` I checked my CPU and GPU usage, they are not full. I also changed size of `_get_n_lines(...)` function and the batch size. But it still doesn't work. I don't believe this is out of memory issues. I cloned your transformers repo and use the branch `lazy-text-dataset-loading-for-lm` to install transformers library. Could you please give me any idea about this problem? Thanks, Chiyu More info: Python: 3.6.8 Torch Version: 1.4.0 tensorflow Version: 2.3.0 I am also using distributed training to run the model. <|||||>@chiyuzhang94 You probably have a program killer running. This is a background process that monitors the memory usage of the individual processes. If the system is about to run out of memory, it will kill the abusive process. My hunch is that Colab uses something similar. The high memory usage occurs because linecache reads as much of the file into memory as it can, to have the optimal experience. Not all OS's seem to like this - although I have not have had any issues with this approach on my systems. Here's a good article: https://dev.to/rrampage/surviving-the-linux-oom-killer-2ki9<|||||>> @chiyuzhang94 You probably have a program killer running. This is a background process that monitors the memory usage of the individual processes. If the system is about to run out of memory, it will kill the abusive process. My hunch is that Colab uses something similar. > > The high memory usage occurs because linecache reads as much of the file into memory as it can, to have the optimal experience. Not all OS's seem to like this - although I have not have had any issues with this approach on my systems. > > Here's a good article: https://dev.to/rrampage/surviving-the-linux-oom-killer-2ki9 Thanks, @BramVanroy. I think it is hard for me to change the `oom_score_adj` because I need to submit a job to PBS job to run model. I am wondering whether I can control the size of files that linecache reads. I think the size in function `def _get_n_lines(fin, size=65536):` is the controller. But it still doesn't work if I decrease the size. <|||||>@chiyuzhang94 No, that function is not related to the caching. It is a function that very quickly can read through files to figure out how many lines there are in that file. The size is the chunks in bytes to read sequentially, which is much faster than reading line-per-line. But again, nothing to do with caching. One option that I can think of, is allowing for an argument `max_memory_usage`, that will check at every `__getitem__` call the current memory usage (either system memory usage or current script memory usage), and if the memory usage is more than `max_memory_usage` the script should call `linecache.clearcache()`. This will be _slow_ when you have little memory or a low max value, but it should work.<|||||>> @chiyuzhang94 No, that function is not related to the caching. It is a function that very quickly can read through files to figure out how many lines there are in that file. The size is the chunks in bytes to read sequentially, which is much faster than reading line-per-line. But again, nothing to do with caching. > > One option that I can think of, is allowing for an argument `max_memory_usage`, that will check at every `__getitem__` call the current memory usage (either system memory usage or current script memory usage), and if the memory usage is more than `max_memory_usage` the script should call `linecache.clearcache()`. 
This will be _slow_ when you have little memory or a low max value, but it should work. Thanks, @BramVanroy , I tried your suggestion: ``` def __getitem__(self, idx): # Basic Memory checking from https://stackoverflow.com/a/48397534 with open ('/proc/self/status') as f: memusage = f.read().split('VmRSS:')[1].split('\n')[0][:-3] logger.info(" memusage each time: %s", memusage) # If our memory usage exceeds a limit flush the cache to prevent OOM situations if int(memusage.strip()) > self.max_memory_usage and self.max_memory_usage > 0: logger.info(" memusage before: %s", memusage) linecache.clearcache() logger.info(" memusage after: %s", memusage) # linecache starts counting from one, not zero, +1 the given index return linecache.getline(self.file_path, idx + 1).rstrip() ``` But I found the linecache.clearcache() doesn't help based on the log. ``` Iteration: 0%| | 0/1097530 [00:00<?, ?it/s]I0826 18:51:17.077926 47405170347712 I0826 18:51:17.080428 47405170347712 ARC_run_language_modeling_emohash.py:166] memusage before: 38945572 I0826 18:51:17.081127 47405170347712 ARC_run_language_modeling_emohash.py:169] memusage after: 38945572 I0826 18:51:27.666305 47348526792384 ARC_run_language_modeling_emohash.py:162] memusage each time: 39182488 I0826 18:51:27.670411 47348526792384 ARC_run_language_modeling_emohash.py:166] memusage before: 39182488 I0826 18:51:27.670989 47348526792384 ARC_run_language_modeling_emohash.py:169] memusage after: 39182488 I0826 18:51:43.620446 47109816241856 ARC_run_language_modeling_emohash.py:162] memusage each time: 39184224 I0826 18:51:43.620970 47109816241856 ARC_run_language_modeling_emohash.py:166] memusage before: 39184224 I0826 18:51:43.621682 47109816241856 ARC_run_language_modeling_emohash.py:169] memusage after: 39184224 I0826 18:51:49.295235 47667525713600 ARC_run_language_modeling_emohash.py:162] memusage each time: 38993432 I0826 18:51:49.295728 47667525713600 ARC_run_language_modeling_emohash.py:166] memusage before: 38993432 I0826 18:51:49.296677 47667525713600 ARC_run_language_modeling_emohash.py:169] memusage after: 38993432 ``` Then, the job was killed. I noticed I am using distributed training where each node has 4 GPUs. Since each of the 4 python threads eventually reads the entire file (90GB) into memory the dataset would take up over 360GB per node if they fully loaded the dataset. But each node only have 186GB RAM. Do you have any suggestion to limit the caching size?<|||||>any progress? @GCHQResearcher92457 <|||||>Hi @BramVanroy @GCHQResearcher92457 , I found a point that might be causing memory issues in the code (https://github.com/GCHQResearcher92457/transformers/blob/lazy-text-dataset-loading-for-lm/examples/run_language_modeling.py). In the main function, the rank 1-3 threads will all stop at the barrier at line 770 and rank 0 will progress and load the model and vocab it will then hit line 825 and release the barrier. Once the barrier is released threads 1-3 will process the lines 770-825 (load model in the main function). Same for line 832-837 (load dataset). I have four GPUs at each node. Hence, the rank 1-3 load the model and dataset from disk individually instead of using a copy from rank 0. This leads to the OOM issue. I think the rank 1-3 threads should not run the line 832-837 again once the barrier released. But I added some log found: When a process hits a barrier is simply waits at that spot in the code until all other processes have hit a barrier. 
Then when it releases it continues from the point it is within the code, not jumping to the latest barrier. I tried to add an if condition at line 770. This only allows rank 0 to load the model. But I got a new error. That shows the variables are not synchronized across devices. Rank 1-3 cannot get variable `model`. Did you notice this issue? Do you have any suggestions? <|||||>@chiyuzhang94 I am not sure why the memory is not clearing after using clearcache. It might be that you still have to call the garbage collector after clearing the cache, you can try that. It is true that I had not thought about multinode support so you will indeed have multiple in-memory caches for each process. I do not think it is easy to by-pass that, unless by turning around all of the code and instead working with a dedicated reading process, which is a separate process that fetches lines from the data file. As has been said before, though, it is now recommended to switch over to https://github.com/huggingface/nlp which allows for on-disk datasets which are fast and have a low memory footprint.<|||||>@BramVanroy (For the sake of discussion) Wouldn't it be reasonably easy to enable (non-cached) random access to the text file(s) by storing a list of the positions of `"\n"` and then doing `fseek`s on the fly (ideally, using a sampler that yields batches of sequential lines, so that one batch needs only one file read)? <|||||>> @BramVanroy (For the sake of discussion) > > Wouldn't it be reasonably easy to enable (non-cached) random access to the text file(s) by storing a list of the positions of `"\n"` and then doing `fseek`s on the fly (ideally, using a sampler that yields batches of sequential lines, so that one batch needs only one file read)? Shouldn't be too hard to implement indeed, although my fear is that this might not be fast enough from an IO perspective. That is perhaps the trade-off that one would want to make, though, so it might be worth it. You'd still need to make sure that all data is actually used, so in a shuffle setting this might not be straightforward if you want batches of consistent size. Perhaps depending on the number of lines, you can create a list of indexes that have `batch_size` distance between them (e.g. 0, 64, 128, 256), and then shuffle those indexes and at each iteration select one randomly that has not been seen yet. Then select `batch_size` lines starting from that index. That, in combination with your suggestion of getting the positions of \n should work indeed! I am not sure whether I want to put time into this, though, seeing that `nlp` is the preferred way to go.<|||||>> @chiyuzhang94 I am not sure why the memory is not clearing after using clearcache. It might be that you still have to call the garbage collector after clearing the cache, you can try that. > > It is true that I had not thought about multinode support so you will indeed have multiple in-memory caches for each process. I do not think it is easy to by-pass that, unless by turning around all of the code and instead working with a dedicated reading process, which is a separate process that fetches lines from the data file. > > As has been said before, though, it is now recommended to switch over to https://github.com/huggingface/nlp which allows for on-disk datasets which are fast and have a low memory footprint. Hi @BramVanroy , Thanks for your suggestion. I looked at the `nlp` tool. I didn't find an example of loading a text file for LM pre-training. 
I adapted the dataset loading class like this: ``` class DatasetNLP(Dataset): def __init__(self, filename, cache_dir, args): self.dataset = load_dataset('text', data_files= filename, cache_dir=cache_dir)["train"]["text"] def __len__(self): return len(self.dataset) def __getitem__(self, index): line = self.dataset[index] return line ``` I am wondering whether this is the optimal way to use `nlp` with PyTorch dataloader. <|||||>I used that approach in my way to train a LM (RoBERTa like) from scratch. I didn't modified the dataloader. It works for some iterations but it ends sooner than later with kind of CUBLAS ERROR<|||||>I'll start updating the examples to use the datasets library as soon as our new `nlp` release is out (probably today). Your example @chiyuzhang94 is ok but by doing `self.dataset = load_dataset('text', data_files= filename, cache_dir=cache_dir)["train"]["text"]` you are loading all the dataset in RAM which is too bad because nlp can do memory mapping from drive. You can directly use the dataset in a data loader by using `set_format(type='torch')`. More information is here: https://huggingface.co/nlp/master/quicktour.html#formatting-the-dataset<|||||>> > I'll start updating the examples to use the datasets library as soon as our new `nlp` release is out (probably today). > > Your example @chiyuzhang94 is ok but by doing `self.dataset = load_dataset('text', data_files= filename, cache_dir=cache_dir)["train"]["text"]` you are loading all the dataset in RAM which is too bad because nlp can do memory mapping from drive. > > You can directly use the dataset in a data loader by using `set_format(type='torch')`. More information is here: https://huggingface.co/nlp/master/quicktour.html#formatting-the-dataset > > Hi, I was wondering is it possible to finish the lazydataloader today? > I am a little bit eager for this function. > I would really appreciate your help. Thanks! No, that is not possible. You cannot expect a company to open-source a great product and at the same time implementing features within the day. As said numerous times in this topic, try out the `nlp` repository instead. It will help you out with any memory issues that you might have.<|||||>> I'll start updating the examples to use the datasets library as soon as our new `nlp` release is out (probably today). > > Your example @chiyuzhang94 is ok but by doing `self.dataset = load_dataset('text', data_files= filename, cache_dir=cache_dir)["train"]["text"]` you are loading all the dataset in RAM which is too bad because nlp can do memory mapping from drive. > > You can directly use the dataset in a data loader by using `set_format(type='torch')`. More information is here: https://huggingface.co/nlp/master/quicktour.html#formatting-the-dataset Hi @thomwolf , Thanks for your suggestion. I tried to implement this to load my text file. This `test.txt` is a simple sample where each line is a sentence. 
``` dataset = load_dataset('text', data_files='test.txt',cache_dir="./") dataset.set_format(type='torch',columns=["text"]) dataloader = torch.utils.data.DataLoader(dataset, batch_size=8) next(iter(dataloader)) ``` But dataload cannot yield sample and error is: ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-28-388aca337e2f> in <module> ----> 1 next(iter(dataloader)) /Library/Python/3.7/site-packages/torch/utils/data/dataloader.py in __next__(self) 343 344 def __next__(self): --> 345 data = self._next_data() 346 self._num_yielded += 1 347 if self._dataset_kind == _DatasetKind.Iterable and \ /Library/Python/3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self) 383 def _next_data(self): 384 index = self._next_index() # may raise StopIteration --> 385 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 386 if self._pin_memory: 387 data = _utils.pin_memory.pin_memory(data) /Library/Python/3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] /Library/Python/3.7/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] KeyError: 0 ``` `dataset.set_format(type='torch',columns=["text"])` returns a log says: `Set __getitem__(key) output type to torch for ['text'] columns (when key is int or slice) and don't output other (un-formatted) columns.` I noticed the `dataset` is `DatasetDict({'train': Dataset(features: {'text': Value(dtype='string', id=None)}, num_rows: 44)})`. Each sample can be accessed by `dataset["train"]["text"]`. I don't know how to modify this code to load the text file. Could you please give me any suggestions? <|||||>@chiyuzhang94 Can you please ask your question either [on the forums](https://discuss.huggingface.co/) or [on the respective repository](https://github.com/huggingface/nlp)? Your question is not a `transformers` question anymore, nor should PRs be used for general questions like this.<|||||>> @chiyuzhang94 Can you please ask your question either [on the forums](https://discuss.huggingface.co/) or [on the respective repository](https://github.com/huggingface/nlp)? Your question is not a `transformers` question anymore, nor should PRs be used for general questions like this. Sure. Thanks for your investigation. I posted this question here: https://github.com/huggingface/datasets/issues/610#issue-698349388. @BramVanroy @thomwolf <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
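As a concrete follow-up to the byte-offset idea discussed earlier in this thread (index every line's start position once, then seek to it on demand instead of caching lines in memory), here is a minimal illustrative sketch, not part of this PR:

```python
from torch.utils.data import Dataset


class SeekLineDataset(Dataset):
    """Lazy line access via precomputed byte offsets instead of linecache."""

    def __init__(self, file_path):
        self.file_path = file_path
        self.offsets, offset = [], 0
        with open(file_path, "rb") as f:       # binary mode keeps offsets exact
            for line in f:
                self.offsets.append(offset)
                offset += len(line)

    def __len__(self):
        return len(self.offsets)

    def __getitem__(self, idx):
        with open(self.file_path, "rb") as f:  # cheap reopen; could be cached per worker
            f.seek(self.offsets[idx])
            return f.readline().decode("utf-8").rstrip("\n")
```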
transformers
4,008
closed
camembert-base-fquad
Model card for illuin release of camembert-base-fquad
04-27-2020 08:34:53
04-27-2020 08:34:53
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4008?src=pr&el=h1) Report > Merging [#4008](https://codecov.io/gh/huggingface/transformers/pull/4008?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **decrease** coverage by `0.96%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4008/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4008?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4008 +/- ## ========================================== - Coverage 78.44% 77.48% -0.97% ========================================== Files 111 111 Lines 18518 18518 ========================================== - Hits 14527 14348 -179 - Misses 3991 4170 +179 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4008?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4008/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4008/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0.00%> (-10.00%)` | :arrow_down: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4008/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `95.19% <0.00%> (-2.63%)` | :arrow_down: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4008/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.44% <0.00%> (-2.30%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4008/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `90.95% <0.00%> (-1.81%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4008/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.41% <0.00%> (-1.38%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4008/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.20% <0.00%> (-0.70%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4008?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4008?src=pr&el=footer). Last update [4e817ff...7012a06](https://codecov.io/gh/huggingface/transformers/pull/4008?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>That's awesome. Did you share this with the camembert team (@louismartin @benjamin-mlr et al)?<|||||>Thanks for the ping @julien-c, yes they shared their awesome work with us! And thanks @mdhoffschmidt for releasing your model :) <|||||>Great ! Thanks @julien-c and @louismartin :)
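For reference, a hedged usage sketch: assuming the model was uploaded under the `illuin/camembert-base-fquad` identifier (adjust the name to the actual upload), it can be queried through the question-answering pipeline:

```python
from transformers import pipeline

nlp = pipeline(
    "question-answering",
    model="illuin/camembert-base-fquad",      # assumed model id
    tokenizer="illuin/camembert-base-fquad",
)
print(nlp(
    question="Où est situé le château ?",
    context="Le château de Nymphenburg est situé à Munich, en Bavière.",
))
```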
transformers
4,007
closed
Create model card
04-27-2020 08:27:18
04-27-2020 08:27:18
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4007?src=pr&el=h1) Report > Merging [#4007](https://codecov.io/gh/huggingface/transformers/pull/4007?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4007/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4007?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4007 +/- ## ========================================== - Coverage 78.44% 78.44% -0.01% ========================================== Files 111 111 Lines 18518 18518 ========================================== - Hits 14527 14526 -1 - Misses 3991 3992 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4007?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4007/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.49% <0.00%> (-0.37%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4007?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4007?src=pr&el=footer). Last update [4e817ff...0bb517d](https://codecov.io/gh/huggingface/transformers/pull/4007?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,006
closed
Create model card
04-27-2020 08:15:28
04-27-2020 08:15:28
Nice! [model page](https://huggingface.co/mrm8488/bert-small-finetuned-typo-detection)<|||||>I plan to add a Colab to show the whole process: download, preprocess, train, eval and upload to HF
transformers
4,005
closed
Fast Tokenizers do not work when `return_offsets_mapping=True, return_tensors="pt"`
# 🐛 Bug ## Information When I try the following code: ```python from transformers import BertTokenizerFast fast = BertTokenizerFast.from_pretrained("bert-base-cased") fast.encode_plus("Hello I am tokenizing", return_offsets_mapping=True) ``` It works as intended. However, if I try: ```python from transformers import BertTokenizerFast fast = BertTokenizerFast.from_pretrained("bert-base-cased") fast.encode_plus("Hello I am tokenizing", return_offsets_mapping=True, return_tensors="pt") ``` It throws the following error: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-2-c6ee501a801c> in <module> 1 from transformers import BertTokenizer, BertTokenizerFast 2 fast = BertTokenizerFast.from_pretrained("bert-base-cased") ----> 3 fast.encode_plus("Hello I am tokenizing", return_offsets_mapping=True, return_tensors="pt") ~/.conda/envs/bertology/lib/python3.7/site-packages/transformers/tokenization_utils.py in encode_plus(self, text, text_pair, add_special_tokens, max_length, pad_to_max_length, stride, truncation_strategy, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, **kwargs) 1972 return_offsets_mapping=return_offsets_mapping, 1973 pad_to_max_length=pad_to_max_length, -> 1974 **kwargs, 1975 ) 1976 ~/.conda/envs/bertology/lib/python3.7/site-packages/transformers/tokenization_utils.py in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, max_length, stride, truncation_strategy, pad_to_max_length, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, **kwargs) 1926 stack = tf.stack(stack, axis=0) 1927 elif return_tensors == "pt": -> 1928 stack = torch.stack(stack, dim=0) 1929 elif not return_tensors and len(stack) == 1: 1930 stack = stack[0] TypeError: expected Tensor as element 0 in argument 0, but got list ``` ## Environment info - `transformers` version: 2.8.0 - Platform: Linux-4.4.0-174-generic-x86_64-with-debian-stretch-sid - Python version: 3.7.6 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): not installed (NA)
04-27-2020 07:43:38
04-27-2020 07:43:38
@GuillemGSubies I think this works with the latest `master` version of Transformers and using `tokenizers` version *0.7.0*: ```python from transformers import BertTokenizerFast fast = BertTokenizerFast.from_pretrained("bert-base-cased") fast.encode_plus("Hello I am tokenizing", return_offsets_mapping=True, return_tensors="pt") ``` Outputs: ```bash {'input_ids': tensor([[ 101, 8667, 146, 1821, 22559, 4404, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1]]), 'offset_mapping': tensor([[[ 0, 0], [ 0, 5], [ 6, 7], [ 8, 10], [11, 16], [16, 21], [ 0, 0]]])} ```<|||||>Oh, I should have tried in master before opening the issue. I will wait for the next release then, thank you.
transformers
4,004
closed
ALBERT with Masked Language Model Input Processing from SavedModel
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): ALBERT Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) ``` input_ids=tf.convert_to_tensor(input_ids,name='input_ids',dtype=tf.int32) attention_mask=tf.convert_to_tensor(attention_mask,name='attention_mask',dtype=tf.int32) position_ids=tf.convert_to_tensor(position,name='position_ids',dtype=tf.int32) token_type_ids=tf.convert_to_tensor(type_emb,name='token_type_ids',dtype=tf.int32) inputs={'input_ids': input_ids, 'position_ids': position_ids, 'token_type_ids': token_type_ids, 'attention_mask': attention_mask} print(inputs) outputs=model(inputs,training=False) ``` The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Set the inputs 2. Run the above script <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ``` ValueError: Could not find matching function to call loaded from the SavedModel. Got: Positional arguments (1 total): * {'input_ids': <tf.Tensor 'inputs_1:0' shape=(512,) dtype=int32>, 'position_ids': <tf.Tensor 'inputs_2:0' shape=(512,) dtype=int32>, 'token_type_ids': <tf.Tensor 'inputs_3:0' shape=(512,) dtype=int32>, 'attention_mask': <tf.Tensor 'inputs:0' shape=(512,) dtype=int32>} Keyword arguments: {'training': False} Expected these arguments to match one of the following 4 option(s): Option 1: Positional arguments (1 total): * {'position_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='position_ids'), 'token_type_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='token_type_ids'), 'attention_mask': TensorSpec(shape=(None, 512), dtype=tf.int32, name='attention_mask'), 'input_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='input_ids')} Keyword arguments: {'training': True} Option 2: Positional arguments (1 total): * {'position_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='inputs/position_ids'), 'token_type_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='inputs/token_type_ids'), 'attention_mask': TensorSpec(shape=(None, 512), dtype=tf.int32, name='inputs/attention_mask'), 'input_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='inputs/input_ids')} Keyword arguments: {'training': True} Option 3: Positional arguments (1 total): * {'input_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='input_ids'), 'position_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='position_ids'), 'token_type_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='token_type_ids'), 'attention_mask': TensorSpec(shape=(None, 512), dtype=tf.int32, name='attention_mask')} Keyword arguments: {'training': False} Option 4: Positional arguments (1 total): * {'position_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='inputs/position_ids'), 'token_type_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='inputs/token_type_ids'), 'attention_mask': TensorSpec(shape=(None, 512), dtype=tf.int32, name='inputs/attention_mask'), 'input_ids': TensorSpec(shape=(None, 
512), dtype=tf.int32, name='inputs/input_ids')} Keyword arguments: {'training': False} ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The model makes prediction on the inputs ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.8.0 - Platform: Ubuntu 16 - Python version: 3.7.7 - Tensorflow version (GPU?): 2.1.0 GPU - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
04-27-2020 06:02:11
04-27-2020 06:02:11
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi! I have a similar problem too. Have you fixed it? If so, how?
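Note on the error above: the SavedModel signatures in the traceback expect a leading batch dimension (`(None, 512)`), while the script passes rank-1 tensors of shape `(512,)`. A minimal sketch of one possible fix (an assumption based only on the traceback, not a confirmed resolution from the thread) is to add a batch axis before calling the loaded model:

```python
import tensorflow as tf

# input_ids, position_ids, token_type_ids, attention_mask are the rank-1
# tensors of length 512 built in the script from the report above
inputs = {
    "input_ids": tf.expand_dims(input_ids, 0),          # (512,) -> (1, 512)
    "position_ids": tf.expand_dims(position_ids, 0),
    "token_type_ids": tf.expand_dims(token_type_ids, 0),
    "attention_mask": tf.expand_dims(attention_mask, 0),
}
outputs = model(inputs, training=False)
```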
transformers
4,003
closed
Maybe it is a bug for Roberta vocab file.
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Roberta Language I am using the model on (English, Chinese ...): English The problem arises when using: * [1 ] the official example scripts: (give details below) * [1] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [1 ] my own task or dataset: (give details below) Hi Huggingface team, When I use Roberta, I found that there are many words that start with "Ġ" in the vocab file you provided. I just want to make sure whether it is a bug or not. Around 40,771 words start with "Ġ". Roberta-base vocab file: https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-vocab.json Roberta-large vocab file: https://s3.amazonaws.com/models.huggingface.co/bert/roberta-large-vocab.json Best Regards, Chen
04-27-2020 05:21:41
04-27-2020 05:21:41
Hi @czheng17, that's because RoBERTa uses a byte-level BPE (like GPT-2), see section 4.4 in the original paper. A good explanation can be found here: https://github.com/pytorch/fairseq/issues/1716#issuecomment-588750983 :)<|||||>Hi @stefan-it , Interesting! Thank you for your help! Best,
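For illustration, a short sketch of the byte-level BPE behavior mentioned above: "Ġ" simply marks that a token is preceded by a space, so it disappears when tokens are converted back to text. The outputs shown in the comments are the expected results for `roberta-base` and may differ slightly across versions.

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

# 'Ġ' is the byte-level BPE marker for a preceding space, not part of the word
print(tokenizer.tokenize("Hello world"))                        # ['Hello', 'Ġworld']
print(tokenizer.convert_tokens_to_string(["Hello", "Ġworld"]))  # 'Hello world'
```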
transformers
4,002
closed
🌟 Transformer Lite
# 🌟 New model addition ## Model description Part of the abstract : > In this paper, we present an efficient mobile NLP architecture, Lite Transformer to facilitate deploying mobile NLP applications on edge devices. [...] brings consistent improvement over the vanilla transformer on three well-established language tasks: machine translation, abstractive summarization, and language modeling. [...] Notably, Lite Transformer outperforms the AutoML-based Evolved Transformer by 0.5 higher BLEU for the mobile NLP setting without the costly architecture search that requires more than 250 GPU years ## Open source status * [x] the model implementation is available: [Repository](https://github.com/mit-han-lab/lite-transformer) * [x] the model weights are available: [In the README](https://github.com/mit-han-lab/lite-transformer#models) * [x] who are the authors: @Michaelvll
04-27-2020 04:17:44
04-27-2020 04:17:44
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,001
closed
TF BART ?
# 🌟 New model addition ## Model description BART is currently only available in PyTorch. **Are you planning to release a TF version?**
04-27-2020 04:11:51
04-27-2020 04:11:51
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>unstale
transformers
4,000
closed
Adding --do_lower_case option
In SQuAD 1.1 with the uncased BERT model, the **--do_lower_case** option must be used when initializing the tokenizer. Omitting this optional argument lowers performance on the development set by around 8-9 points in both EM and F1 (in the official setting described in the readme, we confirmed actual metrics of 72.09 EM / 81.84 F1).
04-27-2020 03:49:21
04-27-2020 03:49:21
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4000?src=pr&el=h1) Report > Merging [#4000](https://codecov.io/gh/huggingface/transformers/pull/4000?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4000/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4000?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4000 +/- ## ========================================== - Coverage 78.44% 78.44% -0.01% ========================================== Files 111 111 Lines 18518 18518 ========================================== - Hits 14527 14526 -1 - Misses 3991 3992 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4000?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4000/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.49% <0.00%> (-0.37%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4000/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.59% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4000/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4000?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4000?src=pr&el=footer). Last update [4e817ff...23041d2](https://codecov.io/gh/huggingface/transformers/pull/4000?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>There's no option to do `--do_lower_case` in the scripts anymore, the issue is that this particular tokenizer should have a `tokenizer_config.json` that always applies lowercasing.<|||||>The option `--do_lower_case` seems to be available in the [run_squad.py](https://github.com/huggingface/transformers/blob/master/examples/run_squad.py#L579-L581), currently (but not applied in the official readme). It's my understanding that `--do_lower_case` option should be applied, in order to reproduce the official results described in readme (only uncased models' examples). Or do you mean that `--do_lower_case` should not be used in the current (or future) version?<|||||>Yes, it shouldn’t be used in current or future versions of the scripts (because it’s handled in tokenizer configuration)<|||||>In your case, you should apply it manually for now, while we fix the root issue<|||||>Thank you for your answers! I understand the issue and its solution, and so I will be closing this pull request.
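As a rough illustration of the manual workaround mentioned above (an assumed usage pattern, not an official recipe): the `--do_lower_case` flag can be kept on the `run_squad.py` command line for now, and the same effect can be forced when instantiating the tokenizer directly, until the uncased checkpoint's `tokenizer_config.json` handles it automatically:

```python
from transformers import BertTokenizer

# pass do_lower_case explicitly for the uncased checkpoint
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)
```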
transformers
3,999
closed
How do I convert T5 checkpoint to pytorch bin to utilize fine-tuned model from finetune.py?
# ❓ Questions & Help I've recently tried to fine-tune T5 using bart's [finetune.py](https://github.com/huggingface/transformers/blob/master/examples/summarization/bart/finetune.py) (supports T5 fine-tuning as well). The model is saved in a `ckpt` format when I wanted it as a pytorch `bin`, so I found this [page](https://huggingface.co/transformers/converting_tensorflow_models.html) in the docs on how to convert `ckpt` to `bin`. However, I don't see the T5 model in this page. Is it possible to use the CLI to convert T5 models? I also have questions around fine-tuning T5 but will discuss details in a separate issue.
04-27-2020 01:10:39
04-27-2020 01:10:39
Sorry to answer this late - can you try the following: ```python model.from_pretrained(<path_to_model.ckpt>, from_tf=True) #even though you don't use TF model.save_pretrained("./") # should be in .bin format ``` If this does not work try the following: ```python parser = argparse.ArgumentParser() add_generic_args(parser, os.getcwd()) parser = SummarizationTrainer.add_model_specific_args(parser, os.getcwd()) args = parser.parse_args() model = SummarizationTrainer(args) model = model.load_from_checkpoint(<path_to_your_checkpoint>) model.model.save_pretrained("./") # should be in bin format ``` Please do let me know if this works.<|||||>Actually someone else found a much better solution than I did :-) See: https://github.com/huggingface/transformers/issues/4144<|||||>Awesome this worked! Thanks so much @patrickvonplaten
transformers
3,998
closed
Question about the output linear weight
I notice that the output linear weight for BertForMaskedLM is the same as the input embedding. I didn't see anywhere in the original paper that the output projection is tied to the word embedding. Can you please explain the design choice here? To reproduce: ```python import transformers m = transformers.BertForMaskedLM.from_pretrained('bert-base-uncased') m.get_input_embeddings().weight - m.get_output_embeddings().weight ```
04-27-2020 00:14:24
04-27-2020 00:14:24
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
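Although the question above went unanswered in the thread, the behavior it describes is weight tying: the LM head's output projection reuses the input embedding matrix, which is both what the original BERT implementation does and what `transformers` does by default. A small check along the lines of the snippet in the question (a sketch, assuming the default configuration):

```python
import torch
import transformers

m = transformers.BertForMaskedLM.from_pretrained('bert-base-uncased')

# tied weights: the two tensors hold identical values, so the difference is zero
print(torch.equal(m.get_input_embeddings().weight, m.get_output_embeddings().weight))  # True
```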
transformers
3,997
closed
Toy size models versions for faster experimentation and testing
# 🚀 Feature request To have small (toy) versions of models, in terms of number of parameters, e.g.: - `gpt2-toy` - `bert-toy-uncased` etc. Alternative: to have a separate model type `toy-transformer`. ## Motivation While working on a project using the transformers package, our team noticed that it is difficult for team members with machines without a GPU to experiment with code that uses the GPT-2 model and to run tests for it. It also required significant resources on the CI environment to run tests. We ended up implementing a custom configuration for the GPT-2 model, with a resulting model size of ~10M parameters, for local experimentation and testing. I think it might be beneficial for the community to have tiny model versions in the package that lower the hardware requirements needed to start working and to run tests. If it is unreasonable to implement a tiny version of every model, having a separate model type that implements the same interfaces and has the corresponding heads would also help reach the same objective. ## Your contribution If either of the options sounds reasonable, I can participate in the implementation.
04-26-2020 23:35:53
04-26-2020 23:35:53
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
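For reference, one way to approximate the ~10M-parameter GPT-2 described above without a dedicated `gpt2-toy` checkpoint (a sketch with hyperparameters chosen purely for illustration, not an official model) is to instantiate a randomly initialized model from a small config:

```python
from transformers import GPT2Config, GPT2LMHeadModel

# toy-sized hyperparameters, picked for illustration only
config = GPT2Config(n_layer=2, n_head=4, n_embd=128)
model = GPT2LMHeadModel(config)  # randomly initialized, no pretrained weights

# roughly a few million parameters, most of them in the vocabulary embedding
print(sum(p.numel() for p in model.parameters()))
```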
transformers
3,996
closed
[DialoGPT] add dialogpt training tips
04-26-2020 22:05:22
04-26-2020 22:05:22
transformers
3,995
closed
[model_cards] Add model card for Hindi-Bert
Trained with Google ELECTRA and a Hindi corpus (OSCAR CommonCrawl and the latest Hindi Wikipedia). The README offers more details and links to notebooks showing pretraining and fine-tuning on movie reviews.
04-26-2020 21:36:15
04-26-2020 21:36:15
[Model page](https://huggingface.co/monsoon-nlp/hindi-bert)
transformers
3,994
closed
Allow a more backward compatible behavior of max_len_single_sentence and max_len_sentences_pair
Since #3706, `max_len_single_sentence` and `max_len_sentences_pair` are now automatically set up in the base class. This PR allows the user to try to set `max_len_single_sentence` and `max_len_sentences_pair`. If the value is the same as the pre-computed one, we display a deprecation warning (to avoid breaking old code in this case). If the value is different, we raise an explicit error message. cc @HendrikStrobelt
04-26-2020 21:13:41
04-26-2020 21:13:41
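A rough sketch of the behavior described in the PR body, with hypothetical class and message names since the actual diff is not reproduced here:

```python
import logging

logger = logging.getLogger(__name__)

class TokenizerLike:
    """Toy stand-in illustrating the deprecation pattern described above."""

    def __init__(self, precomputed_max_len_single_sentence):
        self._max_len_single_sentence = precomputed_max_len_single_sentence

    @property
    def max_len_single_sentence(self):
        return self._max_len_single_sentence

    @max_len_single_sentence.setter
    def max_len_single_sentence(self, value):
        if value == self._max_len_single_sentence:
            # same value as the pre-computed one: old code keeps working, warn only
            logger.warning("Setting 'max_len_single_sentence' is deprecated; it is now computed automatically.")
        else:
            # conflicting value: fail loudly with an explicit error message
            raise ValueError("'max_len_single_sentence' is computed automatically and cannot be overridden.")
```

With this toy class, `TokenizerLike(510).max_len_single_sentence = 510` only warns, while assigning any other value raises.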
transformers
3,993
closed
[Generation] Generation should allow to start with empty prompt
This PR allows generation from an empty input.
04-26-2020 20:34:56
04-26-2020 20:34:56
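A minimal usage sketch of what this enables (the exact call pattern is an assumption, since the diff itself is not shown here): with no prompt supplied, generation can start from the model's BOS token.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# no input_ids passed: generation starts from the BOS token instead of a prompt
output_ids = model.generate(bos_token_id=tokenizer.bos_token_id, max_length=20, do_sample=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```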