repo: stringclasses (1 value)
number: int64 (1 to 25.3k)
state: stringclasses (2 values)
title: stringlengths (1 to 487)
body: stringlengths (0 to 234k)
created_at: stringlengths (19 to 19)
closed_at: stringlengths (19 to 19)
comments: stringlengths (0 to 293k)
transformers
8,706
closed
T5v1.1 Addition of special tokens
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.0.0 --rc1 - Platform: Colab - Python version: 3.6.9 - PyTorch version (GPU?): TESLA V4 - Tensorflow version (GPU?): 2.3.0 - Using GPU in script?: YES - Using distributed or parallel set-up in script?: NO ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. T5: @patrickvonplaten --> ## Information Model I am using (Bert, XLNet ...): T5-1.1 The problem arises when using: * [ ] the official example scripts: (give details below) * [X ] my own modified scripts: (give details below) ```python from transformers import T5TokenizerFast, T5ForConditionalGeneration TOKENIZER_NAME = 't5-base' MODEL_NAME = 'google/t5-v1_1-base' tokenizer = T5TokenizerFast.from_pretrained(MODEL_NAME) special_tokens_dict = {'additional_special_tokens': ['<ORG>','<PERSON>']} num_added_toks = tokenizer.add_special_tokens(special_tokens_dict) model = T5ForConditionalGeneration.from_pretrained('google/t5-v1_1-base', return_dict=True) model.resize_token_embeddings(len(tokenizer)) model.to("cuda") ``` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) Custom dataset ## To reproduce Steps to reproduce the behavior: 1. Attempting to add Entity tokens to T5 1.1, upon loading from pretrained the following error occurs: `size mismatch for lm_head.weight: copying a param with shape torch.Size([32128, 768]) from checkpoint, the shape in current model is torch.Size([32102, 768]).` I am assuming the addition of the special tokens did not get propagated to the lm head size. I would expect the LM Head to be resized in addition to the standard layers. Many Thanks, Chris
11-21-2020 21:59:37
11-21-2020 21:59:37
duplicate of https://github.com/huggingface/transformers/issues/8643. This is indeed a big problem. I'll try to get to it this week!<|||||>Hey @FL33TW00D - actually I cannot reproduce your error....Can you try to update to the `tokenizers` version as well?<|||||>I can correctly shorten T5's embedding matrix...<|||||>Hi @patrickvonplaten, Appreciate you looking at this. I suspect that in this case it's user error. I am attempting to add the special tokens like so prior to pretraining: ```python from transformers import T5TokenizerFast, T5ForConditionalGeneration MODEL_NAME = 'google/t5-v1_1-base' special_tokens = ["<ORG>", "<PERSON>"] tokenizer = T5TokenizerFast.from_pretrained('t5-base') special_tokens_dict = {'additional_special_tokens': ['<ORG>','<PERSON>']} num_added_tokens = tokenizer.add_special_tokens(special_tokens_dict) print(f'ADDED TOKENS: {num_added_tokens}') model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME) model.resize_token_embeddings(len(tokenizer)) model.to("cuda") ``` I then pretrain the model, and save like so: `model.save_pretrained('t5_base_test')` It is upon model loading that I receive the error: ``` T5ForConditionalGeneration.from_pretrained('./t5_base_test') ``` ``` size mismatch for lm_head.weight: copying a param with shape torch.Size([32128, 768]) from checkpoint, the shape in current model is torch.Size([32102, 768]). ``` From the config.json, it looks like the rest of the layers are being scaled to the len(tokenizer) of 32102, and only the language modelling head on the final layer remaining as 32128. Any insight into this? Many Thanks, Chris<|||||>I can reproduce - will fix it! Thanks for the detailed error description <|||||>BTW, it's recommend to always use the same model identifier for model and tokenizer, even though in this case it would not have made a difference. So: ```python tokenizer = T5TokenizerFast.from_pretrained('google/t5-v1_1-base') ```<|||||>> > > I can reproduce - will fix it! Thanks for the detailed error description Massive thanks for fixing this. Really appreciate it.<|||||>Will try to have it merged into master by tomorrow<|||||>> Will try to have it merged into master by tomorrow @patrickvonplaten No worries already forked and working great! :+1: <|||||>Hi @patrickvonplaten I have faced the same situation with `MT5ForConditionalGeneration` when I have reproduced the [question_generation](https://github.com/patil-suraj/question_generation) with Thai language data (I have prepared the dataset from ['xquad.th'](https://huggingface.co/datasets/viewer/?dataset=xquad&config=xquad.th) ) by @patil-suraj This is my error messages ```bash >>> model = MT5ForConditionalGeneration.from_pretrained('./mt5-base-qg-hl-xquad-th-6-epochs') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/sakares/transformers/src/transformers/modeling_utils.py", line 1144, in from_pretrained raise RuntimeError( RuntimeError: Error(s) in loading state_dict for MT5ForConditionalGeneration: size mismatch for lm_head.weight: copying a param with shape torch.Size([250112, 768]) from checkpoint, the shape in current model is torch.Size([250102, 768]). ``` I have dive into the file src/transformers/models/mt5/modeling_mt5.py and found that MT5ForConditionalGeneration just overrode the T5ForConditionalGeneration, and it should not be a problem. Sorry to bring you here @patil-suraj . 
I am just curious: I have modified the script run_qg.py for MT5 and, according to this [discussion](https://github.com/huggingface/transformers/pull/8880#issuecomment-737113053), I found the script did not have anything like `model.resize_token_embeddings(len(tokenizer))`. My question: should I run the resize_token_embeddings method before starting to train the model?<|||||>also facing the same issue as @sakares .. Have you solved it?<|||||>Can you guys try again on master -> this should have been fixed by now: https://github.com/huggingface/transformers/issues/9055#issuecomment-745450713<|||||>@acul3 No luck yet. But I found an alternative solution with the PyTorch Lightning script ["Finetune MT5 for Question Generation in Hindi"](https://www.kaggle.com/parthplc/finetune-mt5-for-question-generation-in-hindi/) and it works as expected<|||||>I managed to solve my problem by changing the tokenizer to `google/mt5-base` instead of `t5-base` (my mistake) and installing transformers from source (master) as @patrickvonplaten suggested. Will try to look at that script @sakares.. thank you
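A minimal sketch of the manual workaround discussed in this thread, for transformers versions where `resize_token_embeddings` did not resize T5 v1.1's untied `lm_head` (the fix has since been merged on master). The row slicing assumes the new vocabulary (here 32102) is smaller than the original padded 32128-row head, as in the error messages above; this is an illustration, not the merged fix:

```python
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("google/t5-v1_1-base")
tokenizer.add_special_tokens({"additional_special_tokens": ["<ORG>", "<PERSON>"]})

model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")
model.resize_token_embeddings(len(tokenizer))  # resizes the shared input embeddings

# On affected versions the untied lm_head keeps its original 32128 rows, which makes
# the saved checkpoint inconsistent with config.vocab_size. Rebuild it by hand before
# calling model.save_pretrained() so that loading the checkpoint works again.
old_head = model.lm_head                       # weight shape: (32128, d_model)
new_head = torch.nn.Linear(old_head.in_features, len(tokenizer), bias=False)
new_head.weight.data = old_head.weight.data[: len(tokenizer), :].clone()
model.lm_head = new_head
model.config.vocab_size = len(tokenizer)
```

On versions that already contain the fix, the slice is a no-op copy, so the sketch is harmless either way.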
transformers
8,705
closed
DPRReaderTokenizer returns, for multiple passages, only the tokens & masks of one passage
## Environment info - `transformers` version: 3.5.1 - Platform: Colab Notebook - Python version: 3.6.9 - PyTorch version (GPU?): 1.7.0 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help tokenizers: @mfuntowicz --> ## Information Model I am using : DPRReaderTokenizer The problem arises when using: DPRReaderTokenizer, instead of returning as many tensor as passages, he returns only one (On the documentation it must return (n_passages, sequence_length), but it returns (1, sequence_length) on basic examples. The tasks I am working on is: * Tokenization with DPRReaderTokenizer on multiple passages (texts) ## To reproduce from transformers import AlbertTokenizer, AlbertForQuestionAnswering, DPRReader, DPRReaderTokenizer, AutoTokenizer import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") tokenizer_DPR = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base') model_DPR = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base', return_dict=True).cuda() encoded_inputs = tokenizer_DPR( questions=["What is Transformers?"], titles=['Attention is all you need', 'One famous library'], texts=['Attention is a new mechanism designed to improve the performance of the seq2seq models', 'One of the most famous NLP library is called Transformers' ], padding=True, return_tensors='pt' ) encoded_inputs {'input_ids': tensor([[ 101, 2054, 2003, 19081, 1029, 102, 3086, 2003, 2035, 2017, 2342, 102, 3086, 2003, 1037, 2047, 7337, 2881, 2000, 5335, 1996, 2836, 1997, 1996, 7367, 4160, 2475, 3366, 4160, 4275]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])} ## Remarks The expected outputs is two tensors ... and I got only one .. 👎
11-21-2020 20:52:11
11-21-2020 20:52:11
Hello, a little update: I think I fixed this issue. The tokenizer returns a tensor of shape (n_passages, sequence_length), but only because I duplicated the question myself, like [questions] * n_passages. It was not clear in the documentation, since I thought it was done automatically.
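For reference, a small sketch of the working call the author describes, with the question repeated once per passage (passage texts taken from the issue above):

```python
from transformers import DPRReaderTokenizer

tokenizer_DPR = DPRReaderTokenizer.from_pretrained("facebook/dpr-reader-single-nq-base")

titles = ["Attention is all you need", "One famous library"]
texts = [
    "Attention is a new mechanism designed to improve the performance of the seq2seq models",
    "One of the most famous NLP library is called Transformers",
]

# Repeat the question once per (title, text) pair so the tokenizer builds one
# question/title/text sequence per passage.
encoded_inputs = tokenizer_DPR(
    questions=["What is Transformers?"] * len(texts),
    titles=titles,
    texts=texts,
    padding=True,
    return_tensors="pt",
)
print(encoded_inputs["input_ids"].shape)  # (2, sequence_length)
```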
transformers
8,704
closed
Generating from mT5
## Environment info - `transformers` version: #9c0afdaf7b091c341072b432ad6ee17ba7a5016b - Platform: Google colab - Python version: 3.6.9 - PyTorch version (GPU?): 1.7.0 No GPU ### Who can help mT5: @patrickvonplaten ## Information Generating from `mT5-small` gives (nearly) empty output: ``` from transformers import MT5ForConditionalGeneration, T5Tokenizer model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small") tokenizer = T5Tokenizer.from_pretrained("google/mt5-small") article = "translate to french: The capital of France is Paris." batch = tokenizer.prepare_seq2seq_batch(src_texts=[article], return_tensors="pt") output_ids = model.generate(input_ids=batch.input_ids, num_return_sequences=1, num_beams=8, length_penalty=0.1) tokenizer.decode(output_ids[0]) ``` `>>> <pad> <extra_id_0></s>` Using the same input for T5 gives reasonable output: ``` from transformers import T5ForConditionalGeneration, T5Tokenizer model = T5ForConditionalGeneration.from_pretrained("t5-small") tokenizer = T5Tokenizer.from_pretrained("t5-small") article = "translate to french: The capital of France is Paris." batch = tokenizer.prepare_seq2seq_batch(src_texts=[article], return_tensors="pt") output_ids = model.generate(input_ids=batch.input_ids, num_return_sequences=1, num_beams=8, length_penalty=0.1) tokenizer.decode(output_ids[0]) ``` `>>> <pad> La capitale de la France est Paris.</s>` My understanding is that mT5 is trained in the same way as T5, and should work in a very similar way?
11-21-2020 17:43:51
11-21-2020 17:43:51
mT5 is not pretrained on downstream tasks like T5 was - see: https://huggingface.co/transformers/master/model_summary.html#mt5 So it not surprising that mT5 won't work well out-of-the-box without fine-tuning.<|||||>Ah, I hadn't realised that. But in that case, wouldn't the expected output be a reconstruction of the input?<|||||>> Ah, I hadn't realised that. But in that case, wouldn't the expected output be a reconstruction of the input? Hard to say if the model does not include any sentinel tokens (`<extra_id_1>`) and if one uses `generate()` instead of just the forward pass.... . Wolud be interesting to play around with the two pre-trained model variants though and see what differences they show...<|||||>I agree that I would only get reconstruction if the decoding setup matched training :) Can you point me at any documentation that describes what special tokens are expected? I dug around in your implementation and the official repo but couldn't see anything. The output of `tokenizer.prepare_seq2seq_batch()` is the same for src and tgt as well (presumably because it uses the T5 tokenizer - does it not need its own?) Edit: Looking again, it seems like the sentinel tokens are just the equivalent of `[MASK]`? In which case the model should be able to reconstruct the input if it has access to the full (un-noised) sequence.<|||||>Maybe these pointers help: - https://github.com/huggingface/transformers/issues/7451 - https://github.com/huggingface/transformers/issues/7910 - https://github.com/huggingface/transformers/issues/3985 mT5 is pretrained exactly like T5 only without the downstream supersived training mixin. I think the T5 paper should explain in detail how this in done.<|||||>Does anybody have some more pointers on how to use (train) the mT5 model that has been added to master for text generation? Anything explaining how the finetuning is done in practice using Huggingface Transformers would be greatly appreciated!<|||||>Hey @Rijgersberg, what exactly do you mean by text generation ? GPT2-like open-end text generation?<|||||>Well not open-end text generation in the sense of "writing", but using text-to-text generation to perform all types of different NLP tasks with little to no training. Basically what the GPT-3-paper calls "few shot learning". Specifically, I would be interested in replicating the [WT5?! Training Text-to-Text Models to Explain their Predictions](https://arxiv.org/abs/2004.14546) results in languages other than English. But I'm having some trouble understanding what the differences between the T5 and mT5 models in Transformers mean for accomplishing that task.<|||||>Hey @tomhosking how did you use MT5ForConditionalGeneration, T5Tokenizer I used ``` pip install transformers ``` But it is showing ``` ImportError: cannot import name 'MT5ForConditionalGeneration' ``` How can we install it?🤔 <|||||>@parthplc You can specify version of package You would like to install. For me it was experimental: `transformers==4.0.0rc1` and it works fine. For training mT5 model for generating summary You can check out [this](https://towardsdatascience.com/fine-tuning-a-t5-transformer-for-any-summarization-task-82334c64c81) post. It worked for me. [edit] I forgot to mention, the only modification You have to make is to replace `T5ForConditionalGeneration` with `MT5ForConditionalGeneration`.<|||||>> Well not open-end text generation in the sense of "writing", but using text-to-text generation to perform all types of different NLP tasks with little to no training. 
Basically what the GPT-3-paper calls "few shot learning". > > Specifically, I would be interested in replicating the [WT5?! Training Text-to-Text Models to Explain their Predictions](https://arxiv.org/abs/2004.14546) results in languages other than English. But I'm having some trouble understanding what the differences between the T5 and mT5 models in Transformers mean for accomplishing that task. In this case, I would just fine-tune mT5 with the normal causal language modeling objective meaning: ```python from transformers import MT5ForConditionalGeneration, T5Tokenizer mt5 = MT5ForConditionalGeneration.from_pretrained("google/mt5-base") mt5_tok = T5Tokenizer.from_pretrained("google/mt5-base") input_ids = mt5_tok("explain sentiment: I went to see this movie with my husband, and we both thought the acting was terrible!", return_tensors="pt").input_ids # in the language of your choice labels = mt5_tok("negative explanation: the acting was terrible.", return_tensors="pt").input_ids # in the language of your choice loss = mt5(input_ids=input_ids, labels=labels).loss ``` I took one of the visual examples of the paper you mentioned. In short, there is no difference in how mt5 and t5 should be fine-tuned. Also, @mrm8488 already successfully fine-tuned an mT5 model: https://twitter.com/mrm8488/status/1329478063768350723 sorry to ping you here @mrm8488 - but maybe you have some tips/tricks for mt5 fine-tuning? Also pinging our T5 fine-tuning expert @patil-suraj <|||||>> Well not open-end text generation in the sense of "writing", but using text-to-text generation to perform all types of different NLP tasks with little to no training. Basically what the GPT-3-paper calls "few shot learning". I'm not sure if you can use mT5 with no training (fine-tuning), since it was not pre-trained with any supervised objective like `T5`. One experiment to try is to fine-tune `mT5` on the english data and see if it works for your language without any language specific fine-tuning (In my experiments, `T5` trained on English SQuAD for que gen gave interesting results for French and German without any language specific fine-tuning). But for better results you should fine-tune `mT5` on the language specific dataset. And also as Patrick said, you can fine-tune `mT5` and `T5` the same way. The major differences between `mT5` and `T5` are - `mT5` is based on [`T51.1`](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/released_checkpoints.md#t511) - pre-trained on 101 languages - no supervised pre-training<|||||>Hi, I slightly modified the script provided by @patil-suraj to fine-tune [`T5` on SQUAD] (https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) and after many epochs (I think I am missing anything/doing something wrong) I got 'decent' results fine-tuning mT5-small on tydiQA for multilingual QA https://huggingface.co/mrm8488/mT5-small-finetuned-tydiqa-for-xqa. The [PR with the model card](https://github.com/huggingface/transformers/pull/8729) for more details is not approved yet.<|||||>> Hi, I slightly modified the script provided by @ patil-suraj to fine-tune [`T5` on SQUAD] (https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) and after many epochs (I think I am missing anything/doing something wrong) I got 'decent' results fine-tuning mT5-small on tydiQA for multilingual QA https://huggingface.co/mrm8488/mT5-small-finetuned-tydiqa-for-xqa. 
The [PR with the model card](https://github.com/huggingface/transformers/pull/8729) for more details is not approved yet. just merged it :-) BTW, you can now directly create the model cards online - no need for PRs anymore ;-)<|||||>> > Well not open-end text generation in the sense of "writing", but using text-to-text generation to perform all types of different NLP tasks with little to no training. Basically what the GPT-3-paper calls "few shot learning". > > I'm not sure if you can use mT5 with no training (fine-tuning), since it was not pre-trained with any supervised objective like `T5`. > > One experiment to try is to fine-tune `mT5` on the english data and see if it works for your language without any language specific fine-tuning (In my experiments, `T5` trained on English SQuAD for que gen gave interesting results for French and German without any language specific fine-tuning). > > But for better results you should fine-tune `mT5` on the language specific dataset. > > And also as Patrick said, you can fine-tune `mT5` and `T5` the same way. > The major differences between `mT5` and `T5` are > > * `mT5` is based on [`T51.1`](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/released_checkpoints.md#t511) > * pre-trained on 101 languages > * no supervised pre-training hey @patil-suraj @mrm8488 how can we finetune mT5 for other languages. Let's suppose we have language translation problem for any language other than English and if we finetune using T5 tokenizer we would be replacing each word with unk tokens. how will it be fine-tuned? eg. ``` print(tokenizer.decode(data['source_ids'])) print(tokenizer.decode(data['target_ids'])) ``` ``` English to Hindi: Tell me the name of the ninth month.</s> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <unk> <unk> <unk> <unk> <unk> <unk> </s> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> ``` <|||||>@parthplc - I don't really understand your question. Since mT5 was trained on 101 languages it's tokenizer can obviously handle all those languages, *e.g.*: ```python from transformers import AutoTokenizer tok = AutoTokenizer.from_pretrained("google/mt5-small") tok.decode(tok("Der Satz wird auch definiert als sprachliche Einheit, die aus Subjekt und Prädikat besteht. Dies soll auf Aristoteles zurückgehen. Entsprechend definiert die traditionelle Grammatik den Satz als bestehend aus: Satzaussage (Prädikat), Satzergänzung (Objekt) und Satzgegenstand (Subjekt).").input_ids) # gives no <unk> symbols ``` Hopefully, this makes more sense now<|||||>> ## Environment info > * `transformers` version: #9c0afdaf7b091c341072b432ad6ee17ba7a5016b > * Platform: Google colab > * Python version: 3.6.9 > * PyTorch version (GPU?): 1.7.0 > No GPU > > ### Who can help > mT5: @patrickvonplaten > > ## Information > Generating from `mT5-small` gives (nearly) empty output: > > ``` > from transformers import MT5ForConditionalGeneration, T5Tokenizer > model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small") > tokenizer = T5Tokenizer.from_pretrained("google/mt5-small") > article = "translate to french: The capital of France is Paris." 
> batch = tokenizer.prepare_seq2seq_batch(src_texts=[article], return_tensors="pt") > output_ids = model.generate(input_ids=batch.input_ids, num_return_sequences=1, num_beams=8, length_penalty=0.1) > tokenizer.decode(output_ids[0]) > ``` > > `>>> <pad> <extra_id_0></s>` > > Using the same input for T5 gives reasonable output: > > ``` > from transformers import T5ForConditionalGeneration, T5Tokenizer > model = T5ForConditionalGeneration.from_pretrained("t5-small") > tokenizer = T5Tokenizer.from_pretrained("t5-small") > article = "translate to french: The capital of France is Paris." > batch = tokenizer.prepare_seq2seq_batch(src_texts=[article], return_tensors="pt") > output_ids = model.generate(input_ids=batch.input_ids, num_return_sequences=1, num_beams=8, length_penalty=0.1) > tokenizer.decode(output_ids[0]) > ``` > > `>>> <pad> La capitale de la France est Paris.</s>` > > My understanding is that mT5 is trained in the same way as T5, and should work in a very similar way? Hi, I met the same problem when fine-tuning mt5 to a Chinese QG environment. I'm wondering if you have solved this issue?<|||||>hi @nomoreoneday `mT5` is a pre-trained model and it's not finetuned on any downstream task, whereas T5 was already trained on translation task as part of its supervised pre-training mixture, which could explain the empty output of mT5. You should fine-tune the model on your task, to use for generation. > I met the same problem when fine-tuning mt5 to a Chinese QG environment And if you are having trouble with fine-tuning then please post a shot code snippet so we can reproduce your issue.<|||||>> hi @nomoreoneday > > `mT5` is a pre-trained model and it's not finetuned on any downstream task, whereas T5 was already trained on translation task as part of its supervised pre-training mixture, which could explain the empty output of mT5. > > You should fine-tune the model on your task, to use for generation. > > > I met the same problem when fine-tuning mt5 to a Chinese QG environment > > And if you are having trouble with fine-tuning then please post a shot code snippet so we can reproduce your issue. hi @patil-suraj thanks for replying. I'm trying to replicate your project(https://github.com/patil-suraj/question_generation) on a Chinese QG task. I got decent results when I run ``` export CUDA_VISIBLE_DEVICES=0 python3 eval.py \ --model_name_or_path mt5-small-ncp-qg-hl-base_epoch30 \ --valid_file_path data/valid_data_qa_hl_mt5_ncp_all_task.pt \ --model_type mt5 \ --num_beams 4 \ --max_decoding_length 32 \ --output_path hypothesis_mt5-small-ncp-qg-hl-base_epoch30_ncp_all_task.txt ``` But when I trying to construct the pipeline and run: ` def _extract_answers(self,context): sents,inputs = self._prepare_inputs_for_ans_extraction(context) inputs = self._tokenize(inputs,padding = True,truncation = True) #encoding print("inputs after encoding:",inputs) outs = self.ans_model.generate( input_ids = inputs['input_ids'].to(self.device), attention_mask = inputs['attention_mask'].to(self.device), max_length = 32, ) dec = [self.ans_tokenizer.decode(ids,skip_special_tokens=False) for ids in outs] #decoding print("dec:", dec) answers = [item.split('<sep>') for item in dec] print("answers1:",answers) answers = [i[:-1] for i in answers] print("answers2:",answers) return sents, answers ` I got the empty answers. like this `dec: ['<pad> <extra_id_0></s>'] answers1: [['<pad> <extra_id_0></s>']] answers2: [[]]`<|||||>I wondering if there is any difference in data preprocessing between t5 and mt5. 
<|||||>> > hi @nomoreoneday > > `mT5` is a pre-trained model and it's not finetuned on any downstream task, whereas T5 was already trained on translation task as part of its supervised pre-training mixture, which could explain the empty output of mT5. > > You should fine-tune the model on your task, to use for generation. > > > I met the same problem when fine-tuning mt5 to a Chinese QG environment > > > > > > And if you are having trouble with fine-tuning then please post a shot code snippet so we can reproduce your issue. > > hi @patil-suraj > > thanks for replying. I'm trying to replicate your project(https://github.com/patil-suraj/question_generation) on a Chinese QG task. I got decent results when I run > > ``` > export CUDA_VISIBLE_DEVICES=0 > python3 eval.py \ > --model_name_or_path mt5-small-ncp-qg-hl-base_epoch30 \ > --valid_file_path data/valid_data_qa_hl_mt5_ncp_all_task.pt \ > --model_type mt5 \ > --num_beams 4 \ > --max_decoding_length 32 \ > --output_path hypothesis_mt5-small-ncp-qg-hl-base_epoch30_ncp_all_task.txt > ``` > > But when I trying to construct the pipeline and run: > > ` > def _extract_answers(self,context): > > ``` > sents,inputs = self._prepare_inputs_for_ans_extraction(context) > inputs = self._tokenize(inputs,padding = True,truncation = True) #encoding > print("inputs after encoding:",inputs) > > outs = self.ans_model.generate( > input_ids = inputs['input_ids'].to(self.device), > attention_mask = inputs['attention_mask'].to(self.device), > max_length = 32, > ) > > dec = [self.ans_tokenizer.decode(ids,skip_special_tokens=False) for ids in outs] #decoding > print("dec:", dec) > answers = [item.split('<sep>') for item in dec] > print("answers1:",answers) > answers = [i[:-1] for i in answers] > print("answers2:",answers) > > return sents, answers > ``` > > ` > > I got the empty answers. like this > > `dec: ['<pad> <extra_id_0></s>'] answers1: [['<pad> <extra_id_0></s>']] answers2: [[]]` having the same issue here<|||||>I have finally overcome the ['<pad> <extra_id_0></s>'] issue and obtained decent post-training predictions with MT5, just had to 1. lower the lr (set to 0.001, as indicated in mt5 paper) and 2. train for a lot more epochs in comparison with T5 for the same task (60 epochs for MT5 vs 10 for T5 for a simple text style transfer task fine-tuning). @nomoreoneday I have no touched anything but the model name when switching between t5 and mt5 in my training pipeline, wonder if I should?<|||||>> Hi, I slightly modified the script provided by @patil-suraj to fine-tune [`T5` on SQUAD] (https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) and after many epochs (I think I am missing anything/doing something wrong) I got 'decent' results fine-tuning mT5-small on tydiQA for multilingual QA https://huggingface.co/mrm8488/mT5-small-finetuned-tydiqa-for-xqa. The [PR with the model card](https://github.com/huggingface/transformers/pull/8729) for more details is not approved yet. @mrm8488 Hi, for this model mrm8488/mT5-small-finetuned-tydiqa-for-xqa , I tried to run your demo script, but failed with error loading the tokenizer. And the Hosted inference API on this page doesn't work as well. hope to see your feedback.<|||||>> > hi @nomoreoneday > > `mT5` is a pre-trained model and it's not finetuned on any downstream task, whereas T5 was already trained on translation task as part of its supervised pre-training mixture, which could explain the empty output of mT5. > > You should fine-tune the model on your task, to use for generation. 
> > > I met the same problem when fine-tuning mt5 to a Chinese QG environment > > > > > > And if you are having trouble with fine-tuning then please post a shot code snippet so we can reproduce your issue. > > hi @patil-suraj > > thanks for replying. I'm trying to replicate your project(https://github.com/patil-suraj/question_generation) on a Chinese QG task. I got decent results when I run > > ``` > export CUDA_VISIBLE_DEVICES=0 > python3 eval.py \ > --model_name_or_path mt5-small-ncp-qg-hl-base_epoch30 \ > --valid_file_path data/valid_data_qa_hl_mt5_ncp_all_task.pt \ > --model_type mt5 \ > --num_beams 4 \ > --max_decoding_length 32 \ > --output_path hypothesis_mt5-small-ncp-qg-hl-base_epoch30_ncp_all_task.txt > ``` > > But when I trying to construct the pipeline and run: > > ` > def _extract_answers(self,context): > > ``` > sents,inputs = self._prepare_inputs_for_ans_extraction(context) > inputs = self._tokenize(inputs,padding = True,truncation = True) #encoding > print("inputs after encoding:",inputs) > > outs = self.ans_model.generate( > input_ids = inputs['input_ids'].to(self.device), > attention_mask = inputs['attention_mask'].to(self.device), > max_length = 32, > ) > > dec = [self.ans_tokenizer.decode(ids,skip_special_tokens=False) for ids in outs] #decoding > print("dec:", dec) > answers = [item.split('<sep>') for item in dec] > print("answers1:",answers) > answers = [i[:-1] for i in answers] > print("answers2:",answers) > > return sents, answers > ``` > > ` > > I got the empty answers. like this > > `dec: ['<pad> <extra_id_0></s>'] answers1: [['<pad> <extra_id_0></s>']] answers2: [[]]` Same too <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> I have finally overcome the [' <extra_id_0>'] issue and obtained decent post-training predictions with MT5, just had to > > 1. lower the lr (set to 0.001, as indicated in mt5 paper) > and > 2. train for a lot more epochs in comparison with T5 for the same task (60 epochs for MT5 vs 10 for T5 for a simple text style transfer task fine-tuning). > > @nomoreoneday I have no touched anything but the model name when switching between t5 and mt5 in my training pipeline, wonder if I should? Thankyou,I have same problem, I am trying train more epochs, to see if it can be correct<|||||>> > I have finally overcome the [' <extra_id_0>'] issue and obtained decent post-training predictions with MT5, just had to > > > > 1. lower the lr (set to 0.001, as indicated in mt5 paper) > > and > > 2. train for a lot more epochs in comparison with T5 for the same task (60 epochs for MT5 vs 10 for T5 for a simple text style transfer task fine-tuning). > > > > @nomoreoneday I have no touched anything but the model name when switching between t5 and mt5 in my training pipeline, wonder if I should? > > Thankyou,I have same problem, I am trying train more epochs, to see if it can be correct Do you have any newer ideas about this problem?<|||||>> > > hi @nomoreoneday > > > `mT5` is a pre-trained model and it's not finetuned on any downstream task, whereas T5 was already trained on translation task as part of its supervised pre-training mixture, which could explain the empty output of mT5. > > > You should fine-tune the model on your task, to use for generation. 
> > > > I met the same problem when fine-tuning mt5 to a Chinese QG environment > > > > > > > > > And if you are having trouble with fine-tuning then please post a shot code snippet so we can reproduce your issue. > > > > > > hi @patil-suraj > > thanks for replying. I'm trying to replicate your project(https://github.com/patil-suraj/question_generation) on a Chinese QG task. I got decent results when I run > > ``` > > export CUDA_VISIBLE_DEVICES=0 > > python3 eval.py \ > > --model_name_or_path mt5-small-ncp-qg-hl-base_epoch30 \ > > --valid_file_path data/valid_data_qa_hl_mt5_ncp_all_task.pt \ > > --model_type mt5 \ > > --num_beams 4 \ > > --max_decoding_length 32 \ > > --output_path hypothesis_mt5-small-ncp-qg-hl-base_epoch30_ncp_all_task.txt > > ``` > > > > > > > > > > > > > > > > > > > > > > > > But when I trying to construct the pipeline and run: > > ` > > def _extract_answers(self,context): > > ``` > > sents,inputs = self._prepare_inputs_for_ans_extraction(context) > > inputs = self._tokenize(inputs,padding = True,truncation = True) #encoding > > print("inputs after encoding:",inputs) > > > > outs = self.ans_model.generate( > > input_ids = inputs['input_ids'].to(self.device), > > attention_mask = inputs['attention_mask'].to(self.device), > > max_length = 32, > > ) > > > > dec = [self.ans_tokenizer.decode(ids,skip_special_tokens=False) for ids in outs] #decoding > > print("dec:", dec) > > answers = [item.split('<sep>') for item in dec] > > print("answers1:",answers) > > answers = [i[:-1] for i in answers] > > print("answers2:",answers) > > > > return sents, answers > > ``` > > > > > > > > > > > > > > > > > > > > > > > > ` > > I got the empty answers. like this > > `dec: ['<pad> <extra_id_0></s>'] answers1: [['<pad> <extra_id_0></s>']] answers2: [[]]` > > having the same issue here I am also having the same issue<|||||>Sorry I'm loosing a bit track of what the problem is here. Note that `mt5` cannot generate coherent sentences out-of-the-box because it's only be pretrained on the span-mask filling task and not on any down-stream tasks.
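To illustrate the last point, here is a rough sketch of the span-mask filling setup mT5 is pretrained on, using sentinel tokens; the masked spans below are chosen by hand purely for illustration:

```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer

model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")

# Input: text with masked spans replaced by sentinel tokens.
input_ids = tokenizer(
    "The capital of <extra_id_0> is <extra_id_1>.", return_tensors="pt"
).input_ids
# Target: each sentinel token followed by the content of the span it masks.
labels = tokenizer(
    "<extra_id_0> France <extra_id_1> Paris <extra_id_2>", return_tensors="pt"
).input_ids

loss = model(input_ids=input_ids, labels=labels).loss  # forward pass, as in pretraining
```

Anything beyond this denoising objective (translation, question generation, summarization) needs task-specific fine-tuning first, as noted throughout the thread.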
transformers
8,703
closed
Providing the user with the possibility to set the cache path
Dear HuggingFace team, In the Hugging Face code, most of the time there is some cache_path under the home directory, e.g. /idiap/home/rkarimi/.cache/huggingface/datasets/downloads. Could you provide me with a command to set a different cache_path? As far as I can tell this is hard-coded, or I am missing something. Thank you. Best regards Rabeeh
11-21-2020 14:06:07
11-21-2020 14:06:07
This path is the default `cache_path` of the datasets library, not transformers. You can change it by setting an environment variable named `HF_HOME` to the path you want, the datasets will then be cached in this path suffixed with "/datasets/"<|||||>Hi thank you, the huggingface codes gebnerally also create an empty folder titled ' ' when I run it, which is specifies the caching folder address, could it be possible not to create this folder? thanks Best Rabeeh On Sat, Nov 21, 2020 at 6:44 PM Sylvain Gugger <[email protected]> wrote: > This path is the default cache_path of the datasets library, not > transformers. You can change it by setting an environment variable named > HF_HOME to the path you want, the datasets will then be cached in this > path suffixed with "/datasets/" > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/8703#issuecomment-731611519>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ABP4ZCDCFJDCXZAPZ73ORHDSQ74AZANCNFSM4T5ZWXDA> > . > <|||||>We would need to see the code you are running that creates this empty folder named `" "` to be able to help.<|||||>Hi there I am training seq2seq_trainer codes. I have adapted it for my use case but in the original version of codes should also happen. thanks Best Rabeeh On Sun, Nov 22, 2020, 4:46 AM Sylvain Gugger <[email protected]> wrote: > We would need to see the code you are running that creates this empty > folder named " " to be able to help. > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/8703#issuecomment-731693963>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ABP4ZCCKYJ5TFIKGT2N7LTLSRCCP5ANCNFSM4T5ZWXDA> > . > <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
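A minimal sketch of the environment-variable approach described above; the path is just a placeholder, and it has to be set before `datasets`/`transformers` are imported so the new cache location is picked up:

```python
import os

# Placeholder path -- point this at whatever disk you want the cache on.
os.environ["HF_HOME"] = "/path/to/big/disk/huggingface"

# Import after setting the variable; downloads now land under
# /path/to/big/disk/huggingface/datasets/
from datasets import load_dataset

dataset = load_dataset("glue", "mrpc")
```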
transformers
8,702
closed
Question about beam_sample: using softmax twice?
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.5.1 - Platform: Linux-4.15.0-122-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.7.0+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help @patrickvonplaten @TevenLeScao <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao Blenderbot: @patrickvonplaten Bart: @patrickvonplaten Marian: @patrickvonplaten Pegasus: @patrickvonplaten mBART: @patrickvonplaten T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSMT: @stas00 examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): Bart The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce I just have some confusions about the current beam_sample code. Please disregard if I misunderstood. I noticed in current beam_sample code there are two softmax operations to produce the token probabilities, line 1161 and 1171 in the following snippet. After 1171, the final probabilities would be similar to `F.softmax(F.log_softmax(next_token_logits, dim=-1), dim=-1)` which is very different from what we usually get by `softmax(logits)`. Similarly in [top-p filtering](https://github.com/huggingface/transformers/blob/9c0afdaf7b091c341072b432ad6ee17ba7a5016b/src/transformers/generation_logits_process.py#L184) `F.softmax` is used on log values. But shouldn't we use `exp` to recover the probabilities in these cases? https://github.com/huggingface/transformers/blob/9c0afdaf7b091c341072b432ad6ee17ba7a5016b/src/transformers/generation_utils.py#L1161-L1171 <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior
11-21-2020 12:36:00
11-21-2020 12:36:00
After submitting I realized `F.softmax(F.log_softmax())` is equivalent to `F.softmax()`. The different probability values I noticed come from the normalization of multiple beams in one dimension (line 1169). That doesn't change the behaviour of the sampling.
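A quick numerical check of the equivalence noted above (softmax is shift-invariant, and `log_softmax` only subtracts a per-row constant from the logits):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)
a = F.softmax(F.log_softmax(logits, dim=-1), dim=-1)
b = F.softmax(logits, dim=-1)
print(torch.allclose(a, b, atol=1e-6))  # True
```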
transformers
8,701
closed
TypeError: an integer is required (got type NoneType)
While using the Trainer class to fine-tune GPT-2 on a Hindi dataset, it outputs the following error: ``` TypeError Traceback (most recent call last) <ipython-input-44-3435b262f1ae> in <module>() ----> 1 trainer.train() 5 frames /usr/local/lib/python3.6/dist-packages/transformers/data/datasets/language_modeling.py in __getitem__(self, i) 99 100 def __getitem__(self, i) -> torch.Tensor: --> 101 return torch.tensor(self.examples[i], dtype=torch.long) 102 103 TypeError: an integer is required (got type NoneType) ``` Here is the link: https://colab.research.google.com/drive/1um5UeY9hasmjPNcR1WkBe2uDDFhLUBrX?usp=sharing
11-21-2020 06:21:52
11-21-2020 06:21:52
Hey @parthplc, could you try to post a very short code snippet that reproduces the error? It's too time-consuming to go through such a big notebook, sadly.<|||||>@parthplc Please post a solution if you close the issue. There is none in your colab as far as I can see (`model.resize_token_embeddings(len(tokenizer))` is the only relevant code I can see and it doesn't fix the problem).
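Since the issue was closed without a posted fix, here is a hypothetical debugging sketch for this traceback: it assumes the dataset object exposes the same `examples` list that `language_modeling.py` indexes into (the attribute name is taken from the traceback), and simply looks for entries containing `None` before training starts.

```python
def find_bad_examples(dataset):
    """Return indices of examples that contain None token ids (hypothetical check)."""
    bad = []
    for i, example in enumerate(dataset.examples):
        if example is None or any(token_id is None for token_id in example):
            bad.append(i)
    return bad

# e.g. print(find_bad_examples(train_dataset)[:10]) before calling trainer.train()
```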
transformers
8,700
closed
Training text_classification with TPU using xla_spawn gives wrong results
I tested the text_classification code with the TPU and GPU versions shown below. The TPU (Colab with 8 cores) and GPU (Colab) versions take 2 min and 17 min, respectively, which is nice. Training the GPU version gives good behavior, where the loss decreases continuously. However, for the TPU version, the dataset is divided into roughly 8 segments (hence the 1/8 training time), but the nodes do not seem to be connected: the result from each TPU core is just what a GPU run would give with 1/8 of the original dataset. 1. Should I change something? Or, if I'd like to use a TPU, do I have to use the TF version? 2. My final goal is to train RoBERTa on TPU in Korean. There are three options: 1. Huggingface Trainer with xla --> I am here. 2. Huggingface TFTrainer --> TFTrainer supports TPU, but I need to make MLM datasets. 3. Fairseq with xla. If there are any sources or examples, please let me know. Thanks, # tpu version (same one shown in the document 'https://github.com/huggingface/transformers/tree/master/examples/text-classification'): python examples/xla_spawn.py \ --num_cores=8 \ transformers/examples/text-classification/run_glue.py \ --do_train \ --do_eval \ --task_name=mrpc \ --num_train_epochs=3 \ --max_seq_length=128 \ --learning_rate=5e-5 \ --output_dir=/tmp/mrpc \ --overwrite_output_dir \ --logging_steps=5 \ --save_steps=5 \ --tpu_metrics_debug \ --model_name_or_path=bert-base-cased \ --per_device_train_batch_size=64 \ --per_device_eval_batch_size=64 # single gpu version (remove --num_cores=8, --tpu_metrics_debug) python examples/xla_spawn.py \ transformers/examples/text-classification/run_glue.py \ --do_train \ --do_eval \ --task_name=mrpc \ --num_train_epochs=3 \ --max_seq_length=128 \ --learning_rate=5e-5 \ --output_dir=/tmp/mrpc \ --overwrite_output_dir \ --logging_steps=5 \ --save_steps=5 \ --model_name_or_path=bert-base-cased \ --per_device_train_batch_size=64 \ --per_device_eval_batch_size=64
11-21-2020 04:47:25
11-21-2020 04:47:25
Hello! I'm not sure I completely understand the issue. Are you saying you do not obtain the results you expected when using the 8 cores of the TPU, vs using a single TPU core?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
8,699
closed
Cannot load tokenizer in community T5 pretrained model
## Environment info - `transformers` version: 3.5 - Platform: Windows 10 - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.0+cpu (False) - Tensorflow version (GPU?): 2.3.1 (False) - Using GPU in script?: False - Using distributed or parallel set-up in script?: False ### Who can help T5: @patrickvonplaten ## Information I'm trying to use **sshleifer/t5-base-cnn** for a summarization task, but it seems like there is something wrong with the tokenizer. tokenizer = T5Tokenizer.from_pretrained('sshleifer/t5-base-cnn') model = T5ForConditionalGeneration.from_pretrained('sshleifer/t5-base-cnn') This code returns an error: OSError: Can't load tokenizer for 'sshleifer/t5-base-cnn'. Make sure that: - 'sshleifer/t5-base-cnn' is a correct model identifier listed on 'https://huggingface.co/models' - or 'sshleifer/t5-base-cnn' is the correct path to a directory containing relevant tokenizer files Can someone point out what I am missing, or is there any problem with my code? Many thanks.
11-21-2020 03:34:20
11-21-2020 03:34:20
Yes, that model from @sshleifer does not bundle its own tokenizer, as you can see in the list of files: https://huggingface.co/sshleifer/t5-base-cnn/tree/main We'll add this info to the model card, but you can just use the one from t5: `T5Tokenizer.from_pretrained("t5-base")`<|||||>@julien-c Thank you for your help<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
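A short sketch of the workaround described above: load the model from the community checkpoint and borrow the tokenizer from the base T5 checkpoint. The `summarize:` prefix follows the usual T5 convention; whether this particular checkpoint expects it is an assumption here.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# The community checkpoint bundles no tokenizer files, so take the tokenizer from t5-base.
tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("sshleifer/t5-base-cnn")

text = "summarize: The quick brown fox jumped over the lazy dog. " * 10  # dummy input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(inputs.input_ids, num_beams=4, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```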
transformers
8,698
closed
CSV/JSON file format for examples/token-classification/run_ner.py
## Environment info - `transformers` version: 3.5.0 - Platform: Linux-3.10.0-1160.6.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core - Python version: 3.6.8 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): 2.3.1 (False) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help @mfuntowicz, @stefan-it ## Information Model I am using (Bert, XLNet ...): XLM-R The problem arises when using: * [x] the official example scripts: (give details below) The tasks I am working on is: * [x] my own task or dataset: (give details below) https://github.com/huggingface/transformers/tree/master/examples/token-classification ``` python run_ner.py \ --model_name_or_path bert-base-uncased \ --train_file path_to_train_file \ --validation_file path_to_validation_file \ --output_dir /tmp/test-ner \ --do_train \ --do_eval ``` I am trying to perform NER on a custom dataset. It's not clear what the format of `path_to_train_file` and `path_to_validation_file` should be. From the code, it seems that the file format should be CSV or JSON. Can you please give more details on this so that I can format my dataset accordingly? Thanks.
11-21-2020 01:56:50
11-21-2020 01:56:50
Hi @ganeshjawahar , please have a look at the `run_NER_old.py` script! It should handle custom files 🤗 <|||||>Usage and more examples are documented here: https://github.com/huggingface/transformers/tree/master/examples/token-classification#old-version-of-the-script<|||||>Thanks for the quick response. I'm able to make use of `run_ner_old.py` with my custom dataset. Is there a similar documentation to use `run_ner.py` with custom dataset? P.S.: `run_ner_old.py` loads all examples into RAM and that's a problem for me as my custom dataset is very large. I was thinking of getting around this issue by using `run_ner.py` which uses datasets library. <|||||>If you can provide a tiny example for csv or json format, that should be very helpful. 🤗<|||||>Ah, I see, an example for a json-based file format can be found here: https://github.com/huggingface/transformers/blob/master/tests/fixtures/tests_samples/conll/sample.json Another possibility would be, that you write a custom recipe with Hugging Face datasets library. Then you can run the `run_NER.py` script by passing the (local) path name of your recipe to the script. Just have a look at the CoNNL dataset/recipe: https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py You could usw it as a template and modify it for your needs 🤗 <|||||>I think the JSON sample should be in the [token-classification README](https://github.com/huggingface/transformers/blob/master/examples/token-classification/README.md) for people trying to use `run_ner.py` from local files. Would you also be willing to provide a CSV sample? So far, I have found through trial, error, and code deciphering that: - The CSV needs to start with column names (not respecting this causes `ValueError: External features info don't match the dataset`) - The column separator should be a comma (`,`) - Text containing commas should be in double quotes (like this `","`) to disambiguate columns - Literal double quotes should be escaped with `\` Right now, my CSV file looks like this: ``` token,label DC,M ##T,M ##N,M ##4,M as,O a,O m,O ##od,O ##ifier,O ... ``` I get the following error: ``` File "projects/github/transformers/examples/token-classification/run_ner.py", line 221, in main if isinstance(features[label_column_name].feature, ClassLabel): AttributeError: 'Value' object has no attribute 'feature' ``` Using the python debugger, I've found that `features[label_column_name] = Value(dtype='string', id=None)` but I don't know if this is expected behavior. I can only assume that it isn't, but I can't seem to figure out what else `features[label_column_name]` could or should be. I'm pretty much stuck, and knowing if the issue comes from the structure of my CSV would be very helpful. Furthermore, I've tried formatting my data as close as I could to the [JSON conll sample](https://github.com/huggingface/transformers/blob/master/tests/fixtures/tests_samples/conll/sample.json), but I get the following error: ``` json.decoder.JSONDecodeError: Extra data: line 2 column 1 ``` After a little bit of googling, as I suspected it turns out one cannot have multiple JSON objects in one file. So if the intended JSON format for `run_ner.py` requires one JSON object per sequence but JSON files can't contain more than one JSON object, how can we get `run_ner.py` to work with several sequences in JSON mode?<|||||>Exact same process/issue/errors as @gpiat. Would be very helpful if the format for the csv option for run_ner.py was explicitly defined in the readme. 
If there was a sample input for the csv option that is fully functional with the script it would be much more simple to modify our custom data to match the sample as opposed to writing a custom recipe.<|||||>Same problem as @gpiat with CSV. @stefan-it And it seems the old script is no longer available?<|||||>I believe I've solved the same problem as @gpiat , @millanbatra1234 and @AleksandrsBerdicevskis have had: Replace the `if isinstance(features[label_column_name].feature, ClassLabel):` in run_ner.py with `if hasattr(features[label_column_name], 'feature') and isinstance(features[label_column_name].feature, ClassLabel):`. I tried @gpiat's CSV format and that doesn't work. Instead, I used the JSON format, which looks like this: ``` {"tokens": ["APPLICATION", "and", "Affidavit", "for", "Search", "Warrant", "as", "to", "The", "Matter", "of", "the", "Search", "of", "9", "Granite", "Street", ",", "#", "5", "(", "Attachments", ":", "#", "1", "Affidavit", "of", "James", "Keczkemethy)(Belpedio", ",", "Lisa", ")", "(", "Entered", ":", "12/15/2020", ")"], "tags": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-MISC", "I-MISC", "I-MISC", "I-MISC", "I-MISC", "L-MISC", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"]} {"tokens": ["APPLICATION", "for", "Search", "Warrant", "by", "USA", "as", "to", "702", "-", "517", "-", "7282", "(", "KM", ",", "ilcd", ")", "(", "Entered", ":", "12/10/2020", ")"], "tags": ["O", "O", "O", "O", "O", "O", "O", "O", "B-MISC", "I-MISC", "I-MISC", "I-MISC", "L-MISC", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"]} {"tokens": ["APPLICATION", "AND", "AFFIDAVIT", "by", "USA", "as", "to", "4", "CELLULAR", "TELEPHONES", "SEIZED", "FROM", "THE", "FDC", "IN", "PHILADELPHIA", "AND", "CURRENTLY", "HELD", "BY", "THE", "FBI", "PHILADELPHIA", "DIVISION", "Re", ":", "Search", "Warrant", "Issued", ".", "(", "mac", ",", ")", "(", "Entered", ":", "12/09/2020", ")"], "tags": ["O", "O", "O", "O", "O", "O", "O", "B-MISC", "I-MISC", "I-MISC", "I-MISC", "I-MISC", "I-MISC", "I-MISC", "I-MISC", "I-MISC", "I-MISC", "I-MISC", "I-MISC", "I-MISC", "I-MISC", "I-MISC", "I-MISC", "L-MISC", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"]} ``` So, yes, you can have more than one JSON object in the file. Each JSON object goes on its own line. This is sometimes called JSONL or JSONLINES format. <|||||>@jeremybmerrill Thanks! Yes, JSON does work, I should have mentioned that (it actually does even without changing the code as you suggest). (With JSON, I run into another input problem (#9660), but I guess that's a different story.)<|||||>In my case the json format didn't work due to this issue [github.com/huggingface/datasets/issues/2181](https://github.com/huggingface/datasets/issues/2181). Pyarrow can't handle json if the line size is too big. So I had to split large lines into smaller ones.<|||||>JSON works, but CSV still does not work now<|||||>CSV file input does not work! I converted it into JSON, so It works now.
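Since the thread's conclusion is that the JSON-lines layout works where CSV does not, here is a rough converter sketch (not an official script): it assumes a whitespace-separated token/label file with blank lines between sentences, and writes the `{"tokens": [...], "tags": [...]}` lines shown above.

```python
import json

def conll_to_jsonl(conll_path, jsonl_path):
    """Convert token<SPACE/TAB>label lines (blank line = sentence break) to JSON lines."""
    with open(conll_path, encoding="utf-8") as src, open(jsonl_path, "w", encoding="utf-8") as dst:
        tokens, tags = [], []
        for line in src:
            line = line.strip()
            if not line:  # sentence boundary
                if tokens:
                    dst.write(json.dumps({"tokens": tokens, "tags": tags}) + "\n")
                    tokens, tags = [], []
                continue
            parts = line.split()
            tokens.append(parts[0])
            tags.append(parts[-1])
        if tokens:  # flush the last sentence
            dst.write(json.dumps({"tokens": tokens, "tags": tags}) + "\n")

# conll_to_jsonl("train.txt", "train.json")  # then pass train.json as --train_file
```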
transformers
8,697
closed
test
11-20-2020 22:28:10
11-20-2020 22:28:10
transformers
8,696
closed
gpt2 and t5 parallel modeling
# Model Parallelism for T5 and GPT2 Adds two new methods to `GPT2LMHead` and the `GPT2Model` classes to enable you to generate and fine-tune models using model parallelism. This feature is most applicable for `gpt2-large` and `gpt2-xl`. Minor modifications are made to the `TrainingArguments` and `Trainer` classes to avoid conflicting data parallelism behavior and related batch_size increases which would negate model parallelism. Note that nearly 64GB of GPU (4 Tesla v100s) are needed to fine-tune `gpt2-xl` @ 1024 tokens. It is critically important to provide users the ability to specify where to put the blocks of a model because the GPU sizes and numbers are likely to be very diverse. This is done with a dictionary called `device_map`. I am planning on providing some examples and guidelines for the p3, p2 and g3 AWS instances. Model parallelism has to be baked into the model class itself. Currently working on the T5 model. From my calculations the 11B model cannot fit on the largest p3 instance that I have access to (8 Tesla v100 GPUs). The 3B model can. The methods are: - `parallelize`, which will distribute the attention blocks of the model across several devices according to a device map - `deparallelize`, which will move the model back to cpu # Example ``` model = GPT2LMHeadModel.from_pretrained('gpt2-xl') device_map = {0: [0, 1, 2, 3, 4, 5, 6, 7, 8], 1: [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], 2: [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34], 3: [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]} model.parallelize(device_map) # Distributes the model's attention blocks across several devices model.deparallelize() # Puts the model back on cpu and calls torch.cuda.empty_cache() to liberate GPU memory ``` ## Reviewers @LysandreJik
11-20-2020 21:41:51
11-20-2020 21:41:51
Would it be a good idea to support a less painful way of writing a device_map? This is because as the developer experiments with the mapping, the current method is very inefficient to modify the layer maps. Perhaps there could be more than one way to do it? Instead of: ``` device_map = {0: [0, 1, 2, 3, 4, 5, 6, 7, 8], 1: [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], 2: [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34], 3: [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]} ``` e.g. some ideas for perhaps much simpler ways to create such a map: * remap the layers as follows: ``` device_map = { devices: [0, 3, 4, 5], layer_split: [8, 12, 12, 12], } ``` * a simple string: ``` device_map = { devices: [0, 1, 2, 3], layer_map: "1-10, 9-21, 22-34, 35-47", } ``` * a simple string using slice notation ``` device_map = { devices: [0, 1, 2, 3], layer_slice: "1:10, 9:21, 22:34, 35:47", } ``` probably several ways can be supported and a wrapper expand them into the explicit version used now based on the keys of the map in the argument. in either case, changing the map is much easier then... <|||||>Not a bad idea. Adding to that: create a mapping utility that's been tested for larger model types like `device_map = get_device_map(machine = 4, model = "gpt2-xl")`. The first device should have fewer layers because it has the embedding and head. This is what I was using for testing: ``` def get_device_map(machine: str, model_name: str) -> dict: """Returns a dictionary optimized for distributing a model across several devices in a model parallel manner.""" if machine in ["TeslaV100x4", "p3.8xlarge", 4]: device_dict = { "gpt2-xl": { 0: [0, 1, 2, 3, 4, 5, 6, 7, 8], 1: [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], 2: [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34], 3: [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47], }, "t5-large": { 0: [0, 1, 2], 1: [3, 4, 5, 6, 7, 8, 9], 2: [10, 11, 12, 13, 14, 15, 16], 3: [17, 18, 19, 20, 21, 22, 23], }, "t5-3b": { 0: [0, 1, 2], 1: [3, 4, 5, 6, 7, 8, 9], 2: [10, 11, 12, 13, 14, 15, 16], 3: [17, 18, 19, 20, 21, 22, 23], }, "t5-11b": { 0: [0, 1, 2], 1: [3, 4, 5, 6, 7, 8, 9], 2: [10, 11, 12, 13, 14, 15, 16], 3: [17, 18, 19, 20, 21, 22, 23], }, } return device_dict[model_name] ```<|||||>This is definitely a goodness to have in the library as well as it will save the developer start up time! I'd even extend it to a specific card size in the argument to `get_device_map`, since the map would be different depending on the size of the cards. But this won't work for cards of different sizes, e.g. at the moment I have 1x 22GB + 1x 8GB cards. But perhaps this is an odd case and most serious setups have identical cards. I don't know. <|||||>@stas00 Yeah, I think you're right. Could do something simple like `device_map` dictionary should be ranges like this: ``` device_map = {0: range(0, 10), 1: range(11, 24), ...} ``` Simpler than creating a list.<|||||>well, it's the same just using python to save on typing ;) this is still awkward a bit as you have to count ;) here there is less counting: I want you to use devices `[0,1,2,3]` and slice the layers as `[8, 12, 12, 12]` :) That's why I'm suggesting to support more than one way.<|||||>But most likely any of these custom ways can be easily delegated to a helper util, so the end result is the `device_map` as you implemented it. e..g: ``` device_map=device_map_make_by_partition([0,1,2,3], [8, 12, 12, 12]) device_map=device_map_make_by_slice([0,1,2,3], "1:10, 9:21, 22:34, 35:47") ```
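A small sketch of the kind of wrapper discussed above (the function name `device_map_make_by_partition` is just the one proposed in the comment, not an existing API): it expands a list of devices plus per-device layer counts into the explicit dict that `parallelize()` takes.

```python
def device_map_make_by_partition(devices, layers_per_device):
    """Expand e.g. ([0, 1, 2, 3], [9, 13, 13, 13]) into {0: [0..8], 1: [9..21], ...}."""
    device_map, start = {}, 0
    for device, n_layers in zip(devices, layers_per_device):
        device_map[device] = list(range(start, start + n_layers))
        start += n_layers
    return device_map

# Reproduces the gpt2-xl map from the PR description (48 blocks, fewer on device 0,
# which also holds the embeddings and head):
device_map = device_map_make_by_partition([0, 1, 2, 3], [9, 13, 13, 13])
```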
transformers
8,695
closed
Update README.md to fix typo
Fix typo on line 45 # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
11-20-2020 21:27:43
11-20-2020 21:27:43
@patil-suraj this pull request can be closed; the typo has already been fixed. <|||||>Thanks for letting me know.
transformers
8,694
closed
[Generate Test] fix flaky ci
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> In this PR: #8686 , I forgot to change the test accordingly -> this caused CI to be flaky. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
11-20-2020 20:40:03
11-20-2020 20:40:03
transformers
8,693
closed
update tensorflow to functional version
## What does this PR do? Related to #7333 notebooks/02-transformers.ipynb has you install an unsupported version of tensorflow. Fixes # N/A ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ x ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @patrickvonplaten
11-20-2020 20:22:45
11-20-2020 20:22:45
This was solved by https://github.com/huggingface/transformers/pull/8616 Thank you for your contribution!
transformers
8,692
closed
issues with seq length with inference code for classification
## Environment info - `transformers` version: 3.5.1 - Platform: Google Colab - Python version: 3.7 - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik , @mfuntowicz , @VictorSanh ## Information Model I am using: BERT The problem arises when using: * [X] the official example scripts: (give details below) Modified the official scripts slightly to change the length of the input sequence. ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-mrpc") model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc", return_dict=True) classes = ["not paraphrase", "is paraphrase"] sequence_0 = "The company HuggingFace is based in New York City" * 100 sequence_1 = "Apples are especially bad for your health" sequence_2 = "HuggingFace's headquarters are situated in Manhattan" paraphrase = tokenizer(sequence_0, sequence_2, return_tensors="pt") not_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors="pt") paraphrase_classification_logits = model(**paraphrase).logits not_paraphrase_classification_logits = model(**not_paraphrase).logits paraphrase_results = torch.softmax(paraphrase_classification_logits, dim=1).tolist()[0] not_paraphrase_results = torch.softmax(not_paraphrase_classification_logits, dim=1).tolist()[0] # Should be paraphrase for i in range(len(classes)): print(f"{classes[i]}: {int(round(paraphrase_results[i] * 100))}%") # Should not be paraphrase for i in range(len(classes)): print(f"{classes[i]}: {int(round(not_paraphrase_results[i] * 100))}%")``` ``` The tasks I am working on is: * [X ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. If you run the code above, you run into the RunTime Error Error: ```Token indices sequence length is longer than the specified maximum sequence length for this model (1313 > 512). Running this sequence through the model will result in indexing errors --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-3-f386a657dfdb> in <module>() 9 paraphrase = tokenizer(sequence_0, sequence_2, return_tensors="pt") 10 not_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors="pt") ---> 11 paraphrase_classification_logits = model(**paraphrase).logits 12 not_paraphrase_classification_logits = model(**not_paraphrase).logits 13 paraphrase_results = torch.softmax(paraphrase_classification_logits, dim=1).tolist()[0] 5 frames /usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds) 199 token_type_embeddings = self.token_type_embeddings(token_type_ids) 200 --> 201 embeddings = inputs_embeds + position_embeddings + token_type_embeddings 202 embeddings = self.LayerNorm(embeddings) 203 embeddings = self.dropout(embeddings) RuntimeError: The size of tensor a (1313) must match the size of tensor b (512) at non-singleton dimension 1```
11-20-2020 19:33:32
11-20-2020 19:33:32
There is an issue here because your sequence is now too long for your model. The model only supports sequences of 512 tokens or fewer, but this code:
```py
sequence_0 = "The company HuggingFace is based in New York City" * 100
sequence_1 = "Apples are especially bad for your health"
sequence_2 = "HuggingFace's headquarters are situated in Manhattan"

paraphrase = tokenizer(sequence_0, sequence_2, return_tensors="pt")
not_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors="pt")
```
creates tensors of length `1310` and `1313`, which is too long for your model. You should enable the truncation parameter on your tokenizer to ensure that the length is correct:
```py
paraphrase = tokenizer(sequence_0, sequence_2, return_tensors="pt", truncation=True)
not_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors="pt", truncation=True)
```
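As a quick sanity check — a minimal sketch, reusing the tokenizer and inputs from the snippet above — you can compare the tokenized length against the model's limit before running the forward pass:
```py
# after tokenizing with truncation=True
print(tokenizer.model_max_length)          # 512 for this checkpoint
print(paraphrase["input_ids"].shape[-1])   # now capped at 512 instead of 1313
assert paraphrase["input_ids"].shape[-1] <= tokenizer.model_max_length
```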
transformers
8,691
closed
Pegasus example not working
@patrickvonplaten Hi, I am trying to run on the pegasus example on Colab. "!pip install git+https://github.com/huggingface/transformers.git !pip install sentencepiece from transformers import PegasusForConditionalGeneration, PegasusTokenizer import torch src_text = [ """ PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow.""" ] model_name = 'google/pegasus-xsum' torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' tokenizer = PegasusTokenizer.from_pretrained(model_name) model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device) batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device) translated = model.generate(**batch) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) assert tgt_text[0] == "California's largest electricity provider has turned off power to hundreds of thousands of customers. Collecting git+https://github.com/huggingface/transformers.git Cloning https://github.com/huggingface/transformers.git to /tmp/pip-req-build-gvb7jrr9 Running command git clone -q https://github.com/huggingface/transformers.git /tmp/pip-req-build-gvb7jrr9 Installing build dependencies ... done Getting requirements to build wheel ... done Preparing wheel metadata ... done Requirement already satisfied (use --upgrade to upgrade): transformers==4.0.0rc1 from git+https://github.com/huggingface/transformers.git in /usr/local/lib/python3.6/dist-packages Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from transformers==4.0.0rc1) (2.23.0) Requirement already satisfied: tokenizers==0.9.4 in /usr/local/lib/python3.6/dist-packages (from transformers==4.0.0rc1) (0.9.4) Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.6/dist-packages (from transformers==4.0.0rc1) (4.41.1) Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from transformers==4.0.0rc1) (1.18.5) Requirement already satisfied: filelock in /usr/local/lib/python3.6/dist-packages (from transformers==4.0.0rc1) (3.0.12) Requirement already satisfied: dataclasses; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from transformers==4.0.0rc1) (0.7) Requirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from transformers==4.0.0rc1) (20.4) Requirement already satisfied: sacremoses in /usr/local/lib/python3.6/dist-packages (from transformers==4.0.0rc1) (0.0.43) Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.6/dist-packages (from transformers==4.0.0rc1) (2019.12.20) Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->transformers==4.0.0rc1) (3.0.4) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->transformers==4.0.0rc1) (2020.6.20) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->transformers==4.0.0rc1) (1.24.3) Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->transformers==4.0.0rc1) (2.10) Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from packaging->transformers==4.0.0rc1) (1.15.0) Requirement already 
satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->transformers==4.0.0rc1) (2.4.7) Requirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers==4.0.0rc1) (7.1.2) Requirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers==4.0.0rc1) (0.17.0) Building wheels for collected packages: transformers Building wheel for transformers (PEP 517) ... done Created wheel for transformers: filename=transformers-4.0.0rc1-cp36-none-any.whl size=1349475 sha256=8f08b76fc03d4cd0c1532e37462b5f1682fc58ad7f92ed533533b276fc4ecaf5 Stored in directory: /tmp/pip-ephem-wheel-cache-8gbsru65/wheels/33/eb/3b/4bf5dd835e865e472d4fc0754f35ac0edb08fe852e8f21655f Successfully built transformers Requirement already satisfied: sentencepiece in /usr/local/lib/python3.6/dist-packages (0.1.94) --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-1-ad40feda49b0> in <module>() 12 tokenizer = PegasusTokenizer.from_pretrained(model_name) 13 model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device) ---> 14 batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device) 15 translated = model.generate(**batch) 16 tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) 2 frames /usr/local/lib/python3.6/dist-packages/transformers/file_utils.py in wrapper(*args, **kwargs) 1236 def wrapper(*args, **kwargs): 1237 if is_torch_available(): -> 1238 return func(*args, **kwargs) 1239 else: 1240 raise ImportError(f"Method `{func.__name__}` requires PyTorch.") /usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in to(self, device) 777 modification. 778 """ --> 779 self.data = {k: v.to(device) for k, v in self.data.items()} 780 return self 781 /usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in <dictcomp>(.0) 777 modification. 778 """ --> 779 self.data = {k: v.to(device) for k, v in self.data.items()} 780 return self 781 AttributeError: 'list' object has no attribute 'to' " Please help. Thanks, Akila
11-20-2020 18:45:32
11-20-2020 18:45:32
@greenstars having the same issue - How did you resolve this?<|||||>> @greenstars having the same issue - How did you resolve this?

@EliaKunz I changed "!pip install git+https://github.com/huggingface/transformers.git" to "!pip install transformers". <|||||>Thx! Was on datalore with the latest transformers 4 - downgraded to 3.5 and everything is working now.<|||||>I had the same issue with the latest transformers 4.1 (pip installed). It's fixed after adding the return_tensors parameter.

From

`batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device)`

to

`batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest', return_tensors='pt').to(torch_device)`

did the job for me. <|||||>On running `batch = tokenizer(src_text, truncation=True, padding='longest', return_tensors="pt").to(device)` I am getting the error
```
--------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-49-e6e55e18a32c> in <module>() ----> 1 batch = tokenizer(src_text, truncation=True, padding='longest', return_tensors="pt").to(device) 2 translated = model.generate(**batch) 3 tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) TypeError: 'NoneType' object is not callable
```
and on running `batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest', return_tensors='pt').to(torch_device)` I am getting the error
```
--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-50-b7183fa2a37c> in <module>() ----> 1 batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest', return_tensors='pt').to(torch_device) 2 translated = model.generate(**batch) 3 tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) AttributeError: 'NoneType' object has no attribute 'prepare_seq2seq_batch'
```
Any help would be greatly appreciated. <|||||>@YatinKapoor your tokenizer seems to be `None`<|||||>Need to replace PegasusTokenizer with AutoTokenizer:
```
from transformers import PegasusForConditionalGeneration, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_name)
```
<|||||>@maifeng thanks! AutoTokenizer did the job for me! <|||||>> I had the same issue with the latest transformers 4.1 (pip installed). It's fixed after adding the return_tensors parameter.
>
> From
>
> `batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device)`
>
> to
>
> `batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest', return_tensors='pt').to(torch_device)`
>
> did the job for me.

Worked for me
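Pulling the fixes from this thread together, a minimal end-to-end sketch (assuming a recent `transformers` release with `sentencepiece` installed; the input text is shortened from the original issue):
```python
import torch
from transformers import AutoTokenizer, PegasusForConditionalGeneration

model_name = "google/pegasus-xsum"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(device)

src_text = ["PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions."]

# return_tensors="pt" is the key fix: without it the tokenizer returns plain Python lists,
# which have no .to(device) method
batch = tokenizer(src_text, truncation=True, padding="longest", return_tensors="pt").to(device)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```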
transformers
8,690
closed
connection issue
Hi, I am running seq2seq_trainer on TPUs and I keep getting this connection issue. Could you please have a look? Since this is on TPUs, it is hard for me to debug. Thanks, Best, Rabeeh

2389961.mean (11/20/2020 05:24:09 PM) (Detached) local_files_only=local_files_only, File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/file_utils.py", line 955, in cached_path local_files_only=local_files_only, File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/file_utils.py", line 1125, in get_from_cache "Connection error, and we cannot find the requested files in the cached path." ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. Traceback (most recent call last): File "/home/rabeeh//internship/seq2seq/xla_spawn.py", line 71, in <module> main() XLA label: %copy.32724.remat = f32[80,12,128,128]{3,2,1,0:T(8,128)} copy(f32[80,12,128,128]{2,3,1,0:T(8,128)} %bitcast.576) Allocation type: HLO temp ========================== 19. Size: 60.00M Shape: f32[80,12,128,128]{3,2,1,0:T(8,128)} Unpadded size: 60.00M XLA label: %copy.32711.remat = f32[80,12,128,128]{3,2,1,0:T(8,128)} copy(f32[80,12,128,128]{2,3,1,0:T(8,128) 0%| | 2/18060 [08:12<1234:22:09, 246.08s/it]Traceback (most recent call last): File "/home/rabeeh//internship/seq2seq/xla_spawn.py", line 71, in <module> main() File "/home/rabeeh//internship/seq2seq/xla_spawn.py", line 67, in main xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 395, in spawn start_method=start_method) File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 157, in start_processes while not context.join(): File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 112, in join (error_index, exitcode)
11-20-2020 17:40:29
11-20-2020 17:40:29
Having a similar issue while running Multi class classification model<|||||>@patrickvonplaten @sumyuck @sgugger <|||||>Hi I am constantly getting this erorr, looks like a bug to me since sometimes it appears sometimes not, could you please help me, this is expensive experiments I am trying on TPUs and I appreciate your help to fix it, it just many times fails due to this error getting this erorr Exception in device=TPU:0: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. el/0 I1124 07:19:52.663760 424494 main shadow.py:87 > Traceback (most recent call last): File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn fn(gindex, *args) File "/workdir/seq2seq/finetune_t5_trainer.py", line 230, in _mp_fn main() File "/workdir/seq2seq/finetune_t5_trainer.py", line 71, in main cache_dir=model_args.cache_dir, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/configuration_utils.py", line 347, in from_pretrained config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/configuration_utils.py", line 388, in get_config_dict local_files_only=local_files_only, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/file_utils.py", line 955, in cached_path local_files_only=local_files_only, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/file_utils.py", line 1125, in get_from_cache "Connection error, and we cannot find the requested files in the cached path."<|||||>@sumyuck<|||||>@thomwolf <|||||>this is with transformer 3.5.1, pytorch 1.6, on TPU v3-8, and I am using xla_spawn to launch the jobs, looks like a general issue with caching part. <|||||>Same for me. Getting this error while trying to execute following line: tokenizer = LxmertTokenizer.from_pretrained('unc-nlp/lxmert-base-uncased') File "/Users/xxx/anaconda3/envs/test/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1629, in from_pretrained local_files_only=local_files_only, File "/Users/xxx/anaconda3/envs/test/lib/python3.7/site-packages/transformers/file_utils.py", line 955, in cached_path local_files_only=local_files_only, File "/Users/xxx/anaconda3/envs/test/lib/python3.7/site-packages/transformers/file_utils.py", line 1125, in get_from_cache "Connection error, and we cannot find the requested files in the cached path." ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. <|||||>to me this is not a connection issue. i do have connection but an issue in caching mechanism. On Wed, Nov 25, 2020, 2:33 AM Alkesh <[email protected]> wrote: > Same for me. 
Getting this error while trying to execute following line: > tokenizer = LxmertTokenizer.from_pretrained('unc-nlp/lxmert-base-uncased') > > File > "/Users/xxx/anaconda3/envs/test/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", > line 1629, in from_pretrained > local_files_only=local_files_only, > File > "/Users/xxx/anaconda3/envs/test/lib/python3.7/site-packages/transformers/file_utils.py", > line 955, in cached_path > local_files_only=local_files_only, > File > "/Users/xxx/anaconda3/envs/test/lib/python3.7/site-packages/transformers/file_utils.py", > line 1125, in get_from_cache > "Connection error, and we cannot find the requested files in the cached > path." > ValueError: Connection error, and we cannot find the requested files in > the cached path. Please try again or make sure your Internet connection is > on. > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/8690#issuecomment-733405868>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ABP4ZCGOHVMHGA33EGSQ6UTSRRNGTANCNFSM4T5CBSUA> > . > <|||||>I am having the same issue too. I am pointing to the cache directory where pytorch is saving the models: `cache_dir = '/home/me/.cache/torch/transformers/' modelpath = "bert-base-uncased" model = AutoModel.from_pretrained(modelpath, cache_dir=cache_dir) tokenizer = AutoTokenizer.from_pretrained(modelpath, cache_dir=cache_dir) ` And I am getting a connection error. pytorch: 1.7.0, transformers: 3.5.1.<|||||>Working on a fix, hopefully fixed for good today. Meanwhile as a workaround please retry a couple minutes later should do the trick<|||||> I deleted all cache, redownloaded all modes and ran again. It seems to be working as of now. <|||||>Scaling of connectivity for model hosting should be way improved now. Please comment here if you still experience connectivity issues from now on. Thanks!<|||||>I am still getting this error with transformers version - 3.5.1 and torch - 1.7.0 on python 3.6.9. Please check. I have tried deleting all cache, installing transformers using pip and source code both. But still getting the same issue again and again.<|||||>@AshishDuhan Are you loading a model in particular? Do you have a code snippet that consistently fails for you? <|||||>_import torch from transformers import PegasusForConditionalGeneration, PegasusTokenizer src_text = ["""<TEXT-HERE>"""] model_name='google/pegasus-cnn_dailymail' torch_device='cuda' if torch.cuda.is_available() else 'cpu' tokenizer=PegasusTokenizer.from_pretrained(model_name) model=PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device) batch=tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device) translated=model.generate(**batch) tgt_text=tokenizer.batch_decode(translated, skip_special_tokens=True) print('Summary:', tgt_text[0])_ **This is one of the models I am trying to load. Although I have tried other models too and nothing works. 
Even the basic command fail with following error:** **python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"** Traceback (most recent call last): File "<string>", line 1, in <module> File "/opt/app/jupyter/environments/env_summarization/lib/python3.6/site-packages/transformers/pipelines.py", line 2828, in pipeline framework = framework or get_framework(model) File "/opt/app/jupyter/environments/env_summarization/lib/python3.6/site-packages/transformers/pipelines.py", line 106, in get_framework model = AutoModel.from_pretrained(model, revision=revision) File "/opt/app/jupyter/environments/env_summarization/lib/python3.6/site-packages/transformers/modeling_auto.py", line 636, in from_pretrained pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs File "/opt/app/jupyter/environments/env_summarization/lib/python3.6/site-packages/transformers/configuration_auto.py", line 333, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/opt/app/jupyter/environments/env_summarization/lib/python3.6/site-packages/transformers/configuration_utils.py", line 388, in get_config_dict local_files_only=local_files_only, File "/opt/app/jupyter/environments/env_summarization/lib/python3.6/site-packages/transformers/file_utils.py", line 955, in cached_path local_files_only=local_files_only, File "/opt/app/jupyter/environments/env_summarization/lib/python3.6/site-packages/transformers/file_utils.py", line 1125, in get_from_cache "Connection error, and we cannot find the requested files in the cached path." ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.<|||||>Our connectivity has been good these past 24 hours so this might be a different (local) issue, @AshishDuhan. Are you behind a proxy by any chance? Does `curl -i https://huggingface.co/google/pegasus-cnn_dailymail/resolve/main/config.json` work from your machine? Can you try what you're doing from a machine in the cloud, like a Google Colab?<|||||>I am facing the same issue still - Traceback (most recent call last): File "Untitled.py", line 59, in <module> tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT") File "/project/6001557/akallada/digipath/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 310, in from_pretrained config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) File "/project/6001557/akallada/digipath/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 341, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/project/6001557/akallada/digipath/lib/python3.7/site-packages/transformers/configuration_utils.py", line 386, in get_config_dict local_files_only=local_files_only, File "/project/6001557/akallada/digipath/lib/python3.7/site-packages/transformers/file_utils.py", line 1007, in cached_path local_files_only=local_files_only, File "/project/6001557/akallada/digipath/lib/python3.7/site-packages/transformers/file_utils.py", line 1177, in get_from_cache "Connection error, and we cannot find the requested files in the cached path." ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. <|||||>I'm having the same connection issue. 
I've tried with and without passing my proxies into the BertModel --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-19-c8b8c602a810> in <module> 1 from transformers import BertTokenizer, BertModel ----> 2 model = BertModel.from_pretrained("bert-base-uncased", **proxies) ~/opt/anaconda3/envs/milglue/lib/python3.8/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 865 if not isinstance(config, PretrainedConfig): 866 config_path = config if config is not None else pretrained_model_name_or_path --> 867 config, model_kwargs = cls.config_class.from_pretrained( 868 config_path, 869 *model_args, ~/opt/anaconda3/envs/milglue/lib/python3.8/site-packages/transformers/configuration_utils.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 345 346 """ --> 347 config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) 348 return cls.from_dict(config_dict, **kwargs) 349 ~/opt/anaconda3/envs/milglue/lib/python3.8/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 380 try: 381 # Load from URL or cache if already cached --> 382 resolved_config_file = cached_path( 383 config_file, 384 cache_dir=cache_dir, ~/opt/anaconda3/envs/milglue/lib/python3.8/site-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only) 946 if is_remote_url(url_or_filename): 947 # URL, so get it from the cache (downloading if necessary) --> 948 output_path = get_from_cache( 949 url_or_filename, 950 cache_dir=cache_dir, ~/opt/anaconda3/envs/milglue/lib/python3.8/site-packages/transformers/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only) 1122 ) 1123 else: -> 1124 raise ValueError( 1125 "Connection error, and we cannot find the requested files in the cached path." 1126 " Please try again or make sure your Internet connection is on." ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.<|||||>Hard to say without seeing your full networking environment. If you try to `curl -I` the URLs that you get on the arrow icons next to files in e.g. https://huggingface.co/bert-base-uncased/tree/main (or equivalent page for the model you try to download), what happens?<|||||>it happened to me too , is there any fix on that ? <|||||>is it transient or permanent (i.e. if you relaunch the command does it happen again)? You need to give us some more details if we want to help you troubleshoot.<|||||>Hi I am still getting this issue. see blow. I am using transformer 3.5.1, could you tell me if the issue is fixed in this version? if not which version of transformers library I should use? thanks @julien-c ``` 12/13/2020 13:56:10 - INFO - seq2seq.utils.utils - config is reset to the initial values. 
tp/0 I1213 06:00:34.060680 252396 main shadow.py:122 > Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/urllib3/connection.py", line 170, in _new_conn (self._dns_host, self.port), self.timeout, **extra_kw File "/usr/local/lib/python3.6/dist-packages/urllib3/util/connection.py", line 96, in create_connection raise err File "/usr/local/lib/python3.6/dist-packages/urllib3/util/connection.py", line 86, in create_connection sock.connect(sa) socket.timeout: timed out tp/0 I1213 06:00:34.060720 252396 main shadow.py:122 > tp/0 I1213 06:00:34.060759 252396 main shadow.py:122 > During handling of the above exception, another exception occurred: tp/0 I1213 06:00:34.060825 252396 main shadow.py:122 > tp/0 I1213 06:00:34.060866 252396 main shadow.py:122 > Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 706, in urlopen chunked=chunked, File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 382, in _make_request self._validate_conn(conn) File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 1010, in _validate_conn conn.connect() File "/usr/local/lib/python3.6/dist-packages/urllib3/connection.py", line 353, in connect conn = self._new_conn() File "/usr/local/lib/python3.6/dist-packages/urllib3/connection.py", line 177, in _new_conn % (self.host, self.timeout), urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x7f47db511e80>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)') tp/0 I1213 06:00:34.060908 252396 main shadow.py:122 > tp/0 I1213 06:00:34.060970 252396 main shadow.py:122 > During handling of the above exception, another exception occurred: tp/0 I1213 06:00:34.061113 252396 main shadow.py:122 > tp/0 I1213 06:00:34.061207 252396 main shadow.py:122 > Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/requests/adapters.py", line 449, in send timeout=timeout File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 756, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File "/usr/local/lib/python3.6/dist-packages/urllib3/util/retry.py", line 573, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/glue/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f47db511e80>, 'Connection to s3.amazonaws.com timed out. 
(connect timeout=10)')) tp/0 I1213 06:00:34.061293 252396 main shadow.py:122 > tp/0 I1213 06:00:34.061372 252396 main shadow.py:122 > During handling of the above exception, another exception occurred: tp/0 I1213 06:00:34.061421 252396 main shadow.py:122 > tp/0 I1213 06:00:34.061486 252396 main shadow.py:122 > Traceback (most recent call last): File "finetune_t5_trainer.py", line 361, in <module> main() File "finetune_t5_trainer.py", line 269, in main add_prefix=False if training_args.train_adapters else True) File "/workdir/seq2seq/data/tasks.py", line 70, in get_dataset dataset = self.load_dataset(split=split) File "/workdir/seq2seq/data/tasks.py", line 306, in load_dataset return datasets.load_dataset('glue', 'cola', split=split) File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 263, in prepare_module head_hf_s3(path, filename=name, dataset=dataset) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 200, in head_hf_s3 return http_head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset)) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 403, in http_head url, proxies=proxies, headers=headers, cookies=cookies, allow_redirects=allow_redirects, timeout=timeout File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 104, in head return request('head', url, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 542, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 655, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/adapters.py", line 504, in send raise ConnectTimeout(e, request=request) requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/glue/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f47db511e80>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)')) tp/0 I1213 06:00:35.237288 252396 main waiter_thread.cc:2652 [tp][0] EndSession for client id 1607864609277665002 (server tpe18:6297) ```<|||||>Looks like you are getting a timeout connecting to `s3.amazonaws.com`. There's not much we can do here.<|||||>Hi, I am facing the same issue, the code is running fine on colab but while running it on local system i am getting below error. 
from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") model = AutoModelForMaskedLM.from_pretrained("bert-base-cased") --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-9-4dd822b7db9b> in <module> 1 from transformers import AutoTokenizer, AutoModelForMaskedLM 2 ----> 3 tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") 4 5 model = AutoModelForMaskedLM.from_pretrained("bert-base-cased") ~\Anaconda3\envs\bert-test\lib\site-packages\transformers\models\auto\tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 308 config = kwargs.pop("config", None) 309 if not isinstance(config, PretrainedConfig): --> 310 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) 311 312 if "bert-base-japanese" in str(pretrained_model_name_or_path): ~\Anaconda3\envs\bert-test\lib\site-packages\transformers\models\auto\configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 339 {'foo': False} 340 """ --> 341 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) 342 343 if "model_type" in config_dict: ~\Anaconda3\envs\bert-test\lib\site-packages\transformers\configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 384 proxies=proxies, 385 resume_download=resume_download, --> 386 local_files_only=local_files_only, 387 ) 388 # Load config dict ~\Anaconda3\envs\bert-test\lib\site-packages\transformers\file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only) 1005 resume_download=resume_download, 1006 user_agent=user_agent, -> 1007 local_files_only=local_files_only, 1008 ) 1009 elif os.path.exists(url_or_filename): ~\Anaconda3\envs\bert-test\lib\site-packages\transformers\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only) 1175 else: 1176 raise ValueError( -> 1177 "Connection error, and we cannot find the requested files in the cached path." 1178 " Please try again or make sure your Internet connection is on." 1179 ) ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.<|||||>Can you try the debugging procedure mentioned in https://github.com/huggingface/transformers/issues/8690#issuecomment-737246999?<|||||>i am able to open 8690 in web browser. 
but the error still remains: qa = text.SimpleQA(INDEXDIR) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) ~\AppData\Local\Continuum\anaconda3\lib\site-packages\ktrain\text\qa\core.py in __init__(self, bert_squad_model, bert_emb_model) 67 try: ---> 68 self.model = TFAutoModelForQuestionAnswering.from_pretrained(self.model_name) 69 except: ~\AppData\Local\Continuum\anaconda3\lib\site-packages\transformers\modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 1204 config, kwargs = AutoConfig.from_pretrained( -> 1205 pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs 1206 ) ~\AppData\Local\Continuum\anaconda3\lib\site-packages\transformers\configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 332 """ --> 333 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) 334 ~\AppData\Local\Continuum\anaconda3\lib\site-packages\transformers\configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 387 resume_download=resume_download, --> 388 local_files_only=local_files_only, 389 ) ~\AppData\Local\Continuum\anaconda3\lib\site-packages\transformers\file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only) 954 user_agent=user_agent, --> 955 local_files_only=local_files_only, 956 ) ~\AppData\Local\Continuum\anaconda3\lib\site-packages\transformers\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only) 1124 raise ValueError( -> 1125 "Connection error, and we cannot find the requested files in the cached path." 1126 " Please try again or make sure your Internet connection is on." ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. 
During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) <ipython-input-72-18505d037255> in <module> 1 # ask questions (setting higher batch size can further speed up answer retrieval) ----> 2 qa = text.SimpleQA(INDEXDIR) 3 #answers = qa.ask('What is lotus sutra?', batch_size=8) ~\AppData\Local\Continuum\anaconda3\lib\site-packages\ktrain\text\qa\core.py in __init__(self, index_dir, bert_squad_model, bert_emb_model) 348 except: 349 raise ValueError('index_dir has not yet been created - please call SimpleQA.initialize_index("%s")' % (self.index_dir)) --> 350 super().__init__(bert_squad_model=bert_squad_model, bert_emb_model=bert_emb_model) 351 352 ~\AppData\Local\Continuum\anaconda3\lib\site-packages\ktrain\text\qa\core.py in __init__(self, bert_squad_model, bert_emb_model) 68 self.model = TFAutoModelForQuestionAnswering.from_pretrained(self.model_name) 69 except: ---> 70 self.model = TFAutoModelForQuestionAnswering.from_pretrained(self.model_name, from_pt=True) 71 self.tokenizer = AutoTokenizer.from_pretrained(self.model_name) 72 self.maxlen = 512 ~\AppData\Local\Continuum\anaconda3\lib\site-packages\transformers\modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 1203 if not isinstance(config, PretrainedConfig): 1204 config, kwargs = AutoConfig.from_pretrained( -> 1205 pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs 1206 ) 1207 ~\AppData\Local\Continuum\anaconda3\lib\site-packages\transformers\configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 331 {'foo': False} 332 """ --> 333 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) 334 335 if "model_type" in config_dict: ~\AppData\Local\Continuum\anaconda3\lib\site-packages\transformers\configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 386 proxies=proxies, 387 resume_download=resume_download, --> 388 local_files_only=local_files_only, 389 ) 390 # Load config dict ~\AppData\Local\Continuum\anaconda3\lib\site-packages\transformers\file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only) 953 resume_download=resume_download, 954 user_agent=user_agent, --> 955 local_files_only=local_files_only, 956 ) 957 elif os.path.exists(url_or_filename): ~\AppData\Local\Continuum\anaconda3\lib\site-packages\transformers\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only) 1123 else: 1124 raise ValueError( -> 1125 "Connection error, and we cannot find the requested files in the cached path." 1126 " Please try again or make sure your Internet connection is on." 1127 ) ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. 
<|||||>still get this error for transformer 4.1.1 with torch 1.7.1 error message here: ``` Traceback (most recent call last): File "run_distributed_eval.py", line 273, in <module> run_generate() File "run_distributed_eval.py", line 206, in run_generate **generate_kwargs, File "run_distributed_eval.py", line 88, in eval_data_dir tokenizer = AutoTokenizer.from_pretrained(model_name) File "/data/User/v5/acl/venv/lib/python3.6/site-packages/transformers/models/auto/tokenization_auto.py", line 378, in from_pretrained return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/data/User/v5/acl/venv/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1781, in from_pretrained use_auth_token=use_auth_token, File "/data/User/v5/acl/venv/lib/python3.6/site-packages/transformers/file_utils.py", line 1085, in cached_path local_files_only=local_files_only, File "/data/User/v5/acl/venv/lib/python3.6/site-packages/transformers/file_utils.py", line 1264, in get_from_cache "Connection error, and we cannot find the requested files in the cached path." ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. ```<|||||>try transformers 4.00 transformers:4.1 Same error #8690 (comment) This can be accessed and downloaded ``` Traceback (most recent call last): File "f:\software\anaconda\envs\py38\lib\runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "f:\software\anaconda\envs\py38\lib\runpy.py", line 87, in _run_code exec(code, run_globals) File "F:\Software\Anaconda\envs\py38\Scripts\rasa.exe\__main__.py", line 7, in <module> File "f:\software\anaconda\envs\py38\lib\site-packages\rasa\__main__.py", line 116, in main cmdline_arguments.func(cmdline_arguments) File "f:\software\anaconda\envs\py38\lib\site-packages\rasa\cli\train.py", line 58, in <lambda> train_parser.set_defaults(func=lambda args: train(args, can_exit=True)) File "f:\software\anaconda\envs\py38\lib\site-packages\rasa\cli\train.py", line 90, in train training_result = rasa.train( File "f:\software\anaconda\envs\py38\lib\site-packages\rasa\train.py", line 94, in train return rasa.utils.common.run_in_loop( File "f:\software\anaconda\envs\py38\lib\site-packages\rasa\utils\common.py", line 308, in run_in_loop result = loop.run_until_complete(f) File "f:\software\anaconda\envs\py38\lib\asyncio\base_events.py", line 616, in run_until_complete return future.result() File "f:\software\anaconda\envs\py38\lib\site-packages\rasa\train.py", line 163, in train_async return await _train_async_internal( File "f:\software\anaconda\envs\py38\lib\site-packages\rasa\train.py", line 342, in _train_async_internal await _do_training( File "f:\software\anaconda\envs\py38\lib\site-packages\rasa\train.py", line 388, in _do_training model_path = await _train_nlu_with_validated_data( File "f:\software\anaconda\envs\py38\lib\site-packages\rasa\train.py", line 811, in _train_nlu_with_validated_data await rasa.nlu.train( File "f:\software\anaconda\envs\py38\lib\site-packages\rasa\nlu\train.py", line 97, in train trainer = Trainer( File "f:\software\anaconda\envs\py38\lib\site-packages\rasa\nlu\model.py", line 163, in __init__ self.pipeline = self._build_pipeline(cfg, component_builder) File "f:\software\anaconda\envs\py38\lib\site-packages\rasa\nlu\model.py", line 174, in _build_pipeline component = component_builder.create_component(component_cfg, cfg) File 
"f:\software\anaconda\envs\py38\lib\site-packages\rasa\nlu\components.py", line 852, in create_component component = registry.create_component_by_config(component_config, cfg) File "f:\software\anaconda\envs\py38\lib\site-packages\rasa\nlu\registry.py", line 193, in create_component_by_config return component_class.create(component_config, config) File "f:\software\anaconda\envs\py38\lib\site-packages\rasa\nlu\components.py", line 525, in create return cls(component_config) File "f:\software\anaconda\envs\py38\lib\site-packages\rasa\nlu\utils\hugging_face\hf_transformers.py", line 65, in __init__ self._load_model_instance(skip_model_load) File "f:\software\anaconda\envs\py38\lib\site-packages\rasa\nlu\utils\hugging_face\hf_transformers.py", line 121, in _load_model_instance self.tokenizer = model_tokenizer_dict[self.model_name].from_pretrained( File "f:\software\anaconda\envs\py38\lib\site-packages\transformers\tokenization_utils_base.py", line 1774, in from_pretrained resolved_vocab_files[file_id] = cached_path( File "f:\software\anaconda\envs\py38\lib\site-packages\transformers\file_utils.py", line 1077, in cached_path output_path = get_from_cache( File "f:\software\anaconda\envs\py38\lib\site-packages\transformers\file_utils.py", line 1263, in get_from_cache raise ValueError( ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on ```<|||||>I also ran into this error while trying to download any huggingface model. Turns out for me the cause was that I had set an `export REQUESTS_CA_BUNDLE=path/to/some/certificate` in my .bash_profile, which I needed to get some poetry stuff working. Once I removed this line and restarted, the download was working again.<|||||>It appears to be an SSL/TLS certificate error as @robinderat alludes to, but there are several possible reasons. Here's how I've debugged this, hopefully it helps others although your root cause may be different. ## Debugging Original error, fetching model from `https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english`: ``` ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. ``` Check with `curl`: ``` $ curl -I https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/resolve/main/config.json curl: (60) SSL certificate problem: certificate is not yet valid More details here: https://curl.haxx.se/docs/sslcerts.html curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above. 
``` Checking with `requests`: ``` $ python -c "import requests; requests.get('https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/resolve/main/config.json')" Traceback (most recent call last): <snip> File "/usr/lib/python3.7/ssl.py", line 412, in wrap_socket session=session File "/usr/lib/python3.7/ssl.py", line 853, in _create self.do_handshake() File "/usr/lib/python3.7/ssl.py", line 1117, in do_handshake self._sslobj.do_handshake() ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate is not yet valid (_ssl.c:1056) ``` Disabling curl's certificate validation with `-k` flag works: ``` $ curl -k -I https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/resolve/main/config.json HTTP/1.1 200 OK ``` And now in Python, using `verify=False`: ``` $ python -c "import requests; r = requests.get('https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/resolve/main/config.json', verify=False); print(r)" /home/josh/source/examples/Machine Learning/Query Optimization/venv/lib/python3.7/site-packages/urllib3/connectionpool.py:1020: InsecureRequestWarning: Unverified HTTPS request is being made to host 'huggingface.co'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning, <Response [200]> ``` ## Resolution So the "problem" is in the certificate. Checking in a browser, the root certificate of `huggingface.co` expires 30 April, 2021 but is valid only from 30 January, 2020. Checking my server clock shows that it was out of date (27 January 20201) and critically, *before* the certificate is valid *from*, which makes sense that the root error was "certificate verify failed: certificate is not yet valid". Set the clock to the real time and check again: ``` $ sudo date -s "Feb 11 09:34:03 UTC 2021" $ python -c "import requests; r = requests.get('https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/resolve/main/config.json'); print(r)" <Response [200]> ``` I now suspect that this host in GCP, which was suspended for a while, did not automatically update it's local time causing this specific problem. ## Conclusion @julien-c I would only suggest at this point that making the root cause visible in the error coming out of `transformers` would be really helpful to more immediately see the problem. 🎉 <|||||>@joshdevins nice troubleshooting! The issue here is that on this line https://github.com/huggingface/transformers/blob/6710d1d5ef9dd7922cb688d0b56af1410604f412/src/transformers/file_utils.py#L1231 we catch `requests`' ConnectionError (if I'm not mistaken, triggered when you're offline) but `SSLError` (and `ProxyError` for that matter), which we wouldn't want to catch, inherit from ConnectionError. See `requests`'s exceptions at https://requests.readthedocs.io/en/master/_modules/requests/exceptions/ We could at least probably rethrow the exceptions in those cases.<|||||>see tentative fix over at https://github.com/huggingface/huggingface_hub/pull/14/commits/34b7b70d07ab1c9fc2f7da603d47cb344e256af6 @joshdevins let me know if this looks good<|||||>@julien-c Looks good. I was able to recreate the original problem and applying your patch makes the root cause error much more visible. Thanks! 👍 <|||||>just restart the system and then reconnect the internet ....will solve the issue..happy day<|||||> > just restart the system and will solve the issue..happy day Super bro... 
thanks a lot.. its working<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, Can anyone please tell me how you were able to resolve this issue? I am facing the connection error as below. ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.<|||||>i face the save error; 11.7s | 1 | /opt/conda/lib/python3.7/site-packages/papermill/iorw.py:50: FutureWarning: pyarrow.HadoopFileSystem is deprecated as of 2.0.0, please use pyarrow.fs.HadoopFileSystem instead. -- | -- | -- 11.7s | 2 | from pyarrow import HadoopFileSystem 40.1s | 3 | If you want to use your W&B account, go to Add-ons -> Secrets and provide your W&B access token. Use the Label name as wandb_api. 40.1s | 4 | Get your W&B access token from here: https://wandb.ai/authorize 63.7s | 5 | Traceback (most recent call last): 63.7s | 6 | File "<string>", line 1, in <module> 63.7s | 7 | File "/opt/conda/lib/python3.7/site-packages/papermill/execute.py", line 122, in execute_notebook 63.7s | 8 | raise_for_execution_errors(nb, output_path) 63.7s | 9 | File "/opt/conda/lib/python3.7/site-packages/papermill/execute.py", line 234, in raise_for_execution_errors 63.7s | 10 | raise error 63.7s | 11 | papermill.exceptions.PapermillExecutionError: 63.7s | 12 | --------------------------------------------------------------------------- 63.7s | 13 | Exception encountered at "In [2]": 63.7s | 14 | --------------------------------------------------------------------------- 63.7s | 15 | ValueError Traceback (most recent call last) 63.7s | 16 | /tmp/ipykernel_21/2060779141.py in <module> 63.7s | 17 | 16 "device": torch.device("cuda:0" if torch.cuda.is_available() else "cpu") 63.7s | 18 | 17 } 63.7s | 19 | ---> 18 CONFIG["tokenizer"] = AutoTokenizer.from_pretrained(CONFIG['model_name']) 63.7s | 20 | 19 def id_generator(size=12, chars=string.ascii_lowercase + string.digits): 63.7s | 21 | 20 return ''.join(random.SystemRandom().choice(chars) for _ in range(size)) 63.7s | 22 |   63.7s | 23 | /opt/conda/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 63.7s | 24 | 388 kwargs["_from_auto"] = True 63.7s | 25 | 389 if not isinstance(config, PretrainedConfig): 63.7s | 26 | --> 390 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) 63.7s | 27 | 391 63.7s | 28 | 392 use_fast = kwargs.pop("use_fast", True) 63.7s | 29 |   63.7s | 30 | /opt/conda/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 63.7s | 31 | 396 """ 63.7s | 32 | 397 kwargs["_from_auto"] = True 63.7s | 33 | --> 398 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) 63.7s | 34 | 399 if "model_type" in config_dict: 63.7s | 35 | 400 config_class = CONFIG_MAPPING[config_dict["model_type"]] 63.7s | 36 |   63.7s | 37 | /opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 63.7s | 38 | 464 local_files_only=local_files_only, 63.7s | 39 | 465 use_auth_token=use_auth_token, 63.7s | 40 | 
--> 466 user_agent=user_agent, 63.7s | 41 | 467 ) 63.7s | 42 | 468 # Load config dict 63.7s | 43 |   63.7s | 44 | /opt/conda/lib/python3.7/site-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, use_auth_token, local_files_only) 63.7s | 45 | 1171 user_agent=user_agent, 63.7s | 46 | 1172 use_auth_token=use_auth_token, 63.7s | 47 | -> 1173 local_files_only=local_files_only, 63.7s | 48 | 1174 ) 63.7s | 49 | 1175 elif os.path.exists(url_or_filename): 63.7s | 50 |   63.7s | 51 | /opt/conda/lib/python3.7/site-packages/transformers/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, use_auth_token, local_files_only) 63.7s | 52 | 1387 else: 63.7s | 53 | 1388 raise ValueError( 63.7s | 54 | -> 1389 "Connection error, and we cannot find the requested files in the cached path." 63.7s | 55 | 1390 " Please try again or make sure your Internet connection is on." 63.7s | 56 | 1391 ) 63.7s | 57 |   63.7s | 58 | ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. 63.7s | 59 |   66.0s | 60 | /opt/conda/lib/python3.7/site-packages/traitlets/traitlets.py:2567: FutureWarning: --Exporter.preprocessors=["remove_papermill_header.RemovePapermillHeader"] for containers is deprecated in traitlets 5.0. You can pass `--Exporter.preprocessors item` ... multiple times to add items to a list. 66.0s | 61 | FutureWarning, 66.0s | 62 | [NbConvertApp] Converting notebook __notebook__.ipynb to notebook 66.3s | 63 | [NbConvertApp] Writing 39941 bytes to __notebook__.ipynb 68.5s | 64 | /opt/conda/lib/python3.7/site-packages/traitlets/traitlets.py:2567: FutureWarning: --Exporter.preprocessors=["nbconvert.preprocessors.ExtractOutputPreprocessor"] for containers is deprecated in traitlets 5.0. You can pass `--Exporter.preprocessors item` ... multiple times to add items to a list. 68.5s | 65 | FutureWarning, 68.5s | 66 | [NbConvertApp] Converting notebook __notebook__.ipynb to html 69.2s | 67 | [NbConvertApp] Writing 355186 bytes to __results__.html <|||||>why don't you restart the system.<|||||>Hi, I'm trying to use a simple text classification pipeline, and whether I try to clone the model's repo or download it by importing the model, I receive this error. When cloning: ``` Cloning into 'distilbert-base-uncased-finetuned-sst-2-english'... fatal: unable to access 'https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/': OpenSSL SSL_connect: Connection was reset in connection to huggingface.co:443 ``` When importing: ``` ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. ``` I guess this error is triggered because of my location (I am in Iran). I also tried with and without a VPN and neither worked. Can there be any hope for me to download a transformer model?<|||||>FYI I was getting this error when training on multiple gpus with multi-processing maybe due to too many requests at the same time. 
I could flakily reproduce with: ``` from concurrent.futures import ThreadPoolExecutor from transformers import T5Tokenizer with ThreadPoolExecutor(max_workers=16) as executor: jobs = [] for _ in range(16): jobs.append(executor.submit(T5Tokenizer.from_pretrained, "t5-small")) _ = [(print(i), job.result()) for i, job in enumerate(jobs)] ``` The solution for me was to force offline mode: ``` T5Tokenizer.from_pretrained("t5-small", local_files_only=True) ```<|||||>> I deleted all cache, redownloaded all modes and ran again. It seems to be working as of now. How do you delete cache of GPT-2 model?<|||||>@danielbellhv you can pass `force_download=True` to `from_pretrained` which will override the cache and re-download the files. <|||||>> @danielbellhv > > you can pass `force_download=True` to `from_pretrained` which will override the cache and re-download the files. got this error `TypeError: from_pretrained() got an unexpected keyword argument 'force_download'`<|||||>I have encountered this error more than once. The solution can be various, e.g., sometimes I delete all my cached files, and sometimes I just delete some big files (model files), and also sometimes I just wait for it for several minutes then it works again without doing anything... I am really confused by this error. Personally, I think this error can be caused by many reasons. Hope a more detailed and specific error log could be provided in the future.<|||||>> I have encountered this error more than once. > > The solution can be various, e.g., sometimes I delete all my cached files, and sometimes I just delete some big files (model files), and also sometimes I just wait for it for several minutes then it works again without doing anything... > > I am really confused by this error. Personally, I think this error can be caused by many reasons. Hope a more detailed and specific error log could be provided in the future. My issue was because there were no internet connection by default. So I had to solve the internet problem and it worked for me.
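To summarize the workarounds that helped in this thread, here is a minimal sketch (the `t5-small` identifier is only an example; any model name works the same way):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Force a fresh download if a cached file is suspected to be stale or corrupted.
tokenizer = AutoTokenizer.from_pretrained("t5-small", force_download=True)

# Once the files are in the local cache, skip all network calls on later loads.
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small", local_files_only=True)
```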
transformers
8,689
closed
[Question] Pegasus tokenizer
@sshleifer - sorry to ping you here on this. Would be amazing if you find some time to explain the Pegasus tokenizer a bit. A couple of things I don't understand: - In the official Pegasus Tokenizer and from reading the paper it seems that exactly 2 mask tokens are necessary. See https://github.com/google-research/pegasus/blob/master/pegasus/ops/pretrain_parsing_ops.cc#L66 a) ID=2 seems to correspond to the sentence mask token, called `[MASK_1]` and b) ID=3 seems to correspond to the word mask token, called `[MASK_2]` => Why don't we have `[MASK_1]` and `[MASK_2]` tokens in the tokenizer's special tokens? I would actually add them at the id's 2 and 3 instead of having `unk_2` and `unk_3` there. Wdyt? - Why do we call the tokens unk_2 - unk_104 ? Why unk? And why aren't those part of the `special_tokens_map` - is this on purpose? - Why does Pegasus inherit from the Reformer Tokenizer -> I don't really see what they have in common... Would be awesome if you could take 10min to reply :-)
11-20-2020 15:03:31
11-20-2020 15:03:31
No problem at all!
+ The inheritance is just for the purpose of not duplicating code.
+ You can change to whatever unk you would like; I wasn't at all careful about this stuff since I didn't try to replicate/test pre-training, just fine-tuning and generation.
+ Your changes sound like obvious low-risk improvements.
+ I don't know whether standard mask-filling will work for integration-testing purposes, given the seq2seq pre-training objective.
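For readers following along, a sketch of how the proposed mask tokens could be registered through the generic special-tokens API (the `[MASK_1]`/`[MASK_2]` strings mirror the paper's naming and are illustrative; note this appends them to the vocabulary rather than pinning them to IDs 2 and 3 as proposed above):

```python
from transformers import PegasusTokenizer

tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")

# Register the sentence-mask and word-mask tokens discussed above.
tokenizer.add_special_tokens({"additional_special_tokens": ["[MASK_1]", "[MASK_2]"]})
print(tokenizer.additional_special_tokens)
```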
transformers
8,688
closed
Document adam betas TrainingArguments
# What does this PR do? #5592 introduced two new fields in `TrainingArguments` (`adam_beta1` and `adam_beta2`) without documenting them in the docstring. This PR fixes that.
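For reference, a minimal usage sketch of the two fields this PR documents (the values shown are the usual Adam defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output",
    adam_beta1=0.9,    # decay rate for the first-moment (momentum) estimates
    adam_beta2=0.999,  # decay rate for the second-moment estimates
)
print(args.adam_beta1, args.adam_beta2)
```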
11-20-2020 14:17:08
11-20-2020 14:17:08
transformers
8,687
closed
added bangla-bert-sentiment model card
Hi, I added a model card for the bangla-bert-sentiment model. Please check and merge it if possible. Thanks and regards.
11-20-2020 13:09:45
11-20-2020 13:09:45
transformers
8,686
closed
moved temperature warper before topP/topK warpers
# What does this PR do? Moves the `temperature` warper in `generation_utils.py` before `top_p` and `top_k` warper so that temperature affects sampling. This is how it used to be [before refactoring](https://github.com/huggingface/transformers/blob/v3.4.0/src/transformers/generation_utils.py#L571-L575) in `v.3.5.x`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? @patrickvonplaten
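A small sanity-check sketch of a sampling call that exercises all three warpers together (the model choice and hyperparameter values are illustrative):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The movie was", return_tensors="pt")
# After this change, the logits are rescaled by the temperature *before*
# top-k/top-p filtering, matching the pre-refactor behaviour linked above.
outputs = model.generate(
    **inputs, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, max_length=30
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```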
11-20-2020 12:53:34
11-20-2020 12:53:34
transformers
8,685
closed
Pegasus Xsum Returning Tokens Not In Source Text
I'm currently using `sshleifer/distill-pegasus-xsum-16-8` model to perform abstractive text summarization, I've found this particular model to be most useful for my desired application. However, when attempting to summarize on inputted source text, the output returns tokens returned are nowhere in the source text. I suspect Pegasus is returning tokens from the dataset that it was trained. That said, is finetuning needed? Should hyperparameter tweaking solve this? I wonder if PEGASUS + GAN could help teach the model to abstract from tokens in the input text? **_Here's an example_** **Source Text:** German shares suffered their weakest day since early June on Wednesday as the government agreed on an emergency lockdown to combat surging COVID-19 cases, with other European markets following suit on fears of more curbs around the continent. The German DAX sank as much as 5% before cutting some losses to close down 4.2% at its lowest in five months. The precise measures were still subject to negotiation, with sources saying the government had agreed to shut bars and restaurants from Nov. 2. The pan-European STOXX 600 index fell 3% in its sharpest one-day drop in five weeks. France's main index dropped 3.4% ahead of a televised address by President Emmanuel Macron at 8:00 pm when he is expected to issue stay-at-home orders. ```python # XSUM 16-8 model_name = "sshleifer/distill-pegasus-xsum-16-8" tokenizer = AutoTokenizer.from_pretrained(model_name) model_pegasus_distill_xsum_16_8 = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(torch_device) batch = tokenizer.prepare_seq2seq_batch([src_text], truncation=True, padding='longest').to(torch_device) translated = model_pegasus_distill_xsum_16_8.generate(**batch,num_beams=9, num_return_sequences=3, temperature=1, length_penalty=5, max_length = 256, min_length=0) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)` ``` **Output Text:** Shares in Europe have fallen sharply after the German government agreed to shut down bars and restaurants in a bid to curb the spread of carbon monoxide (CO) in the country's capital, Berlin. The pan-European STOXX 600 index fell 3% in its sharpest one-day drop in five weeks, while the FTSE 100 index closed down 3.7% in its sharpest one-day fall in five weeks. From the outputted text, one can see that nowhere in the input text was `carbon monoxide (CO)` or `Berlin` or `FTSE 100` mentioned.
11-20-2020 12:39:01
11-20-2020 12:39:01
Not an expert in summarization, but abstractive text summarization does not extract sequences/tokens from the initial text to produce a summary. That would be extractive text summarization. Abstractive text summarization instead can be done with rephrasing, as it seems to be the case here. On a second note, I believe the Pegasus checkpoints were trained on very long sequences, so I'm not entirely sure how it would deal with smaller sequences as the one you used here. On a third note, we try to keep the github issues reserved for issues/feature requests; you would have more luck asking this over on the [forum](https://discuss.huggingface.co). @patrickvonplaten or @patil-suraj can chime in if I'm wrong.<|||||>The hyperparameters seem very extreme to me... also `temperature=1` does not do anything and `length_penalty=5` is very high - also note that a length_penalty > 1 actually incentivizes longer sequences. @sshleifer 's model already has good hyper-parameters set as default values that you can see here: https://huggingface.co/sshleifer/distill-pegasus-xsum-16-8/blob/main/config.json If you just use those, *e.g.*: ```python translated = model_pegasus_distill_xsum_16_8.generate(**batch) ``` you get this summary: ``` European shares fell sharply on Wednesday as investors remained cautious ahead of a speech by France's president later in the day. ``` You can try it yourself here: https://huggingface.co/sshleifer/distill-pegasus-xsum-16-8?text=German+shares+suffered+their+weakest+day+since+early+June+on+Wednesday+as+the+government+agreed+on+an+emergency+lockdown+to+combat+surging+COVID-19+cases%2C+with+other+European+markets+following+suit+on+fears+of+more+curbs+around+the+continent.+The+German+DAX+sank+as+much+as+5%25+before+cutting+some+losses+to+close+down+4.2%25+at+its+lowest+in+five+months.+The+precise+measures+were+still+subject+to+negotiation%2C+with+sources+saying+the+government+had+agreed+to+shut+bars+and+restaurants+from+Nov.+2.+The+pan-European+STOXX+600+index+fell+3%25+in+its+sharpest+one-day+drop+in+five+weeks.+France%27s+main+index+dropped+3.4%25+ahead+of+a+televised+address+by+President+Emmanuel+Macron+at+8%3A00+pm+when+he+is+expected+to+issue+stay-at-home+orders. ``` My conclusion would be that it's just the hyperparameters that are badly chosen - not sure if @sshleifer has something to add...<|||||>- Lysandre is correct about abstractive vs. extractive. - Hallucination is a known issue with Neural Text Generation. It will happen more often if you generate summaries that are more than ~30% the length of the input document (which your length_penalty and max_length encourage). - `"sshleifer/distill-pegasus-xsum-16-4"` is better and faster. See Table 6 of the [best paper in AI history](https://arxiv.org/pdf/2010.13002.pdf) ;). - I would set `num_beams=4` if I cared at all about speed.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
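Putting the advice in this thread together — the smaller distilled checkpoint, the checkpoint's default generation settings, and a modest beam count — a sketch (the input text is a truncated placeholder):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "sshleifer/distill-pegasus-xsum-16-4"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

src_text = ["German shares suffered their weakest day since early June on Wednesday ..."]
batch = tokenizer(src_text, truncation=True, padding="longest", return_tensors="pt")

# Rely on the checkpoint's default generation settings; only cap the beam count for speed.
summary_ids = model.generate(**batch, num_beams=4)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```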
transformers
8,684
closed
Bert variants pretrained on Wikipedia are easily downloaded. Are the optimizers from the pretraining also available?
Is there a pretrained optimizer checkpoint available that can be loaded in the same way as a pretrained model? I noticed that though the pretrained models are available trained on Wikipedia (ex can load a pretrained distillbert using: <br/>`model = DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased'`) But, I cannot find the optimizer from the end of the training run on wikipedia. There is no `checkpoint['optimizer']` For my task, looking at optimizer internals (momentum, second moment, etc) from the end of training on wikipedia may be more useful to me than looking at optimizer internals from training on a downstream task (eg. GLUE). Does such a checkpoint exist (either for TF or Torch?) Environment info (not really relevant) <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Linux-4.15.0-123-generic-x86_64-with-debian-buster-sid - Python version: 3.7.7 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script: Yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik Trainer: @sgugger --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below)
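For clarity, a purely hypothetical sketch of what is being asked for — if an optimizer `state_dict` from the pretraining run were published alongside the weights, its internals could be inspected like any PyTorch Adam state (the file name is a placeholder; to my knowledge no such file is distributed):

```python
import torch

# Hypothetical checkpoint containing both model weights and optimizer state.
checkpoint = torch.load("pretraining_checkpoint.pt", map_location="cpu")
optimizer_state = checkpoint["optimizer"]

# Standard Adam state_dict layout: per-parameter moments plus param_groups.
for param_id, state in optimizer_state["state"].items():
    print(param_id, state["exp_avg"].shape, state["exp_avg_sq"].shape)
for group in optimizer_state["param_groups"]:
    print(group["lr"], group["betas"])
```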
11-20-2020 11:19:42
11-20-2020 11:19:42
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
8,683
closed
Using TorchScript with a GPT model is slower than the original one.
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version:2.1.1 - Platform:Linux version 4.15.0-76-generic (buildd@lcy01-amd64-029) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) - Python version:3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 - Using GPU in script?:No -GPU-tesla k80 ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao Blenderbot: @patrickvonplaten Bart: @patrickvonplaten Marian: @patrickvonplaten Pegasus: @patrickvonplaten mBART: @patrickvonplaten T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSMT: @stas00 examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information when i am using torchscipts to speed up the interference of my gpt2 model, I found it is slower than the origin one traced model 0.6959998607635498 origin model 0.3259282112121582 The problem arises when using: * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] my own task : gpt2 LM ## To reproduce Steps to reproduce the behavior: follow the code below https://github.com/lonelydancer/algorithm/blob/master/test.py <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> the traced model is faster.
11-20-2020 09:24:40
11-20-2020 09:24:40
Hi! TorchScript requires tracing the model beforehand, which slows down the first forward pass through the model. Could you print the timing of the iterations following the initial one?<|||||>Hi @LysandreJik, do you mean the first iteration of "loaded_model(input_ids)" will be slow? I already traced the model before that:

traced_model = torch.jit.trace(model, input_ids)
torch.jit.save(traced_model, 'trace_gpt2.pt')
loaded_model = torch.jit.load('trace_gpt2.pt').to('cuda')
loaded_model.eval()
# print(loaded_model)
start = time.time()
for i in range(100):
    with torch.no_grad():
        loaded_model(input_ids)
end = time.time()
print('traced model', (end - start))
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
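Following up on the timing suggestion above, a small helper sketch that discards warm-up iterations and synchronizes CUDA around the timed region before comparing the traced and eager models (names are illustrative):

```python
import time

import torch


def benchmark(model, input_ids, warmup=10, iters=100):
    """Time `iters` forward passes after `warmup` untimed iterations."""
    with torch.no_grad():
        # Warm-up absorbs one-time costs such as TorchScript optimization passes.
        for _ in range(warmup):
            model(input_ids)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            model(input_ids)
        torch.cuda.synchronize()
    return time.time() - start
```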
transformers
8,682
closed
create README.md
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
11-20-2020 08:44:38
11-20-2020 08:44:38
transformers
8,681
closed
Create README.txt
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
11-20-2020 08:43:27
11-20-2020 08:43:27
Can you please add metadata as in https://huggingface.co/docs#what-metadata-can-i-add-to-my-model-card? Thank you!<|||||>Closing this one as duplicate was already merged! For context please also read https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
transformers
8,680
closed
Result changes if we don't pass attention mask in TFDistilBert model on SQuADv1 dataset
## Environment info - `transformers` version: latest - Platform: Colab - Python version: - Tensorflow version (GPU?): 2.3.0 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help I used the below code for getting Model ``` from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased-distilled-squad') model = TFAutoModelForQuestionAnswering.from_pretrained('distilbert-base-uncased-distilled-squad', return_dict=True) ``` tokenizers: @mfuntowicz examples/seq2seq: @patil-suraj tensorflow: @jplu ## Information The model I am using **TFDistilbert** pretrained. The problem arises when using: * my own modified scripts: This is the Notebook [Colab](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/TFLiteExperimentsQALatest.ipynb) The tasks I am working on is: * an official SQUaD task ## To reproduce Steps to reproduce the behavior: [Colab](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/TFLiteExperimentsQALatest.ipynb) ## Expected behavior The performance should be the same because the Attention mask is the optional argument, if we don't pass it will create it internally. With Attention Mask: ``` OrderedDict([('exact', 77.71050141912077), ('f1', 85.5370981182013), ('total', 10570)]) ``` Without Attention Mask: ``` OrderedDict([('exact', 72.82876064333927), ('f1', 80.71521545953475), ('total', 10570)]) ```
11-20-2020 07:53:11
11-20-2020 07:53:11
`attention_mask` is an optional argument, but that doesn't mean that it should not be passed to the function. If `attention_mask` is `None` then it is initialized to attend all tokens (all 1's in the `attention_mask` tensor), which is incorrect if the input is a batch that includes padding tokens. => It's better to simply pass the `attention_mask` to the forward function.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
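A short sketch of the recommended pattern — let the tokenizer build the `attention_mask` and pass the whole encoding to the model (the question/context strings are made up for illustration):

```python
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

model_name = "distilbert-base-uncased-distilled-squad"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_name)

inputs = tokenizer(
    ["Who wrote Hamlet?", "Where is the Eiffel Tower?"],
    ["Hamlet was written by William Shakespeare.", "The Eiffel Tower is in Paris."],
    padding=True,
    truncation=True,
    return_tensors="tf",
)
# `inputs` already carries the attention_mask, so padded positions are ignored.
outputs = model(inputs)
```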
transformers
8,679
closed
gpt2 and t5 model parallelism with tests
# Model Parallelism for GPT2 and T5 Note: version compatible with v4 Adds two new methods to t5 and gpt2 models to enable you to generate and fine-tune models using model parallelism. This feature is most applicable for `gpt2-large` and `gpt2-xl`. Minor modifications are made to the `TrainingArguments` and `Trainer` classes to avoid conflicting data parallelism behavior and related batch_size increases which would negate model parallelism. Note that nearly 64GB of GPU (4 Tesla v100s) are needed to fine-tune `gpt2-xl` @ 1024 tokens. It is critically important to provide users the ability to specify where to put the blocks of a model because the GPU sizes and numbers are likely to be very diverse. This is done with a dictionary called `device_map`. I am planning on providing some examples and guidelines for the p3, p2 and g3 AWS instances. Model parallelism has to be baked into the model class itself. Currently working on the T5 model. From my calculations the 11B model cannot fit on the largest p3 instance that I have access to (8 Tesla v100 GPUs). The 3B model can. The methods are: - `parallelize`, which will distribute the attention blocks of the model across several devices according to a device map - `deparallelize`, which will move the model back to cpu # Example ``` model = GPT2LMHeadModel.from_pretrained('gpt2-xl') device_map = {0: [0, 1, 2, 3, 4, 5, 6, 7, 8], 1: [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], 2: [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34], 3: [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]} model.parallelize(device_map) # Distributes the model's attention blocks across several devices model.deparallelize() # Puts the model back on cpu and calls torch.cuda.empty_cache() to liberate GPU memory ``` ## Reviewers @LysandreJik
11-20-2020 06:09:01
11-20-2020 06:09:01
transformers
8,678
closed
Update the bibtex with EMNLP demo
11-20-2020 05:22:08
11-20-2020 05:22:08
transformers
8,677
closed
Model parallel v4
# Model Parallelism for GPT2 and T5 Note: this is a clean pull request for [PR # 7772](https://github.com/huggingface/transformers/pull/7772) that uses code from transformers v4.0.0. Adds two new methods to `GPT2LMHead` and the `GPT2Model` classes to enable you to generate and fine-tune models using model parallelism. This feature is most applicable for `gpt2-large` and `gpt2-xl`. Minor modifications are made to the `TrainingArguments` and `Trainer` classes to avoid conflicting data parallelism behavior and related batch_size increases which would negate model parallelism. Note that nearly 64GB of GPU (4 Tesla v100s) are needed to fine-tune `gpt2-xl` @ 1024 tokens. It is critically important to provide users the ability to specify where to put the blocks of a model because the GPU sizes and numbers are likely to be very diverse. This is done with a dictionary called `device_map`. I am planning on providing some examples and guidelines for the p3, p2 and g3 AWS instances. Model parallelism has to be baked into the model class itself. Currently working on the T5 model. From my calculations the 11B model cannot fit on the largest p3 instance that I have access to (8 Tesla v100 GPUs). The 3B model can. The methods are: - `parallelize`, which will distribute the attention blocks of the model across several devices according to a device map - `deparallelize`, which will move the model back to cpu # Example ``` model = GPT2LMHeadModel.from_pretrained('gpt2-xl') device_map = {0: [0, 1, 2, 3, 4, 5, 6, 7, 8], 1: [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], 2: [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34], 3: [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]} model.parallelize(device_map) # Distributes the model's attention blocks across several devices model.deparallelize() # Puts the model back on cpu and calls torch.cuda.empty_cache() to liberate GPU memory ``` ## Reviewers @LysandreJik
11-20-2020 05:19:26
11-20-2020 05:19:26
transformers
8,676
closed
2 typos in modeling_rag.py
# What does this PR do? Fix 2 typos in `modeling_rag.py` `from_encoder_generator_configs` --> `from_question_encoder_generator_configs` ## Who can review? @lhoestq
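For context, a minimal sketch that exercises the correctly named helper (the DPR/BART checkpoint names are the usual examples and are only illustrative):

```python
from transformers import AutoConfig, RagConfig

question_encoder_config = AutoConfig.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
generator_config = AutoConfig.from_pretrained("facebook/bart-large")

# The correctly named classmethod that the fixed docstrings point to.
rag_config = RagConfig.from_question_encoder_generator_configs(
    question_encoder_config, generator_config
)
```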
11-20-2020 03:04:30
11-20-2020 03:04:30
Hi! Could you run `make style` on your branch so that the code quality check passes? Thanks!<|||||>Hi guys, I only have a mobile phone until Dec. 1. I will do it as soon as I can access a PC.<|||||>@lhoestq @LysandreJik done applying style. Sorry for the delay!
transformers
8,675
closed
[WIP] Rewrite ProphetNet to adapt converting ONNX friendly
# What does this PR do? We want to convert ProphetNet (pytorch model) to ONNX, but it needs some source code change to adapt it. The current code cannot convert to ONNX because (1) The current pytorch model generates very large TorchScript IR graph (38k for decoder). We rewrite the way it generates bias: ~~Let's use numpy and then formulate the torch tensor finally.~~ Numpy way can help convert ONNX via tracing, but we prefer using scripting here. Add script decorator so that the model can be converted via scripting. This reduces IR graph to 5k for decoder. (2) `torch.new` generates constant dimension for Tensor in IR graph, which is not suitable if we want to do dynamic input axes for the converter. So we use `torch.full` instead. This PR does not (should not) change any model behavior. Fixes # (issue) After this PR, the model can be converted to ONNX via scripting. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @qiweizhen @patrickvonplaten @Zhylkaaa
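To make the `torch.full` point concrete outside the model code, a toy sketch (not the actual ProphetNet implementation) of the two ways of building a bias tensor:

```python
import torch

hidden_states = torch.zeros(4, 8)
seq_len = hidden_states.size(0)

# tensor.new(...) tends to bake the size into the traced graph as a constant,
# which conflicts with exporting dynamic input axes to ONNX.
bias_static = hidden_states.new(seq_len, seq_len).fill_(float("-inf"))

# torch.full keeps the shape expressed in terms of the runtime input size.
bias_dynamic = torch.full((seq_len, seq_len), float("-inf"))
```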
11-20-2020 02:26:51
11-20-2020 02:26:51
@mfuntowicz - could you take a look maybe? :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
8,674
closed
Issues Fine-tuning XLNET
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.8.0 - Platform: Google colab - Python version: 3.6.9 - PyTorch version (GPU?): 1.7.0+cu101 - Tensorflow version (GPU?): 2.3.0 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Text Generation: @patrickvonplaten @TevenLeScao TransfoXL/XLNet: @TevenLeScao --> ## Information Model I am using XLNET: The problem arises when using: * [ ] the official example scripts: The old version of transformers. The script is run_language_modeling.py * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] Fine-tuning ## To reproduce Steps to reproduce the behavior: 1. !git clone https://github.com/huggingface/transformers import os os.chdir('/content/transformers') !git checkout b1ff0b2ae7d368b7db3a8a8472a29cc195d278d8 !pip install . !pip install -r ./examples/requirements.txt os.chdir('/content/transformers/examples') !pip install dict_to_obj 2. !python run_language_modeling.py \ --output_dir='/content/drive/My Drive/finetuned_models/xlnet_large'\ --model_type=xlnet \ --model_name_or_path=xlnet-large-cased \ --should_continue \ --save_total_limit=5 \ --num_train_epochs=1.0 \ --do_train \ --evaluate_during_training \ --logging_steps=500 \ --save_steps=500 \ --train_data_file='/content/drive/My Drive/finetuned_models/train.txt' \ --do_eval \ --eval_data_file='/content/drive/My Drive/finetuned_models/valid.txt' \ --per_gpu_train_batch_size=2 \ --per_gpu_eval_batch_size=2 \ --block_size=128 \ --gradient_accumulation_steps=5 3. [INFO|modeling_utils.py:1065] 2020-11-17 20:59:57,425 >> All the weights of XLNetLMHeadModel were initialized from the model checkpoint at /content/drive/MyDrive/finetuned_models/xlnet_base/checkpoint-26500. If your task is similar to the task the model of the checkpoint was trained on, you can already use XLNetLMHeadModel for predictions without further training. 
100%|██████████| 311/311 [01:01<00:00, 5.09ba/s] 100%|██████████| 133/133 [00:25<00:00, 5.13ba/s] 0%| | 0/311 [00:00<?, ?ba/s]Traceback (most recent call last): File "run_clm.py", line 348, in <module> main() File "run_clm.py", line 300, in main load_from_cache_file=not data_args.overwrite_cache, File "/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py", line 300, in map for k, dataset in self.items() File "/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py", line 300, in <dictcomp> for k, dataset in self.items() File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1256, in map update_data=update_data, File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1525, in _map_single writer.write_batch(batch) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_writer.py", line 278, in write_batch pa_table = pa.Table.from_pydict(typed_sequence_examples) File "pyarrow/table.pxi", line 1474, in pyarrow.lib.Table.from_pydict File "pyarrow/array.pxi", line 322, in pyarrow.lib.asarray File "pyarrow/array.pxi", line 222, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_writer.py", line 100, in __arrow_array__ if trying_type and out[0].as_py() != self.data[0]: File "pyarrow/array.pxi", line 1058, in pyarrow.lib.Array.__getitem__ File "pyarrow/array.pxi", line 540, in pyarrow.lib._normalize_index IndexError: index out of bounds 0%| | 0/311 [00:02<?, ?ba/s] ## Expected behavior <!-- I started fine-tuning the XLNET models using google colab. The colab notebook times out after 20 hours which is fine but when I try to continue training, I get the error I above. I have looked at similar issue reports on this repo but I was still unable to get around this error. Please, do you know what I am doing wrong? And what I can do to fix it? Thanks. -->
11-19-2020 23:39:20
11-19-2020 23:39:20
It looks like your using causual language modeling (next word prediction) - _File "run_clm.py", line 300, in main_, but xlnet does not use that. It uses permutation language modeling. Have you tried with the [new scripts](https://github.com/huggingface/transformers/tree/master/examples/language-modeling#xlnet-and-permutation-language-modeling), I believe this will solve it 👍 <|||||>Thank you, Tim. So I know XLNET and Transformer-XL are fine-tuned the same way and tried the *run_plm.py* like you suggested and got the error below: [INFO|tokenization_utils_base.py:1650] 2020-11-20 23:00:33,440 >> loading file https://huggingface.co/transfo-xl-wt103/resolve/main/vocab.pkl from cache at /root/.cache/torch/transformers/6860d92833eb9d2a42cf185e974ca967fbf4cd58fa8d3d9298e56b9ef7ff8d5c.56c8ef92e693414ef2313bde4ba3679a404de1edbcd5a5780def3971f9706850 [INFO|modeling_utils.py:940] 2020-11-20 23:00:34,096 >> loading weights file https://huggingface.co/transfo-xl-wt103/resolve/main/pytorch_model.bin from cache at /root/.cache/torch/transformers/891af5f0c8372327a961a768d4ee40b7ca95c428f9384c534e73b9b655c75468.923bd8e0844a782c35f009eddd08a3600739804fbe13bd234f592f36230ab8a9 Traceback (most recent call last): File "run_plm.py", line 382, in <module> main() File "run_plm.py", line 244, in main cache_dir=model_args.cache_dir, File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py", line 947, in from_pretrained model = cls(config, *model_args, **model_kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_xlnet.py", line 1294, in __init__ self.transformer = XLNetModel(config) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_xlnet.py", line 940, in __init__ self.reuse_len = config.reuse_len AttributeError: 'TransfoXLConfig' object has no attribute 'reuse_len' On Fri, Nov 20, 2020 at 1:52 PM Tim Isbister <[email protected]> wrote: > It looks like your using causual language modeling (next word prediction) > - *run_clm.py*, but xlnet does not use that. It uses permutation language > modeling. > > Have you tried with the new scripts > <https://github.com/huggingface/transformers/tree/master/examples/language-modeling#xlnet-and-permutation-language-modeling>, > I believe this will solve it 👍 > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/8674#issuecomment-731348230>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AHYMJAF5X3H4GW7WFSAQEOLSQ23F3ANCNFSM4T4CBVKQ> > . > <|||||>Hmm strange, well actually I now tried running the provided example that I suggested to you. But also failing with `IndexError: index out of bounds` as you got in the first attempt. Edit: Looks like we have problems with loading the data, if I hardcoded a dataset to the `run_plm.py` `datasets = load_dataset('wikitext', 'wikitext-103-raw-v1’)` it works. 
[Provided example](https://github.com/huggingface/transformers/tree/master/examples/language-modeling#xlnet-and-permutation-language-modeling): ``` python run_plm.py \ --model_name_or_path=xlnet-base-cased \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --do_train \ --do_eval \ --output_dir /tmp/test-plm ``` ``` Traceback (most recent call last): File "run_plm.py", line 382, in <module> main() File "run_plm.py", line 321, in main tokenized_datasets = tokenized_datasets.map( File "/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/dataset_dict.py", line 286, in map { File "/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/dataset_dict.py", line 287, in <dictcomp> k: dataset.map( File "/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1243, in map return self._map_single( File "/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 157, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1528, in _map_single writer.write_batch(batch) File "/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/arrow_writer.py", line 278, in write_batch pa_table = pa.Table.from_pydict(typed_sequence_examples) File "pyarrow/table.pxi", line 1474, in pyarrow.lib.Table.from_pydict File "pyarrow/array.pxi", line 322, in pyarrow.lib.asarray File "pyarrow/array.pxi", line 222, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home/tim/anaconda3/envs/transformers/lib/python3.8/site-packages/datasets/arrow_writer.py", line 100, in __arrow_array__ if trying_type and out[0].as_py() != self.data[0]: File "pyarrow/array.pxi", line 1058, in pyarrow.lib.Array.__getitem__ File "pyarrow/array.pxi", line 540, in pyarrow.lib._normalize_index IndexError: index out of bounds ```<|||||>Thanks, Tim. Do you have any suggestions for how I can load my own train and validation datasets? And how will it work with the following code below. I am now using the current transformer version. Thanks again, Tim. python run_plm.py \ --model_name_or_path=transfo-xl-wt103 \ --train_file='/content/drive/My Drive/finetuned_models/train.txt' \ --validation_file='/content/drive/My Drive/finetuned_models/valid.txt' \ --save_total_limit=5 \ --num_train_epochs=1.0 \ --do_train \ --do_eval \ --per_gpu_train_batch_size=2 \ --per_gpu_eval_batch_size=2 \ On Fri, Nov 20, 2020 at 7:23 PM Tim Isbister <[email protected]> wrote: > Hmm strange, well actually I now tried running the provided example that I > suggested to you. But also failing with IndexError: index out of bounds > as you got in the first attempt. I have the latest versions of all the > libraries running on Ubuntu 18.04 LTS with Titan RTX. 
<|||||>Maybe @sgugger has an idea!<|||||>The problem is that the script uses the tokenizer max length when no `max_seq_length` is passed, and that the XLNet tokenizer has a ridiculously high maximum sequence length. I have suggested a fix in #8738. 
While waiting for this PR to be merged, a temporary fix is to just add --max_seq_length 512 (or any value you'd like) to your command.<|||||>I tried using the max_seq_length argument and set it to 400 and got the following error: Traceback (most recent call last): File "run_plm.py", line 379, in <module> main() File "run_plm.py", line 244, in main cache_dir=model_args.cache_dir, File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py", line 947, in from_pretrained model = cls(config, *model_args, **model_kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_xlnet.py", line 1294, in __init__ self.transformer = XLNetModel(config) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_xlnet.py", line 940, in __init__ self.reuse_len = config.reuse_len AttributeError: 'TransfoXLConfig' object has no attribute 'reuse_len' On Mon, Nov 23, 2020 at 4:02 PM Lysandre Debut <[email protected]> wrote: > Closed #8674 <https://github.com/huggingface/transformers/issues/8674> > via #8738 <https://github.com/huggingface/transformers/pull/8738>. > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/8674#event-4029627504>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AHYMJADWOOMGPRQUSIUPRBDSRLEXRANCNFSM4T4CBVKQ> > . ><|||||>The script only works for XLNet models, you will need to tweak it for other models<|||||>I see. Thank you, Sylvain. On Tue, Nov 24, 2020 at 10:46 AM Sylvain Gugger <[email protected]> wrote: > The script only works for XLNet models, you will need to tweak it for > other models > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/8674#issuecomment-733059560>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AHYMJADG3HM6UCRSXGS56ULSRPIM3ANCNFSM4T4CBVKQ> > . > <|||||>Just to confirm that I am doing the right thing, what is the correct script in the language-modeling folder for fine-tuning Transformer-XL and CTRL? I am using run_clm.py for both of them currently but it keeps returning the same error message for both. See error below: For Transformer-XL: [INFO|modeling_utils.py:1065] 2020-11-24 15:55:30,982 >> All the weights of TransfoXLLMHeadModel were initialized from the model checkpoint at transfo-xl-wt103. If your task is similar to the task the model of the checkpoint was trained on, you can already use TransfoXLLMHeadModel for predictions without further training. 
2%|▏ | 6/311 [00:13<11:44, 2.31s/ba]Traceback (most recent call last): File "run_clm.py", line 351, in <module> main() File "run_clm.py", line 261, in main load_from_cache_file=not data_args.overwrite_cache, File "/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py", line 303, in map for k, dataset in self.items() File "/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py", line 303, in <dictcomp> for k, dataset in self.items() File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1259, in map update_data=update_data, File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 157, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1520, in _map_single batch, indices, check_same_num_examples=len(self.list_indexes()) > 0, offset=offset File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1438, in apply_function_on_filtered_inputs function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "run_clm.py", line 254, in tokenize_function return tokenizer(examples[text_column_name]) File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 2214, in __call__ **kwargs, File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 2399, in batch_encode_plus **kwargs, File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 567, in _batch_encode_plus verbose=verbose, File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 630, in _batch_prepare_for_model return_attention_mask=return_attention_mask, File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 2531, in pad f"type of {first_element} unknown: {type(first_element)}. " ValueError: type of [] unknown: <class 'list'>. Should be one of a python, numpy, pytorch or tensorflow object. 2%|▏ | 6/311 [00:16<13:48, 2.72s/ba] For CTRL: [INFO|modeling_utils.py:1065] 2020-11-24 15:52:33,705 >> All the weights of CTRLLMHeadModel were initialized from the model checkpoint at ctrl. If your task is similar to the task the model of the checkpoint was trained on, you can already use CTRLLMHeadModel for predictions without further training. 0%| | 0/311 [00:00<?, ?ba/s][WARNING|tokenization_utils_base.py:2736] 2020-11-24 15:52:39,268 >> Token indices sequence length is longer than the specified maximum sequence length for this model (293 > 256). 
Running this sequence through the model will result in indexing errors 2%|▏ | 6/311 [00:05<04:21, 1.17ba/s]Traceback (most recent call last): File "run_clm.py", line 351, in <module> main() File "run_clm.py", line 261, in main load_from_cache_file=not data_args.overwrite_cache, File "/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py", line 303, in map for k, dataset in self.items() File "/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py", line 303, in <dictcomp> for k, dataset in self.items() File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1259, in map update_data=update_data, File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 157, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1520, in _map_single batch, indices, check_same_num_examples=len(self.list_indexes()) > 0, offset=offset File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1438, in apply_function_on_filtered_inputs function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "run_clm.py", line 254, in tokenize_function return tokenizer(examples[text_column_name]) File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 2214, in __call__ **kwargs, File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 2399, in batch_encode_plus **kwargs, File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 567, in _batch_encode_plus verbose=verbose, File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 630, in _batch_prepare_for_model return_attention_mask=return_attention_mask, File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 2531, in pad f"type of {first_element} unknown: {type(first_element)}. " ValueError: type of [] unknown: <class 'list'>. Should be one of a python, numpy, pytorch or tensorflow object. 2%|▏ | 6/311 [00:08<07:26, 1.46s/ba] On Tue, Nov 24, 2020 at 10:50 AM Adaku Uchendu <[email protected]> wrote: > I see. Thank you, Sylvain. > > On Tue, Nov 24, 2020 at 10:46 AM Sylvain Gugger <[email protected]> > wrote: > >> The script only works for XLNet models, you will need to tweak it for >> other models >> >> — >> You are receiving this because you authored the thread. >> Reply to this email directly, view it on GitHub >> <https://github.com/huggingface/transformers/issues/8674#issuecomment-733059560>, >> or unsubscribe >> <https://github.com/notifications/unsubscribe-auth/AHYMJADG3HM6UCRSXGS56ULSRPIM3ANCNFSM4T4CBVKQ> >> . >> > > > -- > *Adaku Uchendu* > > *McNair Scholar* > *Mathematics major* > *Statistic minor * > *Math Lab Tutor* > *Pre-Calculus LA* > *University of Maryland, Baltimore County * > *Class of 2018* ><|||||>It looks like it comes from a bug in the slow tokenizers that can't handle an empty sequence at the beginning. We're looking into it. <|||||>Thank you, Sylvain. On Tue, Nov 24, 2020 at 11:50 AM Sylvain Gugger <[email protected]> wrote: > It looks like it comes from a bug in the slow tokenizers that can't handle > an empty sequence at the beginning. We're looking into it. > > — > You are receiving this because you authored the thread. 
<|||||>Hi Sylvain, I just wanted to know if the slow tokenizer bug is no longer a problem? Thank you <|||||>It seems to be solved on master; I didn't try on the v4 release but it might also be solved there too.<|||||>Hi Sylvain, I just tried to re-run my code with the recent changes and I still got the same error. Just to make sure I am doing the correct thing, I have attached my code below. I am using transformers version 4.0 with the current GitHub transformers repo. Thank you. cd language-modeling/ python run_clm.py \ --model_type=transfo-xl \ --model_name_or_path=transfo-xl-wt103 \ --train_file='/content/drive/My Drive/finetuned_models/train.txt' \ --validation_file='/content/drive/My Drive/finetuned_models/valid.txt' \ --save_total_limit=5 \ --num_train_epochs=1.0 \ --do_train \ --do_eval \ --per_gpu_train_batch_size=2 \ --per_gpu_eval_batch_size=2 \ --output_dir='/content/drive/My Drive/finetuned_models/transformer_xl'<|||||>Indeed, the PR mentioned above will fix that specific issue. Note that `TransfoXLLMHeadModel` is not supported by `Trainer` anyway as it does not return the reduced loss. Once the PR is merged it should work with CTRL however.
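For reference, the `ValueError: type of [] unknown` above can be reduced to a batch whose first element is an empty string, which is what the slow-tokenizer bug mentioned in the last comments is about. A minimal sketch, assuming the same `transfo-xl-wt103` checkpoint as the command above (the sample text is made up, and the failure only reproduces on the affected versions):

```python
from transformers import TransfoXLTokenizer

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")

# A blank line in train.txt ends up as an empty string at the start of a batch
batch = ["", "some example text for language modeling"]

# Reported to raise "ValueError: type of [] unknown" on affected versions;
# on fixed versions it simply returns the encodings.
encodings = tokenizer(batch)
print(encodings["input_ids"])
```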
transformers
8,673
closed
[model_cards] Add card for gpt2-rnm
# What does this PR do? Adds a new model card for the `e-tony/gpt2-rnm` model. ## Before submitting - [X] This PR fixes a typo or improves the docs. ## Who can review? @julien-c
11-19-2020 22:21:18
11-19-2020 22:21:18
transformers
8,672
closed
Add sentencepiece to the CI and fix tests
# What does this PR do? When removing sentencepiece from the dependencies of Transformers, the CI started to skip all tests that required sentencepiece. As a result, some failures due to new breaking changes (the switch to fast tokenizers by default and the removal of `max_len`, a deprecated argument of tokenizers) went unnoticed. This PR adds back the sentencepiece install on all CI checks and fixes the resulting failing tests.
11-19-2020 21:07:32
11-19-2020 21:07:32
transformers
8,671
closed
Running Roberta on Race Multi choice dataset giving error
I am trying to use the script provided for race dataset training using bert/roberta models. I am running the script and getting this error : ``` python3 run_multiple_choice.py --task_name race --model_name_or_path roberta-base--do_train --do_eval --data_dir $SWAG_DIR --learning_rate 5e-5 --num_train_epochs 3 --max_seq_length 80 --output_dir models_bert/swag_base --per_gpu_eval_batch_size=16 --per_device_train_batch_size=16 --gradient_accumulation_steps 2 --overwrite_output /home/admin/Monk/lib/python3.6/site-packages/transformers/training_args.py:332: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options) FutureWarning, 11/19/2020 12:22:36 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 4, distributed training: False, 16-bits training: False Some weights of the model checkpoint at roberta-base were not used when initializing RobertaForMultipleChoice: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight'] - This IS expected if you are initializing RobertaForMultipleChoice from the checkpoint of a model trained on another taskor with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). - This IS NOT expected if you are initializing RobertaForMultipleChoice from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of RobertaForMultipleChoice were not initialized from the model checkpoint at roberta-base and are newly initialized: ['classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Traceback (most recent call last): File "run_multiple_choice.py", line 237, in <module> main() File "run_multiple_choice.py", line 171, in main if training_args.do_train File "/home/admin/Monk/hf_mcq/utils_multiple_choice.py", line 113, in __init__ with FileLock(lock_path): File "/home/admin/Monk/lib/python3.6/site-packages/filelock.py", line 323, in __enter__ self.acquire() File "/home/admin/Monk/lib/python3.6/site-packages/filelock.py", line 271, in acquire self._acquire() File "/home/admin/Monk/lib/python3.6/site-packages/filelock.py", line 384, in _acquire fd = os.open(self._lock_file, open_mode) FileNotFoundError: [Errno 2] No such file or directory: '/RACE/cached_train_RobertaTokenizer_80_race.lock' ```
11-19-2020 20:26:44
11-19-2020 20:26:44
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
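The lock path in the traceback shows that `--data_dir` resolved to `/RACE`, a directory at the filesystem root that most likely does not exist on the machine, so the cache lock file cannot be created. A quick pre-flight check, not part of the original scripts and assuming the `$SWAG_DIR` variable from the reported command:

```python
import os

data_dir = os.environ.get("SWAG_DIR", "")
print("data_dir passed to --data_dir:", repr(data_dir))

if not os.path.isdir(data_dir):
    raise SystemExit(
        "SWAG_DIR must point to the folder containing the RACE train/dev/test files "
        "before launching run_multiple_choice.py"
    )
```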
transformers
8,670
closed
Is Reformer supported under Encoder-Decoder framework?
Hi, Is it possible to use Reformer with the encoder-decoder framework (i.e Reformer2Reformer)?
11-19-2020 20:24:51
11-19-2020 20:24:51
It's not yet supported sadly<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
8,669
closed
Make signature of `compute_metrics` parameter in Trainer class more flexible
# 🚀 Feature request Current typing signature for the `compute_metrics` parameter in the `Trainer` class is: ```python class Trainer: ... def __init__( ... compute_metrics: Optional[Callable[[EvalPrediction], Dict]] = None, ... ``` As it is described now in with the Python typing system, the only parameter that you can pass to the function is `EvalPrediction`, containing the model predictions to calculate the metrics. I propose to make the function signature of `compute_metrics` a little bit more flexible, for example: ```python compute_metrics: Optional[Callable[[EvalPrediction, Optional[Any]], Dict]] = None, ``` or ```python compute_metrics: Optional[Callable[[EvalPrediction, Optional[Dict]], Dict]] = None, ``` so users can pass an extra argument -e.g. a Dict- with additional information that can be used in the function. Solution is not perfect in the sense that the typing system check of IDEs will scream when already defined functions in current user projects, pass a single parameter (see example below) as I haven't found a way of assigning a default value to the `Optional` in the `Callable` signature; using the `Ellipsis` is not possible either unless I've missed something (comments are welcome on this!!!) ## Motivation For some models, I had to either pass some extra arguments to perform the metrics calculation or to access remote services to retrieve some additional data. The current typing signature for `compute_metrics` does not allow to pass these extra params so I had to do dirty workarounds. ```python from abc import ABC from typing import Optional, Callable, Dict, Any def g(a:int): print(f"a in g: {a}") return {} def h(a:int, b: Optional[int] = None): print(f"a in h: {a}") if b: print(f"b passed to g: {b}") return {} class Dummy(ABC): def __init__(self, f: Optional[Callable[[int, Optional[Any]], Dict]] = None): self.f = f def test_f(self): if self.f: print(f"Calling {self.f}") if self.f.__name__ == "g": self.f(1) # <- here the typing system screams a bit elif self.f.__name__ == "h": self.f(2,3) else: print("Not calling anything") if __name__ == '__main__': o = Dummy(g) o.test_f() o = Dummy(h) o.test_f() ``` ## Contribution If someone else has had similar needs, you think this is a good idea, or you have better suggestion, I can provide a PR for this.
11-19-2020 19:56:16
11-19-2020 19:56:16
> For some models, I had to either pass some extra arguments to perform the metrics calculation or to access remote services to retrieve some additional data. How did you do that in the current `Trainer`? I'm not against making `compute_metrics` more flexible but I don't see how you can add more arguments to its call without subclassing Trainer and overriding certain methods, in which case, you can also override the init.<|||||>In fact I have my own subclassed Trainer, but I wanted to reuse as many as possible of the current `__init__` parameters of the regular Trainer. I know I can define my own `compute_metrics` function with a different signature in the `__init__` method of my Trainer, but I was trying to avoid that and reuse the current `compute_metrics` signature :-) Maybe the example above is not clear enough as the the DummyTrainer is not subclassing the standard one. The example was trying to highlight the function signature more than the reuse of the original Trainer. <|||||>This seems like a very edge case to me, so I would leave the current `Trainer` as is, and adapt the `__init__` in your subclasses of `Trainer`.<|||||>Yes, I agree that is kind of a corner case. In my subclassed Trainer I was trying to reuse the `prediction_loop()` method from the base Trainer but I needed to pass some more parameters to my `compute_metrics` function, apart from the `EvalPrediction` param. So the easiest workaround was of course, to copy the original `prediction_loop()` in my subclassed Trainer and, instead of calling: https://github.com/huggingface/transformers/blob/8062fa63c564d4cc0d29573acd89092f1eb1df64/src/transformers/trainer.py#L1398 , I call my version of `compute_metrics` with the extra parameters. With this proposal was trying to avoid that ugly copy-paste I had to do by 1) changing the `compute_metrics` signature as I described above, 2) defining an attribute in Trainer to serve as a placeholder for the extra metrics arguments (and which can be set from subclassed Trainers,) e.g.: ```python self.metrics_extra_args: Optional[Dict] = None ``` and 3) changing line 1398 to something like: ```python metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids), self.metrics_extra_args) ``` But I understand that is kind of convoluted. Thanks @sgugger in any case for your consideration!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I really need this
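Since `Trainer` only ever calls `compute_metrics(EvalPrediction)`, one workaround that needs no subclassing at all is to bind the extra data in advance with `functools.partial` (or a closure). A minimal sketch, assuming a classification setup; the metric logic and the `extra_info` dict are purely illustrative:

```python
from functools import partial

import numpy as np
from transformers import EvalPrediction


def compute_metrics_with_context(p: EvalPrediction, extra_info: dict) -> dict:
    # extra_info carries whatever side data the metric needs (thresholds, label maps, ...)
    preds = np.argmax(p.predictions, axis=-1)
    accuracy = float((preds == p.label_ids).mean())
    return {"accuracy": accuracy, "context_used": extra_info["name"]}


extra_info = {"name": "my-side-data"}
# Trainer still sees a one-argument callable, so no signature change is required.
compute_metrics = partial(compute_metrics_with_context, extra_info=extra_info)

# trainer = Trainer(model=model, args=training_args, compute_metrics=compute_metrics, ...)
```

The limitation is that the bound arguments are fixed for the lifetime of the `Trainer`; anything that must change per evaluation still needs a subclass as discussed above.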
transformers
8,668
closed
Update bert-base-multilingual-cased-README.md
The heading was originally uncased, which did not reflect the contents of this README. Changed it to cased.
11-19-2020 19:43:18
11-19-2020 19:43:18
Good catch, thanks!
transformers
8,667
closed
Alternative to globals()
# What does this PR do? This PR does two things: - remove some tokenizer classes that were used in a dictionary before being erased which was super weird - add a function that goes from tokenizer class name to tokenizer class to avoid using `globals`
11-19-2020 19:21:10
11-19-2020 19:21:10
transformers
8,666
closed
Fix a few last paths for the new repo org
# What does this PR do? Fixes a few old paths in documentation or examples.
11-19-2020 16:55:47
11-19-2020 16:55:47
transformers
8,665
closed
Use return_dict in RagModel forward pass
There were changes in the output format of models, but it looks like the RagModel forward pass was not updated to use `return_dict`, as noticed in #8653. I'm running the slow tests right now. If they pass I will update from draft pull request to open request.
11-19-2020 16:42:12
11-19-2020 16:42:12
Closing since it was fixed in #8585
transformers
8,664
closed
Fix run_ner script
# What does this PR do? There have been a few breaking changes in the Datasets library that resulted in `run_ner` not working. This PR addresses that. Fixes #8654
11-19-2020 16:29:13
11-19-2020 16:29:13
transformers
8,663
closed
transformers-cli: LFS multipart uploads (> 5GB)
### Implementation of a custom transfer agent for the transfer type "multipart" for git-lfs. This lets users upload large files >5GB 🔥. Spec for LFS custom transfer agent is: https://github.com/git-lfs/git-lfs/blob/master/docs/custom-transfers.md The PR introduces two commands to the CLI: ``` transformers-cli lfs-enable-largefiles ./path/to/repo ``` ^ Do this once per model repo where you want to push >5GB files. It's documented in the error message you get if you just try to `git push` a 5GB file without having enabled it before. ``` transformers-cli lfs-multipart-upload ``` ^ is the custom transfer agent itself. This is not meant to be called by the user, but by lfs directly. ### Things to experiment with: - [ ] upload speed. Is it sufficient? Please comment with your upload speeds e.g. for https://huggingface.co/t5-3b Experiment: ```bash time git clone https://huggingface.co/t5-3b cd t5-3b git remote set-url origin https://huggingface.co/$USER/t5-3b-clone # ^ After having created this model repo in your account transformers-cli lfs-enable-largefiles . git reset 5e0a32db352b33091ea9fb2f8d8782d47a505986 # go back to initial commit for lfs to reupload files git add pytorch_model.bin git commit -m "ok" time git push ```
11-19-2020 16:14:30
11-19-2020 16:14:30
:+1: nice job!<|||||>Hi, this command may need a args? ``` transformers-cli lfs-enable-largefiles repo_path ``` <|||||>> Hi, this command may need a args? Yes, correct <|||||>Thanks a lot for the PR, and thanks for letting me know. I will give my feedback after testing it with the 11B model soon. <|||||>Hi, thanks a lot for the PR. I try to reinstall transformers with this new branch and follow with this comment: ``` git clone https://huggingface.co/mymusise/CPM-Third-Party cd CPM-Third-Party git lfs install transformers-cli lfs-enable-largefiles . cp ../models/tf_model.h5 ./ git add . && git commit -m 'add model' git push ``` Then, after half an hour later, it raised an error: ``` $ git push Git LFS: (0 of 1 files) 0 B / 9.68 GB Git LFS: (0 of 1 files) 4.66 GB / 9.68 GB EOFoading LFS objects: 0% (0/1), 9.31 B / 9.68 GB error: failed to push some refs to 'https://huggingface.co/mymusise/CPM-Third-Party' ``` Did I do something wrong?<|||||>@mymusise might have been an intermittent server error. Can you try again?<|||||>> @mymusise might have been an intermittent server error. Can you try again? Yes, I try again and again. But it always raises this error at `9.31 GB / 9.68 GB`. :eyes: But, now `git push` will return a ` 502 ` error : ``` fatal: unable to access 'https://huggingface.co/mymusise/CPM-Third-Party/': The requested URL returned error: 502 ```<|||||>@mymusise Yes, looks like this is crashing/locking the server 😱 Do you mind trying again on Monday? As we'll have more bandwidth to fix then. Sorry about that :/<|||||>It's ok, guy, haha. Waiting for your good news. Happy weekend!<|||||>>Yes, I try again and again. But it always raises this error at 9.31 GB / 9.68 GB. Hi, I try again today and push the big model file without any exception! :tada: Thank guys!<|||||>Hi, here I push another model file(4.9GB) again, but this time it gives me a **504** Gateway Time-out error :sweat: ``` root@iZt4n9z4x3ph9oc3hhrdneZ:~/CPM-FP16-Third-Party# time git push Username for 'https://huggingface.co': mymusise Password for 'https://[email protected]': Counting objects: 3, done. Compressing objects: 100% (2/2), done. Writing objects: 66% (2/3), 2.52 GiB | 3.62 MiB/s Writing objects: 100% (3/3), 4.46 GiB | 3.77 MiB/s, done. Total 3 (delta 0), reused 1 (delta 0) error: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504 Gateway Time-out fatal: The remote end hung up unexpectedly fatal: The remote end hung up unexpectedly Everything up-to-date real 22m8.653s user 0m29.758s sys 0m17.136s ``` Seems the big file is uploaded completely, it looks like there is some problem with the server configuration about the timeout. @julien-c Tried three times with the same result.<|||||>@Pierrci - I ran some tests on ~10 GB files (`t5-3b`) and didn't encounter any problems! However when doing it for ~45GB files (`t5-11b`), I encounter some problems. ``` cd t5-11b-repo transformers-cli lfs-enable-largefiles /path/to/repo git add . git commit -m "add" git push # <= this command failse ``` In case it is useful, here is a link to the `git trace` error message: https://github.com/patrickvonplaten/files_to_link_to/blob/master/output.txt I can always go back to the way of manually uploading to the git-lfs hash path. Maybe the error message is helpful though :-) <|||||>Thanks @patrickvonplaten, I see where it might be coming from, gonna look at it now!<|||||>I deployed a fix that should address your problem, can you try again @patrickvonplaten? 
@mymusise Is it possible for you to try again with `GIT_CURL_VERBOSE=1 git push` so that we can try to get more information? From what you shared so far, your error seems to be different from Patrick's one.<|||||>Thanks, @Pierrci. Yes, I think my error is different from Patrick's one. Here is the [information](https://gist.github.com/mymusise/bfa331e3effe0876efbc0011334ead96) with `GIT_CURL_VERBOSE=1 git push`. Hope it help. <|||||>> I deployed a fix that should address your problem, can you try again @patrickvonplaten? > > @mymusise Is it possible for you to try again with `GIT_CURL_VERBOSE=1 git push` so that we can try to get more information? From what you shared so far, your error seems to be different from Patrick's one. Awesome it works now @Pierrci - thanks! :-) <|||||>> Thanks, @Pierrci. Yes, I think my error is different from Patrick's one. Here is the [information](https://gist.github.com/mymusise/bfa331e3effe0876efbc0011334ead96) with `GIT_CURL_VERBOSE=1 git push`. > Hope it help. @mymusise Are you sure LFS is properly installed and configured for the repo? From your logs it seems your `git push` command isn't doing any LFS work (like running a `pre-push` hook or calling our LFS endpoint), trying instead to push all the files through the classic git endpoint, which can't work.
transformers
8,662
closed
Can't upload the larger model file(9GB)
Hey, I got a problem when uploading my new model file; it raises an error. ``` /data2/CPM-TF/CPM-Third-Party on main ⌚ 23:30:47 $ git push Username for 'https://huggingface.co': mymusises Password for 'https://[email protected]': LFS: Client error: https://s3.amazonaws.com/lfs.huggingface.co/mymusise/CPM-Third-Party/d853c...?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AK...%2F20201119%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201119T153653Z&X-Amz-Expires=900&X-Amz-Signature=84...&X-Amz-SignedHeaders=host error: failed to push some refs to 'https://huggingface.co/mymusise/CPM-Third-Party' (env) /data2 ``` Then I deleted the folder and tried again, and I noticed there's an error when I add the model file: ``` /data2/CPM-TF/CPM-Third-Party on main! ⌚ 23:24:04 $ git add --all Encountered 1 file(s) that may not have been copied correctly on Windows: tf_model.h5 ``` And I am sure I installed `git-lfs` and ran `git lfs install` before doing this. What should I do? ## system info system: ubuntu 18.04.4 git version: 2.17.1 git-lfs version: 2.12.1 Model Cards: @julien-c
11-19-2020 15:42:01
11-19-2020 15:42:01
This is a known issue that's being tracked at #8663
transformers
8,661
closed
cannot load t5-base config
Hi Here is my two lines to get t5-config: ``` from transformers import AutoConfig config = AutoConfig.from_pretrained('t5-base') ``` Here are the errors, is there an issue with t5-base storage? I am really confused, thank you for your help on this. thanks ``` file t5-base/config.json not found Traceback (most recent call last): File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 388, in get_config_dict local_files_only=local_files_only, File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 962, in cached_path raise EnvironmentError("file {} not found".format(url_or_filename)) OSError: file t5-base/config.json not found During handling of the above exception, another exception occurred: Traceback (most recent call last): File "test.py", line 2, in <module> config = AutoConfig.from_pretrained('t5-base') File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_auto.py", line 333, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 400, in get_config_dict raise EnvironmentError(msg) OSError: Can't load config for 't5-base'. Make sure that: - 't5-base' is a correct model identifier listed on 'https://huggingface.co/models' - or 't5-base' is the correct path to a directory containing a config.json file ```
11-19-2020 15:40:45
11-19-2020 15:40:45
Hello, could you please fill the issue template when opening issues? Otherwise we cannot help you. Thanks.<|||||>## Environment info - `transformers` version: 3.5.1 - Platform: cpu - Python version: 3.7 - PyTorch version (GPU?): 1.0.4 - Tensorflow version (GPU?): tensorflow-datasets 4.1.0 <pip> tensorflow-metadata 0.25.0 <pip> - Using GPU in script?: - - Using distributed or parallel set-up in script?: - ### Who can help Here is my two lines to get t5-config: ``` from transformers import AutoConfig config = AutoConfig.from_pretrained('t5-base') ``` Model Cards: @julien-c T5: @patrickvonplaten ## Information Here are the errors, is there an issue with t5-base storage? I am really confused, thank you for your help on this. thanks ``` file t5-base/config.json not found Traceback (most recent call last): File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 388, in get_config_dict local_files_only=local_files_only, File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 962, in cached_path raise EnvironmentError("file {} not found".format(url_or_filename)) OSError: file t5-base/config.json not found During handling of the above exception, another exception occurred: Traceback (most recent call last): File "test.py", line 2, in <module> config = AutoConfig.from_pretrained('t5-base') File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_auto.py", line 333, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/configuration_utils.py", line 400, in get_config_dict raise EnvironmentError(msg) OSError: Can't load config for 't5-base'. Make sure that: - 't5-base' is a correct model identifier listed on 'https://huggingface.co/models' - or 't5-base' is the correct path to a directory containing a config.json file ``` ## To reproduce please run the two lines above. ## Expected behavior loading the config <|||||>Hi, sure, done<|||||>Hi @LysandreJik this issues is really weird and really blocking me, I greatly appreciate having a look. thanks <|||||>Hey @rabeehk - I cannot reproduce the error. I suspect the following: You run the code from a directory that includes a local folder that is called `t5-base` which does not have a `config.json`. Could you try to run: ``` from transformers import AutoConfig config = AutoConfig.from_pretrained('t5-base') ``` from another folder or delete the `t5-base` folder that might be the reason? <|||||>Hi patrick thank you for the reply, this is what happening when I call this line: config = T5Config.from_pretrained( model_args.config_name if model_args.config_name else model_args.model_name_or_path, ) the code by itself creates an empty t5-base directory, I delete it and then it recreates it. Do you have an idea on this? thanks Best Rabeeh On Thu, Nov 19, 2020 at 9:32 PM Patrick von Platen <[email protected]> wrote: > Hey @rabeehk <https://github.com/rabeehk> - I cannot reproduce the error. > I suspect the following: > > You run the code from a directory that includes a local folder that is > called t5-base which does not have a config.json. > Could you try to run: > > from transformers import AutoConfig > config = AutoConfig.from_pretrained('t5-base') > > from another folder or delete the t5-base folder that might be the reason? > > — > You are receiving this because you were mentioned. 
<|||||>Could you tell me please how to set the path when using datasets so that it does not cache in the default path? To me there might not be space for the model and this is all happening. Overall, for all the caches done in the huggingface code, could you tell me how to set a different path for them? thanks <|||||>So when I run the codes, there is caching done here: cahce dir /idiap/home/rkarimi/.cache/huggingface/datasets cahce dir /idiap/home/rkarimi/.cache/huggingface/datasets/downloads could I change this? thanks Best Rabeeh
<|||||>> Hi patrick thank you for the reply, this is what happening when I call this line: config = T5Config.from_pretrained( model_args.config_name if model_args.config_name else model_args.model_name_or_path, ) the code by itself creates an empty t5-base directory, I delete it and then it recreates it. I don't think `from_pretrained(...)` ever creates a directory -> this should not happen. Not sure what's going on there. Can you maybe add a colab where I can reproduce your error?<|||||>> Could you tell me please how to set the path when using datasets that it does not cache in the default path? Is this issue about `t5-base` config or about datasets? I don't follow here<|||||>Hi Patrick, this is now solved. I had mistakenly chosen the output_path as t5-base, and that was the reason for the creation of the empty t5-base directory. Thank you so much for the help.
Best Rabeeh
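For the caching question raised in this thread, the two usual knobs are the `cache_dir` argument accepted by `from_pretrained`/`load_dataset` and the cache environment variables; the paths below are placeholders and the exact variable names should be checked against the installed transformers/datasets versions:

```python
import os

# Point the caches at a disk with enough space *before* importing the libraries,
# since some versions read these variables at import time.
os.environ["TRANSFORMERS_CACHE"] = "/big_disk/hf_cache/transformers"
os.environ["HF_DATASETS_CACHE"] = "/big_disk/hf_cache/datasets"

import datasets
from transformers import AutoConfig

# Alternatively, pass cache_dir explicitly per call.
config = AutoConfig.from_pretrained("t5-base", cache_dir="/big_disk/hf_cache/transformers")
squad = datasets.load_dataset("squad", cache_dir="/big_disk/hf_cache/datasets")
```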
transformers
8,660
closed
Fix bug in x-attentions output for roberta and harden test to catch it
# What does this PR do? @patrickvonplaten this fixed a bug i missed in #8071 ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Did you write any new necessary tests?
11-19-2020 14:26:32
11-19-2020 14:26:32
transformers
8,659
closed
Improve bert-japanese tokenizer handling
remove an inelegant string-testing hack in favor of the tooling we now have (support for `config.tokenizer_class`, and versioning of model files in the hub): see commits enabling this on huggingface.co side: - https://huggingface.co/yosuke/bert-base-japanese-char/commit/e8365f5c923b98b8cfbf258cfeb14ac536477a31 - https://huggingface.co/daigo/bert-base-japanese-sentiment/commit/88b611269eb04ce60afa4d448b27ffee0a48f5a0 - https://huggingface.co/bandainamco-mirai/distilbert-base-japanese/commit/f411ce0e53839adab9e39187ef179e3b5c836f7c
11-19-2020 13:24:38
11-19-2020 13:24:38
transformers
8,658
closed
ConvBERT
11-19-2020 13:10:01
11-19-2020 13:10:01
transformers
8,657
closed
Fix embeddings resizing in TF models
# What does this PR do? Currently, when the embeddings are resized, the biases are not resized at the same time. In TF there is no explicit link between the decoder weights and the biases in a dense layer, contrary to PT. This PR fixes this issue by resizing the biases at the same time, even though I don't know if this is the best solution. @LysandreJik @sgugger what do you think?
11-19-2020 12:44:34
11-19-2020 12:44:34
Thanks @sgugger for your useful comments. I was thinking the same about `get_output_embeddings` but I didn't want to change to much things in same time. I like very much the solution you proposed and I'm totally fine with it!<|||||>@sgugger I have reworked the resizing for the bias and applied it on BERT at first for testing. Are you agree with this new way to do? If yes, I will do the same for the other models.<|||||>@sgugger @patrickvonplaten @LysandreJik This PR takes care of resizing all the bias, and if we start to change how the embeddings are resized + modify the generation, I think it would be a bit too much and out of the scope of this PR. Then, what I propose is to keep how it was at the beginning in `generation_tf_utils.py` and the `self.get_output_embeddings` methods and move this discussion on another PR. In this another PR I would like as well to fully review how the resizing is done, because the number of line of codes can be largely reduced and simplified. What do you think?<|||||>It would be awesome if we can keep the `get_output_embeddings()` method and leave `generate()` as it is and only focus on the resizing problem here. I'm 100% on board with fixing the resizing problem and it'd be awesome to do this orthogonally to `get_output_embeddings()`. A couple of reasons why I would like to keep `get_output_embeddings()` (I can copy this to the new PR as well): 1) Consistency with PyTorch. In PyTorch `get_output_embeddings()` is even more integrated with other functionalities (like weight tying) and I think we should stay consistent in TF and PT 2) `get_output_embeddings()` is an important function IMO to quickly get the correct logit matrix. Without this function it's not at all always obvious how to get the output embeddings for some models (especially EncoderDecoder, RAG, ...). A unified API for all models is of great help here IMO and I use it a lot actually 3) Don't want to tie the capability of a model to `generate()` with the `MODEL_FOR_....` classes - this is inconsistent with PyTorch and unnecessarily creates a dependency IMO. <|||||>Thanks a lot @patrickvonplaten for sharing this! I think we should move this talk to a more suited place, and meanwhile I will revert that part of the changes.<|||||>I disagree with you on this @patrickvonplaten > 1. Consistency with PyTorch. In PyTorch get_output_embeddings() is even more integrated with other functionalities (like weight tying) and I think we should stay consistent in TF and PT The weight tying cannot be done the same way in TF (and honestly the resizing on the PyTorch side is a bit hacky and very hard to understand it kind of goes against our principle of no magic code), so this alone is not an argument for keeping the `get_output_embeddings` method > 2. get_output_embeddings() is an important function IMO to quickly get the correct logit matrix. Without this function it's not at all always obvious how to get the output embeddings for some models (especially EncoderDecoder, RAG, ...). A unified API for all models is of great help here IMO and I use it a lot actually The problem is that this function is always implemented to return the input embeddings, so the function as it is does not do anything more than `get_input_embeddings` while giving the user a false sense of what it returns. (Note that there is no model in TF apart from mobileBERT that has the capability of having different weights for the embeddings and the decoder, the weights are **always** tied). > 3. 
Don't want to tie the capability of a model to `generate()` with the `MODEL_FOR_....` classes - this is inconsistent with PyTorch and unnecessarily creates a dependency IMO. The PyTorch side has no assert, so in that case, the consistent thing is to remove the assert entirely. I could be convinced to leave the `get_output_embeddings` method for mobileBERT only since it's the only model where it returns something useful, but it's dangerous to have it otherwise (unless we had a way to untie the weights, but that's for another PR!)<|||||>Ok we debriefed a bit with @patrickvonplaten to avoid spamming the PR. I had missed that some models are already using an output embeddings that is different from the input embeddings (most models are tied), like T5 or mT5. So those, like `mobileBERT`, will definitely need the `get_output_embeddings` method. Right now though, the resizing does not work for those models. In the end, we both agree on keeping that method, add the `get_output_bias` method and the `resize_embeddings` should use the outputs of those two methods as well as `get_input_embeddings` in all the things it has to resize. To check if the input embeddings and output_embeddings are the same (and not resize them twice) we could use the `._handle_name` attribute of their weights (or something else if you have a better idea). Does that all make sense?<|||||>Ok, I'm totally fine with this 👍 ! Nevertheless, there are still few things I don't get. > Right now though, the resizing does not work for those models. What do you mean by the resizing does not work? Which one? Do you have a more specific example? > To check if the input embeddings and output_embeddings are the same (and not resize them twice) we could use the ._handle_name attribute of their weights (or something else if you have a better idea). I don't understand this sentence, do you have an example? What do we have to check if the input/output embeddings are different if we get them with two separate methods (namely get_input_embeddings and get_output_embeddings).<|||||>The new T5 and mT5 models have an output embedding layer that is sometimes tied to the input embeddings (so same weights like BERT) and sometimes different. When it's different, it is not resized. > I don't understand this sentence, do you have an example? What do we have to check if the input/output embeddings are different if we get them with two separate methods (namely get_input_embeddings and get_output_embeddings). The output embeddings are, very often, the same as the input embeddings (BERT situation) so in most instances `get_output_embeddings` will return the same thing as `get_input_embeddings` (which is why we initially decided to remove `get_output_embeddings` when discussing together). However, in some cases, it returns something different (mT5 and T5 as mentioned above, or mobileBERT) which is (with avoiding a breaking change) the main argument to keep this `get_output_embeddings` method. However, when taking its result in `resize_embeddings`, we should check if we get a different result from `get_input_embeddings`. Is that clearer?<|||||>Crystal clear!!! Thanks a lot for the details! I will proceed to the changes once the sprint is finished 👍 <|||||>@patrickvonplaten I have put back the `get_output_embeddings`, does it seems ok for you now, or did I forget something?<|||||>Did I miss anything else?<|||||>@LysandreJik any idea why all the tests are failing with a timeout?<|||||>Yes, I can see why. 
Seeing with how to fix it, you'll probably have to rebase.<|||||>@jplu, please kindly rebase your branch - this is yet another edge case I haven't expected - fixed in master. Thank you!<|||||>Thanks @stas00 and @LysandreJik for having fixed the issue. @LysandreJik @sgugger @patrickvonplaten is there anything else I have to do in this PR? or it looks ok for you?<|||||>@patrickvonplaten any other comments? Or you are fine with the current version?<|||||>Not sure why it says @mfuntowicz force-pushed on this PR, but sadly it seems like the history was messed up a bit. Maybe it can be resolved by just `git reset --hard` to the commit before Morgan's force-push? <|||||>Oh no @mfuntowicz killed all my open PRs 😄 ok trying to fix this :)<|||||>Ok should be good now!<|||||>LGTM! @LysandreJik just missing your approval. The Flax tests do not pass and I don't know why :(
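For users following this PR, the path being fixed is the ordinary resize-after-adding-tokens flow; a minimal sketch with an arbitrary checkpoint (the new tokens are placeholders):

```python
from transformers import BertTokenizer, TFBertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForMaskedLM.from_pretrained("bert-base-uncased")

# Grow the vocabulary, then resize the model so that the input embeddings,
# any tied output embeddings, and the output bias all end up with the new size.
num_added = tokenizer.add_tokens(["<new_token_1>", "<new_token_2>"])
model.resize_token_embeddings(len(tokenizer))
```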
transformers
8,656
closed
Return output probabilities with Generate function
# 🚀 Feature request Output the token probabilities along with the tokens when generating sequence. ## Motivation For understanding model confidence, this is quite useful. Also, for abstractive QA with long contexts, one needs to use doc-strides to take into account the contexts & then choose the best answer according to the probability of the generated text. ## Your contribution I can try submitting a PR for non-beam decoding, but guidance would be appreciated. Also, are there any existing solutions to this issue? If so, what & where?
11-19-2020 12:44:19
11-19-2020 12:44:19
Duplicate of https://github.com/huggingface/transformers/issues/7654 . But yes, it seems like many people are asking for this feature and it should be quite straight-forward to implement it... -> feel free to give a shot :-) <|||||>Hi. When can we expect this to be released? 😊<|||||>Next week Tuesday or Wednesday :-) <|||||>Hey! Can anyone point me to the API/code location of this feature? Sorry if I have missed something. :) Thank you!<|||||>@moqingyan https://huggingface.co/transformers/internal/generation_utils.html<|||||>> @moqingyan https://huggingface.co/transformers/internal/generation_utils.html Thank you! My problem was: I set the configuration `output_scores` to `True` in the `generate` function and failed obtain the scores in the returned results. After I struggled in the libraries for hours, I finally figured out I also need to set `return_dict_in_generate` to `True` to obtain the attention values from the `generate` function. :) I think this behavior is non-intuitive, as I have specified I need the scores in the output already. But anyway I figured this out. Hope this comment is helpful to anyone who runs into this issue. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
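Since the resolution is spread over several comments, a consolidated sketch of the two flags discussed above; the GPT-2 checkpoint and prompt are only illustrative, and this requires a transformers release that already ships the feature:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
out = model.generate(
    **inputs,
    max_length=20,
    return_dict_in_generate=True,  # needed to get a structured output object back
    output_scores=True,            # needed to populate the per-step scores
)

print(out.sequences.shape)  # generated token ids
print(len(out.scores))      # one (batch_size x vocab_size) tensor per generated step
# The scores are processed logits; per-step token probabilities come from a softmax:
probs = torch.nn.functional.softmax(out.scores[0], dim=-1)
```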
transformers
8,655
closed
[model card] : fix Geotrend/bert-base-15lang-cased
@julien-c it's me again. The table in [Geotrend/bert-base-15lang-cased](https://huggingface.co/Geotrend/bert-base-15lang-cased) is badly formatted even if it looks good on [Github](https://github.com/huggingface/transformers/blob/master/model_cards/Geotrend/bert-base-15lang-cased/README.md). I guess I have to add a **double line break**. Thanks !
11-19-2020 10:06:17
11-19-2020 10:06:17
We use marked.js as our markdown parser, so rendering should be pretty close to GitHub in general, but there can always be some small inconsistencies.
transformers
8,654
closed
Error in NER examples, run.sh
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->

- `transformers` version: 3.5.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. @stefan-it albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao Blenderbot: @patrickvonplaten Bart: @patrickvonplaten Marian: @patrickvonplaten Pegasus: @patrickvonplaten mBART: @patrickvonplaten T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSMT: @stas00 examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->

## Information
Model I am using (Bert, XLNet ...): Bert

The problem arises when using:
* [x] the official example scripts: (give details below) examples/token-classification/run.sh
* [ ] my own modified scripts: (give details below)

The tasks I am working on is: NER with conll2003 dataset
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)

## To reproduce
Steps to reproduce the behavior:
1. !sh examples/token-classification/run.sh
2.
3.

<!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->

Error traceback

    Traceback (most recent call last):
      File "run_ner.py", line 383, in <module> main()
      File "run_ner.py", line 285, in main load_from_cache_file=not data_args.overwrite_cache,
      File "/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py", line 300, in map for k, dataset in self.items()
      File "/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py", line 300, in <dictcomp> for k, dataset in self.items()
      File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1256, in map update_data=update_data,
      File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
      File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 158, in wrapper self._fingerprint, transform, kwargs_for_fingerprint
      File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 105, in update_fingerprint hasher.update(transform_args[key])
      File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 57, in update self.m.update(self.hash(value).encode("utf-8"))
      File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 53, in hash return cls.hash_default(value)
      File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 46, in hash_default return cls.hash_bytes(dumps(value))
      File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 367, in dumps dump(obj, file)
      File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 339, in dump Pickler(file, recurse=True).dump(obj)
      File "/usr/local/lib/python3.6/dist-packages/dill/_dill.py", line 454, in dump StockPickler.dump(self, obj)
      File "/usr/lib/python3.6/pickle.py", line 409, in dump self.save(obj)
      File "/usr/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self
      File "/usr/local/lib/python3.6/dist-packages/dill/_dill.py", line 1447, in save_function obj.__dict__, fkwdefaults), obj=obj)
      File "/usr/lib/python3.6/pickle.py", line 610, in save_reduce save(args)
      File "/usr/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self
      File "/usr/lib/python3.6/pickle.py", line 751, in save_tuple save(element)
      File "/usr/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self
      File "/usr/lib/python3.6/pickle.py", line 751, in save_tuple save(element)
      File "/usr/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self
      File "/usr/local/lib/python3.6/dist-packages/dill/_dill.py", line 1178, in save_cell pickler.save_reduce(_create_cell, (f,), obj=obj)
      File "/usr/lib/python3.6/pickle.py", line 610, in save_reduce save(args)
      File "/usr/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self
      File "/usr/lib/python3.6/pickle.py", line 736, in save_tuple save(element)
      File "/usr/lib/python3.6/pickle.py", line 521, in save self.save_reduce(obj=obj, *rv)
      File "/usr/lib/python3.6/pickle.py", line 605, in save_reduce save(cls)
      File "/usr/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self
      File "/usr/local/lib/python3.6/dist-packages/dill/_dill.py", line 1374, in save_type obj.__bases__, _dict), obj=obj)
      File "/usr/lib/python3.6/pickle.py", line 610, in save_reduce save(args)
      File "/usr/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self
      File "/usr/lib/python3.6/pickle.py", line 751, in save_tuple save(element)
      File "/usr/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self
      File "/usr/local/lib/python3.6/dist-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj)
      File "/usr/lib/python3.6/pickle.py", line 821, in save_dict self._batch_setitems(obj.items())
      File "/usr/lib/python3.6/pickle.py", line 847, in _batch_setitems save(v)
      File "/usr/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self
      File "/usr/local/lib/python3.6/dist-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj)
      File "/usr/lib/python3.6/pickle.py", line 821, in save_dict self._batch_setitems(obj.items())
      File "/usr/lib/python3.6/pickle.py", line 847, in _batch_setitems save(v)
      File "/usr/lib/python3.6/pickle.py", line 507, in save self.save_global(obj, rv)
      File "/usr/lib/python3.6/pickle.py", line 927, in save_global (obj, module_name, name))
    _pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union

## Expected behavior
It should train and evaluate, give accuracy details.
<!-- A clear and concise description of what you would expect to happen. -->
11-19-2020 10:02:13
11-19-2020 10:02:13
Can confirm that this also appears on latest master (0a80959bddd5da08742d22dca07e0facf0b4cd11)<|||||>Related to: #8212<|||||>Yes. Thanks, I managed to install py 3.8 in Colab and ran it successfully.
transformers
8,653
closed
Fix missing return_dict in RAG example to use a custom knowledge source
We made some changes regarding the output of models but didn't update the `return_dict` parameter of this RAG example script. It works as expected now.
11-19-2020 09:40:38
11-19-2020 09:40:38
@lhoestq What the version of transformers you have used in this PR? <|||||>Hi ! The one on the master branch<|||||>> Hi ! > The one on the master branch So you have installed from sources, right **(Version: 4.0.0.dev0)**? I tried to execute **use_own_knowledge_dataset.py** with your [previous PR](https://github.com/huggingface/transformers/pull/8585). But I got the following error. Seems like **question_enc_outputs** is not a dict but just the tensor. `/transformers/src/transformers/models/rag/modeling_rag.py", line 628, in forward question_enc_hidden_states = question_enc_outputs.hidden_states AttributeError: 'tuple' object has no attribute 'hidden_states'` <|||||>Thanks for letting me know ! I'll fix that as well :) <|||||>Perfect. Btw it works perfectly for version 3.4.0. On Thu, Nov 19, 2020, 23:20 Quentin Lhoest <[email protected]> wrote: > Thanks for letting me know ! I'll fix that as well :) > > — > You are receiving this because you commented. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/pull/8653#issuecomment-730274786>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AEA4FGQVTDUZUCDJWDMROG3SQTWQBANCNFSM4T3EZTTA> > . >
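For readers who hit the same `AttributeError`, a small illustrative sketch of the difference discussed above (the model used here is a generic stand-in, not the exact RAG question encoder internals):

```python
# With return_dict=True the model returns a ModelOutput, so attribute access works;
# with return_dict=False it returns a plain tuple and .hidden_states raises AttributeError.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
inputs = tokenizer("where is paris?", return_tensors="pt")

as_dict = model(**inputs, output_hidden_states=True, return_dict=True)
print(type(as_dict.hidden_states))  # tuple of per-layer activations

as_tuple = model(**inputs, output_hidden_states=True, return_dict=False)
print(type(as_tuple))               # plain tuple -> no .hidden_states attribute
```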
transformers
8,652
closed
WNLI benchmark results clarification
The BERT paper mentions that the accuracy for WNLI is 62.3% [BERT Repository](https://github.com/google-research/bert), but the model card on HuggingFace reports the WNLI accuracy as 45.07%. Is there any particular reason for the big gap between the two models?
11-19-2020 09:24:59
11-19-2020 09:24:59
Wondering the same! Is it a typo? They report the exact same number for tiny, mini, small & medium. However, one would not expect that to be higher than [bert-base-cased, which received 45.07%](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-pytorch-version).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
8,651
closed
RAG: OSError: Can't load tokenizer for 'facebook/rag-sequence-nq/question_encoder_tokenizer'
Hi, I am trying to run [use_own_knowledge_dataset.py](https://github.com/huggingface/transformers/blob/master/examples/rag/use_own_knowledge_dataset.py) with **Transformers Version: 3.5.1** (from your latest [PR](https://github.com/huggingface/transformers/pull/8585)), but it gives the following error:

```
OSError: Can't load tokenizer for 'facebook/rag-sequence-nq/question_encoder_tokenizer'. Make sure that:

- 'facebook/rag-sequence-nq/question_encoder_tokenizer' is a correct model identifier listed on 'https://huggingface.co/models'

- or 'facebook/rag-sequence-nq/question_encoder_tokenizer' is the correct path to a directory containing relevant tokenizer files
```
11-19-2020 09:19:56
11-19-2020 09:19:56
Can you try installing transformers from master?<|||||>Thanks, it works perfectly when installing from source.
transformers
8,650
closed
Why use 'BertLayerNorm' instead of torch.nn.LayerNorm ?
# 🌟 New model addition

What's the difference between 'BertLayerNorm' and torch.nn.LayerNorm?

## Model description

1. pytorch torch.nn.LayerNorm: https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html?highlight=layernorm#torch.nn.LayerNorm

2. modeling.py:

    class BertLayerNorm(Module):
        def __init__(self, hidden_size, eps=1e-12):
            super(BertLayerNorm, self).__init__()
            self.shape = torch.Size((hidden_size,))
            self.eps = eps
            self.weight = nn.Parameter(torch.ones(hidden_size))
            self.bias = nn.Parameter(torch.zeros(hidden_size))
            self.apex_enabled = APEX_IS_AVAILABLE

        @torch.jit.unused
        def fused_layer_norm(self, x):
            return FusedLayerNormAffineFunction.apply(
                x, self.weight, self.bias, self.shape, self.eps)

        def forward(self, x):
            if self.apex_enabled and not torch.jit.is_scripting():
                x = self.fused_layer_norm(x)
            else:
                u = x.mean(-1, keepdim=True)
                s = (x - u).pow(2).mean(-1, keepdim=True)
                x = (x - u) / torch.sqrt(s + self.eps)
                x = self.weight * x + self.bias
            return x

<!-- Important information -->

It seems like torch.nn.LayerNorm performs the same computation as the ops below in BertLayerNorm:

    u = x.mean(-1, keepdim=True)
    s = (x - u).pow(2).mean(-1, keepdim=True)
    x = (x - u) / torch.sqrt(s + self.eps)
    x = self.weight * x + self.bias

Why don't we use torch.nn.LayerNorm? Thanks a lot for answering my question.

## Open source status

* [ ] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
* [ ] who are the authors: (mention them, if possible by @gh-username)
11-19-2020 07:50:31
11-19-2020 07:50:31
There once was a difference, but there is none anymore. I believe `BertLayerNorm` was removed and is no longer available in recent versions.<|||||>I have done some tests, `self.assertTensorsEqual(out_BertLayerNorm, out_mlu_nativeLayerNorm, 0, use_RAE=True)`, and the diff is 2.8356e-07.
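A quick numerical check of the equivalence discussed in this thread (a minimal sketch; the `hidden_size` and `eps` values are just illustrative):

```python
# The manual BERT-style layer norm and torch.nn.LayerNorm (with default weight=1, bias=0)
# compute the same thing up to floating-point error.
import torch
import torch.nn as nn

hidden_size, eps = 768, 1e-12
x = torch.randn(2, 4, hidden_size)

u = x.mean(-1, keepdim=True)
s = (x - u).pow(2).mean(-1, keepdim=True)
manual = (x - u) / torch.sqrt(s + eps)  # weight=1 and bias=0 leave this unchanged

layer_norm = nn.LayerNorm(hidden_size, eps=eps)
print(torch.allclose(manual, layer_norm(x), atol=1e-6))  # expected: True
```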
transformers
8,649
closed
from_pretrained()'s load() blocks forever in subprocess
## Environment info

- `transformers` version: 3.5.1
- Platform: Linux-5.4.58 x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes

### Who can help
Anyone familiar with the from_pretrained() code path. Perhaps @sgugger? Thank you!

## Information
[from_pretrained()'s load()](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L1004) blocks forever loading roberta-base, due specifically to the call to `nn.Module._load_from_state_dict` that would load the "embeddings.word_embeddings". Occurs when loading the model in both the first process and a subprocess started via `multiprocessing`. I observe the same behavior when loading via keyword vs loading local files cached via `save_pretrained`.

Model I am using (Bert, XLNet ...): roberta-base

The problem arises when using:
* [ ] the official example scripts:
* [x] my own modified scripts: see sample script below.

## To reproduce
Steps to reproduce the behavior:

```python
import torch
import transformers
import multiprocessing as mp

def load_model_in_subprocess():
    print("Started subprocess.")
    model2 = transformers.RobertaModel.from_pretrained('roberta-base')
    print("Model loaded in subprocess.")

def main():
    model1 = transformers.RobertaModel.from_pretrained('roberta-base')
    print("Model loaded in main process.")
    p = mp.Process(target=load_model_in_subprocess, daemon=True)
    p.start()
    p.join()
    print("Main thread terminating.")

if __name__ == "__main__":
    main()
```

Output:
```
Model loaded in main process.
Started subprocess.
<never terminates>
```

## Expected behavior
Model loads and is functional in both main process and subprocess.
11-19-2020 06:50:39
11-19-2020 06:50:39
If you've identified the issue to be coming from `nn.Module._load_from_state_dict`, then I guess this is more of a PyTorch issue than a `transformers` one? Do you have an idea what might cause this hang with that method?<|||||>Well, I don't know enough about torch state_dict behavior to understand why `transformers` would be directly calling the underscored "internal use" method `_load_from_state_dict` in the first place, but it strikes me that transformers is making assumptions about the functioning of this internal method that may not hold in practice; I don't see anything obvious in `_load_from_state_dict` that would cause it to lock up under these (or any) conditions, but we may be violating a usage assumption (e.g. providing a bad pre-load hook). <|||||>Oh, I see. Looking at the `torch.load_state_dict` however, it doesn't seem to be doing something very differently to what we do. Have you managed to load several models using `torch.load()` with the same multiprocessing approach you have used?<|||||>Well, a fair test would be to load the _same_ (roBERTa-base) model, but I'm not sure how to write the code to do that... that's why I'm using `transformers`! But it's easy to verify that there's no problem with multi-process loading of PyTorch models: ```python import torch import torch.nn as nn import multiprocessing as mp USE_STATE_DICT = True class SimpleNet(nn.Module): def __init__(self): super(SimpleNet, self).__init__() self.fc1 = nn.Linear(768, 1) def forward(self, x): x = self.fc1(x) return x def save_model(): model = SimpleNet() torch.save(model, './full_model.pt') torch.save(model.state_dict(), './model_state_dict.pt') def load_model_in_subprocess(): print("Started subprocess.") if USE_STATE_DICT: model = SimpleNet() model.load_state_dict(torch.load('./model_state_dict.pt')) else: model = torch.load('./full_model.pt') print(f"Model loaded in subprocess: {model}") def main(): save_model() print("Saved model.") if USE_STATE_DICT: model = SimpleNet() model.load_state_dict(torch.load('./model_state_dict.pt')) else: model = torch.load('./full_model.pt') print(f"Model loaded in main process: {model}") p = mp.Process(target=load_model_in_subprocess, daemon=True) p.start() p.join() print("Main thread terminating.") if __name__ == "__main__": main() ``` This script terminates fine when loading from state dict or a pickled model file.<|||||>Adding some debug prints to `transformers` load in modeling_utils.py, I can confirm that it is the call to `nn.Module._load_from_state_dict` when: ``` prefix = roberta.embeddings.word_embeddings. local_metadata = {'version': 1} missing_keys = ['roberta.embeddings.position_ids'] unexpected_keys = [] error_msgs = [] strict = True ``` The keys in the state_dict are: ``` roberta.embeddings.word_embeddings.weight roberta.embeddings.position_embeddings.weight roberta.embeddings.token_type_embeddings.weight roberta.embeddings.LayerNorm.weight roberta.embeddings.LayerNorm.bias <snipping all of the individual layer keys e.g. roberta.encoder.layer.0.attention.self.query.weight> roberta.pooler.dense.weight roberta.pooler.dense.bias lm_head.bias lm_head.dense.weight lm_head.dense.bias lm_head.layer_norm.weight lm_head.layer_norm.bias lm_head.decoder.weight ``` The shape of the `roberta.embeddings.word_embeddings.weight` tensor is [50265,768]. 
(note: same blocking behavior when loading bert-base-uncased) <|||||>Okay, I did more investigation, and the problem is a blocking call to `Tensor.copy_` that copies the Parameter in the state_dict into the Parameter in the Module (in this case, the `Embedding(50265, 768, padding_idx=1)` parameter in the roBERTa model). The [documentation](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.copy_) indicates a non_blocking parameter that can be used when copying between CPU and GPU, but we are copying between CPU and CPU. I confirmed that non_blocking does nothing, and that the `device` of both Parameters is `cpu`. That's where I'm going to stop pursuing this bug. I don't know the structure of the C++ code, but it seems likely that this is an issue with the PyTorch CPU copy implementation and the idiosyncrasies of the specific OS I'm using. If this problem can be reproduced on others systems it may be worth investigating further, but it does seem like the fault probably lies with `PyTorch` and not with `transformers`. Hopefully this affects only a small set of OSes. @LysandreJik, you may want to close this issue?<|||||>Thank you very much for your deep investigation of this issue. Unfortunately I don't see how we could change that on our front to make it work, so we'll close this for now. If we get other reports of this we'll investigate further. <|||||>Got the same issue in this environment : Platform: Linux clem-MacBookAir 5.13.0-40-generic #45~20.04.1-Ubuntu x86_64 x86_64 x86_64 GNU/Linux Python version: 3.9.7 PyTorch version (GPU?): 1.11.0+cu102 (False) Tensorflow version (GPU?): not installed (NA) Using GPU in script?: No Using distributed or parallel set-up in script?: Yes<|||||>Experiencing the same issue as well with a `torchvision.models` which seems to be coming from `nn.Module._load_from_state_dict` running as subprocess on CPU, unsure why this has just started to happen. moving the model to the GPU before loading works as a workaround ``` model.to(device) model.load_from_state_dict(ckpt) ```
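For anyone who lands on this thread later, a hedged sketch of one variation worth trying: the repro above uses the default fork start method on Linux, and fork-related lock issues in native libraries are a common cause of this kind of hang, so the spawn start method is one thing to experiment with (this is an assumption about the cause, not a confirmed fix):

```python
# Same repro as above, but with the 'spawn' start method instead of Linux's default 'fork'.
import transformers
import multiprocessing as mp

def load_model_in_subprocess():
    print("Started subprocess.")
    transformers.RobertaModel.from_pretrained("roberta-base")
    print("Model loaded in subprocess.")

def main():
    transformers.RobertaModel.from_pretrained("roberta-base")
    print("Model loaded in main process.")
    ctx = mp.get_context("spawn")  # fresh interpreter, no locks inherited from the parent
    p = ctx.Process(target=load_model_in_subprocess, daemon=True)
    p.start()
    p.join()

if __name__ == "__main__":
    main()
```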
transformers
8,648
closed
Create README.md
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
11-19-2020 06:41:55
11-19-2020 06:41:55
Closing this one, as the duplicate was already merged! For context, please also read https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
transformers
8,647
closed
How can I get the input embeddings_output for BERT?
How can I get the input `embeddings_output` for BERT?
11-19-2020 03:34:14
11-19-2020 03:34:14
Hello, thanks for opening an issue! We try to keep the GitHub issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!
transformers
8,646
closed
CPM LM
# 🌟 New model addition (2.6B params)

## Model description

CPM (Chinese Pre-Trained Language Models), which has 2.6B parameters, can be used for zero-shot, one-shot, and few-shot learning. Code and model are available.

## Open source status

* [✅] the model implementation is available: [CPM-Generate pytorch](https://github.com/TsinghuaAI/CPM-Generate) [CPM-LM-TF2](https://github.com/qhduan/CPM-LM-TF2)
* [✅] the model weights are available: can be found [here](https://github.com/TsinghuaAI/CPM-Generate)
* [✅] who are the authors: (Research team of Beijing Zhiyuan Institute of Artificial Intelligence and Tsinghua University @ TsinghuaAI)
11-19-2020 01:46:55
11-19-2020 01:46:55
@JetRunner might be interested in that!<|||||>Hi, I didn't notice this before I finished the translation to `Transformer` just now. May [this script](https://github.com/mymusise/CPM-TF2Transformer/blob/main/transfor_CMP.ipynb) help. BTW, I met some problems when uploading. #8662<|||||>> @JetRunner might be interested in that! Yes I was working on it but it seems @mymusise has already worked it out! @mymusise I will assist you through the uploading process!<|||||>@mymusise I think the generated result from your repo is a little buggy here. Any idea why? ``` [{'generated_text': '你好 ▁ , ▁我 ▁是 ▁ 个 ▁ <unk> ▁ <unk> ▁ <unk> ▁ <unk> ▁ <unk> ▁ <unk>'}] ``` https://github.com/mymusise/CPM-TF2Transformer/blob/e5ea4799603f19ab7f92596f7ad7472198c505c6/transfor_CMP.ipynb#L881<|||||>|``` |[{'generated_text': '你好 ▁ , ▁我 ▁是 ▁ 个 ▁ <unk> ▁ <unk> ▁ <unk> ▁ <unk> ▁ <unk> ▁ <unk>'}] |``` @JetRunner Hi, because I use `BertTokenizer` here, didn't use the `bpe` method. And seems `GPT2Tokenizer` does not yet support other languages such as Chinese, the [byte_encoder](https://github.com/huggingface/transformers/blob/dd52804f5fce0a568ffbb3dc7fd088d2de0a0e56/src/transformers/models/gpt2/tokenization_gpt2.py#L246) here will encode other languages to unknown token. Any advice?<|||||>I see. It's not a big problem since we can now specify the tokenizer type in the configuration file. I can take care of those once you've uploaded the model file. Let's wait for @julien-c to solve the big file uploading problem first.<|||||>OK, thank you, guy.<|||||>Hi, the model has been uploaded, see: https://huggingface.co/mymusise/CPM-Third-Party<|||||>@mymusise Awesome news! Let me take care of the rest and I will keep you updated.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>😄 Is there any news? I have tested this before and the result is different from the official repo. Has this problem been solved? (I don't have a card now so I can't test it, sorry😢.) If so, I will close this issue. @mymusise @JetRunner <|||||>> I have tested this before and the result is different from the official repo. Hi, there are a few problems with the old one. I've recreated the model card, please try the new one: [mymusise/CPM-GPT2](https://huggingface.co/mymusise/CPM-GPT2) and the FP16 version has been uploaded already: [mymusise/CPM-GPT2-FP16](https://huggingface.co/mymusise/CPM-GPT2-FP16) <|||||>Thanks, it works perfectly! 😄
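As a follow-up for readers, a hypothetical usage sketch based on the model ids mentioned in this thread (whether the hub repo ships a tokenizer the pipeline can auto-load is an assumption here; the CPM tokenizer handling discussed above may require extra steps):

```python
# Assumes the "mymusise/CPM-GPT2" repo referenced above exposes a standard GPT-2-style
# config and tokenizer; adjust according to the model card if it says otherwise.
from transformers import pipeline

generator = pipeline("text-generation", model="mymusise/CPM-GPT2")
print(generator("你好", max_length=30))
```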
transformers
8,645
closed
[core] implement support for run-time dependency version checking
As discussed at https://github.com/huggingface/transformers/pull/8073#issuecomment-729181330 this PR:

* [x] adds mechanics for run-time dependency version checks (module with fixed and unversioned too)
* [x] adds thorough tests (this is where it's the easiest to see how things work)
* [x] creates one source for all dependency versions in setup.py - setup.py needs to be rerun on its update to re-generate src/transformers/dependency_versions_table.py, which is then used by transformers
* [x] adds support for and deploys a python runtime version check
* [x] deploys runtime checks for versioned modules in setup.py's `install_requires` (i.e. must-have modules)
* [x] switches `examples/lightning_base.py` to a fatal-on-failure requirement check.
* [x] deploys the version lookup in `setup.py`'s `extras` definitions and `install_requires`
* [x] adds a new `Makefile` target `deps_table_update` that updates the dep table, and inserts it into the `style/quality/fixup` targets so the sync shouldn't take too long if forgotten to be run explicitly

@sgugger, @LysandreJik, @patrickvonplaten, @thomwolf
11-19-2020 01:32:23
11-19-2020 01:32:23
@sgugger, wrt your comment in your [announcement](https://discuss.huggingface.co/t/transformers-v4-0-0-announcement/1990/1) > => Resulting breaking change: some people will have to install sentencepiece explicitly while they didn’t have to before with the command pip install transformers[sentencepiece]. After this PR, you can just `require_version("sentencepiece", "pip install transformers[sentencepiece]")` in the code that needs it at run-time. You can expand the hint (second arg) as you wish to be self-explanatory.<|||||>@LysandreJik and @sgugger - it's ready to merge whenever you have a chance to review. Thank you. Probably it is best to merge post v4-release, in case I missed something.<|||||>Will do a final review tomorrow!
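To make the mechanism above concrete, here is a minimal sketch of what such a run-time check can look like (an illustration only, not the actual `require_version` implementation in this PR; it assumes Python 3.8+ for `importlib.metadata` and the `packaging` library):

```python
# Hypothetical stand-in for a run-time dependency check with an actionable hint.
import importlib.metadata
from packaging import version

def require_version(requirement: str, hint: str = "") -> None:
    # Only handles "pkg" or "pkg>=x.y.z" for illustration purposes.
    pkg, _, wanted = requirement.partition(">=")
    try:
        got = importlib.metadata.version(pkg)
    except importlib.metadata.PackageNotFoundError:
        raise ImportError(f"{pkg} is required but was not found. {hint}")
    if wanted and version.parse(got) < version.parse(wanted):
        raise ImportError(f"{pkg}>={wanted} is required, but {got} is installed. {hint}")

require_version("sentencepiece", "pip install transformers[sentencepiece]")
```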
transformers
8,644
closed
Fix small typo
Fixed a small typo on the XLNet and permutation language modelling section @patrickvonplaten # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
11-19-2020 00:56:25
11-19-2020 00:56:25
@patrickvonplaten pls review & approve this small fix of a typo in the XLNet section. Thank you!
transformers
8,643
closed
Model embedding size and tokenizer size mismatch; resizing embedding will cause CUDA assert error
## Environment info
Google colab

### Who can help
T5: @patrickvonplaten

## Information
I'm noticing something strange with T5. The model embedding size and the tokenizer size do not match. When I try to resize the model to have a smaller embedding, this crashes CUDA. This is probably two bugs - one for the size mismatch, and one for shortening the embedding causing a crash.

```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("t5-base")
print (len(tokenizer))
model = AutoModel.from_pretrained("t5-base")
print (model.shared)
model.resize_token_embeddings(len(tokenizer))
model.to('cuda')
```

## Expected behavior
Expected behaviour is regular loading of the model onto CUDA. What I got instead was:

    32100
    Some weights of T5Model were not initialized from the model checkpoint at t5-base and are newly initialized: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight']
    You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
    Embedding(32128, 768)
    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-18-145ba8d0b52c> in <module>()
          5 print (model.shared)
          6 model.resize_token_embeddings(len(tokenizer))
    ----> 7 model.to('cuda')

    3 frames
    /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in convert(t)
        608             if convert_to_format is not None and t.dim() == 4:
        609                 return t.to(device, dtype if t.is_floating_point() else None, non_blocking, memory_format=convert_to_format)
    --> 610             return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
        611
        612         return self._apply(convert)

    RuntimeError: CUDA error: device-side assert triggered
11-19-2020 00:32:18
11-19-2020 00:32:18
Hey @ontocord, I cannot reproduce your error on master...

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("t5-base")
print (len(tokenizer))
model = AutoModel.from_pretrained("t5-base")
print (model.shared)
model.resize_token_embeddings(len(tokenizer))
model.to('cuda')
```

works fine for me.<|||||>I am able to correctly shorten the embedding matrix.<|||||>@patrickvonplaten Thank you. It's also working in my code now with the latest version of transformers. Thanks for looking into this!
transformers
8,642
closed
Setting Evaluation Strategy in the TrainingArgs does not print validation metrics
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->

- `transformers` version: 3.5.1
- Platform: Linux-5.4.0-1029-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

### Who can help
@sgugger

## Information
Model I am using (Bert, XLNet ...): BertForSequenceClassification

The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)

## To reproduce
Steps to reproduce the behavior:

Set training args to
```
self.training_args.do_eval = True
self.training_args.evaluate_during_training = True
self.training_args.evaluation_strategy = "steps"
self.training_args.eval_steps=128
self.training_args.logging_steps=128
```

Then pass to Trainer
```
trainer = Trainer(
    model=model,
    args=self.training_args,
    train_dataset=training_set,
    eval_dataset=eval_set,
    compute_metrics=self.compute_metrics,
)
```

## Expected behavior
Validation metrics are printed out on every 128th step. Right now, only logging steps appear in logs on the console. I looked through the forums and others don't seem to have this issue. Any help on resolving this would be hugely appreciated since I can't train without validation metrics. It looked like evaluate_during_training isn't required, but it won't work with or without it set.
11-19-2020 00:06:08
11-19-2020 00:06:08
The steps you indicate to reproduce are incomplete; there is little we can do without knowing which script you're running and having access to the full code. For instance

```
python examples/text-classification/run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name mrpc \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 4e-5 \
  --num_train_epochs 3.0 \
  --output_dir ~/tmp/mnli/ \
  --overwrite_output_dir \
  --save_total_limit 5 \
  --evaluation_strategy steps \
  --eval_steps 128 \
  --logging_steps 128
```

does print the metrics every 128 steps. My guess is this is all because you're not creating a `training_args` using the `TrainingArguments` init, thus `self.training_args.evaluation_strategy` is improperly set. Try using

```
self.training_args.evaluation_strategy = EvaluationStrategy.STEPS
```

(but you really should be using the `TrainingArguments` init that has more checks and properly sets those arguments).<|||||>@sgugger After trying to set the strategy in the constructor, it works as intended! Thank you for a quick solve. I was setting some of the `Arg` object's fields without the constructor, so I was getting the unexpected behaviour.<|||||>So be sure to use that then. The init wraps the string in the enum for you, that's why you don't need to do it when using it. Closing this issue since you're saying it's solved, don't hesitate to reopen if needed.
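Tying the advice above together, a minimal sketch of what the constructor-based setup can look like (the argument values are just the ones discussed in this thread; the dataset, model, and metrics objects are placeholders):

```python
# Build the arguments through TrainingArguments.__init__ so strings like "steps"
# get validated and converted to the proper enum internally.
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./output",
    do_train=True,
    do_eval=True,
    evaluation_strategy="steps",
    eval_steps=128,
    logging_steps=128,
)

trainer = Trainer(
    model=model,                      # placeholder: your model
    args=training_args,
    train_dataset=training_set,       # placeholder: your datasets
    eval_dataset=eval_set,
    compute_metrics=compute_metrics,  # placeholder: your metrics function
)
```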
transformers
8,641
closed
Bi-Directional Reformer text multi class classification
# 🚀 Feature request

I am fairly new to transformers, but I have good experience in ML. I have tried to start by building a RoBERTa multi-class classification model, but the docs and examples are not clear. Can you point me to the set of literature that would get me going with RoBERTa? Afterwards, I would love to support the effort of building the Bi-Directional Reformer text multi-class classification model. Or, if you send me the docs, I can jump directly to the Reformer model.

Many thanks,
Z
11-18-2020 23:10:15
11-18-2020 23:10:15
Have you taken a look at the [RobertaForSequenceClassification](https://huggingface.co/transformers/model_doc/roberta.html#tfrobertaforsequenceclassification) documentation page? You might also be interested in the [sequence classification task](https://huggingface.co/transformers/task_summary.html#sequence-classification) documentation page. We aim to have the exact same API for all models, so while this example showcases BERT with the auto models, you can use a RoBERTa architecture instead. You can see the [ReformerForSequenceClassification](https://huggingface.co/transformers/model_doc/reformer.html#reformerforsequenceclassification) documentation page if you're particularly interested in Reformer. <|||||>Excellent! Thanks, Z<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
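Following up on the docs linked above, a small sketch of what the shared API looks like for this use case (the label count and class index are made up for illustration):

```python
# Forward pass for multi-class classification with RoBERTa.
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=4)

inputs = tokenizer("This movie was surprisingly good.", return_tensors="pt")
labels = torch.tensor([2])  # hypothetical class index

outputs = model(**inputs, labels=labels, return_dict=True)
print(outputs.loss)    # cross-entropy loss over the 4 classes
print(outputs.logits)  # shape: (1, 4)
```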
transformers
8,640
closed
Bump notebook from 6.1.4 to 6.1.5 in /examples/lxmert
Bumps [notebook](https://github.com/jupyter/jupyterhub) from 6.1.4 to 6.1.5. <details> <summary>Commits</summary> <ul> <li>See full diff in <a href="https://github.com/jupyter/jupyterhub/commits">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=notebook&package-manager=pip&previous-version=6.1.4&new-version=6.1.5)](https://docs.github.com/en/github/managing-security-vulnerabilities/configuring-github-dependabot-security-updates) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
11-18-2020 23:02:01
11-18-2020 23:02:01
This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
transformers
8,639
closed
grammar
# What does this PR do? Fixes a typo in the pull request template. <!-- Remove if not applicable --> ## Before submitting - [x ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. documentation: @sgugger
11-18-2020 22:49:00
11-18-2020 22:49:00
transformers
8,638
closed
AttributeError: module 'typing' has no attribute '_ClassVar'
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->

- `transformers` version: 3.5.1
- Platform: Linux
- Python version: 3.8
- PyTorch version (GPU?): 1.7.0+cpu
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes

### Who can help
@sgugger .... @LysandreJik ... mb?

## Information
Model I am using: Distilbert

The problem arises when using: Just this:

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
```

The tasks I am working on is:
* [ ] my own task or dataset: I am using default Distilbert for my flask API

## To reproduce
Steps to reproduce the behavior: That's a big part of the question. It works just fine on my local machine, but gives this error when run on my AWS server.

```python
from flask import Flask, request, jsonify
from flask import Flask
from flask_restful import Api, Resource
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
from transformers.pipelines import pipeline

tokenizer = AutoTokenizer.from_pretrained('./Dis_Save/')
model = AutoModelForQuestionAnswering.from_pretrained('./Dis_Save/')
nlp_qa = pipeline('question-answering', tokenizer=tokenizer,model=model)

app = Flask(__name__)

@app.route('/api/QandA', methods=['GET', 'POST'])
def QandA():
    content = request.json
    print(content['userMessages'])
    X = nlp_qa(context=content['userMessages'], question=content['question'])
    return(jsonify({"answer":X["answer"], "score":X["score"]}))

if __name__ == "__main__":
    app.run(debug=True)
```

#This is all the code that I have.

Here is the full error that I get:

```
File "access/main.py", line 4, in <module> from transformers import AutoTokenizer, AutoModelForQuestionAnswering
File "/home/ubuntu/access/transformers/__init__.py", line 22, in <module> from .integrations import ( # isort:skip
File "/home/ubuntu/access/transformers/integrations.py", line 82, in <module> from .trainer_callback import TrainerCallback # noqa: E402
File "/home/ubuntu/access/transformers/trainer_callback.py", line 27, in <module> from .training_args import TrainingArguments
File "/home/ubuntu/access/transformers/training_args.py", line 36, in <module> class TrainingArguments:
File "/home/ubuntu/access/dataclasses.py", line 958, in dataclass return wrap(_cls)
File "/home/ubuntu/access/dataclasses.py", line 950, in wrap return _process_class(cls, init, repr, eq, order, unsafe_hash, frozen)
File "/home/ubuntu/access/dataclasses.py", line 800, in _process_class cls_fields = [_get_field(cls, name, type)
File "/home/ubuntu/access/dataclasses.py", line 800, in <listcomp> cls_fields = [_get_field(cls, name, type)
File "/home/ubuntu/access/dataclasses.py", line 659, in _get_field if (_is_classvar(a_type, typing)
File "/home/ubuntu/access/dataclasses.py", line 550, in _is_classvar return type(a_type) is typing._ClassVar
AttributeError: module 'typing' has no attribute '_ClassVar'
```
11-18-2020 22:48:22
11-18-2020 22:48:22
This is weird! Looking at the stacktrace, I have a few questions: - Are you running a file in `access/main.py`? - If yes, is there a `dataclasses.py` file in that `access` folder? It seems far-fetched but if that's the case, it might be possible that this file is interfering with the `dataclasses` module.<|||||>It seems related to an incompatibility of python > 3.6 with package dataclasses, as explained here: https://github.com/google/flax/pull/270<|||||>Yes, but we only install `dataclasses` on Python versions that are inferior to 3.7: https://github.com/huggingface/transformers/blob/master/setup.py#L137<|||||>I encountered the same problem with the following setup: - transformers version: 3.5.1 - Platform: Linux - Python version: 3.8 - PyTorch version (GPU?): 1.7.0+cpu - Using GPU in script?: No - Using distributed or parallel set-up in script?: No Local execution works fine, but when running the code on Google App Engine (Standard Environment), it fails with error `AttributeError: module 'typing' has no attribute '_ClassVar'`. There is no file called `dataclasses.py` anywhere in the project, Stacktrace: ``` File "/srv/application/stance_pred/bert_inference.py", line 8, in <module> from transformers import DistilBertForSequenceClassification, DistilBertTokenizer, DistilBertConfig File "/layers/google.python.pip/pip/transformers/__init__.py", line 22, in <module> from .integrations import ( # isort:skip File "/layers/google.python.pip/pip/transformers/integrations.py", line 82, in <module> from .trainer_callback import TrainerCallback # noqa: E402 File "/layers/google.python.pip/pip/transformers/trainer_callback.py", line 27, in <module> from .training_args import TrainingArguments File "/layers/google.python.pip/pip/transformers/training_args.py", line 36, in <module> class TrainingArguments: File "/layers/google.python.pip/pip/dataclasses.py", line 958, in dataclass return wrap(_cls) File "/layers/google.python.pip/pip/dataclasses.py", line 950, in wrap return _process_class(cls, init, repr, eq, order, unsafe_hash, frozen) File "/layers/google.python.pip/pip/dataclasses.py", line 800, in _process_class cls_fields = [_get_field(cls, name, type) File "/layers/google.python.pip/pip/dataclasses.py", line 800, in <listcomp> cls_fields = [_get_field(cls, name, type) File "/layers/google.python.pip/pip/dataclasses.py", line 659, in _get_field if (_is_classvar(a_type, typing) File "/layers/google.python.pip/pip/dataclasses.py", line 550, in _is_classvar return type(a_type) is typing._ClassVar AttributeError: module 'typing' has no attribute '_ClassVar' ```<|||||>Any solution?<|||||>Could one of you post the result of `pip list` in the environment where that is failing? Or even better paste the result of `pip freeze`, alongside a few lines of code that reproduce the issue. Thank you!<|||||>I solved this problem by removing `dataclasses*`<|||||>I have solved this problem by downgrading to Python version: 3.6 Thanks @attardi <|||||>note that `fairseq` 0.10.1 requires `dataclasses` even for py>3.7 where it's built-in and HF Trainer breaks when `dataclasses` are installed for these versions. So if some project pulls in `fairseq` which will force the install of `dataclasses` HF Trainer will break. Probably need to ask `fairseq` to fix their dependencies. Until then @thesby's solution is the easiest one. 
```
pip uninstall dataclasses -y
```<|||||>I don't know why, but for me changing the version of Python worked (from 3.9.12 to 3.9.5.)<|||||>> note that `fairseq` 0.10.1 requires `dataclasses` even for py>3.7 where it's built-in and HF Trainer breaks when `dataclasses` are installed for these versions. So if some project pulls in `fairseq` which will force the install of `dataclasses` HF Trainer will break. Probably need to ask `fairseq` to fix their dependencies.
>
> Until then @thesby's solution is the easiest one.
>
> ```
> pip uninstall dataclasses -y
> ```

In my case the `pip` command itself was broken, so I had to do it by hand:

```
rm -rf lib/dataclasses-0.6.dist-info
rm lib/dataclasses.py
```

in the location where `dataclasses` has been installed!<|||||>Dependency to `dataclasses` in setup.py should be python 3.6 only:

```diff
-'dataclasses'
+'dataclasses; python_version < "3.7"'
```

or use an `if sys.version_info >= (3, 7):` check to dynamically configure `_deps` in https://github.com/huggingface/transformers/blob/main/setup.py#L98.<|||||>I met the same error without having dataclasses installed in my environment (python 3.9). I installed dataclasses and uninstalled it again - from this point on the error disappeared.
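To illustrate the environment-marker suggestion above, a hypothetical `setup.py` excerpt (names are illustrative; the real transformers setup.py organizes its dependencies differently):

```python
# Only pull in the dataclasses backport on Python 3.6, where it is not in the stdlib.
from setuptools import setup

setup(
    name="example-package",
    install_requires=[
        'dataclasses; python_version < "3.7"',
    ],
)
```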
transformers
8,637
closed
Add FastFormers to the example directory
# What does this PR do? Add FastFormers into the example directory. https://github.com/huggingface/transformers/issues/8083 https://arxiv.org/abs/2010.13382 https://github.com/microsoft/fastformers ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @JetRunner @LysandreJik
11-18-2020 22:46:19
11-18-2020 22:46:19
Thanks, @JetRunner ! I will clean up the code and fix the CI checks.<|||||>Thanks for the opinions, @LysandreJik!

1. The ONNX-related files are necessary until the change is merged into the master branch of onnxruntime.
2. I agree with you regarding the binary files. I can put the binary files in a different place and put a script to download them, here.
3. That sounds good. I will look at the dataset library to see how to utilize it.<|||||>That's great, thanks a lot @ykim362! I think we can make do with the ONNX files while your PR is waiting over at onnxruntime.<|||||>Thanks for the review @patrickvonplaten ! I can make most of the changes as you recommended. Regarding `attention_head_size`, this is necessary for head-pruned transformers. I can think of two ways to keep backward compatibility.

1. Create a new model class by subclassing `RoBERTa`. FastFormers supports BERT and RoBERTa, so that would work for both.
2. In the current BERT model, add a default behavior (same as the current logic) when `attention_head_size` doesn't exist. Then, it could be used only when the `attention_head_size` parameter exists in the config file.

Or, I am open to any suggestions. :)<|||||>What is the status with this project? Anything I can help with?<|||||>> What is the status with this project? Anything I can help with?

I think it's mostly about this: https://github.com/huggingface/transformers/pull/8637/files?file-filters%5B%5D=.png&file-filters%5B%5D=.py&file-filters%5B%5D=.whl#r545051113 -> in a first PR we should not touch this logic IMO<|||||>I am sorry, but I have been fully loaded with some other work. I won't be able to make progress. I'd like to close this to avoid any confusion.<|||||>Thank you for the clarification @ykim362, I hope we may still collaborate in the future!<|||||>Thanks, @LysandreJik ! Likewise for the future collaboration! :)
transformers
8,636
closed
Updated the Extractive Question Answering code snippets
# What does this PR do? The Extractive Question Answering code snippets do not work anymore since the models return task-specific output objects. This commit fixes the pytorch and tensorflow examples but adding `.values()` to the model call. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. documentation: @sgugger
11-18-2020 22:39:15
11-18-2020 22:39:15
Thanks for flagging this! I think it would be better to show how to use the attributes, so something like:

```
outputs = model(**inputs)
answer_start_scores = outputs.start_logits
answer_end_scores = outputs.end_logits
```<|||||>Yes, you are right. @sgugger
transformers
8,635
closed
Small formatting fix
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Just adding the bash formatting for the markdown in the run_mlm_wwm.py snippet ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 --> documentation: @sgugger
11-18-2020 22:35:56
11-18-2020 22:35:56
transformers
8,634
closed
Fix a bunch of slow tests
This PR fixes a bunch of slow tests. DPR had a few issues, which this PR addresses. The `DPRReader` object was not using `token_type_ids`, and for some unknown reason the interaction with the underlying `TFBertMainLayer`, which requires them, crashed when using the oh-so-terrifying `tf.saved_model.save`. I chose to add `token_type_ids` to that model, as imo an additional feature is not a bad idea, even if it wasn't in the original model. I can imagine a bunch of reasons why users might want to have `token_type_ids` in that model even though they don't exist there now. All in all, after an hour of debugging I feel that this is the only way to have the slow `tf.saved_model.save` test passing on TFDPR. @patrickvonplaten @lhoestq please tell me what you think.
11-18-2020 22:09:54
11-18-2020 22:09:54
Good for me - thanks a lot for taking care of it! It would probably all save us a lot of time to find once and for all a good solution for the `test_saved_model_with_attentions_output` and `test_saved_model_with_hidden_states_output` functions. I've spent way too much time trying to fix those for TFT5 as well and without finding a good solution. If you have a good idea of how to deal with this functionality/test in the future let me know @LysandreJik :-) @sgugger - not sure where the `MODIFY` statements are coming from...I think we can delete it along with `return_dict=True` now<|||||>@patrickvonplaten I tried but then test failed ;-)<|||||>> @patrickvonplaten I tried but then test failed ;-) Hmm maybe @lhoestq has an idea<|||||>The test_modeling_dpr was added recently in #8203 Maybe @ratthachat knows why the `# MODIFY` are there ? We should indeed remove them Also I'm ok with adding token_type_ids since it's a common additional input to models based on bert<|||||>Hi guys, first of all I apologize if there's a problem at the `MODIFY` tag which is about `return_dict` argument. I translated `test_modeling_tf_dpr` from the Pytorch's one. If I remember correctly, I found out that there's some tests in `test_modeling_tf_common.py` need `return_dict=False` argument. (and when I looked at the tests, I judged that all tests just need to ensure the correct values of output, not about `return_dict` argument.) That's why I changed the config to `return_dict=False` as default, and left the `MODIFY` comments just to note that this part was modified from the Pytorch's one. (Again, I thought the main tests are on outputs' values) It's my first time to write this kind of test file here, so I apologize again if I made something wrong!<|||||>> Hi guys, first of all I apologize if there's a problem at the `MODIFY` tag which is about `return_dict` argument. > > I translated `test_modeling_tf_dpr` from the Pytorch's one. > If I remember correctly, I found out that there's some tests in `test_modeling_tf_common.py` > need `return_dict=False` argument. > (and when I looked at the tests, I judged that all tests just need to ensure the correct values of output, > not about `return_dict` argument.) > That's why I changed the config to `return_dict=False` as default, and left the `MODIFY` comments > just to note that this part was modified from the Pytorch's one. > (Again, I thought the main tests are on outputs' values) > > It's my first time to write this kind of test file here, so I apologize again if I made something wrong! Absolutely no problem! I should have been more careful when reviewing your PR -> don't worry at all :-) We also have some difficulties with those `test_compile_tf_model` tests in TF, so I only understand it too well why you added those `return_dict=False/True` statements ;-) If you run into similar problems with TF compilation/ TF graph tests when integrating TFRAG, you can just point it out to us. It's more important to have TFRag fully work in eager mode in the beginning and then we are more then happy to help you out if you encounter problems with graph mode / compilation<|||||>Thanks again for your kind help @patrickvonplaten !! Yes, as you predicted, there are similar (many more) hacks I did to make TFRag works at the moment. When submitting PR I will make sure to list everything to you guys :)<|||||>Thanks for your reviews/comments/fixes!
transformers
8,633
closed
Better filtering of the model outputs in Trainer
# What does this PR do? As discovered since merging #8530, sometimes (e.g. when using NVIDIA apex with the O2 optimization) the new model outputs lose their type and become regular dictionaries. This means we can't index into them with integers and some rework in the internals of `Trainer` has become necessary. This PR: - fixes the training by indexing into the outputs by string if they are a dict, by int otherwise, when grabbing the loss - fixes the evaluation the same way when grabbing the loss, but it also takes advantage of the new dict outputs to better filter the outputs at inference. We had several issues recently when using models outputting past states (such as Reformer, XLNet, GPT-2) during evaluation in `Trainer`. This PR introduces a new API that looks at a possible key in the config of the model to get some attributes to ignore in the outputs during evaluation (those outputs are then discarded from the predictions returned by the function `Trainer.predict` or passed along to metric computation in `Trainer.evaluate`). Since a user might have some use cases where they want to ignore more keys or output those keys, a new argument is added to both `Trainer.predict` and `Trainer.evaluate` to fully control the keys ignored in those dictionaries. If the model outputs a tuple, this is all ignored. Fixes #8523 among others
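For illustration, a minimal sketch of how this filtering might look from the user side. The argument name `ignore_keys`, the output key `"past_key_values"`, and the tiny GPT-2 dataset are assumptions made for this sketch based on the description above, not taken from the PR diff itself:
```python
from torch.utils.data import Dataset
from transformers import GPT2LMHeadModel, GPT2Tokenizer, Trainer, TrainingArguments

class TinyDataset(Dataset):
    """A few identical examples, just enough to run an evaluation loop."""
    def __init__(self, tokenizer):
        ids = tokenizer("hello world", return_tensors="pt")["input_ids"][0]
        self.examples = [{"input_ids": ids, "labels": ids} for _ in range(4)]
    def __len__(self):
        return len(self.examples)
    def __getitem__(self, i):
        return self.examples[i]

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # GPT-2 returns cached past states at inference
trainer = Trainer(model=model, args=TrainingArguments(output_dir="out"))

# Drop the cached past states from the outputs so they are not concatenated
# into the prediction arrays during evaluation.
metrics = trainer.evaluate(TinyDataset(tokenizer), ignore_keys=["past_key_values"])
print(metrics)
```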
11-18-2020 20:43:42
11-18-2020 20:43:42
transformers
8,632
closed
[s2s] distillation.py fails with apex
Splitting off from https://github.com/huggingface/transformers/pull/8631, `finetune.py` works with apex, but `distillation.py` doesn't (no idea whether it ever did): ``` $ python distillation.py --teacher facebook/bart-large-xsum --data_dir xsum --tokenizer_name facebook/bart-large-xsum --student_decoder_layers 6 --student_encoder_layers 12 --freeze_encoder --freeze_embeds --learning_rate=3e-4 --do_train --do_predict --fp16 --val_check_interval 0.1 --n_val 1 --eval_beams 1 --length_penalty=0.5 --max_target_length=60 --val_max_target_length=60 --test_max_target_length=100 --model_name_or_path IGNORED --alpha_hid=3. --train_batch_size=16 --eval_batch_size=16 --gradient_accumulation_steps=2 --sortish_sampler --num_train_epochs=6 --warmup_steps 1 --output_dir distilbart_xsum_12_6 --amp_backend=apex --n_train 1 --gpus 1 [...] 2020-11-18 12:25:48.713431: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 using module SummarizationDistiller /home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:45: UserWarning: Checkpoint directory /mnt/nvme1/code/huggingface/transformers-s2s-dict/examples/seq2seq/distilbart_xsum_12_6 exists and is not empty. With save_top_k=1, all files in this directory will be deleted when a checkpoint is saved! warnings.warn(*args, **kwargs) GPU available: True, used: True TPU available: False, using: 0 TPU cores LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [1] Using APEX 16bit precision. Selected optimization level O2: FP16 training with FP32 batchnorm and FP32 master weights. Defaults for this optimization level are: enabled : True opt_level : O2 cast_model_type : torch.float16 patch_torch_functions : False keep_batchnorm_fp32 : True master_weights : True loss_scale : dynamic Processing user overrides (additional kwargs that are not None)... After processing overrides, optimization options are: enabled : True opt_level : O2 cast_model_type : torch.float16 patch_torch_functions : False keep_batchnorm_fp32 : True master_weights : True loss_scale : dynamic Warning: multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was: ModuleNotFoundError("No module named 'amp_C'") /home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler. 
warnings.warn(SAVE_STATE_WARNING, UserWarning) Validation sanity check: 0it [00:00, ?it/s]Traceback (most recent call last): File "distillation.py", line 308, in <module> distill_main(args) File "distillation.py", line 299, in distill_main return ft_main(args, model=model) File "/mnt/nvme1/code/huggingface/transformers-s2s-dict/examples/seq2seq/finetune.py", line 409, in main trainer: pl.Trainer = generic_train( File "/mnt/nvme1/code/huggingface/transformers-s2s-dict/examples/lightning_base.py", line 398, in generic_train trainer.fit(model) File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 444, in fit results = self.accelerator_backend.train() File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 63, in train results = self.train_or_test() File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 74, in train_or_test results = self.trainer.train() File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 466, in train self.run_sanity_check(self.get_model()) File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 658, in run_sanity_check _, eval_results = self.run_evaluation(test_mode=False, max_batches=self.num_sanity_val_batches) File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 578, in run_evaluation output = self.evaluation_loop.evaluation_step(test_mode, batch, batch_idx, dataloader_idx) File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 171, in evaluation_step output = self.trainer.accelerator_backend.validation_step(args) File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 87, in validation_step output = self.__validation_step(args) File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 95, in __validation_step output = self.trainer.model.validation_step(*args) File "/mnt/nvme1/code/huggingface/transformers-s2s-dict/examples/seq2seq/finetune.py", line 182, in validation_step return self._generative_step(batch) File "/mnt/nvme1/code/huggingface/transformers-s2s-dict/examples/seq2seq/finetune.py", line 226, in _generative_step loss_tensors = self._step(batch) File "distillation.py", line 193, in _step teacher_outputs = self.teacher( File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/bart/modeling_bart.py", line 1022, in forward outputs = self.model( File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/bart/modeling_bart.py", line 905, in forward decoder_outputs = self.decoder( File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File 
"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/bart/modeling_bart.py", line 593, in forward x, layer_self_attn, layer_past, layer_cross_attn = decoder_layer( File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/bart/modeling_bart.py", line 453, in forward x, cross_attn_weights = self.encoder_attn( File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/bart/modeling_bart.py", line 695, in forward k = self.k_proj(key) File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 91, in forward return F.linear(input, self.weight, self.bias) File "/home/stas/anaconda3/envs/py38-pt16/lib/python3.8/site-packages/torch/nn/functional.py", line 1676, in linear output = input.matmul(weight.t()) RuntimeError: expected scalar type Float but found Half ``` @patil-suraj, @patrickvonplaten
11-18-2020 20:33:45
11-18-2020 20:33:45
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
transformers
8,631
closed
[s2s] distillation apex breaks return_dict obj
This is a continuation of https://github.com/huggingface/transformers/pull/8612 for `distillation.py` - this PR is switching from `.property` to `["property"]`. Unfortunately, the script itself doesn't seem to work under apex even after the fix - perhaps it never did. But it's probably still OK to merge, since it no longer fails with the #8530-related symptoms and is in sync with `finetune.py` now. I filed a separate issue about it: https://github.com/huggingface/transformers/issues/8632 @sgugger
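For context, a tiny sketch of the two access styles involved — the dict below just stands in for an output whose `ModelOutput` type was lost under apex O2; it is not code from this PR:
```python
# A plain dict standing in for a degraded model output.
outputs = {"loss": 0.5, "logits": [[0.1, 0.9]]}

loss = outputs["loss"]   # key access works for both ModelOutput objects and plain dicts
try:
    loss = outputs.loss  # attribute access only works while the ModelOutput type survives
except AttributeError:
    print("attribute access fails once the output is a plain dict")
```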
11-18-2020 20:31:14
11-18-2020 20:31:14
transformers
8,630
closed
Create README.md
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
11-18-2020 20:29:14
11-18-2020 20:29:14
transformers
8,629
closed
Fix mark-up (missing opening code-tag)
Small mark-up fix
11-18-2020 19:21:26
11-18-2020 19:21:26
I think this has already been fixed by https://github.com/huggingface/transformers/pull/8635, which was opened a bit after yours ... sorry about that! Next time don't hesitate to tag @sgugger directly when doing documentation changes so he's aware of such PRs!
transformers
8,628
closed
CUDA error when training roBERTa from scratch with data parallel.
## Environment info - `transformers` version: 3.5.0 - Platform: Linux-5.4.0-52-generic-x86_64-Ubuntu 18.04.5 LTS - Python version: 3.8.3 - PyTorch version (GPU?): 1.7.0+cu110 - Tensorflow version (GPU?): 2.3.1 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help @LysandreJik @sgugger ## Information Model I am using (Bert, XLNet ...): roberta-base The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run transformers\examples\language-modeling\run_language_modeling.py as MLM task with WikiText-2 dataset (as mentioned in official README.md): ```shell export CUDA_VISIBLE_DEVICES=0,1,2,3 export TRAIN_FILE=data/wiki.train.raw export TEST_FILE=data/wiki.test.raw python run_language_modeling.py \ --output_dir=output \ --model_type=roberta \ --model_name_or_path=roberta-base \ --do_train \ --train_data_file=$TRAIN_FILE \ --do_eval \ --eval_data_file=$TEST_FILE \ --mlm \ --whole_word_mask ``` When we enable (with export CUDA_VISIBLE_DEVICES=0,1,2,3) all four GPUs (4xNVIDIA A100) training will throw RuntimeError: CUDA error: device-side assert triggered. If we keep only one GPU enabled (with export CUDA_VISIBLE_DEVICES=0) training works flawlessly. What we tried: - run run_language_modeling.py with WikiText-2 dataset - run run_language_modeling.py with custom sentenced text dataset - official example for EsperBERTo in jupyter All of the above mentioned have failed with the same error as soon as we enabled multiple GPUs! Full error for run_language_modeling.py: ```console ./run_mlm_wiki.sh 11/18/2020 15:29:20 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 4, distributed training: False, 16-bits training: False 11/18/2020 15:29:20 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='output', overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluate_during_training=False, evaluation_strategy=<EvaluationStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Nov18_15-29-20_a4000-20an1', logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='output', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None) /home/aime/.local/lib/python3.8/site-packages/transformers/modeling_auto.py:845: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models. 
warnings.warn( Some weights of RobertaForMaskedLM were not initialized from the model checkpoint at roberta-base and are newly initialized: ['lm_head.decoder.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. /home/aime/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py:1541: FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead. warnings.warn( /home/aime/.local/lib/python3.8/site-packages/transformers/data/datasets/language_modeling.py:40: FutureWarning: This dataset will be removed from the library soon, preprocessing should be handled with the 🤗 Datasets library. You can have a look at this example script for pointers: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py warnings.warn( 11/18/2020 15:29:27 - INFO - filelock - Lock 139760254204416 acquired on data/cached_lm_RobertaTokenizerFast_510_wiki.train.raw.lock 11/18/2020 15:29:27 - INFO - filelock - Lock 139760254204416 released on data/cached_lm_RobertaTokenizerFast_510_wiki.train.raw.lock 11/18/2020 15:29:27 - INFO - filelock - Lock 139760254204848 acquired on data/cached_lm_RobertaTokenizerFast_510_wiki.test.raw.lock 11/18/2020 15:29:27 - INFO - filelock - Lock 139760254204848 released on data/cached_lm_RobertaTokenizerFast_510_wiki.test.raw.lock /home/aime/.local/lib/python3.8/site-packages/transformers/trainer.py:277: FutureWarning: Passing `prediction_loss_only` as a keyword argument is deprecated and won't be possible in a future version. Use `args.prediction_loss_only` instead. Setting `args.prediction_loss_only=True warnings.warn( 0%| | 0/447 [00:00<?, ?it/s]/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:64: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. warnings.warn('Was asked to gather along dimension 0, but all ' 0%|▍ | 1/447 [00:07<58:40, 7.89s/it]Traceback (most recent call last): File "run_language_modeling.py", line 351, in <module> main() File "run_language_modeling.py", line 315, in main trainer.train(model_path=model_path) File "/home/aime/.local/lib/python3.8/site-packages/transformers/trainer.py", line 775, in train tr_loss += self.training_step(model, inputs) File "/home/aime/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1112, in training_step loss = self.compute_loss(model, inputs) File "/home/aime/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1136, in compute_loss outputs = model(**inputs) File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 161, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 171, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/home/aime/.local/lib/python3.8/site-packages/torch/_utils.py", line 428, in reraise raise self.exc_type(msg) RuntimeError: Caught RuntimeError in replica 0 on device 0. 
Original Traceback (most recent call last): File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/aime/.local/lib/python3.8/site-packages/transformers/modeling_roberta.py", line 894, in forward outputs = self.roberta( File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/aime/.local/lib/python3.8/site-packages/transformers/modeling_roberta.py", line 686, in forward encoder_outputs = self.encoder( File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/aime/.local/lib/python3.8/site-packages/transformers/modeling_roberta.py", line 421, in forward layer_outputs = layer_module( File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/aime/.local/lib/python3.8/site-packages/transformers/modeling_roberta.py", line 341, in forward self_attention_outputs = self.attention( File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/aime/.local/lib/python3.8/site-packages/transformers/modeling_roberta.py", line 273, in forward self_outputs = self.self( File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/aime/.local/lib/python3.8/site-packages/transformers/modeling_roberta.py", line 203, in forward attention_probs = nn.Softmax(dim=-1)(attention_scores) File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/activation.py", line 1198, in forward return F.softmax(input, self.dim, _stacklevel=5) File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 1512, in softmax ret = input.softmax(dim) RuntimeError: CUDA error: device-side assert triggered /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes` failed. 
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed. ... ``` ## Expected behavior Training should start in parallel on all four Nvidia A100 GPUs without errors.
11-18-2020 18:26:05
11-18-2020 18:26:05
The flag `whole_word_mask` cannot work wirth RoBERTa as it's only compatible with the BERT tokenizer and RoBERTa uses a different tokenizer. I'm surprised it was working on one GPU. In any case, the `run_language_modeling.py` script is not maintained anymore, it has been replaced by new versions (`run_clm`, `run_mlm`, `run_plm`) that you can find in the `language-modeling` folder. Those new scripts are tested on a multi-GPU setup.<|||||>@sgugger Thanks for quick reply. As I mentioned we tried other examples sa well. #### EsperBERTo with Trainer class: https://github.com/huggingface/blog/blob/master/how-to-train.md. Please don't mind the whole_word_mask flag because we also tried the model with & without it. Every time as soon as we use multiple GPUs we get above mentioned error: ```console /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed. ``` #### Here we installed the newest transformers master with new version of language-modeling\run_mlm.py Run parameters exact from README.md: ```shell python run_mlm.py \ --model_name_or_path roberta-base \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --do_train \ --do_eval \ --output_dir /tmp/test-mlm ``` And again we get the same error log (I shortened the error THCUNN part): ```console /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [29,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [31,0,0] Assertion `t >= 0 && t < n_classes` failed. 
Traceback (most recent call last): File "run_mlm.py", line 392, in <module> main() File "run_mlm.py", line 362, in main trainer.train(model_path=model_path) File "/home/aime/.local/lib/python3.8/site-packages/transformers/trainer.py", line 747, in train tr_loss += self.training_step(model, inputs) File "/home/aime/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1075, in training_step loss = self.compute_loss(model, inputs) File "/home/aime/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1099, in compute_loss outputs = model(**inputs) File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 162, in forward return self.gather(outputs, self.output_device) File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 174, in gather return gather(outputs, output_device, dim=self.dim) File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather res = gather_map(outputs) File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 61, in gather_map return type(out)(((k, gather_map([d[k] for d in outputs])) File "<string>", line 7, in __init__ File "/home/aime/.local/lib/python3.8/site-packages/transformers/file_utils.py", line 1305, in __post_init__ for element in iterator: File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 61, in <genexpr> return type(out)(((k, gather_map([d[k] for d in outputs])) File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in gather_map return Gather.apply(target_device, dim, *outputs) File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/_functions.py", line 71, in forward return comm.gather(inputs, ctx.dim, ctx.target_device) File "/home/aime/.local/lib/python3.8/site-packages/torch/nn/parallel/comm.py", line 230, in gather return torch._C._gather(tensors, dim, destination) RuntimeError: CUDA error: device-side assert triggered 0%|▍ | 1/450 [00:09<1:07:51, 9.07s/it] ``` #### We even tried a custom simplified training code inspired by your docs. 
https://huggingface.co/transformers/custom_datasets.html#fine-tuning-with-native-pytorch-tensorflow ```python from transformers import LineByLineTextDataset from transformers import DataCollatorForLanguageModeling, Trainer, TrainingArguments, RobertaConfig from transformers import RobertaForMaskedLM from transformers import AdamW from torch.utils.data import DataLoader dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path=r'./data/oscar.eo.txt', block_size=512, ) data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15 ) training_args = TrainingArguments( output_dir=r'./EsperBERTo', overwrite_output_dir=True, num_train_epochs=1, per_device_train_batch_size=4, save_steps=10_000, save_total_limit=2, prediction_loss_only=True ) print(training_args.n_gpu) config = RobertaConfig( vocab_size=52_000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6, type_vocab_size=1, ) args = training_args device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') model = RobertaForMaskedLM(config=config) model = torch.nn.DataParallel(model) model = model.to(device) model.train() train_loader = DataLoader(dataset, batch_size=4, shuffle=True) optim = AdamW(model.parameters(), lr=5e-5) for epoch in range(3): for batch in train_loader: optim.zero_grad() input_ids = batch['input_ids'].to(device) outputs = model(input_ids) loss = outputs[0] if args.n_gpu > 1: loss = loss.mean() # mean() to average on multi-gpu parallel training loss.backward() optim.step() model.eval() ``` But we always get the same error.<|||||>I don't have the error on my side with two GPUs and the same command, so I think the bug comes from something in your enviromnent. The fact the simple training loop also fails encourages me in the same direction. If you try to use just two GPUs with `CUDA_VISIBLE_DEVICES`, does the problem persist? Maybe one of your GPUs is in a bad state?<|||||>After some laborious debugging we figured out that the problem was indeed in our HW configuration. For others if your machine has an AMD EPYC 7402 you will probably needed to **disable IOMMU** (AMD I/O Virtualization Technology) in BIOS. After disabling all examples work. I apologise for the inconvenience.
transformers
8,627
closed
Diverse beam search
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Implementation of diverse beam search decoding as described in the paper: https://arxiv.org/pdf/1610.02424.pdf diversity function reference taken from: https://github.com/ashwinkalyan/dbs ## Implementation details Consider a T5 summarization task. `article="Justin Timberlake and Jessica Biel, welcome to parenthood. The celebrity couple announced the arrival of their son, Silas Randall Timberlake, in statements to People. "Silas was the middle name of Timberlake's maternal grandfather Bill Bomar, who died in 2012, while Randall is the musician's own middle name, as well as his father's first," People reports. The couple announced the pregnancy in January, with an Instagram post. It is the first baby for both."` Generation using normal beam search can be done as: `model.generate( input_ids=input_ids, num_beams=2, num_return_sequences=2 )` This generates: `['the couple announced the pregnancy in January. it is the first baby for both.', 'the couple announced the pregnancy in January. it is the first baby for both of them ']` Generation using diverse beam search can be done as: `model.generate( input_ids=input_ids, num_beams=2, num_return_sequences=2, beam_groups=2, diversity_penalty=1.5 )` This generates: `['the couple announced the pregnancy in January. it is the first baby for both.', 'Justin Timberlake and Jessica Biel have welcomed their son, Silas Randall ']` This means that 2 beams will be divided into 2 groups of 1 beam each, ensuring diversity between each group. NOTE: If `beam_groups=1`, then it will be same as the normal beam search as all the beams belong to the same group. Higher `diversity_penalty` will ensure more diversity between the groups of beams. When doing generation using diverse beam search, we need to ensure that `num_beams>=beam_groups` and also `num_beams` is divisible by `beam_groups`. ## Who can review? @patrickvonplaten, @TevenLeScao <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
11-18-2020 17:42:59
11-18-2020 17:42:59
@patrickvonplaten I am implementing diverse beam search. Please do suggest code design for this. 😃 <|||||>> @patrickvonplaten I am implementing diverse beam search. Please do suggest code design for this. Awesome that you work on this! I think this looks like the right approach! However, I'd also recommend creating a new beam_scorer to be sure to not break backwards compatilibily. We can see at a later stage if we can try to merge some code together with the current beam search code :-) Also, can you add a link to the paper in this PR ? this would be great :-) <|||||>@patrickvonplaten please review. I have made the required changes :)<|||||>@patrickvonplaten just a gentle reminder to review the PR. Thanks!<|||||>> @patrickvonplaten just a gentle reminder to review the PR. Thanks! Sorry, I'll review the PR this week! Also wondering how this PR relates to this one: #8840<|||||>@patrickvonplaten I think #8840 ensures that first token of every predicted sequence is different. This PR ensures diversity between group of beams at every time step of sequence generation. I think this will be more generic. Also we can change extent of diversity using `diversity_penalty` parameter.<|||||>@patrickvonplaten Also I was thinking that currently I am subtracting the diversity penalty directly from the `beam_scores`. So, finally when we are doing `beam_scorer.finalize()`, the `final_beam_scores` will also include the effect of `diversity_penalty`. I was thinking maybe we should penalise the `beam_scores` with diversity penalty only when we are selecting top `2*group_size` beam candidates: `next_token_scores, next_tokens = torch.topk( next_token_scores, 2 * group_size, dim=1, largest=True, sorted=True )` But for choosing the final beams in the end the scores shouldn't include the penalty due to diversity. What do you think?<|||||>> @patrickvonplaten Also I was thinking that currently I am subtracting the diversity penalty directly from the `beam_scores`. So, finally when we are doing `beam_scorer.finalize()`, the `final_beam_scores` will also include the effect of `diversity_penalty`. > > I was thinking maybe we should penalise the `beam_scores` with diversity penalty only when we are selecting top `2*group_size` beam candidates: > `next_token_scores, next_tokens = torch.topk( next_token_scores, 2 * group_size, dim=1, largest=True, sorted=True )` > > But for choosing the final beams in the end the scores shouldn't include the penalty due to diversity. What do you think? Hey @ayushtiku5, That's a good point! I do think though that we should leave the `beam_scores` as there are in the end as well. My main arguments are: 1) It helps to have more diversity in the output. If we only use the diversity penalty for choosing the next beam_token, but not add it to the `_beam_scores`, the beam_scores will be very high for beams of similar tokens, which I think is what we want to prevent here. I think `beam_scores` should be penalized for every token in the corresponding `beam_idx` that is also present in another `beam_idx` of the same `beam_group`. It's also more consistent and logical IMO: We should update the `beam_score` with the `probability` that the current beam_id was selected. 2) It would be very ugly to implement and I'd like to avoid it... Is that fine for you?<|||||>> > @patrickvonplaten Also I was thinking that currently I am subtracting the diversity penalty directly from the `beam_scores`. 
So, finally when we are doing `beam_scorer.finalize()`, the `final_beam_scores` will also include the effect of `diversity_penalty`. > > I was thinking maybe we should penalise the `beam_scores` with diversity penalty only when we are selecting top `2*group_size` beam candidates: > > `next_token_scores, next_tokens = torch.topk( next_token_scores, 2 * group_size, dim=1, largest=True, sorted=True )` > > But for choosing the final beams in the end the scores shouldn't include the penalty due to diversity. What do you think? > > Hey @ayushtiku5, > > That's a good point! I do think though that we should leave the `beam_scores` as there are in the end as well. My main arguments are: > > 1. It helps to have more diversity in the output. If we only use the diversity penalty for choosing the next beam_token, but not add it to the `_beam_scores`, the beam_scores will be very high for beams of similar tokens, which I think is what we want to prevent here. I think `beam_scores` should be penalized for every token in the corresponding `beam_idx` that is also present in another `beam_idx` of the same `beam_group`. It's also more consistent and logical IMO: We should update the `beam_score` with the `probability` that the current beam_id was selected. > 2. It would be very ugly to implement and I'd like to avoid it... > > Is that fine for you? @patrickvonplaten yeah sure, I am fine with this.<|||||>@ayushtiku5 - hope it's ok that I fiddled quite a bit with your PR. The functionality is kept 1:1 the same (I added an integration test in the very beginning to be sure of that), but the design is slightly different with the main goal to keep the method as general as possible. IMO, the PR is now good to merge :-) Could you take a final look at whether the new names and design is ok for you? Afterward, we can think about a nice code snippet / use case to advertise the big new feature of `transformers` :-) Awesome job!<|||||>@ayushtiku5 do you think the following code snippet could be a nice use case of diverse beam search? ```python from transformers import pipeline summarizer = pipeline("summarization", model="sshleifer/distilbart-xsum-12-6") ARTICLE = """Part of the Broad Road was closed to traffic on Sunday at about 18:00 GMT. The three adults and three children have been taken to Altnagelvin Hospital with non life-threatening injuries. The Fire Service, Northern Ireland Ambulance Service and police attended the crash. The Broad Road has since been reopened.""" # normal beam search summarizer(ARTICLE, num_return_sequences=2) # => [' Five people, including three children, have been taken to hospital following a two-vehicle crash in Londonderry.', # ' Five people, including three children, have been taken to hospital after a two-vehicle crash in Londonderry.'] # diverse beam search summarizer(ARTICLE, num_return_sequences=2, num_beam_groups=6, diversity_penalty=10.0) # => ['Three men are in hospital after a car and a lorry crashed in Londonderry.', # 'Six pedestrians were injured when a car and two vehicles crashed in County Antrim.'] ```<|||||>> @ayushtiku5 - hope it's ok that I fiddled quite a bit with your PR. The functionality is kept 1:1 the same (I added an integration test in the very beginning to be sure of that), but the design is slightly different with the main goal to keep the method as general as possible. > > IMO, the PR is now good to merge :-) Could you take a final look at whether the new names and design is ok for you? 
> > Afterward, we can think about a nice code snippet / use case to advertise the big new feature of `transformers` :-) > Awesome job! Hey @patrickvonplaten , Just one thing. In the `BeamScorer`'s `finalize()` method, we are directly selecting top `num_beams` beams from the `final_beam_scores`. This assumes that the beam scores in `final_beam_scores` will be sorted in decreasing order for a particular `batch_idx`. However, this will not be the case for our diverse beam search. `final_beam_scores` will be sorted for the beams inside a particular group, but not necessarily for all the beams for a particular `batch_idx`. So, I think we will have to sort the `final_beam_scores` for every `batch_idx`. I did this previously [here](https://github.com/huggingface/transformers/pull/8627/commits/14d5b6ca6e527eac2cdb9e9400d4c00f6d7add01#diff-098eb3834a12a0788445325f6795950fc5d59ec8fc8d34fef115ae5e379e18f2R292) The rest looks good to me. Thanks for refactoring! [UPDATE]: added this in [this](https://github.com/huggingface/transformers/pull/8627/commits/c99eb5a8dc57a7b0d33a8ac06d8c6a32a7812ad4) commit<|||||>Hey @ayushtiku5, sorry I forgot to mention on why I deleted those lines. IMO we don't need to add this functionality because it doesn't matter whether the scores are sorted or not. In this line: https://github.com/huggingface/transformers/blob/72d6c9c68ba19b2e991b0d7a32989410399b33f5/src/transformers/generation_beam_search.py#L330 you can see that the `add(...)` method automatically keeps the best scores and throws out the worse scores. Since the loop goes through all scores anyway it does not matter IMO whether they are sorted or not. What do you think? IMO, we can revert the last commit.<|||||>> Hey @ayushtiku5, > > sorry I forgot to mention on why I deleted those lines. IMO we don't need to add this functionality because it doesn't matter whether the scores are sorted or not. In this line: > > https://github.com/huggingface/transformers/blob/72d6c9c68ba19b2e991b0d7a32989410399b33f5/src/transformers/generation_beam_search.py#L330 > > you can see that the `add(...)` method automatically keeps the best scores and throws out the worse scores. Since the loop goes through all scores anyway it does not matter IMO whether they are sorted or not. > What do you think? IMO, we can revert the last commit. Yeah sorry! I completely missed it. Reverted the commit.<|||||>> > Hey @ayushtiku5, > > sorry I forgot to mention on why I deleted those lines. IMO we don't need to add this functionality because it doesn't matter whether the scores are sorted or not. In this line: > > https://github.com/huggingface/transformers/blob/72d6c9c68ba19b2e991b0d7a32989410399b33f5/src/transformers/generation_beam_search.py#L330 > > > > you can see that the `add(...)` method automatically keeps the best scores and throws out the worse scores. Since the loop goes through all scores anyway it does not matter IMO whether they are sorted or not. > > What do you think? IMO, we can revert the last commit. > > Yeah sorry! I completely missed it. Reverted the commit. No worries :-) The comment wasn't the best either - I updated it. Think it's a bit clearer now.<|||||>@ayushtiku5 - super sorry, we messed up the previous branch yesterday. I opened a new PR with the same authorship -> so it should be good to merge :-)
transformers
8,626
closed
run_pl_glue.py (almost equivalent performance with non-english bert models)
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: `4.0.0.dev0` - Platform: `Ubuntu 20.04.1 LTS` - Python version: `3.8.5` - PyTorch version (GPU?): `1.7.0` (GPU - yes) - Tensorflow version (GPU?): - Using GPU in script?: Yes (GeForce GTX Titan X) - Using distributed or parallel set-up in script?: distributed ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao Blenderbot: @patrickvonplaten Bart: @patrickvonplaten Marian: @patrickvonplaten Pegasus: @patrickvonplaten mBART: @patrickvonplaten T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSMT: @stas00 examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> perhaps @sgugger ## Information I tested running the glue benchmark with a few non-english models such as; Arabic, Swedish and Chinese. Models I am using: `asafaya/bert-base-arabic`, `KB/bert-base-swedish-cased`, `bert-base-chinese`. I recieve almost identical results as in [Run PyTorch version](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-pytorch-version), it differs with a few percentages for each task, where some are even slightly better than using the default `bert-base-cased` Am not sure this is a bug, but it seems a bit strange that with using different embeddings that are really far away from English such as Arabic and Chinese I get very similair results. The problem arises when using: * [X] the official example scripts: [run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: GLUE, (sts-b in this example) * [ ] my own task or dataset: (give details below) ## To reproduce I get almost identical results when running a non-english bert on the glue benchmark. In this case on `stsb` using the `bert-base-chinese`, `asafaya/bert-base-arabic` and `KB/bert-base-swedish-cased`. 
``` export TASK_NAME=stsb python run_glue.py \ --model_name_or_path bert-base-chinese \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir /tmp/$TASK_NAME/ ``` Chinese: ``` 11/18/2020 17:10:42 - INFO - __main__ - ***** Eval results stsb ***** 11/18/2020 17:10:42 - INFO - __main__ - eval_loss = 0.8410218954086304 11/18/2020 17:10:42 - INFO - __main__ - eval_pearson = 0.7922208042884891 11/18/2020 17:10:42 - INFO - __main__ - eval_spearmanr = 0.7956508384154777 11/18/2020 17:10:42 - INFO - __main__ - eval_combined_score = 0.7939358213519834 11/18/2020 17:10:42 - INFO - __main__ - epoch = 3.0 ``` Arabic: ``` 11/18/2020 17:14:04 - INFO - __main__ - ***** Eval results stsb ***** 11/18/2020 17:14:04 - INFO - __main__ - eval_loss = 0.8082903027534485 11/18/2020 17:14:04 - INFO - __main__ - eval_pearson = 0.8357733212850804 11/18/2020 17:14:04 - INFO - __main__ - eval_spearmanr = 0.8386964712863125 11/18/2020 17:14:04 - INFO - __main__ - eval_combined_score = 0.8372348962856965 11/18/2020 17:14:04 - INFO - __main__ - epoch = 3.0 ``` Swedish: ``` 11/18/2020 17:32:26 - INFO - __main__ - ***** Eval results stsb ***** 11/18/2020 17:32:26 - INFO - __main__ - eval_loss = 0.7071832418441772 11/18/2020 17:32:26 - INFO - __main__ - eval_pearson = 0.8379047445076137 11/18/2020 17:32:26 - INFO - __main__ - eval_spearmanr = 0.8350383734219187 11/18/2020 17:32:26 - INFO - __main__ - eval_combined_score = 0.8364715589647662 11/18/2020 17:32:26 - INFO - __main__ - epoch = 3.0 ``` Is expected behaviour? Meaning that the readaption of the embedding matrices can work with non english vocabs such as Chinese and Arabic since they perhaps contain some latin characters. With English model `bert-base-cased` we get pearson: `83.95` and Arabic model `asafaya/bert-base-arabic` pearson: `83.57`. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Thanks! ## Expected behavior Not sure.. <!-- A clear and concise description of what you would expect to happen. -->
11-18-2020 17:27:48
11-18-2020 17:27:48
I can't say with certainty, but I actually think it's entirely feasible that this is a legitimate result. Here's a [recent ACL paper](https://www.aclweb.org/anthology/2020.acl-main.421/) showing that a monolingual model can be fine-tuned on another language with competitive performance. The authors do learn a new embedding layer for the new target language in an intermediate pre-training step, so it's not entirely the same, but I wouldn't find this result too surprising. It's also likely that these non-English models had exposure to some English that wasn't scrubbed from their pre-training corpora, in which case the model might already have decent embeddings for tokens sourced from English text just from pre-training.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
transformers
8,625
closed
Model Card for abhilash1910/financial_roberta
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
11-18-2020 17:22:37
11-18-2020 17:22:37
transformers
8,624
closed
Fixes the training resuming with gradient accumulation
# What does this PR do? As #5605 pointed out, there was a mistake in the way the number of steps to skip was computed when the `Trainer` wants to resume training from a checkpoint with gradient accumulation activated. This PR fixes that and adds more tests. More specifically: 1. It tests a regular gradient accumulation training (wasn't done before) and checks it gives the same results as the same training with the batch size multiplied by the number of gradient accumulation steps. 2. It adds a test of a training resuming with gradient accumulation (which fails on current master) 3. It fixes master so that the test in 2 passes. Fixes #5605
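For readers following along, the bookkeeping being fixed can be illustrated with a small standalone sketch (not the actual `Trainer` code): with gradient accumulation, one optimizer step covers several dataloader batches, so the number of batches to fast-forward when resuming must be scaled accordingly.

```python
# Hypothetical numbers, purely for illustration.
global_step = 250                 # optimizer updates already performed (from the checkpoint)
gradient_accumulation_steps = 4   # dataloader batches per optimizer update
num_update_steps_per_epoch = 100  # optimizer updates in one full epoch

epochs_already_trained = global_step // num_update_steps_per_epoch
updates_into_current_epoch = global_step % num_update_steps_per_epoch

# Each pending update corresponds to several batches, hence the scaling below.
batches_to_skip = updates_into_current_epoch * gradient_accumulation_steps

print(epochs_already_trained, batches_to_skip)  # 2 200
```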
11-18-2020 16:42:19
11-18-2020 16:42:19
transformers
8,623
closed
Fix training from scratch in new scripts
# What does this PR do? This PR fixes a test in the new example scripts when the model can be trained from scratch and the `model_name_or_path` argument can be None. It also updates the template accordingly. Fixes #8590
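For context, "training from scratch" on the modeling side boils down to instantiating the architecture from a config instead of a checkpoint; a rough sketch of that branch (not the exact script code, model type chosen arbitrarily):

```python
from transformers import AutoConfig, AutoModelForMaskedLM

# Build the architecture from a config only: all weights are randomly initialized.
config = AutoConfig.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_config(config)

# Sanity check: the model exists and has the expected number of parameters.
print(sum(p.numel() for p in model.parameters()))
```

A tokenizer (pretrained or custom) still has to be supplied separately, since only the weights start from scratch.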
11-18-2020 16:18:34
11-18-2020 16:18:34
transformers
8,622
closed
[Tokenizer Doc] Improve tokenizer summary
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR improves the tokenizer docs. This includes more consistent use of terminology, arguably better phrasing, correction of spelling, and more consistent formatting. Terminology: Tried to make the difference between "symbol", "character", "word", and "subword" Consistency: Use `"` for token notation and replace `this paper <...>`__ by `<paper_name (author, year)>`, rename section to "Summary of the tokenizers" I want to spend some time on the tokenizers library in Rust in the next couple of weeks and was reading through this summary for a start. It's great! I thought that I can improve the wording and explanations a bit while reading through it and pass it through Grammarly. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
11-18-2020 15:14:15
11-18-2020 15:14:15
transformers
8,621
closed
Fix DataCollatorForLanguageModeling
# What does this PR do? A clone was removed by mistake, this PR adds it back. Fixes #8619
11-18-2020 14:48:32
11-18-2020 14:48:32
transformers
8,620
closed
Create model_cards for Chinese Couplet and Poem GPT2 models
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
11-18-2020 14:44:55
11-18-2020 14:44:55
Really cool, and great that you inputted custom widget inputs. Merging.<|||||>cc'ing @JetRunner for info
transformers
8,619
closed
`DataCollatorForLanguageModeling` modifies `input_ids` via `labels` variable
The cloning step was removed in https://github.com/huggingface/transformers/pull/8308 at https://github.com/huggingface/transformers/pull/8308/files#diff-046566f2b40a246c7d533457cd7f6f07830516da845b904086f36b3cfe0d5965L201 so now the code that sets padded labels to `-100` is operating on the `input_ids` tensor directly. I suspect the code then fails when trying to look up the embedding for `-100` . cc @sgugger ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.5.1 - Platform: Linux-5.4.72-x86_64-with - Python version: 3.8.6 - PyTorch version (GPU?): 1.7.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao Blenderbot: @patrickvonplaten Bart: @patrickvonplaten Marian: @patrickvonplaten Pegasus: @patrickvonplaten mBART: @patrickvonplaten T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSMT: @stas00 examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 
Use `DataCollatorForLanguageModeling` with `Trainer` and a tokenizer with `pad_token` ``` File "/home/lulu/r/buganart/dialog/.build/pip_packages/bin/finetune", line 33, in <module> sys.exit(load_entry_point('dialog', 'console_scripts', 'finetune')()) File "/home/lulu/r/buganart/dialog/dialog/finetune.py", line 139, in main trainer.train() File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/transformers/trainer.py", line 775, in train tr_loss += self.training_step(model, inputs) File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/transformers/trainer.py", line 1112, in training_step loss = self.compute_loss(model, inputs) File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/transformers/trainer.py", line 1136, in compute_loss outputs = model(**inputs) File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/transformers/modeling_gpt2.py", line 774, in forward transformer_outputs = self.transformer( File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/transformers/modeling_gpt2.py", line 612, in forward inputs_embeds = self.wte(input_ids) File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 124, in forward return F.embedding( File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/torch/nn/functional.py", line 1852, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self ``` My script is here https://github.com/buganart/dialog/blob/master/dialog/finetune.py . <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
11-18-2020 14:21:12
11-18-2020 14:21:12
Ah yes, only the detach was supposed to be removed but I guess I went a bit too far with my mouse, sorry about that. Will fix right now, thanks for flagging!
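The effect of the missing clone is easy to reproduce in isolation: without it, writing -100 into the padding positions of the labels mutates `input_ids` itself, which then breaks the embedding lookup. A standalone sketch (not the collator code itself):

```python
import torch

pad_token_id = 0
input_ids = torch.tensor([[101, 2023, 2003, 102, 0, 0]])

# Correct: work on a copy, so input_ids keeps valid token ids.
labels = input_ids.clone()
labels[labels == pad_token_id] = -100

# Buggy variant: `bad_labels` is just another name for the same tensor,
# so the -100 values end up in input_ids as well.
bad_labels = input_ids
bad_labels[bad_labels == pad_token_id] = -100
print(input_ids)  # now contains -100 entries, which an embedding layer cannot look up
```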
transformers
8,618
closed
seq2seq_trainer optimization issue on TPU
Hi I am running seq2seq_trainer.py model on TPU v3-8 instance with pytorch xla 1.7, using adafactor, here are the logs, could you please assist? thanks ``` /root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/optimization.py:506: UserWarning: This overload of add_ is deprecated: add_(Number alpha, Tensor other) Consider using one of the following signatures instead: add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.) exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1)) /root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/optimization.py:506: UserWarning: This overload of add_ is deprecated: add_(Number alpha, Tensor other) Consider using one of the following signatures instead: add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.) exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1)) /root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/optimization.py:506: UserWarning: This overload of add_ is deprecated: add_(Number alpha, Tensor other) Consider using one of the following signatures instead: add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.) exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1)) /root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/optimization.py:506: UserWarning: This overload of add_ is deprecated: add_(Number alpha, Tensor other) Consider using one of the following signatures instead: add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.) exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1)) /root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/optimization.py:506: UserWarning: This overload of add_ is deprecated: add_(Number alpha, Tensor other) Consider using one of the following signatures instead: add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.) exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1)) /root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/optimization.py:506: UserWarning: This overload of add_ is deprecated: add_(Number alpha, Tensor other) Consider using one of the following signatures instead: add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.) exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1)) 0%| | 1/19380 [00:02<13:07:23, 2.44s/it]/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/optimization.py:506: UserWarning: This overload of add_ is deprecated: add_(Number alpha, Tensor other) Consider using one of the following signatures instead: add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.) exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1)) /root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/optimization.py:506: UserWarning: This overload of add_ is deprecated: add_(Number alpha, Tensor other) Consider using one of the following signatures instead: add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.) 
exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1)) {'loss': 10383.4560546875, 'learning_rate': 6e-07, 'epoch': 0.0010319917440660474} ``` on GPU also I cannot use adafactor with this error: ``` /opt/conda/envs/internship/lib/python3.7/site-packages/transformers/optimization.py:506: UserWarning: This overload of add_ is deprecated: add_(Number alpha, Tensor other) Consider using one of the following signatures instead: add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.) exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1)) Traceback (most recent call last): File "finetune_t5_trainer.py", line 223, in <module> main() File "finetune_t5_trainer.py", line 159, in main model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/trainer.py", line 797, in train self.optimizer.step() File "/opt/conda/envs/internship/lib/python3.7/site-packages/torch/optim/lr_scheduler.py", line 67, in wrapper return wrapped(*args, **kwargs) File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/optimization.py", line 510, in step update = self._approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col) File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/optimization.py", line 441, in _approx_sq_grad return torch.mm(r_factor.unsqueeze(-1), c_factor.unsqueeze(0)) RuntimeError: tensors must be 2-D ``` @patrickvonplaten
11-18-2020 14:08:54
11-18-2020 14:08:54
I'm not sure your first stacktrace shows an actual error? Just deprecation warnings. Also, without your code we have no way of understanding what might have happened here.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.<|||||>Hello, I recently got a very similar problem when trying to implement (manually) a self-attention module for images on a model which is trained using adafactor. I'm also using PyTorch lightning, but I don't think that makes a difference, since I tried with the "default" optimizers coming with torch.optim (RMSProp, Adam), and they work. This means that the problem might possibly be caused by the adafactor implementation. I'm running the latest, stable version of 'transformers' as of now (4.9.1). Here are the detailed logs: ``` File "train.py", line 125, in <module> run(args) File "train.py", line 90, in run trainer.fit(model, train_dataloader=train_loader, val_dataloaders=test_loader) File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 473, in fit results = self.accelerator_backend.train() File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 66, in train results = self.train_or_test() File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 69, in train_or_test results = self.trainer.train() File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 524, in train self.train_loop.run_training_epoch() File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 572, in run_training_epoch batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx) File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 730, in run_training_batch self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure) File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 505, in optimizer_step model_ref.optimizer_step( File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 1261, in optimizer_step optimizer.step(closure=optimizer_closure) File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 286, in step self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs) File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 144, in __optimizer_step optimizer.step(closure=closure, *args, **kwargs) File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 67, in wrapper return wrapped(*args, **kwargs) File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/transformers/optimization.py", line 576, in step update = self._approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col) File 
"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/transformers/optimization.py", line 507, in _approx_sq_grad return torch.mm(r_factor.unsqueeze(-1), c_factor.unsqueeze(0)) RuntimeError: tensors must be 2-D ```<|||||>> > > Hello, I recently got a very similar problem when trying to implement (manually) a self-attention module for images on a model which is trained using adafactor. I'm also using PyTorch lightning, but I don't think that makes a difference, since I tried with the "default" optimizers coming with torch.optim (RMSProp, Adam), and they work. This means that the problem might possibly be caused by the adafactor implementation. I'm running the latest, stable version of 'transformers' as of now (4.9.1). Here are the detailed logs: > > ``` > File "train.py", line 125, in <module> > run(args) > File "train.py", line 90, in run > trainer.fit(model, train_dataloader=train_loader, val_dataloaders=test_loader) > File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 473, in fit > results = self.accelerator_backend.train() > File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 66, in train > results = self.train_or_test() > File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 69, in train_or_test > results = self.trainer.train() > File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 524, in train > self.train_loop.run_training_epoch() > File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 572, in run_training_epoch > batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx) > File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 730, in run_training_batch > self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure) > File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 505, in optimizer_step > model_ref.optimizer_step( > File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 1261, in optimizer_step > optimizer.step(closure=optimizer_closure) > File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 286, in step > self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs) > File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 144, in __optimizer_step > optimizer.step(closure=closure, *args, **kwargs) > File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 67, in wrapper > return wrapped(*args, **kwargs) > File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/transformers/optimization.py", line 576, in step > update = self._approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col) > File "/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/transformers/optimization.py", line 507, in _approx_sq_grad > return torch.mm(r_factor.unsqueeze(-1), c_factor.unsqueeze(0)) > RuntimeError: tensors must be 2-D > ``` Upon further inspection in these last few minutes, it seems that the Adafactor optimizer has difficulties 
optimizing PyTorch's nn.Conv* layers. If I try to use some Conv1d or to fine-tune a Resnet model, I get the error indicated above, but otherwise my model works fine. In both these cases, the only thing that has changed is the task of updating the weights of some convolutional layers. I urge you to take a look at this, as it is quite bizarre. Since I did not mention this above, this is the optimizer I'm currently trying to use: ``` optimizer = Adafactor(self.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None) lr_scheduler = AdafactorSchedule(optimizer) ```
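For what it's worth, the "tensors must be 2-D" error is easy to reproduce with any parameter whose gradient has more than two dimensions (such as a conv kernel), which suggests the factored second-moment update is what trips over it. A tiny repro sketch against the affected versions:

```python
import torch
from transformers.optimization import Adafactor

conv = torch.nn.Conv2d(3, 8, kernel_size=3)  # weight is 4-D: (8, 3, 3, 3)
optimizer = Adafactor(conv.parameters(), scale_parameter=True,
                      relative_step=True, warmup_init=True, lr=None)

loss = conv(torch.randn(1, 3, 16, 16)).sum()
loss.backward()
optimizer.step()  # raises "tensors must be 2-D" in the affected versions
```

If that is indeed the cause, one possible workaround would be to hand parameters with more than two dimensions to a separate optimizer and keep Adafactor for the 2-D matrices.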
transformers
8,617
closed
Add cards for all Geotrend models
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adds model card ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
11-18-2020 11:20:10
11-18-2020 11:20:10
Awesome, thank you!
transformers
8,616
closed
Add pip install update to resolve import error in transformers notebook
Add pip install upgrade tensorflow-gpu to remove error below: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-2-094fadb93f3f> in <module>() 1 import torch ----> 2 from transformers import AutoModel, AutoTokenizer, BertTokenizer 3 4 torch.set_grad_enabled(False) 4 frames /usr/local/lib/python3.6/dist-packages/transformers/__init__.py in <module>() 133 134 # Pipelines --> 135 from .pipelines import ( 136 Conversation, 137 ConversationalPipeline, /usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in <module>() 46 import tensorflow as tf 47 ---> 48 from .modeling_tf_auto import ( 49 TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING, 50 TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING, /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_auto.py in <module>() 49 from .configuration_utils import PretrainedConfig 50 from .file_utils import add_start_docstrings ---> 51 from .modeling_tf_albert import ( 52 TFAlbertForMaskedLM, 53 TFAlbertForMultipleChoice, /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_albert.py in <module>() 22 import tensorflow as tf 23 ---> 24 from .activations_tf import get_tf_activation 25 from .configuration_albert import AlbertConfig 26 from .file_utils import ( /usr/local/lib/python3.6/dist-packages/transformers/activations_tf.py in <module>() 52 "gelu": tf.keras.layers.Activation(gelu), 53 "relu": tf.keras.activations.relu, ---> 54 "swish": tf.keras.activations.swish, 55 "silu": tf.keras.activations.swish, 56 "gelu_new": tf.keras.layers.Activation(gelu_new), AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.activations' has no attribute 'swish' ``` I have tried running the colab after this change and it seems to work fine (all the cells run with no errors). # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? 
## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
11-18-2020 10:39:05
11-18-2020 10:39:05
You are right, thanks!
transformers
8,615
closed
Batch Size error
Hello, I wanted to use Roberta for Sentence classification on protein sequences which I have converted into sentences. So first I train a tokenizer for my custom vocabulary. ```tokenizer = CharBPETokenizer() tokenizer.train("bert_vocab.txt",vocab_size=8000,special_tokens=[ "[CLS]", "[SEP]", "[UNK]", "[MASK]", ]) tokenizer.save_model("EsperBERTo") from tokenizers.implementations import CharBPETokenizer from tokenizers.processors import BertProcessing tokenizer._tokenizer.post_processor = BertProcessing(("[SEP]", tokenizer.token_to_id("[SEP]")), ("[CLS]", tokenizer.token_to_id("[CLS]")),) print(tokenizer.encode("AAA ATA AKA")) print(tokenizer.encode("AAA ATA AKA").tokens) tokenizer.enable_truncation(max_length=512) import torch torch.cuda.is_available() from transformers import RobertaConfig config = RobertaConfig( vocab_size=8000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6, type_vocab_size=1, ) from transformers import RobertaTokenizerFast tokenizer = RobertaTokenizerFast.from_pretrained("./EsperBERTo", max_len=512) ``` When i try to train the model, i get the error `ValueError: Expected input batch_size (64) to match target batch_size (8192).` ``` model = RobertaForSequenceClassification(config=config) model.num_parameters() from transformers import DataCollatorForLanguageModeling from transformers import LineByLineTextDataset dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path="bert_data.txt", block_size=64, ) data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer) from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="./EsperBERTo", overwrite_output_dir=True, num_train_epochs=100, per_device_train_batch_size=64, save_steps=10_000, save_total_limit=2, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, prediction_loss_only=True, ) trainer.train() trainer.save_model("./EsperBERTo") ``` I understand i do not specify the output labels anywhere in the code but I am yet to find any example which I could follow to figure this.
11-18-2020 09:15:44
11-18-2020 09:15:44
It would be great to have all the information relative to your environment, as asked in the template, as well as the full error stack trace so that we may help you.<|||||>``` trainer.train() File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/trainer.py", line 499, in train tr_loss += self._training_step(model, inputs, optimizer) File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/trainer.py", line 622, in _training_step outputs = model(**inputs) File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/modeling_roberta.py", line 357, in forward loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 932, in forward ignore_index=self.ignore_index, reduction=self.reduction) File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/torch/nn/functional.py", line 2317, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/torch/nn/functional.py", line 2113, in nll_loss .format(input.size(0), target.size(0))) ValueError: Expected input batch_size (64) to match target batch_size (8192). ``` For my environment I had `pytorch == 1.6` on a Linux based system but while trying to solve this i have mixed up alot of my packages.<|||||>@LysandreJik the issue is still there in `pytorch== 1.7.0`<|||||>Could you please show us the result of running `transformers-cli env`?<|||||>@LysandreJik ``` - `transformers` version: 3.0.2 - Platform: Linux-3.10.0-693.5.2.el7.x86_64-x86_64-with-centos-7.4.1708-Core - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.0 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?:yes - Using distributed or parallel set-up in script?: < ```<|||||>Maybe @sgugger has an idea of what could be going on.<|||||>You're using `DataCollatorForLanguageModeling`, with a model for sequence classification. It can't work as `DataCollatorForLanguageModeling` prepares the labels for language modeling by duplicating the inputs, whereas you should have one label per sentence in your batch.<|||||>@sgugger Thanks for the help. I tried to use `DataCollatorForTokenClassification` but that throws ``` ]Traceback (most recent call last): File "m1.py", line 122, in <module> trainer.train() File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/trainer.py", line 747, in train tr_loss += self.training_step(model, inputs) File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/trainer.py", line 1075, in training_step loss = self.compute_loss(model, inputs) File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/trainer.py", line 1105, in compute_loss return outputs["loss"] if isinstance(outputs, dict) else outputs[0] File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/file_utils.py", line 1338, in __getitem__ return inner_dict[k] KeyError: 'loss' ```<|||||>You're doing sequence classification, not token classification. 
Also `LineByLineDataset` can't be used as it doesn't deal with labels. For doing text classification, you should look at the [run_glue example](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py). Also, we don't use issues to debug user's code, so please switch to the [forum](https://discuss.huggingface.co/) where there is a bigger community of people that will be able to help.
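To make the contrast concrete, here is a minimal sketch of sequence classification with one label per sentence (the texts and labels below are made up, and a custom tokenizer would take the place of `roberta-base`):

```python
import torch
from transformers import RobertaConfig, RobertaForSequenceClassification, RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
config = RobertaConfig.from_pretrained("roberta-base", num_labels=2)
model = RobertaForSequenceClassification(config)  # randomly initialized, as in the issue

texts = ["AAA ATA AKA", "MKV LLT AAA"]  # made-up protein "sentences"
labels = torch.tensor([0, 1])           # one label per sequence, not one per token

batch = tokenizer(texts, padding=True, return_tensors="pt")
loss = model(**batch, labels=labels)[0]
print(loss)
```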
transformers
8,614
closed
ValueError while running run_glue.py with xlnet model.
## Environment info + transformers version: 3.5.1 + Platform: Linux version 3.10.107-1-tlinux2-0050 + Python version: 3.7.6 + PyTorch version (GPU?): 1.6.0+cu101 + Tensorflow version (GPU?): no + Using GPU in script?: yes + Using distributed or parallel set-up in script?: yes ## Who can help @sgugger ## To reproduce ```shell CUDA_VISIBLE_DEVICES=1,2 python run_glue.py \ --model_name_or_path xlnet-base-cased \ --task_name stsb \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --per_device_eval_batch_size 8 \ --max_steps 1200 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir ./output/tranformer/xlnet \ --cache_dir ./pretained_model/xlnet \ --overwrite_output_dir \ --overwrite_cache \ --eval_accumulation_steps 2 \ --gradient_accumulation_steps 1 \ --disable_tqdm True\ --dataloader_drop_last \ --past_index 2 ``` ## error ```shell [INFO|trainer.py:1387] 2020-11-18 15:21:21,084 >> ***** Running Evaluation ***** [INFO|trainer.py:1388] 2020-11-18 15:21:21,084 >> Num examples = 4000 [INFO|trainer.py:1389] 2020-11-18 15:21:21,085 >> Batch size = 16 ./sim/lib/python3.7/site-packages/transformers/modeling_xlnet.py:297: UserWarning: Mixed memory format inputs detected while calling the operator. The operator will output contiguous tensor even if some of the inputs are in channels_last format. (Triggered internally at /pytorch/aten/src/ATen/native/TensorIterator.cpp:918.) attn_score = (ac + bd + ef) * self.scale ./sim/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. warnings.warn('Was asked to gather along dimension 0, but all ' Traceback (most recent call last): File "run_glue.py", line 414, in <module> main() File "run_glue.py", line 366, in main eval_result = trainer.evaluate(eval_dataset=eval_dataset) File "./sim/lib/python3.7/site-packages/transformers/trainer.py", line 1313, in evaluate prediction_loss_only=True if self.compute_metrics is None else None, File "./sim/lib/python3.7/site-packages/transformers/trainer.py", line 1431, in prediction_loop preds_gatherer.add_arrays(self._gather_and_numpify(preds_host, "eval_preds")) File "./sim/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 330, in add_arrays slice_len = self._nested_set_tensors(self._storage, arrays) File "./sim/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 337, in _nested_set_tensors slice_len = self._nested_set_tensors(x, y) File "./sim/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 337, in _nested_set_tensors slice_len = self._nested_set_tensors(x, y) File "./sim/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 349, in _nested_set_tensors i * slice_len : (i + 1) * slice_len ValueError: could not broadcast input array from shape (512,8,768) into shape (416,8,768) ```shell Could you please help me? Thanks a lot !
11-18-2020 07:35:39
11-18-2020 07:35:39
This is a duplicate of #7584 I think. Some workarounds are mentioned in that issue and a fix is on its way.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
transformers
8,613
closed
[s2s] multigpu skip
I have a hard time remembering whether this test worked on multi-gpu or not, it currently fails there, but work on a single gpu. So putting a band-aid `require_torch_non_multi_gpu_but_fix_me` skip for now, unless someone wants to work on it. For some reason I thought it was fine - I think it used to be fine. The error: ``` CUDA_VISIBLE_DEVICES=0,1 RUN_SLOW=1 pyt test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_bert2bert ... test_finetune_trainer.py:159: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../../src/transformers/trainer.py:774: in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch) ../../src/transformers/trainer.py:838: in _maybe_log_save_evaluate metrics = self.evaluate() ../../src/transformers/trainer.py:1241: in evaluate output = self.prediction_loop( ../../src/transformers/trainer.py:1343: in prediction_loop loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only) seq2seq_trainer.py:188: in prediction_step generated_tokens = model.generate( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = DataParallel( (module): EncoderDecoderModel( (encoder): BertModel( (embeddings): BertEmbeddings( (...) ) (decoder): Linear(in_features=128, out_features=30522, bias=True) ) ) ) ) ) name = 'generate' def __getattr__(self, name: str) -> Union[Tensor, 'Module']: if '_parameters' in self.__dict__: _parameters = self.__dict__['_parameters'] if name in _parameters: return _parameters[name] if '_buffers' in self.__dict__: _buffers = self.__dict__['_buffers'] if name in _buffers: return _buffers[name] if '_modules' in self.__dict__: modules = self.__dict__['_modules'] if name in modules: return modules[name] > raise ModuleAttributeError("'{}' object has no attribute '{}'".format( type(self).__name__, name)) E torch.nn.modules.module.ModuleAttributeError: 'DataParallel' object has no attribute 'generate' /home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:795: ModuleAttributeError ``` @patrickvonplaten, @LysandreJik
11-18-2020 07:06:26
11-18-2020 07:06:26
Hi I am getting this issue during eval on multiple gpus, is there a temporary fix I could run the codes on multiple gpus? thanks <|||||>Fixed in https://github.com/huggingface/transformers/pull/8716<|||||>Hi could you tell me in which version it is fixed? thanks Rabeeh On Sun, Nov 22, 2020 at 8:45 PM Stas Bekman <[email protected]> wrote: > Fixed in #8716 <https://github.com/huggingface/transformers/pull/8716> > > — > You are receiving this because you commented. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/pull/8613#issuecomment-731844378>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ABP4ZCFX4OKJX7X3LFZJQFLSRFZ5DANCNFSM4TZSIEOQ> > . > <|||||>I just made a PR, so until it's accepted you need to apply the change yourself or use the PR branch - it's a 2-line change: https://github.com/huggingface/transformers/pull/8716/files<|||||>Hi this does not resolve the issue, could you please have a look at my response here https://github.com/huggingface/transformers/issues/7146 thanks Rabeeh On Sun, Nov 22, 2020 at 8:48 PM Stas Bekman <[email protected]> wrote: > I just made a PR, so until it's accepted you need to apply the change > yourself or use the PR branch - it's a 2-line change: > https://github.com/huggingface/transformers/pull/8716/files > > — > You are receiving this because you commented. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/pull/8613#issuecomment-731844857>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ABP4ZCGGZT4AE7ABEX5UGPTSRF2I7ANCNFSM4TZSIEOQ> > . > <|||||>That's a totally different issue. Please kindly file a new issue about it. Also when you link to a comment, please click on the [...] in the right upper corner of the comment and get the link to that comment. Otherwise you're linking to the whole PR/issue and there is no telling what you're talking about. Hope this makes sense. Also please use code formatting for backtraces as "code" using the menu. Finally, you need to fully follow the Issue template and provide full information on how the issue can be reproduced. Giving just the backtrace most of the time doesn't help the developer to know how to reproduce the problem and thus solve it.<|||||>Hi Stas thank you, sure Best Rabeeh On Sun, Nov 22, 2020 at 9:13 PM Stas Bekman <[email protected]> wrote: > That's a totally different issue. Please kindly file a new issue about it. > > Also when you link to a comment, please click on the [...] in the right > upper corner of the comment and get the link to that comment. Otherwise > you're linking to the whole PR/issue and there is no telling what you're > talking about. Hope this makes sense. > > Also please use code formatting for backtraces as "code" using the menu. > > Finally, you need to fully follow the Issue template and provide full > information on how the issue can be reproduced. Giving just the backtrace > most of the time doesn't help the developer to know how to reproduce the > problem and thus solve it. > > — > You are receiving this because you commented. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/pull/8613#issuecomment-731848044>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ABP4ZCGAOXRH7KU6RL3TIDDSRF5ILANCNFSM4TZSIEOQ> > . >
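Until one is on a release that includes that fix, the usual workaround in a custom prediction step is to unwrap `DataParallel` before calling `generate`. A hedged sketch of the pattern (not necessarily the exact change made in the PR):

```python
import torch.nn as nn

def unwrap_model(model: nn.Module) -> nn.Module:
    """Return the underlying model if it is wrapped in (Distributed)DataParallel."""
    return model.module if hasattr(model, "module") else model

# Usage inside a prediction step (variable names are illustrative):
# generated_tokens = unwrap_model(model).generate(input_ids, attention_mask=attention_mask)
```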
transformers
8,612
closed
[s2s] fix finetune.py to adjust for #8530 changes
Making the script work again after the https://github.com/huggingface/transformers/pull/8530 change. As mentioned in https://github.com/huggingface/transformers/pull/8612, `.logits` doesn't seem to work with apex/PL. No idea why. So `distillation.py` is probably a problem too since it uses `.logits`. I haven't checked it. I don't think any tests use apex. ``` RUN_SLOW=1 pyt -sv test_bash_script.py::TestMbartCc25Enro::test_train_mbart_cc25_enro_script ``` tests finetune with fp16, but it runs it in a different way. @sgugger, @LysandreJik
11-18-2020 06:38:44
11-18-2020 06:38:44
@sgugger if you want to see why `.logits` isn't working you can try: ``` cd examples/seq2seq PYTHONPATH="../../src" CUDA_VISIBLE_DEVICES=0 python finetune.py --learning_rate 3e-5 --gpus 1 --do_train --val_check_interval 1 --num_train_epochs 1 --freeze_encoder --freeze_embeds --data_dir cnn_dm --max_target_length 142 --val_max_target_length 142 --train_batch_size 1 --eval_batch_size 1 --gradient_accumulation_steps 1 --model_name_or_path sshleifer/student_cnn_12_6 --tokenizer_name facebook/bart-large --output_dir distilbart-cnn-12-6 --overwrite_output_dir --num_sanity_val_steps 0 --fp16 --eval_beams 1 --amp_backend=apex --n_train 1 --n_val 1 --warmup_steps 1 ``` If I `pprint(outputs)` I get a dict. If you need `cnn_dm`: ``` wget https://cdn-datasets.huggingface.co/summarization/cnn_dm_v2.tgz tar -xzvf cnn_dm_v2.tgz # empty lines removed mv cnn_cln cnn_dm ```<|||||>I'll investigate but at first glance it looks like it's PyTorch Lightning that is messing with the new output types, not PyTorch. We will add a caveat in #8530 if that's the case.<|||||>So this is not linked to mixed precision directly but purely something in PL. Somewhere when dealing with mixed precision, it looks like there is something happening is the output is an instance of a dict, and they don't respect the class of the dict.<|||||>More investigation shows the problem is actually coming from apex, when using `opt_level="O2"` (the default level is fine). For reference, the following code shows apex-converted model lose their output type: ``` from apex import amp from transformers import BertModel, BertTokenizer tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") inputs = tokenizer("Hi, my name is Sylvain!", return_tensors="pt") inputs = {k: v.cuda() for k, v in inputs.items()} model = BertModel.from_pretrained("bert-base-uncased") model = model.cuda() optimizer = torch.optim.SGD(model.parameters(), lr=0.1) model, optimizer = amp.initialize(model, optimizer, opt_level="O2") outputs = model(**inputs) type(outputs) ``` should return `BaseModelOutputWithPoolingAndCrossAttentions` but returns `dict`.<|||||>(The fix is the right one, so while the discussion may continue, I'm merging this PR.)<|||||>Thank you for investigating this, @sgugger. As expected distillation.py fails too with apex / default level 2 ``` python distillation.py --teacher facebook/bart-large-xsum --data_dir xsum --tokenizer_name facebook/bart-large-xsum --student_decoder_layers 6 --student_encoder_layers 12 --freeze_encoder --freeze_embeds --learning_rate=3e-4 --do_train --do_predict --fp16 --val_check_interval 0.1 --n_val 1 --eval_beams 1 --length_penalty=0.5 --max_target_length=60 --val_max_target_length=60 --test_max_target_length=100 --model_name_or_path IGNORED --alpha_hid=3. --train_batch_size=16 --eval_batch_size=16 --gradient_accumulation_steps=2 --sortish_sampler --num_train_epochs=6 --warmup_steps 1 --output_dir distilbart_xsum_12_6 --amp_backend=apex --n_train 1 --gpus 1 [...] File "distillation.py", line 157, in _step lm_logits = student_outputs.logits AttributeError: 'dict' object has no attribute 'logits' ``` there are multiple situations like this in this program. what's the best way to proceed, @sgugger? switch to dict keys for now and report this issue to apex? (which is not being watched - even PRs aren't being merged/attended to).<|||||>@ptrblck, can https://github.com/huggingface/transformers/pull/8612#issuecomment-729721261 be fixed in apex, or is it a lost cause (as I noticed apex is not actively supported anymore). 
Please let me know if we should ask someone else? <|||||>I think we should fix the scripts by accessing elements in the outputs with their keys.<|||||>I will do that. Thank you for the feedback, @sgugger
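A small sketch of why key-based access survives the type change: `ModelOutput` subclasses support dictionary-style indexing, so `outputs["logits"]` works whether or not apex has degraded the output to a plain dict (the objects below are constructed by hand just to show this):

```python
import torch
from transformers.modeling_outputs import MaskedLMOutput

regular = MaskedLMOutput(logits=torch.zeros(2, 5))
degraded = dict(regular)  # roughly what the forward pass hands back under apex O2

# Key-based access works in both cases; attribute access only on the ModelOutput.
print(regular["logits"].shape, degraded["logits"].shape)
```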
transformers
8,611
closed
tf_bart typo - self.self.activation_dropout
# What does this PR do? Fix a one-line typo in `modeling_tf_bart`: `self.self.activation_dropout` -> `self.activation_dropout` BTW, there's no error in the forward pass. I only hit the error when I played around with `model.fit()` :) ## Who can review? @sshleifer
11-18-2020 04:01:47
11-18-2020 04:01:47
LGTM, cc @LysandreJik. I won't merge on my own anymore.
transformers
8,610
closed
How to train EncoderDecoderModel using bert for seq-to-seq model
Hi @patrickvonplaten, I am trying to make a sequence-to-sequence model using EncoderDecoderModel and BERT. Please find the following code: ```python import pandas as pd from transformers import EncoderDecoderModel train_data = [ ["one", "1"], ["two", "2"], ] train_df = pd.DataFrame(train_data, columns=["input_text", "target_text"]) eval_data = [["three", "3"]] eval_df = pd.DataFrame(eval_data, columns=["input_text", "target_text"]) #using BERT encoder decoder model=EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased') model.train(train_df) results=model.eval([eval_df],) #results=model.generate([["Five"],]) print(results) ``` But while evaluating, it ends up in an error as shown in the figure ![image](https://user-images.githubusercontent.com/52187221/99480246-95474b00-29bc-11eb-88c1-8ffaa88e1738.png) Any suggestions on the same?
11-18-2020 03:39:40
11-18-2020 03:39:40
Hey @jithincheriyan, Please note that `.eval()` and `.train()` are not used to evaluate / train the model in PyTorch. In PyTorch, `.eval()` and `.train()` are simply used to set the model into "training" or "evaluation" mode. See the functions' API here: https://pytorch.org/docs/master/generated/torch.nn.Flatten.html#torch.nn.Flatten.train Please refer to this blog post to understand how to train an `EncoderDecoderModel`: https://huggingface.co/blog/warm-starting-encoder-decoder#warm-starting-encoder-decoder-models-with-%F0%9F%A4%97transformers-practice
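For completeness, a minimal, hedged sketch of one training step and one generation step with a bert2bert `EncoderDecoderModel`, assuming a recent transformers version (which shifts the labels into decoder inputs automatically); the texts and hyper-parameters are placeholders, and the linked blog post remains the full recipe:

```python
import torch
from transformers import BertTokenizerFast, EncoderDecoderModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")

# Required for loss shifting and generation.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id

inputs = tokenizer(["one", "two"], padding=True, return_tensors="pt")
labels = tokenizer(["1", "2"], padding=True, return_tensors="pt").input_ids
# (For real training, padding positions in labels are usually replaced with -100.)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
loss = model(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, labels=labels).loss
loss.backward()
optimizer.step()

generated = model.generate(inputs.input_ids, max_length=8)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```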
transformers
8,609
closed
Missing `tokenizers` file?
In the latest https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py ![image](https://user-images.githubusercontent.com/4702353/99480139-759b2d00-2992-11eb-8763-87b6b246819a.png) But there is no `tokenizers` file.
11-18-2020 03:38:02
11-18-2020 03:38:02
Hi, tokenizers is not a file. It's an entire [library](https://github.com/huggingface/tokenizers) built by the Hugging Face team. The code that you show will import some functions from that library, if it's available.
transformers
8,608
closed
Extracting word representations from BPE-tokenization-based models (GPT-2, RoBERTa, etc.)
Hi! I am trying to extract representations from models (GPT-2 and RoBERTa) when the words are segmented into pieces. For this issue, let's assume I'm trying to extract the representation of `afoot` in the sentence: `the game is afoot !` For this I first load my libraries and instantiate the RoBERTa tokenizer: ```py import torch from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('roberta-base') ``` My inputs are in the form of `(sentence, idx)` where sentence is the context in which the desired word occurs in and idx which the index of the word (`afoot`) in the space-segmented form of the sentence `['the', 'game', 'is', 'afoot', '!']`: ```py sentence = ("The game is afoot !", 3) ``` The problem occurs when I compare the encoded token_ids for the word when it appears individually: ```py tokenizer.encode_plus('afoot', add_special_tokens = False)['input_ids'] #> [2001, 9210] ``` to the token_ids for the same word when it appears in a sentence: ```py tokenizer.encode_plus('the game is afoot !', add_special_tokens = False)['input_ids'] #> [627, 177, 16, 10, 2917, 27785] ``` **Why do I even need the individually split token_ids?** Because I want to know which indices correspond to the desired word so that I can query the model when I pass the sentence to it as an input and extract the representations for the wordpieces/BPEs and then average them to represent the word's vector. This is not a problem when I use BERT, where individual vs words as part of a sentence have the same segmentation. Any ideas how I can solve this issue? Environment: ``` python = 3.8.3 transformers = 3.1 torch = 1.6 ```
11-18-2020 00:24:16
11-18-2020 00:24:16
SOLVED! Prefixing spaces before the word seems to do the trick for now. But I will wait to close the issue for people who have more elegant solutions to this problem.<|||||>> SOLVED! Prefixing spaces before the word seems to do the trick for now. But I will wait to close the issue for people who have more elegant solutions to this problem. But how do you deal with the case that there are no spaces between two words? For example ``` tokenizer.tokenize("I have a dog, which loves eating meat!") #> [['I', 'Ġhave', 'Ġa', 'Ġdog', ',', 'Ġwhich', 'Ġloves', 'Ġeat', 'ing', 'Ġmeat', '!']] ``` There is no space before ',' and '!', but both are words. How do you distinguish them from 'ing'? THX for your reply!
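A hedged sketch of both approaches for locating a word's sub-token indices — the prefix-space trick, and (more robust for punctuation attached to words) letting the fast tokenizer map sub-tokens back to pre-tokenized words via `word_ids()`:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base", use_fast=True)

# 1) Prefix-space trick: a leading space makes the standalone word match its in-sentence form.
print(tokenizer.encode(" afoot", add_special_tokens=False))
print(tokenizer.encode("the game is afoot !", add_special_tokens=False))

# 2) More robust: align sub-tokens to words with the fast tokenizer.
enc = tokenizer("the game is afoot !", add_special_tokens=False)
word_idx = 3  # position of "afoot" in the pre-tokenized sentence
token_positions = [i for i, w in enumerate(enc.word_ids()) if w == word_idx]
print(token_positions)  # sub-token indices whose vectors can be averaged for "afoot"
```

Note that `word_ids()` follows the tokenizer's own pre-tokenization, which, for RoBERTa's byte-level pre-tokenizer, also splits punctuation like "," and "!" into their own words, so it distinguishes them from continuation pieces such as "ing".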
transformers
8,607
closed
Fixed link to the wrong paper.
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. documentation: @sgugger
11-17-2020 23:24:59
11-17-2020 23:24:59