Dataset columns: repo (stringclasses, 1 value), number (int64, 1 to 25.3k), state (stringclasses, 2 values), title (stringlengths, 1 to 487), body (stringlengths, 0 to 234k), created_at (stringlengths, 19), closed_at (stringlengths, 19), comments (stringlengths, 0 to 293k).
transformers
11,825
closed
Faster list concat for trainer_pt_utils.get_length_grouped_indices()
# What does this PR do? Substitutes a faster list concatenation in `get_length_grouped_indices()` for `LengthGroupedSampler` and `DistributedLengthGroupedSampler`, as the prior `sum(megabatches, [])` is prohibitively slow for a large number of megabatches (in the test case it takes hours for ~270k megabatches with 100 items each). Fixes #11795 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
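For context, a minimal sketch of the kind of substitution described above (illustrative only, not the exact code from the PR): `sum(list_of_lists, [])` re-copies the accumulated list on every addition, so it is quadratic in the number of megabatches, while a single-pass concatenation is linear.

```python
import itertools

# Toy stand-in for the megabatches built by the sampler: a list of lists of indices.
megabatches = [[i * 100 + j for j in range(100)] for i in range(1000)]

# Quadratic: each `+` copies everything accumulated so far.
slow = sum(megabatches, [])

# Linear: one pass over all megabatches.
fast = list(itertools.chain.from_iterable(megabatches))

assert slow == fast
```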
05-21-2021 22:15:06
05-21-2021 22:15:06
No problem, thank you for all your wonderful work!
transformers
11,824
closed
Add flax text class colab
# What does this PR do? Adds official link to notebook ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
05-21-2021 22:10:16
05-21-2021 22:10:16
transformers
11,823
closed
Hugging Face model Bio_ClinicalBERT producing 404 error
I'm building a Named Entity Recognition (NER) model using the Hugging Face implementation of emilyalsentzer/Bio_ClinicalBERT. Up to today, I've had no issues with the model. Today it's not working as expected. Question 1 - today, trying to train using: MODEL_NAME = 'emilyalsentzer/Bio_ClinicalBERT' model = text.sequence_tagger('bilstm-bert', preproc, bert_model=MODEL_NAME) results in this error: 404 Client Error: Not Found for url: https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT/resolve/main/tf_model.h5 Does Hugging Face offer any kind of health check to ascertain the status of their models? Question 2 - working with files (model.h5, model.json, and preproc.sav) I'd saved from earlier training iterations, I'm getting the same 404 error shown above. I don't understand where in these files the call to Hugging Face is occurring. It doesn't seem to be in the .json, and the .h5 and .sav file formats are hard to inspect. Read more about what these files are here: https://medium.com/analytics-vidhya/how-to-deploy-your-neural-network-model-using-ktrain-ae255b134c77 Back in February, I'd used these exact model.h5, model.json, and preproc.sav files to run the NER app using Streamlit with no problem. Not sure if this is a temporary issue with Bio_ClinicalBERT or if I need to retool my original approach due to potentially permanent problems with this transformer model.
05-21-2021 18:22:56
05-21-2021 18:22:56
Hi @NicoleJaneway , I think this issue is similar to the following one in the `ktrain` repo: https://github.com/amaiya/ktrain/issues/367 The "problem" is that there's no TensorFlow-compatible model on the hub (more precisely, no `tf_model.h5` file). One good "workaround" would be if the model owner (pinging @EmilyAlsentzer here) would upload such a model to avoid this message :hugs: <|||||>Thanks, @stefan-it! Unfortunately, with the 404 error, my app is no longer working. I posted a new [ktrain issue](https://github.com/amaiya/ktrain/issues/369) about it. In my experience, the creator has been amazingly responsive, so let's see what comes of the question.<|||||>Hello @NicoleJaneway, looking at the repository and its commit history, I don't think there ever was a `.h5` file uploaded. Could you share the code you're using when using local files so that we can see what's going on? If using local files, `transformers` should look locally before looking on the server, so you shouldn't get a 404 error<|||||>Hey @LysandreJik, thanks for trying to help - I don't have this project up on a public github yet. I'll let you know when I do.
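As a concrete sketch of the workaround discussed above (untested here, and whether ktrain exposes this option is a separate question): since the hub repo only ships PyTorch weights, the TensorFlow classes in `transformers` can convert them on the fly with `from_pt=True` instead of requesting the missing `tf_model.h5`.

```python
from transformers import AutoTokenizer, TFAutoModel

model_name = "emilyalsentzer/Bio_ClinicalBERT"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the PyTorch checkpoint and convert it to TensorFlow in memory,
# avoiding the 404 on tf_model.h5 (requires torch to be installed).
model = TFAutoModel.from_pretrained(model_name, from_pt=True)
```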
transformers
11,822
closed
Training Transformer XL from scratch
Hello, I am trying to recreate this notebook https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb for transformer XL I made changes to the tokenizer as follows ``` %%time from pathlib import Path from tokenizers import Tokenizer from tokenizers.models import WordLevel from tokenizers import normalizers from tokenizers.normalizers import Lowercase, NFD, StripAccents from tokenizers.pre_tokenizers import Whitespace from tokenizers.processors import TemplateProcessing from tokenizers.trainers import WordPieceTrainer from tokenizers.trainers import WordLevelTrainer tokenizer = Tokenizer(WordLevel(unk_token="[UNK]")) tokenizer.normalizer = normalizers.Sequence([NFD(), Lowercase(), StripAccents()]) tokenizer.pre_tokenizer = Whitespace() bert_tokenizer.post_processor = TemplateProcessing( single="[CLS] $A [SEP]", pair="[CLS] $A [SEP] $B:1 [SEP]:1", special_tokens=[ ("[CLS]", 1), ("[SEP]", 2), ], ) trainer = WordLevelTrainer(show_progress=True, special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]) files = [str(x) for x in Path(".").glob("**/*.txt")] tokenizer.train(files, trainer) tokenizer.save("espertransXL.json") ``` and then loaded it into the FastTokenizer ``` from transformers import PreTrainedTokenizerFast tokenizer = PreTrainedTokenizerFast(tokenizer_file="espertransXL.json") tokenizer.bos_token="[CLS]" tokenizer.eos_token="[SEP]" tokenizer.sep_token="[SEP]" tokenizer.cls_token="[CLS]" tokenizer.unk_token="[UNK]" tokenizer.pad_token="[PAD]" tokenizer.mask_token="[MASK]" tokenizer._bos_token="[CLS]" tokenizer._eos_token="[SEP]" tokenizer._sep_token="[SEP]" tokenizer._cls_token="[CLS]" tokenizer._unk_token="[UNK]" tokenizer._pad_token="[PAD]" tokenizer._mask_token="[MASK]" ``` Post that, I instantiated the model ``` from transformers import TransfoXLConfig, TransfoXLModel config = TransfoXLConfig() model = TransfoXLModel(config=config) ``` Set up the data collator: ``` from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15 ) ``` Setting up the trainer as follows ``` from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="./TransfoXL", overwrite_output_dir=True, num_train_epochs=1, per_gpu_train_batch_size=16, save_steps=10_000, save_total_limit=2, prediction_loss_only=True, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, ) ``` When I execute: ``` %%time trainer.train() ``` I get the following error: ``` TypeError Traceback (most recent call last) <timed eval> in <module> /opt/conda/envs/Python-3.7-CUDA/lib/python3.7/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs) 1270 tr_loss += self.training_step(model, inputs) 1271 else: -> 1272 tr_loss += self.training_step(model, inputs) 1273 self.current_flos += float(self.floating_point_ops(inputs)) 1274 /opt/conda/envs/Python-3.7-CUDA/lib/python3.7/site-packages/transformers/trainer.py in training_step(self, model, inputs) 1732 loss = self.compute_loss(model, inputs) 1733 else: -> 1734 loss = self.compute_loss(model, inputs) 1735 1736 if self.args.n_gpu > 1: /opt/conda/envs/Python-3.7-CUDA/lib/python3.7/site-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs) 1764 else: 1765 labels = None -> 1766 outputs = model(**inputs) 1767 # Save past state if it exists 1768 # TODO: this needs to be fixed and made cleaner later. 
/opt/conda/envs/Python-3.7-CUDA/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), TypeError: forward() got an unexpected keyword argument 'attention_mask' ``` Can someone please advise on this, or if you have a working notebook example, point me to it? Thanks
05-21-2021 16:57:46
05-21-2021 16:57:46
Hi! I believe you should be using `TransfoXLLMHeadModel` instead, as right now you're using the Transfo XL model without its LM head. The TransfoXL model is one of our older models which doesn't fit one-to-one with other models, unfortunately. I invite you to take a look at the signature here: https://huggingface.co/transformers/model_doc/transformerxl.html#transformers.TransfoXLLMHeadModel.forward It doesn't accept the `attention_mask` parameter, so you would need to tell the tokenizer it doesn't need to output those. The easiest way you can achieve that is by changing the following line: ```diff - tokenizer = PreTrainedTokenizerFast(tokenizer_file="espertransXL.json") + tokenizer = PreTrainedTokenizerFast(tokenizer_file="espertransXL.json", model_input_names=["input_ids"]) ```<|||||>@LysandreJik Thank you for the reply. I made those changes and while that error is resolved, I am now getting the error `KeyError: 'loss'`. On searching the internet, it seems that this error comes up when `labels` are not defined, but I believe I have defined them. I have created this public notebook for transformerXL https://colab.research.google.com/drive/1vMVoPhtkHFC_-0X-hgwHvH03ynGT0j5i?usp=sharing . Can you please check and advise? I would be happy to publish this as a tutorial/example once it is working, as I see this question on training transformer-xl has come up in the past.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello @vishrawas! You could subclass `TransfoXLLMHeadModel` and change its output dictionary from `losses` to `loss`, so it would work with the trainer.
Please note that you will probably have to reduce the loss prior to the return, as it has not been reduced yet, for example: `loss.mean()`: ```Python class OwnTransfoXLLMHeadModel(TransfoXLLMHeadModel): def __init__(self, *args, **kwargs) -> None: super(OwnTransfoXLLMHeadModel, self).__init__(*args, **kwargs) def forward( self, input_ids=None, mems=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): return_dict = return_dict if return_dict is not None else self.config.use_return_dict if input_ids is not None: bsz, tgt_len = input_ids.size(0), input_ids.size(1) elif inputs_embeds is not None: bsz, tgt_len = inputs_embeds.size(0), inputs_embeds.size(1) else: raise ValueError("You have to specify either input_ids or inputs_embeds") transformer_outputs = self.transformer( input_ids, mems=mems, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) last_hidden = transformer_outputs[0] pred_hid = last_hidden[:, -tgt_len:] softmax_output = self.crit(pred_hid, labels) prediction_scores = softmax_output.view(bsz, tgt_len, -1) if labels is None else () loss = softmax_output.view(bsz, tgt_len - 1) if labels is not None else None loss = loss.mean() if not return_dict: output = (prediction_scores,) + transformer_outputs[1:] return ((loss,) + output) if loss is not None else output return TransfoXLLMHeadModelOutput( loss=loss, prediction_scores=prediction_scores, mems=transformer_outputs.mems, hidden_states=transformer_outputs.hidden_states, attentions=transformer_outputs.attentions, ) ``` Additionally, you will need to subclass `ModelOutput` in the same way `TransfoXLLMHeadModelOutput` does and change the `losses` argument to `loss`: ```Python class TransfoXLLMHeadModelOutput(ModelOutput): loss: Optional[torch.FloatTensor] = None prediction_scores: torch.FloatTensor = None mems: List[torch.FloatTensor] = None hidden_states: Optional[Tuple[torch.FloatTensor]] = None attentions: Optional[Tuple[torch.FloatTensor]] = None @property def logits(self): return self.prediction_scores ```<|||||>> @LysandreJik Thank you for the reply. I made those changes and while that error is resolved, I am getting the error `KeyError: 'loss'` On searching the internet, it seems that this error comes when `labels` are not defined, but I believe I have defined it. I have created this public notebook for transformerXL https://colab.research.google.com/drive/1vMVoPhtkHFC_-0X-hgwHvH03ynGT0j5i?usp=sharing . Can you please check and advise. > > I would be happy to publish this as a tutorial/example once it is working as I see this question on training transformer-xl has come up in past. Hello there! I wonder if you have an updated version of the transformer-XL notebook? Thank you for your help!
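For completeness, here is a rough sketch of how the subclass above could be plugged into the `Trainer`, assuming the `tokenizer` and `dataset` objects from the notebook earlier in this thread (not tested here). Since Transformer-XL is a causal language model, `mlm=False` is the usual choice for the collator rather than the masked-LM setting used in the original notebook.

```python
from transformers import (
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
    TransfoXLConfig,
)

config = TransfoXLConfig(vocab_size=tokenizer.vocab_size)
model = OwnTransfoXLLMHeadModel(config)

# Causal LM objective: no masking; labels are copies of the input ids.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./TransfoXL", overwrite_output_dir=True, num_train_epochs=1),
    data_collator=data_collator,
    train_dataset=dataset,
)
trainer.train()
```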
transformers
11,821
closed
[run_clm.py] restore caching
`datasets==0.1.6` introduced in-memory datasets, which unfortunately have no caching, so development becomes very slow as the dataset gets reprocessed on every run. Supposedly this should make things faster overall, but at a huge cost to us developers. It's also inconsistent: some datasets behave one way, others another. This is too magical, IMHO. This PR adds `keep_in_memory=False` to disable the in-memory behavior and restore normal caching. Perhaps adding a note in the example that the user can change it to `True` if they don't mind the slow startup? Alternatively, if you believe that the new behavior is good, let's create an env var in `datasets` that will control that, so that we can turn off this painful behavior w/o needing to manually modify the code. Fixes: https://github.com/huggingface/transformers/issues/11801 p.s. working on this one script on many fronts - and then will sync other scripts at once. @sgugger, @VictorSanh, @lhoestq
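For illustration, a sketch of what the change amounts to at the call site (the dataset name and file paths are placeholders, not the exact code of `run_clm.py`):

```python
from datasets import load_dataset

# keep_in_memory=False opts out of the in-memory fast path, so the processed
# dataset is written to (and reloaded from) the on-disk cache instead of being
# rebuilt from scratch on every run of the script.
raw_datasets = load_dataset(
    "text",
    data_files={"train": "train.txt", "validation": "valid.txt"},  # placeholder paths
    keep_in_memory=False,
)
```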
05-21-2021 15:58:14
05-21-2021 15:58:14
No we can't just add the new argument without checking the version, as it's probably not going to work anymore for earlier versions of datasets (that's why it's bad to do breaking changes :-P). It seems like it's the way the Datasets library wants to be used, so I would leave the default behavior here and you can change the script locally for your use case. If the defaults of the Datasets library are not satisfactory, then maybe those defaults should be changed.<|||||>Makes sense, @sgugger - thank you - back to `datasets`
transformers
11,820
closed
[Flax] Small fixes in `run_flax_glue.py`
# What does this PR do? Fixes a typo. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
05-21-2021 15:40:51
05-21-2021 15:40:51
transformers
11,819
closed
Add option to log only once in multinode training
# What does this PR do? This PR adds the option to only log on one node when doing multinode training. This is controlled by the `is_local_process_zero` method, so I apply the switch there to avoid putting it in multiple places. Fixes #11796
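Conceptually, the switch looks something like the sketch below. The names (`log_on_each_node`, `local_rank`, `global_rank`) are illustrative stand-ins, not necessarily the exact fields this PR adds.

```python
def should_log(log_on_each_node: bool, local_rank: int, global_rank: int) -> bool:
    """Decide whether the current process should emit logs."""
    if log_on_each_node:
        # Previous/default behavior: the main process of every node logs.
        return local_rank == 0
    # New option: only the main process of the first node logs,
    # so a multinode run produces a single log stream.
    return global_rank == 0
```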
05-21-2021 13:57:18
05-21-2021 13:57:18
transformers
11,818
closed
[Trainer] Report both steps and num samples per second
# What does this PR do? As seen with @stas00, there is a bug in the current speed metrics reporting: training reports the number of training steps per second while evaluation and predict report the number of samples per second. After discussion we concluded that both are interesting, so this PR updates the Trainer to report both.
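As a sketch of what reporting both quantities amounts to (an illustrative helper, not the actual `speed_metrics` implementation in the Trainer):

```python
import time

def speed_metrics(prefix: str, start_time: float, num_samples: int, num_steps: int) -> dict:
    """Return runtime plus both throughput numbers for a phase (train/eval/predict)."""
    runtime = time.time() - start_time
    return {
        f"{prefix}_runtime": round(runtime, 4),
        f"{prefix}_samples_per_second": round(num_samples / runtime, 3),
        f"{prefix}_steps_per_second": round(num_steps / runtime, 3),
    }
```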
05-21-2021 13:50:16
05-21-2021 13:50:16
transformers
11,817
closed
Same sentence with different padding lengths results in different embeddings
I use nn.Softmax(dim=-1) to compute the softmax and get different outputs for the same scores when the padding length changes. ``` a = [-3.6180e-01, 6.6926e-01, 1.2248e+01, -9.5795e-01] b = [-3.6180e-01, 6.6926e-01, 1.2248e+01, -9.5795e-01, -9.5795e-01] ``` softmax(a) = [3.3403e-06, 9.366**2**e-06, 9.999**9**e-01, 1.8402e-06] softmax(b) = [3.3403e-06, 9.366**1**e-06, 9.999**8**e-01, 1.8402e-06, 1.8402e-06] The different softmax results lead to different sentence embeddings, and sometimes the embeddings differ a lot. I tested with the stock transformers library and could not reproduce the problem; this bug appears in a version of transformers modified by our company. Any help is appreciated!
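For reference, the standard way to make an attention softmax invariant to padding length is to mask the padded positions before the softmax. A generic sketch using the numbers from the issue (not the company-modified code):

```python
import torch
import torch.nn as nn

scores = torch.tensor([[-0.36180, 0.66926, 12.248, -0.95795, -0.95795]])
attention_mask = torch.tensor([[1, 1, 1, 1, 0]])  # 1 = real token, 0 = padding

# Push padded positions to -inf so they receive (numerically) zero probability and
# the distribution over the real tokens no longer depends on how much padding was added.
masked_scores = scores.masked_fill(attention_mask == 0, float("-inf"))
probs = nn.Softmax(dim=-1)(masked_scores)
```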
05-21-2021 13:06:41
05-21-2021 13:06:41
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,816
closed
ValueError batch-size mismatch when redefining classifier layer on BertForSequenceClassification
Hi, I am currently using BertForSequenceClassification for my project, to show some results regarding transfer performance on the GLUE Benchmark. I want to do two things. 1. Add a separate nn.Linear() head on top of the already fine-tuned BertForSequenceClassification model and train the entire model: Input -> BERT_BASE_MODEL -> CLASSIFIER -> nn.Linear 2. Remove (or reinitialize) the classifier layer of the BertForSequenceClassification model with a new head of random weights and |out_features| = |labels of the new task|, and retrain it on the new task. I have a problem with 2. If I try to execute my script, I always get the following error. I tried various things, but couldn't get the code to work. I also searched for similar posts, but couldn't find one. >> source t_ft.sh Selected cpu as device. b'Skipping line 24810: expected 12 fields, saw 13\nSkipping line 33961: expected 12 fields, saw 13\n' b'Skipping line 75911: expected 12 fields, saw 13\nSkipping line 100114: expected 12 fields, saw 13\n' b'Skipping line 150638: expected 12 fields, saw 13\nSkipping line 158834: expected 12 fields, saw 13\nSkipping line 173104: expected 12 fields, saw 13\nSkipping line 178252: expected 12 fields, saw 13\n' b'Skipping line 221951: expected 12 fields, saw 13\n' b'Skipping line 286845: expected 12 fields, saw 13\nSkipping line 314110: expected 12 fields, saw 13\n' Processing 1000 / 391120 Samples Processing 2000 / 391120 Samples Processing 3000 / 391120 Samples Processing 1000 / 9714 Samples Processing 2000 / 9714 Samples Processing 3000 / 9714 Samples add_head: no remove_head: yes >>======== Epoch 1 / 2 ======== Training... Traceback (most recent call last): File "bert_pipeline.py", line 1117, in <module> main() File "bert_pipeline.py", line 976, in main outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.8/dist-packages/transformers/models/bert/modeling_bert.py", line 1513, in forward loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/loss.py", line 1047, in forward return F.cross_entropy(input, target, weight=self.weight, File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 2693, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 2384, in nll_loss raise ValueError( ValueError: Expected input batch_size (24) to match target batch_size (16). 
I guess the error is in line 891-892 ``` if (remove_head == 'yes'): model.classifier = nn.Linear(in_features=model.classifier.in_features, out_features=num_labels) ``` Full Code: ``` import numpy as np_ import pandas as pd import torch import torch.nn as nn from torch.nn import CrossEntropyLoss, MSELoss import random import time import datetime import argparse import copy import json import csv from transformers import BertConfig, BertTokenizer, BertForSequenceClassification, get_linear_schedule_with_warmup, AdamW from transformers.data import metrics from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler from sklearn.metrics import f1_score, matthews_corrcoef from scipy.stats import pearsonr, spearmanr import warnings warnings.simplefilter(action='ignore', category=FutureWarning) # Methods for Transfer-Learning def freeze_base_weights(model): pass # Types of some new BERT architectures class BertWithAdditionalHead(nn.Module): def __init__(self,base_model, num_labels): super(BertWithAdditionalHead,self).__init__() self.num_labels = num_labels self.base_model = base_model self.activation = nn.GELU() self.fc1 = nn.Linear(self.base_model.num_labels, self.num_labels) def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): r""" labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`): Labels for computing the sequence classification/regression loss. Indices should be in :obj:`[0, ..., config.num_labels - 1]`. If :obj:`config.num_labels == 1` a regression loss is computed (Mean-Square loss), If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
""" return_dict = return_dict if return_dict is not None else self.base_model.config.use_return_dict outputs = self.base_model.bert( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) pooled_output = outputs[1] pooled_output = self.base_model.dropout(pooled_output) outputs = self.base_model.classifier(pooled_output) outputs = self.activation(outputs) logits = self.fc1(outputs) loss = None if labels is not None: if self.num_labels == 1: # We are doing regression loss_fct = MSELoss() loss = loss_fct(logits.view(-1), labels.view(-1)) else: loss_fct = CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) if not return_dict: output = (logits, outputs[2:]) return (loss, output) if loss is not None else output return ((loss,logits)) # Processors class ColaProcessor: def __init__(self, data_dir): self.data_dir = data_dir self.labels=["0","1"] self.train_label_index=[1] self.dev_label_index=[1] self.train_sentence_index=[3] self.dev_sentence_index=[3] self.test_sentence_index=[1] def get_train_data(self): data = pd.read_csv(self.data_dir + "train.tsv", delimiter="\t", error_bad_lines=False, header=None, encoding='utf8', dtype=str) #Remove NaN values data = data.dropna(subset=(self.train_sentence_index + self.train_label_index)) train_data = data.iloc[:,self.train_sentence_index].copy() train_labels = data.iloc[:,self.train_label_index].copy() return((train_data, train_labels)) def get_dev_data(self): data = pd.read_csv(self.data_dir + "dev.tsv", delimiter="\t", error_bad_lines=False, encoding='utf8', header=None, dtype=str) data = data.dropna(subset=(self.dev_sentence_index + self.dev_label_index)) dev_data = data.iloc[:,self.dev_sentence_index].copy() dev_labels = data.iloc[:,self.dev_label_index].copy() return((dev_data, dev_labels)) def get_test_data(self): data = pd.read_csv(self.data_dir + "test.tsv", delimiter="\t", encoding='utf8', error_bad_lines=False, dtype=str) data = data.dropna(subset=self.test_sentence_index) test_data = data.iloc[:,self.test_sentence_index].copy() return(test_data) def get_label_list(self): return(self.labels) def get_index(self): return((self.train_sentence_index, self.train_label_index), (self.dev_sentence_index, self.dev_label_index), (self.test_sentence_index)) # TODO class MRPCProcessor: def __init__(self, data_dir): self.data_dir = data_dir self.labels=["0","1"] self.train_label_index=[0] self.dev_label_index=[0] self.train_sentence_index=[3,4] self.dev_sentence_index=[3,4] self.test_sentence_index=[3,4] def get_train_data(self): data = pd.read_csv(self.data_dir + "train.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) #Remove NaN values data = data.dropna(subset=(self.train_sentence_index + self.train_label_index)) train_data = data.iloc[:,self.train_sentence_index].copy() train_labels = data.iloc[:,self.train_label_index].copy() return((train_data, train_labels)) def get_dev_data(self): data = pd.read_csv(self.data_dir + "dev.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) data = data.dropna(subset=(self.dev_sentence_index + self.dev_label_index)) dev_data = data.iloc[:,self.dev_sentence_index].copy() dev_labels = data.iloc[:,self.dev_label_index].copy() return((dev_data, dev_labels)) def get_test_data(self): data = 
pd.read_csv(self.data_dir + "test.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) data = data.dropna(subset=self.test_sentence_index) test_data = data.iloc[:,self.test_sentence_index].copy() return(test_data) def get_label_list(self): return(self.labels) def get_index(self): return((self.train_sentence_index, self.train_label_index), (self.dev_sentence_index, self.dev_label_index), (self.test_sentence_index)) class MNLIMatchedProcessor: def __init__(self, data_dir): self.data_dir = data_dir self.labels=["contradiction", "entailment", "neutral"] self.train_label_index=[11] self.dev_label_index=[15] self.train_sentence_index=[8,9] self.dev_sentence_index=[8,9] self.test_sentence_index=[8,9] def get_train_data(self): data = pd.read_csv(self.data_dir + "train.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) #Remove NaN values data = data.dropna(subset=(self.train_sentence_index + self.train_label_index)) train_data = data.iloc[:,self.train_sentence_index].copy() train_labels = data.iloc[:,self.train_label_index].copy() return((train_data, train_labels)) def get_dev_data(self): data = pd.read_csv(self.data_dir + "dev_matched.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) data = data.dropna(subset=(self.dev_sentence_index + self.dev_label_index)) dev_data = data.iloc[:,self.dev_sentence_index].copy() dev_labels = data.iloc[:,self.dev_label_index].copy() return((dev_data, dev_labels)) def get_test_data(self): data = pd.read_csv(self.data_dir + "test_matched.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) data = data.dropna(subset=self.test_sentence_index) test_data = data.iloc[:,self.test_sentence_index] return(test_data) def get_label_list(self): return(self.labels) def get_index(self): return((self.train_sentence_index, self.train_label_index), (self.dev_sentence_index, self.dev_label_index), (self.test_sentence_index)) class QNLIProcessor: def __init__(self, data_dir): self.data_dir = data_dir self.labels=["entailment", "not_entailment"] self.train_label_index=[3] self.dev_label_index=[3] self.train_sentence_index=[1,2] self.dev_sentence_index=[1,2] self.test_sentence_index=[1,2] def get_train_data(self): data = pd.read_csv(self.data_dir + "train.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) #Remove NaN values data = data.dropna(subset=(self.train_sentence_index + self.train_label_index)) train_data = data.iloc[:,self.train_sentence_index].copy() train_labels = data.iloc[:,self.train_label_index].copy() return((train_data, train_labels)) def get_dev_data(self): data = pd.read_csv(self.data_dir + "dev.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) data = data.dropna(subset=(self.dev_sentence_index + self.dev_label_index)) dev_data = data.iloc[:,self.dev_sentence_index].copy() dev_labels = data.iloc[:,self.dev_label_index].copy() return((dev_data, dev_labels)) def get_test_data(self): data = pd.read_csv(self.data_dir + "test.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) data = data.dropna(subset=self.test_sentence_index) test_data = data.iloc[:,self.test_sentence_index] return(test_data) def get_label_list(self): return(self.labels) def get_index(self): return((self.train_sentence_index, self.train_label_index), 
(self.dev_sentence_index, self.dev_label_index), (self.test_sentence_index)) class QQPProcessor: def __init__(self, data_dir): self.data_dir = data_dir self.labels=["0", "1"] self.train_label_index=[5] self.dev_label_index=[5] self.train_sentence_index=[3,4] self.dev_sentence_index=[3,4] self.test_sentence_index=[1,2] def get_train_data(self): data = pd.read_csv(self.data_dir + "train.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) #Remove NaN values data = data.dropna(subset=(self.train_sentence_index + self.train_label_index)) train_data = data.iloc[:,self.train_sentence_index].copy() train_labels = data.iloc[:,self.train_label_index].copy() return((train_data, train_labels)) def get_dev_data(self): data = pd.read_csv(self.data_dir + "dev.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) data = data.dropna(subset=(self.dev_sentence_index + self.dev_label_index)) dev_data = data.iloc[:,self.dev_sentence_index].copy() dev_labels = data.iloc[:,self.dev_label_index].copy() return((dev_data, dev_labels)) def get_test_data(self): data = pd.read_csv(self.data_dir + "test.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) data = data.dropna(subset=self.test_sentence_index) test_data = data.iloc[:,self.test_sentence_index] return(test_data) def get_label_list(self): return(self.labels) def get_index(self): return((self.train_sentence_index, self.train_label_index), (self.dev_sentence_index, self.dev_label_index), (self.test_sentence_index)) class RTEProcessor: def __init__(self, data_dir): self.data_dir = data_dir self.labels=["entailment", "not_entailment"] self.train_label_index=[3] self.dev_label_index=[3] self.train_sentence_index=[1,2] self.dev_sentence_index=[1,2] self.test_sentence_index=[1,2] def get_train_data(self): data = pd.read_csv(self.data_dir + "train.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) #Remove NaN values data = data.dropna(subset=(self.train_sentence_index + self.train_label_index)) train_data = data.iloc[:,self.train_sentence_index].copy() train_labels = data.iloc[:,self.train_label_index].copy() return((train_data, train_labels)) def get_dev_data(self): data = pd.read_csv(self.data_dir + "dev.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) data = data.dropna(subset=(self.dev_sentence_index + self.dev_label_index)) dev_data = data.iloc[:,self.dev_sentence_index].copy() dev_labels = data.iloc[:,self.dev_label_index].copy() return((dev_data, dev_labels)) def get_test_data(self): data = pd.read_csv(self.data_dir + "test.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) data = data.dropna(subset=self.test_sentence_index) test_data = data.iloc[:,self.test_sentence_index] return(test_data) def get_label_list(self): return(self.labels) def get_index(self): return((self.train_sentence_index, self.train_label_index), (self.dev_sentence_index, self.dev_label_index), (self.test_sentence_index)) class SST2Processor: def __init__(self, data_dir): self.data_dir = data_dir self.labels=["0", "1"] self.train_label_index=[1] self.dev_label_index=[1] self.train_sentence_index=[0] self.dev_sentence_index=[0] self.test_sentence_index=[1] def get_train_data(self): data = pd.read_csv(self.data_dir + "train.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', 
error_bad_lines=False) #Remove NaN values data = data.dropna(subset=(self.train_sentence_index + self.train_label_index)) train_data = data.iloc[:,self.train_sentence_index].copy() train_labels = data.iloc[:,self.train_label_index].copy() return((train_data, train_labels)) def get_dev_data(self): data = pd.read_csv(self.data_dir + "dev.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) data = data.dropna(subset=(self.dev_sentence_index + self.dev_label_index)) dev_data = data.iloc[:,self.dev_sentence_index].copy() dev_labels = data.iloc[:,self.dev_label_index].copy() return((dev_data, dev_labels)) def get_test_data(self): data = pd.read_csv(self.data_dir + "test.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) data = data.dropna(subset=self.test_sentence_index) test_data = data.iloc[:,self.test_sentence_index] return(test_data) def get_label_list(self): return(self.labels) def get_index(self): return((self.train_sentence_index, self.train_label_index), (self.dev_sentence_index, self.dev_label_index), (self.test_sentence_index)) class STSBProcessor: def __init__(self, data_dir): self.data_dir = data_dir self.labels=[] self.train_label_index=[9] self.dev_label_index=[9] self.train_sentence_index=[7,8] self.dev_sentence_index=[7,8] self.test_sentence_index=[7,8] def get_train_data(self): data = pd.read_csv(self.data_dir + "train.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) #Remove NaN values data = data.dropna(subset=(self.train_sentence_index + self.train_label_index)) train_data = data.iloc[:,self.train_sentence_index].copy() train_labels = data.iloc[:,self.train_label_index].copy() return((train_data, train_labels)) def get_dev_data(self): data = pd.read_csv(self.data_dir + "dev.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) data = data.dropna(subset=(self.dev_sentence_index + self.dev_label_index)) dev_data = data.iloc[:,self.dev_sentence_index].copy() dev_labels = data.iloc[:,self.dev_label_index].copy() return((dev_data, dev_labels)) def get_test_data(self): data = pd.read_csv(self.data_dir + "test.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], quoting=csv.QUOTE_NONE, encoding='utf8', error_bad_lines=False) data = data.dropna(subset=self.test_sentence_index) test_data = data.iloc[:,self.test_sentence_index] return(test_data) def get_label_list(self): return(self.labels) def get_index(self): return((self.train_sentence_index, self.train_label_index), (self.dev_sentence_index, self.dev_label_index), (self.test_sentence_index)) class WNLIProcessor: def __init__(self, data_dir): self.data_dir = data_dir self.labels=["0", "1"] self.train_label_index=[3] self.dev_label_index=[3] self.train_sentence_index=[1,2] self.dev_sentence_index=[1,2] self.test_sentence_index=[1,2] def get_train_data(self): data = pd.read_csv(self.data_dir + "train.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) #Remove NaN values data = data.dropna(subset=(self.train_sentence_index + self.train_label_index)) train_data = data.iloc[:,self.train_sentence_index].copy() train_labels = data.iloc[:,self.train_label_index].copy() return((train_data, train_labels)) def get_dev_data(self): data = pd.read_csv(self.data_dir + "dev.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], encoding='utf8', error_bad_lines=False) data = 
data.dropna(subset=(self.dev_sentence_index + self.dev_label_index)) dev_data = data.iloc[:,self.dev_sentence_index].copy() dev_labels = data.iloc[:,self.dev_label_index].copy() return((dev_data, dev_labels)) def get_test_data(self): data = pd.read_csv(self.data_dir + "test.tsv", delimiter="\t", dtype=str, header=None, skiprows=[0], quoting=csv.QUOTE_NONE, encoding='utf8', error_bad_lines=False) data = data.dropna(subset=self.test_sentence_index) test_data = data.iloc[:,self.test_sentence_index] return(test_data) def get_label_list(self): return(self.labels) def get_index(self): return((self.train_sentence_index, self.train_label_index), (self.dev_sentence_index, self.dev_label_index), (self.test_sentence_index)) # Metrics class Metrics(): def __init__(self, is_regression): self.is_regression = is_regression def get_dict(self): if self.is_regression: return({"pearson_corr": None, "spearman_corr": None}) else: return({"accuracy": None, "f1_score": None, "mcc": None}) def calculate_metrics(self, preds, labels): eval_dict = {} if self.is_regression: eval_dict["pearson_corr"] = pearsonr(preds, labels)[0] eval_dict["spearman_corr"] = spearmanr(preds, labels)[0] else: eval_dict["accuracy"] = metrics.simple_accuracy(preds, labels) eval_dict["f1_score"] = f1_score(y_true=labels, y_pred=preds) eval_dict["mcc"] = matthews_corrcoef(labels, preds) return(eval_dict) def log_eval(epoch_i, avg_train_loss, eval_loss, eval_dict, output_dir, t_train, t_val): eval_file = open((output_dir + "/eval_result_transfer.txt"), "a") eval_file.writelines("epoch : {} \n".format(epoch_i)) eval_file.writelines("train_loss : {} \n".format(avg_train_loss)) eval_file.writelines("train_time : {} \n".format(t_train)) eval_file.writelines("eval_loss : {} \n".format(eval_loss)) eval_file.writelines("eval_time : {} \n".format(t_val)) for key in eval_dict: eval_file.writelines(key + ": {} \n".format(eval_dict[key])) eval_file.writelines("\n") eval_file.close() with open(output_dir + "/epoch_{}.json".format(epoch_i), "w") as f: json.dump(eval_dict, f) f.close() def preprocesser(dataset, sentence_idx, tokenizer, max_seq_len): input_ids = [] attention_mask = [] token_type_ids = [] if len(sentence_idx) == 1: for j, sentence in enumerate(dataset.iloc()): sentence1 = sentence[sentence_idx[0]] if pd.isnull(sentence1): continue if j%1000==0 and not (j == 0): print("Processing {} / {} Samples".format(j, len(dataset))) # Tokenize the sentence sentence1 = tokenizer.tokenize(sentence1) # Convert tokens to ids sentence1 = tokenizer.convert_tokens_to_ids(sentence1) # Additional preprocessing ## Padding to a max_seq_len ## or Truncating to max_seq_len ## computing input_ids, attention mask and token_type_ids tokenized_dict = tokenizer.prepare_for_model( ids=sentence1, pair_ids=None, add_special_tokens=True ,padding='max_length', truncation='longest_first', max_length=max_seq_len, return_tensors='np', return_token_type_ids=True , return_attention_mask=True) input_ids.append(tokenized_dict['input_ids']) attention_mask.append(tokenized_dict['attention_mask']) token_type_ids.append(tokenized_dict['token_type_ids']) if len(sentence_idx) == 2: for j,sentence in enumerate(dataset.iloc()): sentence1 = sentence[sentence_idx[0]] sentence2 = sentence[sentence_idx[1]] if pd.isnull(sentence1) or pd.isnull(sentence2): continue if j%1000==0 and not (j == 0): print("Processing {} / {} Samples".format(j, len(dataset))) if j==3000: break # Tokenize the sentence sentence1 = tokenizer.tokenize(sentence1) sentence2 = tokenizer.tokenize(sentence2) # Convert tokens 
to ids sentence1 = tokenizer.convert_tokens_to_ids(sentence1) sentence2 = tokenizer.convert_tokens_to_ids(sentence2) # Additional preprocessing ## Padding to a max_seq_len ## or Truncating to max_seq_len ## computing input_ids, attention mask and token_type_ids tokenized_dict = tokenizer.prepare_for_model(ids=sentence1, pair_ids=sentence2, add_special_tokens=True,padding='max_length', truncation='longest_first', max_length=max_seq_len, return_tensors='np',return_token_type_ids=True , return_attention_mask=True) input_ids.append(tokenized_dict['input_ids']) attention_mask.append(tokenized_dict['attention_mask']) token_type_ids.append(tokenized_dict['token_type_ids']) # Converting to pytorch tensors input_ids = torch.tensor(input_ids) attention_mask = torch.tensor(attention_mask) token_type_ids = torch.tensor(token_type_ids) return (input_ids, attention_mask, token_type_ids) def format_time(elapsed): ''' Takes a time in seconds and returns a string hh:mm:ss ''' # Round to the nearest second. elapsed_rounded = int(round((elapsed))) # Format as hh:mm:ss return str(datetime.timedelta(seconds=elapsed_rounded)) def main(): # Arguments parser = argparse.ArgumentParser(description='A BERT pipeline with transformers library') parser.add_argument('-t_n', '--task_name', help='Name of the task', default=None, type=str) parser.add_argument('-d_t', '--do_train', help='Whether model needs to be trained yes/no', default='no', type=str) parser.add_argument('-d_e', '--do_eval', help='Whether you want to evaluate on dev set', default='no', type=str) parser.add_argument('-d_p', '--do_predict', help='Whether you want to do predictions on test set yes/no', default='no', type=str) parser.add_argument('-a_h', '--add_head', help='Whether you want to add a new head on the given BERT model yes/no', default='no', type=str) parser.add_argument('-r_h', '--remove_head', help='Whether you want to remove head and instantiante new head with random weights yes/no', default='no', type=str) parser.add_argument('-f_b', '--freeze_base', help="Whether you only want to train the classification layer yes/no", default='no', type=str) parser.add_argument('-i_r', '--is_regression', help='Whether the given task is a regression task yes/no', default='no', type=str) parser.add_argument('-g_s', '--global_seed', help='Define seed for reproducability purpose', default=0, type=int) parser.add_argument('-d_d', '--data_dir', help='Directory, where the dataset can be found', default=None, type=str) parser.add_argument('-v_f', '--vocab_file', help='Path of BERT vocabulary file', default=None, type=str) parser.add_argument('-s_t', '--source_task', help='Optional argument, to store the name of the model', default='',type=str) parser.add_argument('-b_c_f', '--bert_config_file', help='Directory, where the configuration file can be found', default=None, type=str) parser.add_argument('-p_m', '--pretrained_model', help='Path of the Pretrained model (.bin /.pth)', default=None, type=str) parser.add_argument('-m_s_l', '--max_seq_len', help='Maximum length boundary for all sequences', default=128, type=int) parser.add_argument('-t_b_s', '--train_batch_size', help='Batch size for Training', default=32, type=int) parser.add_argument('-e_b_s', '--eval_batch_size', help='Batch size for Evaluation', default=16, type=int) parser.add_argument('-l_r', '--learning_rate', help='Learning rate', default=3e-5, type=float) parser.add_argument('-n_t_e', '--num_train_epochs', help='Number of training epochs', default=1, type=int) parser.add_argument('-n_w_s', 
'--num_warmup_steps', help='Number of warmup steps', default=0, type=int) parser.add_argument('-o_d', '--output_dir', help='Directory for the output file', default=None, type=str) args = vars(parser.parse_args()) # Passing arguments to variables task_name = args['task_name'] do_train = args['do_train'] do_eval = args['do_eval'] do_predict = args['do_predict'] add_head = args['add_head'] remove_head = args['remove_head'] freeze_base = args['freeze_base'] global_seed = args['global_seed'] data_dir = args['data_dir'] vocab_file = args['vocab_file'] bert_config_file = args['bert_config_file'] pretrained_model = args['pretrained_model'] max_seq_len = args['max_seq_len'] train_batch_size = args['train_batch_size'] eval_batch_size = args['eval_batch_size'] learning_rate = args['learning_rate'] epochs = args['num_train_epochs'] num_warmup_steps = args['num_warmup_steps'] output_dir = args['output_dir'] is_regression = args['is_regression'] source_task = args['source_task'] # Setting seed random.seed(global_seed) np.random.seed(global_seed) torch.manual_seed(global_seed) torch.cuda.manual_seed_all(global_seed) # Setting up device device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") print("Selected {} as device.".format(device)) # BertConfig bert_config = BertConfig.from_json_file(bert_config_file) # Processor processor_dict = {"cola":ColaProcessor, "mrpc":MRPCProcessor, "mnli":MNLIMatchedProcessor, "qnli":QNLIProcessor, "qqp":QQPProcessor, "rte":RTEProcessor, "sst2":SST2Processor, "sts":STSBProcessor, "wnli":WNLIProcessor} processor = processor_dict[task_name](data_dir=data_dir) # Tokenizer tokenizer = BertTokenizer(vocab_file=vocab_file) # Metrics metric = Metrics(is_regression) # Training if do_train == 'yes': train_data, train_labels = processor.get_train_data() dev_data, dev_labels = processor.get_dev_data() label_list = processor.get_label_list() num_labels = len(label_list) (train_sentence_index, _), (dev_sentence_index, _), _ = processor.get_index() train_input_ids, train_attention_mask, train_token_type_ids = preprocesser( dataset=train_data, sentence_idx=train_sentence_index,tokenizer=tokenizer, max_seq_len=max_seq_len) dev_input_ids, dev_attention_mask, dev_token_type_ids = preprocesser( dataset=dev_data, sentence_idx=dev_sentence_index ,tokenizer=tokenizer, max_seq_len=max_seq_len) # Converting labels to numeric values train_labels = train_labels.values.flatten('C') dev_labels = dev_labels.values.flatten('C') if is_regression == 'yes': # TODO train_labels = train_labels.astype(float) dev_labels = dev_labels.astype(float) else: label_map = {} for i,label in enumerate(label_list): label_map[label] = i for j,label in enumerate(train_labels): train_labels[j] = label_map[label] for k,label in enumerate(dev_labels): dev_labels[k] = label_map[label] train_labels=train_labels.astype(int) dev_labels=dev_labels.astype(int) train_labels = torch.tensor(train_labels[:3000]) dev_labels = torch.tensor(dev_labels[:3000]) # Defining DataLoader train_data = TensorDataset(train_input_ids, train_attention_mask, train_labels) train_sampler = RandomSampler(train_data) train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=train_batch_size) dev_data = TensorDataset(dev_input_ids, dev_attention_mask, dev_labels) dev_sampler = SequentialSampler(dev_data) dev_dataloader = DataLoader(dev_data, sampler=dev_sampler, batch_size=eval_batch_size) # TODO Transfer-Modeling # Defining model and criterion bert_config.num_labels = num_labels model = 
BertForSequenceClassification.from_pretrained(pretrained_model, return_dict=False) print('add_head: {}'.format(add_head)) print('remove_head: {}'.format(remove_head)) if (add_head == 'yes'): model = BertWithAdditionalHead(model, num_labels) **``` if (remove_head == 'yes'): model.classifier = nn.Linear(in_features=model.classifier.in_features, out_features=num_labels) ```** # TODO Freezing weights of base model if (freeze_base == 'yes'): if (add_head == 'yes'): for params in model.base_model.parameters(): print("Freezing Parameter: {}".format(params)) params.requires_grad = False else: for params in model.bert.parameters(): print("Freezing Parameter: {}".format(params)) params.requires_grad = False # Moving model to GPU if possible if device == torch.device("cuda"): model.cuda() optimizer = AdamW(model.parameters(), lr=learning_rate) total_steps = len(train_dataloader) * epochs scheduler = get_linear_schedule_with_warmup(optimizer,num_warmup_steps=num_warmup_steps, num_training_steps=total_steps) loss_val = [] # For each epoch... for epoch_i in range(0, epochs): # ======================================== # Training # ======================================== # Perform one full pass over the training set. print("") print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs)) print('Training...') # Measure how long the training epoch takes. t0 = time.time() # Reset the total loss for this epoch. total_loss = 0 # Put the model into training mode. Don't be mislead--the call to # `train` just changes the *mode*, it doesn't *perform* the training. # `dropout` and `batchnorm` layers behave differently during training # vs. test (source: https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch) model.train() # For each batch of training data... for step, batch in enumerate(train_dataloader): # Progress update every 40 batches. if step % 2 == 0 and not step == 0: # Calculate elapsed time in minutes. elapsed = format_time(time.time() - t0) # Report progress. print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step, len(train_dataloader), elapsed)) # Unpack this training batch from our dataloader. # # As we unpack the batch, we'll also copy each tensor to the GPU using the # `to` method. # # `batch` contains three pytorch tensors: # [0]: input ids # [1]: attention masks # [2]: labels b_input_ids = batch[0].to(device) b_input_mask = batch[1].to(device) b_labels = batch[2].to(device) # Always clear any previously calculated gradients before performing a # backward pass. PyTorch doesn't do this automatically because # accumulating the gradients is "convenient while training RNNs". # (source: https://stackoverflow.com/questions/48001598/why-do-we-need-to-call-zero-grad-in-pytorch) model.zero_grad() # optimizer.zero_grad() # Perform a forward pass (evaluate the model on this training batch). # This will return the loss (rather than the model output) because we # have provided the `labels`. # The documentation for this `model` function is here: # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) # The call to `model` always returns a tuple, so we need to pull the # loss value out of the tuple. 
loss = outputs[0] # Display loss for every 10 steps print("Loss: {} in Step: {}".format(loss, step)) if step%20==0 and not step==0: break # Accumulate the training loss over all of the batches so that we can # calculate the average loss at the end. `loss` is a Tensor containing a # single value; the `.item()` function just returns the Python value # from the tensor. total_loss += loss.item() # Perform a backward pass to calculate the gradients. loss.backward() # Clip the norm of the gradients to 1.0. # This is to help prevent the "exploding gradients" problem. torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) # Update parameters and take a step using the computed gradient. # The optimizer dictates the "update rule"--how the parameters are # modified based on their gradients, the learning rate, etc. optimizer.step() # Update the learning rate. scheduler.step() # Calculate the average loss over the training data. avg_train_loss = total_loss / len(train_dataloader) # Store the loss value for plotting the learning curve. loss_val.append(avg_train_loss) t_train = format_time(time.time() - t0) print("") print(" Average training loss: {0:.2f}".format(avg_train_loss)) print(" Training epoch took: {:}".format(t_train)) # ======================================== # Validation # ======================================== # After the completion of each training epoch, measure our performance on # our validation set. print("") print("Running Validation...") t0 = time.time() # Put the model in evaluation mode--the dropout layers behave differently # during evaluation. model.eval() # Tracking variables eval_loss, eval_accuracy = 0, 0 nb_eval_steps, nb_eval_examples = 0, 0 eval_dict = metric.get_dict() tmp_eval_dict = {} # Evaluate data for one epoch for dev_step, batch in enumerate(dev_dataloader): # Add batch to GPU batch = tuple(t.to(device) for t in batch) # Unpack the inputs from our dataloader b_input_ids, b_input_mask, b_labels = batch # Telling the model not to compute or store gradients, saving memory and # speeding up validation with torch.no_grad(): # Forward pass, calculate logit predictions. # This will return the logits rather than the loss because we have # not provided labels. # token_type_ids is the same as the "segment ids", which # differentiates sentence 1 and 2 in 2-sentence tasks. # The documentation for this `model` function is here: # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) # Get the "logits" output by the model. The "logits" are the output # values prior to applying an activation function like the softmax. tmp_eval_loss, logits = outputs[:2] print("Eval Loss: {} in Step: {}".format(tmp_eval_loss, dev_step)) # Move logits and labels to CPU logits = logits.detach().cpu().numpy() logits = logits.argmax(axis=1) label_ids = b_labels.to('cpu').numpy() # Calculate the accuracy for this batch of test sentences. # tmp_eval_accuracy = flat_accuracy(logits, label_ids) tmp_eval_dict = metric.calculate_metrics(preds=logits, labels=label_ids) # Accumulate the total accuracy. 
eval_loss += tmp_eval_loss for key in eval_dict: if eval_dict[key] == None: eval_dict = copy.deepcopy(tmp_eval_dict) continue else: eval_dict[key] += tmp_eval_dict[key] # Track the number of batches nb_eval_steps += 1 # logging time if dev_step==10: break t_val = format_time(time.time() - t0) for key in eval_dict: eval_dict[key] = eval_dict[key]/nb_eval_steps eval_loss = eval_loss/nb_eval_steps log_eval(epoch_i, avg_train_loss, eval_loss, eval_dict, output_dir, t_train, t_val) # Report the final accuracy for this validation run. for key in eval_dict: print(key + ": {}".format(eval_dict[key])) print(" Validation took: {:}".format(t_val)) print("") print("Training complete!") print("Saving the model in {} ...".format(output_dir)) model.save_pretrained(output_dir+"{}_{}.bin".format(source_task,task_name)) # TODO if do_eval == 'yes' and not (do_train == 'yes'): pass # TODO if do_predict == 'yes': pass if __name__ == "__main__": main() ``` Training is based on: https://www.youtube.com/watch?v=FKlPCK1uFrc&list=PLam9sigHPGwOBuH4_4fr-XvDbe5uneaf6 Hopefully you can help me :)
05-21-2021 12:57:36
05-21-2021 12:57:36
transformers
11,815
closed
How to get sentence embeddings from TFBertForMaskedLM
Good afternoon! I am solving a text clustering problem by fine-tuning a pretrained BERT model. After reading a number of articles on the subject, I decided to fine-tune on the masked language modeling task with the TFBertForMaskedLM model. I was able to fine-tune the network on my dataset, and now I want to use the embeddings of this model to transform my dataset and feed it into the clustering algorithm. The problem is that the output of `bert_model.layers[0]` has shape `[None, max_len, emb_size]`, i.e. I get an embedding for each token, but I need an embedding for the whole document or sequence. Is there a way to do this?
05-21-2021 11:51:40
05-21-2021 11:51:40
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discusss.huggingface.co) instead? Thanks!<|||||>Resolved
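A minimal sketch of one common answer to the question above (mean-pooling the per-token embeddings into one vector per sequence, ignoring padding); it is not taken from the thread, and the base model name is only an example.

```python
# Minimal sketch, not from the thread: mean-pool token embeddings into one
# vector per sequence, ignoring padding. Model name is an example.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer(["first document", "a second, longer document"],
                   padding=True, return_tensors="tf")
hidden = model(**inputs).last_hidden_state            # (batch, max_len, emb_size)
mask = tf.cast(inputs["attention_mask"], tf.float32)[:, :, tf.newaxis]
doc_embeddings = tf.reduce_sum(hidden * mask, axis=1) / tf.reduce_sum(mask, axis=1)
```

Using the `[CLS]`/pooler output or a dedicated sentence-embedding model are common alternatives to mean pooling.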
transformers
11,814
closed
Permission error for cardiffnlp/twitter-roberta-base-emotion
@patrickvonplaten, I'm having issues accessing the `cardiffnlp/twitter-roberta-base-emotion` model using: ``` task='emotion' MODEL = f"cardiffnlp/twitter-roberta-base-{task}" tokenizer = AutoTokenizer.from_pretrained(MODEL) ``` When I substitute another task, such as `task='sentiment'`, it works fine. I have also tried using the `cardiffnlp/twitter-roberta-base-emotion` model within an NLP framework (AdaptNLP) but got a `permission denied` error. However, I did not receive a `permission denied` error when using the `sentiment ` task within this NLP framework.
05-21-2021 11:50:52
05-21-2021 11:50:52
Hey @StephenQuirolgico, could you attach a code snippet that I can copy-paste to reproduce the error? :-)<|||||>@patrickvonplaten, Not exactly sure what the issue was but it's working now. Thanks!
transformers
11,813
closed
fix roformer config doc
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes roformer config doc ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-21-2021 11:37:12
05-21-2021 11:37:12
transformers
11,812
closed
Patch recursive import
The RoFormer converter requires the `JiebaPreTokenizer` which was imported at the root of the file. This resulted in a cyclic dependency and a partially initialized module. This PR fixes the issue by importing it only when necessary and additionally tests that the `PreTrainedTokenizerFast` can be loaded as a standalone.
05-21-2021 10:21:16
05-21-2021 10:21:16
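The fix described above follows a standard pattern for breaking import cycles; the sketch below shows it generically. The module path and function name are illustrative, not the actual diff.

```python
# Generic sketch of the fix described above (import only when needed); the
# module path and function name are illustrative, not the actual diff.
def make_jieba_pre_tokenizer(vocab):
    # Importing here instead of at module level breaks the cycle: the
    # converters module can finish initializing before this function runs.
    from transformers.models.roformer.tokenization_utils import JiebaPreTokenizer

    return JiebaPreTokenizer(vocab)
```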
transformers
11,811
closed
GPT Neo for Sequence Classification
Hi, is there a way to use GPT Neo for classification tasks like BoolQ? The OpenAI GPT-2 integration in HF has `GPT2ForSequenceClassification`; is there a similar class for GPT Neo?
05-21-2021 09:42:27
05-21-2021 09:42:27
@patil-suraj this may be a good first issue? Feel free to open a PR!<|||||>Thanks @NielsRogge . Hi @patil-suraj , Is there any workaround to make it work in my local?<|||||>We could for sure add `GPTNeoForSequenceClassification`. It would be as easy as - just copying the `GPT2ForSequenceClassification` module and replacing the `GPT2` with `GPTNeo` https://github.com/huggingface/transformers/blob/afe479adb5474250215438fe27db9dc9dbbbde09/src/transformers/models/gpt2/modeling_gpt2.py#L1225-L1231 - `config.hidden_size` instead of config.n_embd https://github.com/huggingface/transformers/blob/afe479adb5474250215438fe27db9dc9dbbbde09/src/transformers/models/gpt2/modeling_gpt2.py#L1232 - remove the model_parallel logic https://github.com/huggingface/transformers/blob/afe479adb5474250215438fe27db9dc9dbbbde09/src/transformers/models/gpt2/modeling_gpt2.py#L1236-L1238 - add a test in `tests/test_modeling_gpt_neo.py` similar to https://github.com/huggingface/transformers/blob/afe479adb5474250215438fe27db9dc9dbbbde09/tests/test_modeling_gpt2.py#L357 Marking this as "Good First Issue". Feel free to take a stab if you want, I would be happy to help.<|||||>Hi Guys, is anyone working on this? I can make PR for this. I might also need to use it in the future. <|||||>Hi @bhadreshpsavani, Feel free to open a PR :)
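A rough sketch of what the checklist above amounts to; the eventual `GPTNeoForSequenceClassification` may differ in details (return types, docstrings, head initialization).

```python
# Rough sketch of the checklist above, not the final implementation.
import torch
from torch import nn
from transformers import GPTNeoModel, GPTNeoPreTrainedModel


class GPTNeoForSequenceClassificationSketch(GPTNeoPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.transformer = GPTNeoModel(config)
        # config.hidden_size instead of GPT-2's config.n_embd
        self.score = nn.Linear(config.hidden_size, config.num_labels, bias=False)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs):
        hidden_states = self.transformer(input_ids, attention_mask=attention_mask, **kwargs)[0]
        logits = self.score(hidden_states)
        # Like GPT-2, classify from the last non-padding token of each sequence.
        if self.config.pad_token_id is not None and input_ids is not None:
            sequence_lengths = torch.ne(input_ids, self.config.pad_token_id).sum(-1) - 1
        else:
            sequence_lengths = -1
        batch_size = logits.shape[0]
        pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths]
        loss = None
        if labels is not None:
            loss = nn.CrossEntropyLoss()(pooled_logits.view(-1, self.num_labels), labels.view(-1))
        return (loss, pooled_logits) if loss is not None else (pooled_logits,)
```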
transformers
11,810
closed
Feature to use the PreTrainedTokenizerFast class as a stand-alone tokenizer
# What does this PR do? In this PR, I propose to add the features needed to use `PreTrainedTokenizerFast` as a standalone tokenizer. These features include: 1. The ability to save a `PreTrainedTokenizerFast` tokenizer. Until now, it was not possible to `save_pretrained` ( with default values in the method) a `PreTrainedTokenizerFast` initialized from a folder containing only the `tokenizer.json`, `tokenizer_config.json` and `special_tokens_map.json` files. An error was previously returned because the `save_vocabulary` method was not implemented, which is normal when trying to use a fast tokenizer alone as it has no slow version. This feature allows this kind of use: ``` from transformers import PreTrainedTokenizerFast tokenizer = PreTrainedTokenizerFast.from_pretrained("SaulLu/bengali-tokenizer-v2") tokenizer.save_pretrained("./local_tokenizer") ``` 2. The ability to specify in the `config.json` file that the type of tokenizer to be loaded is `PreTrainedTokenizerFast` in order to be able to load a `PreTrainedTokenizerFast` with `AutoTokenizer`. In this PR, I also propose the modification/addition of 3 types of tests: - Modifications: This design change required the modification of common tests for tokenizers stored in the `tests/test_tokenization_common.py` file. To my knowledge, this is quite a different use as this is the first time a tokenizer will not have a slow/legacy version. The changes to `tests/test_tokenization_common.py` allow a test class derived from `TokenizerTesterMixin` to leave the `tokenizer_class` attribute set to None and to only set the `rust_tokenizer_class` attribute. In other words, the derived class will allow to test a tokenizer which would not have an associated slow/legacy version. As there were several possibilities to modify these tests, if you ever think that it is easier to develop these tests in another PR, I can remove this part from this PR. - Added : Added tests for using a standalone `PreTrainedTokenizerFast` in the `tests/test_tokenization_fast.py` file. I have created a tokenizer for this and stored it [here](https://huggingface.co/robot-test/dummy-tokenizer-fast). - Added : Added tests to load a standalone `PreTrainedTokenizerFast` via `AutoTokenizer` in the `tests/test_tokenization_auto.py ` file. I have created a tokenizer for this and stored it [here](https://huggingface.co/robot-test/dummy-tokenizer-fast-with-model-config). This PR should make it easy to use a fast tokenizer created with the `Tokenizers` library in the `Transformers` library. A typical use case would be : 1. Create a tokenizer with `Tokenizers` library ``` from tokenizers import Tokenizer from tokenizers.models import BPE from tokenizers.trainers import BpeTrainer from tokenizers.pre_tokenizers import Whitespace tokenizer = Tokenizer(BPE(unk_token="[UNK]")) trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]) tokenizer.pre_tokenizer = Whitespace() files = [...] tokenizer.train(files, trainer) ``` 2. Adapt the tokenizer to `Transformers` library At the end of this step, the tokenizer will be saved in a folder named `brand_new_tokenizer` and containing `tokenizer.json`, `tokenizer_config.json` and `special_tokens_map.json` files. a. 
Save and initialize `PreTrainedTokenizerFast` with json file ``` tokenizer.save("tokenizer.json") ``` ``` from transformers import PreTrainedTokenizerFast from transformers.tokenization_utils import AddedToken fast_tokenizer = PreTrainedTokenizerFast( tokenizer_file="tokenizer.json", model_max_length=512, padding_side="right", mask_token=AddedToken("[MASK]", lstrip=True, rstrip=False ) fast_tokenizer.save_pretrained("brand_new_tokenizer") ``` b. Initialize `PreTrainedTokenizerFast` from the tokenizer object ``` from transformers import PreTrainedTokenizerFast from transformers.tokenization_utils import AddedToken fast_tokenizer = PreTrainedTokenizerFast( tokenizer_object=tokenizer, model_max_length=512, padding_side="right", mask_token=AddedToken("[MASK]", lstrip=True, rstrip=False ) fast_tokenizer.save_pretrained("brand_new_tokenizer") ``` 3. Load tokenizer with `PreTrainedTokenizerFast` ``` from transformers import PreTrainedTokenizerFast tokenizer = PreTrainedTokenizerFast.from_pretrained("brand_new_tokenizer") ``` 4. (Temporary solution before a next PR) Create a `config.json` file in `brand_new_tokenizer` folder and initialize a tokenizer with `AutoTokenizer`. Config file: ``` { "model_type": "albert", "tokenizer_class": "PreTrainedTokenizerFast" } ``` ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("brand_new_tokenizer") ``` In one (or more) next PRs, we would still have to : - disassociate the tokenizer from the `config.json` file so that `AutoTokenizer` can load a saved tokenizer without a model - if necessary adjust the documentation (for example [here](https://huggingface.co/transformers/fast_tokenizers.html) ) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @LysandreJik <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. 
Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-21-2021 09:28:00
05-21-2021 09:28:00
Could this also be a fallback for `AutoTokenizer` when none of the children classes match?
transformers
11,809
closed
Wrong LayerNorm weight names in "bert-base-uncased" checkpoint?
## Environment info - `transformers` version: 4.4.0.dev0 - Platform: Linux-4.18.0-147.44.1.el8_1.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten (issue with "bert-base-uncased" checkpoint) ## Information Model I am using (Bert, XLNet ...): BERT(base, uncased) The problem arises when: loading "bert-base-uncased" model weights from state_dict ## To reproduce Steps to reproduce the behavior: 1. Download model checkpoint from hub: ``` git lfs install git clone https://huggingface.co/bert-base-uncased ``` 2. Load pre-trained model from checkpoint using `.from_pretrained` (this sort of works) ```python import torch from transformers import BertForPreTraining model = BertForPretraining.from_pretrained('./bert-base-uncased') """ [Output]: Some weights of BertForPreTraining were not initialized from the model checkpoint at ./bert-base-uncased and are newly initialized: ['cls.predictions.decoder.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. """ ``` 3. Re-load same weights, this time using `.load_state_dict` ```python state_dict = torch.load('./bert-base-uncased/pytorch_model.bin') model.load_state_dict(state_dict) ``` This fails and outputs: ``` RuntimeError: Error(s) in loading state_dict for BertForPreTraining: Missing key(s) in state_dict: "bert.embeddings.position_ids", "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias", "bert.encoder.layer.0.attention.output.LayerNorm.weight", "bert.encoder.layer.0.attention.output.LayerNorm.bias", "bert.encoder.layer.0.output.LayerNorm.weight", "bert.encoder.layer.0.output.LayerNorm.bias", "bert.encoder.layer.1.attention.output.LayerNorm.weight", "bert.encoder.layer.1.attention.output.LayerNorm.bias", "bert.encoder.layer.1.output.LayerNorm.weight", "bert.encoder.layer.1.output.LayerNorm.bias", "bert.encoder.layer.2.attention.output.LayerNorm.weight", "bert.encoder.layer.2.attention.output.LayerNorm.bias", "bert.encoder.layer.2.output.LayerNorm.weight", "bert.encoder.layer.2.output.LayerNorm.bias", "bert.encoder.layer.3.attention.output.LayerNorm.weight", "bert.encoder.layer.3.attention.output.LayerNorm.bias", "bert.encoder.layer.3.output.LayerNorm.weight", "bert.encoder.layer.3.output.LayerNorm.bias", "bert.encoder.layer.4.attention.output.LayerNorm.weight", "bert.encoder.layer.4.attention.output.LayerNorm.bias", "bert.encoder.layer.4.output.LayerNorm.weight", "bert.encoder.layer.4.output.LayerNorm.bias", "bert.encoder.layer.5.attention.output.LayerNorm.weight", "bert.encoder.layer.5.attention.output.LayerNorm.bias", "bert.encoder.layer.5.output.LayerNorm.weight", "bert.encoder.layer.5.output.LayerNorm.bias", "bert.encoder.layer.6.attention.output.LayerNorm.weight", "bert.encoder.layer.6.attention.output.LayerNorm.bias", "bert.encoder.layer.6.output.LayerNorm.weight", "bert.encoder.layer.6.output.LayerNorm.bias", "bert.encoder.layer.7.attention.output.LayerNorm.weight", "bert.encoder.layer.7.attention.output.LayerNorm.bias", "bert.encoder.layer.7.output.LayerNorm.weight", "bert.encoder.layer.7.output.LayerNorm.bias", "bert.encoder.layer.8.attention.output.LayerNorm.weight", "bert.encoder.layer.8.attention.output.LayerNorm.bias", "bert.encoder.layer.8.output.LayerNorm.weight", "bert.encoder.layer.8.output.LayerNorm.bias", 
"bert.encoder.layer.9.attention.output.LayerNorm.weight", "bert.encoder.layer.9.attention.output.LayerNorm.bias", "bert.encoder.layer.9.output.LayerNorm.weight", "bert.encoder.layer.9.output.LayerNorm.bias", "bert.encoder.layer.10.attention.output.LayerNorm.weight", "bert.encoder.layer.10.attention.output.LayerNorm.bias", "bert.encoder.layer.10.output.LayerNorm.weight", "bert.encoder.layer.10.output.LayerNorm.bias", "bert.encoder.layer.11.attention.output.LayerNorm.weight", "bert.encoder.layer.11.attention.output.LayerNorm.bias", "bert.encoder.layer.11.output.LayerNorm.weight", "bert.encoder.layer.11.output.LayerNorm.bias", "cls.predictions.transform.LayerNorm.weight", "cls.predictions.transform.LayerNorm.bias", "cls.predictions.decoder.bias". Unexpected key(s) in state_dict: "bert.embeddings.LayerNorm.gamma", "bert.embeddings.LayerNorm.beta", "bert.encoder.layer.0.attention.output.LayerNorm.gamma", "bert.encoder.layer.0.attention.output.LayerNorm.beta", "bert.encoder.layer.0.output.LayerNorm.gamma", "bert.encoder.layer.0.output.LayerNorm.beta", "bert.encoder.layer.1.attention.output.LayerNorm.gamma", "bert.encoder.layer.1.attention.output.LayerNorm.beta", "bert.encoder.layer.1.output.LayerNorm.gamma", "bert.encoder.layer.1.output.LayerNorm.beta", "bert.encoder.layer.2.attention.output.LayerNorm.gamma", "bert.encoder.layer.2.attention.output.LayerNorm.beta", "bert.encoder.layer.2.output.LayerNorm.gamma", "bert.encoder.layer.2.output.LayerNorm.beta", "bert.encoder.layer.3.attention.output.LayerNorm.gamma", "bert.encoder.layer.3.attention.output.LayerNorm.beta", "bert.encoder.layer.3.output.LayerNorm.gamma", "bert.encoder.layer.3.output.LayerNorm.beta", "bert.encoder.layer.4.attention.output.LayerNorm.gamma", "bert.encoder.layer.4.attention.output.LayerNorm.beta", "bert.encoder.layer.4.output.LayerNorm.gamma", "bert.encoder.layer.4.output.LayerNorm.beta", "bert.encoder.layer.5.attention.output.LayerNorm.gamma", "bert.encoder.layer.5.attention.output.LayerNorm.beta", "bert.encoder.layer.5.output.LayerNorm.gamma", "bert.encoder.layer.5.output.LayerNorm.beta", "bert.encoder.layer.6.attention.output.LayerNorm.gamma", "bert.encoder.layer.6.attention.output.LayerNorm.beta", "bert.encoder.layer.6.output.LayerNorm.gamma", "bert.encoder.layer.6.output.LayerNorm.beta", "bert.encoder.layer.7.attention.output.LayerNorm.gamma", "bert.encoder.layer.7.attention.output.LayerNorm.beta", "bert.encoder.layer.7.output.LayerNorm.gamma", "bert.encoder.layer.7.output.LayerNorm.beta", "bert.encoder.layer.8.attention.output.LayerNorm.gamma", "bert.encoder.layer.8.attention.output.LayerNorm.beta", "bert.encoder.layer.8.output.LayerNorm.gamma", "bert.encoder.layer.8.output.LayerNorm.beta", "bert.encoder.layer.9.attention.output.LayerNorm.gamma", "bert.encoder.layer.9.attention.output.LayerNorm.beta", "bert.encoder.layer.9.output.LayerNorm.gamma", "bert.encoder.layer.9.output.LayerNorm.beta", "bert.encoder.layer.10.attention.output.LayerNorm.gamma", "bert.encoder.layer.10.attention.output.LayerNorm.beta", "bert.encoder.layer.10.output.LayerNorm.gamma", "bert.encoder.layer.10.output.LayerNorm.beta", "bert.encoder.layer.11.attention.output.LayerNorm.gamma", "bert.encoder.layer.11.attention.output.LayerNorm.beta", "bert.encoder.layer.11.output.LayerNorm.gamma", "bert.encoder.layer.11.output.LayerNorm.beta", "cls.predictions.transform.LayerNorm.gamma", "cls.predictions.transform.LayerNorm.beta". 
``` ## Expected behavior Opening the checkpoint using `torch.load` then loading these weights using `model.load_state_dict` should result in matching all keys successfully (in particular here, all LayerNorm weights should be loaded). ## Solution? The issue here seems to be that the weight and bias parameters in LayerNorm were renamed from gamma and beta previously but the bert-base-uncased checkpoint wasn't updated to reflect this change. I am using a somewhat older version of transformers / pytorch but this seems to be still the case in recent versions of both libraries. The test was done using the model checkpoint from the model hub on 21 May 2021.
05-21-2021 09:19:18
05-21-2021 09:19:18
Did you try with other models? Since 4.6, it gives a similar warning for every model i try to load. For example: ```python import transformers as tr tr.AutoModel.from_pretrained("xlm-roberta-base") ``` ```bash Some weights of the model checkpoint at xlm-roberta-base were not used when initializing XLMRobertaModel: ['lm_head.layer_norm.weight', 'lm_head.dense.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'lm_head.dense.bias', 'lm_head.bias'] - This IS expected if you are initializing XLMRobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing XLMRobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). ``` <|||||>It seems like your case is a bit different. I think you are "initializing `XLMRobertaModel` from the checkpoint of a model trained on another task" (pretraining checkpoint). So you have some parameters that are not needed (those from the language modeling head) In my case, it is the layer norm parameters that have the wrong name regardless of which architecture I load :) Edit: basically what I mean is that your behaviour is expected while mine is not.<|||||>Thanks for the heads up, I guess I need to open a new issue.<|||||>I'm not sure that it is an issue. It just seems that the checkpoint on the model hub was made with the LM model which explains why there are some weights that are not used in your case since you only use the "encoder" part. 😊<|||||>Prior to 4.6, it has never shown these type of warnings when downloading with an `AutoModel`, that's why I think it may be an issue. The same line of code with 4.5 doesn't trigger the warning.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,808
closed
How to save and load model from local path in pipeline api ?
In the `from_pretrained` API, the model can be loaded from a local path by passing the `cache_dir`. However, I have not found any such parameter when using `pipeline`, for example `nlp = pipeline("fill-mask", model='distilbert-base-uncased', device=0)`. How can I save the downloaded model and load it next time from a local path, rather than the default cache path? Thanks.
05-21-2021 07:12:08
05-21-2021 07:12:08
I don't think it's currently possible, you would have to specify the local path in `model` but it won't ping the custom `cache_dir`. We would happily welcome a PR that enables that for pipelines, would you be interested in that?<|||||>> I don't think it's currently possible, you would have to specify the local path in `model` but it won't ping the custom `cache_dir`. > > We would happily welcome a PR that enables that for pipelines, would you be interested in that? Thanks for your solution. I prefer to wait for new features in the future.
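A minimal sketch (not from the original thread) of the workaround hinted at above: save the model and tokenizer to a local directory once, then point the pipeline at that directory on later runs. The path is illustrative.

```python
# Sketch of the "specify the local path in `model`" workaround; path is illustrative.
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

local_dir = "./models/distilbert-base-uncased"

# First run: download and save locally.
AutoTokenizer.from_pretrained("distilbert-base-uncased").save_pretrained(local_dir)
AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased").save_pretrained(local_dir)

# Later runs: load from the local directory instead of the default cache.
nlp = pipeline("fill-mask", model=local_dir, tokenizer=local_dir, device=0)
```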
transformers
11,807
closed
version of T5 is not reported in HuggingFace models
Hi @patrickvonplaten, @patil-suraj, the Google T5 model has two checkpoint versions, t5.0.0 and t5.1.0, and the performance of the two models is very different. In the Hugging Face models it is not specified which version is being used. Could you kindly add the details? Thanks
05-21-2021 06:55:02
05-21-2021 06:55:02
Hi there, for T5V1.1 models we explicitly mention it in the model name, for example see here https://huggingface.co/google/t5-v1_1-base the model version is mentioned in the name as `v1_1`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,806
closed
updated the original RAG implementation to be compatible with latest Pytorch-Lightning
The original RAG version was not working with PL>=1.3, especially because the DDPAccelerator class has been removed (it was used for the retriever initialization of RAG). The new version of the PL library advises us to use DDP plugins as a replacement. I also updated lightning_base.py for the new PL version. Now RAG works with the latest libraries. @patrickvonplaten @lhoestq
05-21-2021 04:40:27
05-21-2021 04:40:27
Hey @shamanez, could you run `make style`? @lhoestq - could you take a look as well?<|||||>Hey @patrickvonplaten, I did run `make style` and it changed the following files; everything is working alright. <|||||>Thanks @patrickvonplaten :)
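A hedged sketch of the accelerator-to-plugin move the PR description refers to, assuming the PyTorch Lightning 1.3 plugin API; the retriever attribute path is illustrative and the actual PR wiring may differ.

```python
# Hedged sketch of the DDPAccelerator -> DDP plugin move, assuming the
# PyTorch Lightning 1.3 plugin API; the retriever attribute path is illustrative.
import pytorch_lightning as pl
from pytorch_lightning.plugins import DDPPlugin


class CustomDDPPlugin(DDPPlugin):
    def configure_ddp(self):
        super().configure_ddp()
        # The retriever initialization that used to live in a custom
        # DDPAccelerator can be hooked in here instead.
        self.lightning_module.model.rag.retriever.init_retrieval()


trainer = pl.Trainer(gpus=2, accelerator="ddp", plugins=[CustomDDPPlugin()])
```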
transformers
11,805
closed
[Deepspeed] support `zero.Init` in `from_config`
As discussed a while ago, this PR: - adds the missing support for `zero.Init` (ZeRO-3) to `from_config` (same as we have in `from_pretrained`), which allows a huge model to be loaded in small chunks per GPU at once - adds a test @sgugger
05-21-2021 04:08:42
05-21-2021 04:08:42
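For readers unfamiliar with `zero.Init`: the PR makes `from_config` apply it automatically under ZeRO-3. The sketch below shows the manual equivalent using DeepSpeed's documented context manager, not the PR's internal wiring.

```python
# Manual illustration of what the PR automates for ZeRO-3: instantiate a model
# from a config inside zero.Init so parameters are partitioned across GPUs as
# they are created, instead of being materialized fully on every rank.
import deepspeed
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("gpt2")
with deepspeed.zero.Init():
    model = AutoModelForCausalLM.from_config(config)
```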
transformers
11,804
closed
Index out of range when doing manual testing for TFBertModel
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.6.0 - Platform: Windows 10 - Python version: 3.8.5 - PyTorch version (GPU?): - Tensorflow version (GPU?):2.4.1 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik @Rocketknight1 <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): TFBertModel, BertTokenizer The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: Want to check what input BertModel will take, so I tested with code ``` tokenizer = transformers.BertTokenizer.from_pretrained('bert-base-uncased') model = transformers.TFBertModel.from_pretrained('bert-base-uncased') model(**tokenizer(['i, ne'])) ``` This gives an error ``` File "C:\Users\liche\anaconda3\lib\site-packages\transformers\models\bert\modeling_tf_bert.py", line 887, in call outputs = self.bert( File "C:\Users\liche\anaconda3\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1012, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "C:\Users\liche\anaconda3\lib\site-packages\transformers\models\bert\modeling_tf_bert.py", line 645, in call embedding_output = self.embeddings( File "C:\Users\liche\anaconda3\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1012, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "C:\Users\liche\anaconda3\lib\site-packages\transformers\models\bert\modeling_tf_bert.py", line 199, in call position_embeds = tf.tile(input=position_embeds, multiples=(input_shape[0], 1, 1)) IndexError: list index out of range ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> I think my code should produce something, but not an error. So I check the code and find that `input_ids` for `TFBertEmbedding` is a () shape Tensor. Then tracing back to where produce it, and I finally end at function `input_processing` in modeling_tf_utils.py, and find that input_ids are a list of five Tensors, each is a shape of (). So here comes the problem. As shown in the documentation, `TFBertModel` takes input_ids as a type of `TFModelInputType`, which only accepts either Tensor or numpy array or a list of them. My tokenizer produces ` [[101, 1045, 1010, 11265, 102]]` as input_ids. If manually converting it whole to Tensor or numpy array, I will get a shape (1,5) variable and can be successfully fed to the model and get outputs. However, if directly fed the dict to the model (as the code above), since the model only accepts Tensor or numpy array type, so it will convert the list to a type it accepts. Then it doesn't correctly covert the whole list to tensor, instead, it converts each individual element, i.e. integers, as the accepted type. And after `input_processing`, not the list of Tensor is fed to `TFEmbedding` but each individual empty shape integer Tensor. And it raises the error. It can be solved by converting it to Tensor as desired before calling the model but my code is logically correct, and expect it works.
05-21-2021 03:54:20
05-21-2021 03:54:20
Hi, you're quite right with your diagnosis. The problem is that by default, the tokenizer creates a dict of Python lists, not Tensors. Our models don't really understand those list inputs, and so you get errors. You already found the solution of converting those lists to TF Tensors or Numpy arrays, but there is an easier way - just tell the Tokenizer that you want array output. Then you will get the dict you want, and the rest of your code will work correctly. Here's an updated code sample that returns a dict of Numpy arrays instead: ``` tokenizer = transformers.BertTokenizer.from_pretrained('bert-base-uncased') model = transformers.TFBertModel.from_pretrained('bert-base-uncased') model(**tokenizer(['i, ne'], return_tensors='np')) ```<|||||>Thanks, that really helps!
transformers
11,803
closed
BERT model (bert-base-chinese) consumes too much memory
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.6.0 - Platform: Linux-4.19.117.bsk.5-amd64-x86_64-with-debian-10.7 - Python version: 3.7.3 - PyTorch version (GPU?): 1.8.1+cu102 (False) - Tensorflow version (GPU?): 2.5.0 (False) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. @LysandreJik Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using Bert: When I run a code like this: ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('bert-base-chinese') bert = AutoModel.from_pretrained('bert-base-chinese') tokens = tokenizer(query, answer, padding=True, truncation=True, max_length=128, return_tensors="pt") out = bert(**tokens) ``` where query and answer are both tensors with batch size 128 however, it consumes over 10G memory in this line of code, ```out = bert(**tokens)```, anyone knows why? and in the next iteration, it consumes 20, 30, 40G memory...
05-21-2021 03:48:14
05-21-2021 03:48:14
A batch size of 128 is a lot! Are you using batch size 128 with sequence length 128?<|||||>Most likely you have a large tensor size of 128 * 128 * 768 - and also depends on what type of tensor data you put int32 / float32 / float64? Try to reduce the batch size, even to 2.<|||||>> A batch size of 128 is a lot! Are you using batch size 128 with sequence length 128? yes, maybe batch size 128 is large, but I don't know why the memory cost becomes larger in each iteration. I mean in the first iteration(with batch size 128) it consumes 10G, and when the process goes to the second iteration(still batch size 128), it consumes 20G, and 30G,40G,.....<|||||>> Most likely you have a large tensor size of 128 * 128 * 768 - and also depends on what type of tensor data you put int32 / float32 / float64? Try to reduce the batch size, even to 2. The input tensor size actually is 128*128? I am just confused why the memory cost rises in each iteration.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
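The thread does not reach a diagnosis; one common cause of memory growing on every iteration during pure inference is that autograd graphs are kept alive. A hedged sketch, assuming only inference is intended:

```python
# Hedged sketch (the thread did not confirm the cause): wrap pure inference in
# torch.no_grad() so autograd graphs are not built and retained each iteration.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
bert = AutoModel.from_pretrained("bert-base-chinese")
bert.eval()

query = ["示例问题"] * 8    # placeholder batch; the issue used batch size 128
answer = ["示例答案"] * 8

with torch.no_grad():
    tokens = tokenizer(query, answer, padding=True, truncation=True,
                       max_length=128, return_tensors="pt")
    out = bert(**tokens)
embeddings = out.last_hidden_state  # safe to keep around; no graph is attached
```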
transformers
11,802
closed
Text Generation, adding random words, weird linebreaks & symbols at random.
Here's the code I'm using to generate text. `sentence= tokenizer.encode(kw, return_tensors='pt') output = model.generate(sentence, max_length = 500, no_repeat_ngram_size = 2, do_sample=False) text.append(tokenizer.decode(output[0], skip_special_tokens = True))` The issue is that the output often comes like this: `What are the benefits of using collagen? ,,, , , ,, , __________________, __________ The skin that has collagen has a higher level of hydrophilic (water-loving) proteins. ` or like this: `Yes, collagen is a natural skin-repairing substance. It is also a powerful anti-inflammatory and antiaging agent. , and, are the most common types of collagen found in skin.` As you can see, at the start it wrote ", and," at random and it happens EXTREMELY often, nearly in every single text generation I did. I don't know if it's related to my settings or not but I'd appreciate all the help you guys can give. I want to get my text to be as human-readable as possible & up to 100-500 words each input.
05-21-2021 03:12:02
05-21-2021 03:12:02
Hi! Could you provide more information, especially regarding which model and tokenizer you're using? Also, you might have more luck asking on the [forum](https://discusss.huggingface.co), as GitHub issues are for bugs/feature requests. Thanks!<|||||>> Hi! Could you provide more information, especially regarding which model and tokenizer you're using? Also, you might have more luck asking on the [forum](https://discusss.huggingface.co), as GitHub issues are for bugs/feature requests. > > Thanks! oh sorry forgot to include them. tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') model = GPT2LMHeadModel.from_pretrained('gpt2-medium' , pad_token_id = tokenizer.eos_token_id)<|||||>> Hi! Could you provide more information, especially regarding which model and tokenizer you're using? Also, you might have more luck asking on the [forum](https://discusss.huggingface.co), as GitHub issues are for bugs/feature requests. > > Thanks! if possible can you remove my account on hold on the forum? wont allow me to ask it there. "steelhard" is the account name.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
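The thread above ends without a resolution. Purely as a commonly tried adjustment (not advice from the maintainers), sampling-based decoding often reduces the degenerate punctuation runs that greedy search with `do_sample=False` can produce; a sketch:

```python
# Commonly tried adjustment, sketched here: nucleus sampling with a temperature
# instead of pure greedy decoding. Prompt and settings are illustrative.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium", pad_token_id=tokenizer.eos_token_id)

inputs = tokenizer.encode("What are the benefits of using collagen?", return_tensors="pt")
output = model.generate(inputs, max_length=300, do_sample=True, top_p=0.92,
                        temperature=0.8, no_repeat_ngram_size=2)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```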
transformers
11,801
closed
[examples] run_clm re-processes dataset on every run
developing with `run_clm` is difficult since its startup is very slow - it rebuilds the dataset on each start. @VictorSanh says it started to do that recently... I think it's because it has to chunk the existing dataset into smaller pieces, it's a slow start everytime and it doesn't save these results. So the original dataset has already been preprocessed, but it's not good enough for `run_clm.py`. So I'm thinking perhaps for dev needs we need a dataset with short <512 entries? and then it could use it w/o additional preprocessing? But I could be wrong I haven't investigated the reason for the slow start. to reproduce: ``` USE_TF=0 python examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 \ --dataset_name "stas/openwebtext-10k" \ --output_dir output_dir \ --overwrite_output_dir \ --do_train \ --do_eval \ --max_train_samples 1000 \ --max_eval_samples 200 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --num_train_epochs 1 \ --warmup_steps 8 \ --block_size 64 \ --fp16 \ --report_to none ``` So look at the tqdm bars before training starts to see the symptom. And this is already a very truncated dataset. @VictorSanh, @sgugger
05-20-2021 22:49:10
05-20-2021 22:49:10
The dataset caching is all relying on the datasets library, so the issue should probably be tracked here. Especially if this is a new change: since there was no change I'm aware of in `run_clm` recently it may be coming from a change there.<|||||>Thank you! I will ask on the `datasets` side.<|||||>you scooped me Sylvain. I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335): > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b/cache-c6aefe81ca4e5152.arrow'}], 'validation': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b/cache-97cf4c813e6469c6.arrow'}]}` while the same command with the latest version of datasets (actually starting at `1.6.0`) gives: > `{'train': [], 'validation': []}` Does it ring any bell @lhoestq ?<|||||>OK, moved this to `datasets` https://github.com/huggingface/datasets/issues/2387 <|||||>Reopening and bringing it back here: According to this https://github.com/huggingface/datasets/issues/2387#issuecomment-845781874 we need to change examples to add `keep_in_memory=False` - load_dataset otherwise there is no caching. here: https://github.com/huggingface/transformers/blob/223943872e8c9c3fc11db3c6e93da07f5177423f/examples/pytorch/language-modeling/run_clm.py#L233 <|||||>ok, `datasets` reverted the in-memory-datasets by default in master, so this is no longer a problem.
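A sketch of the follow-up change referenced above, per the linked `datasets` comment: pass `keep_in_memory=False` to `load_dataset` so the preprocessing cache is written to disk again and survives between runs.

```python
# Sketch of the suggested example change: keep the dataset on disk so the
# tokenization/grouping cache is reused on the next run.
from datasets import load_dataset

raw_datasets = load_dataset("stas/openwebtext-10k", keep_in_memory=False)
```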
transformers
11,800
closed
CamemBert Tokenizer AttributeError: 'NoneType' object has no attribute 'tokenize'
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.1 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.8 - PyTorch version (GPU?): 1.8.1+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: No ### Who can help Library: - tokenizers: @LysandreJik ## Information Model I am using camemBert https://huggingface.co/camembert-base. The problem arises when using: * [x ] the official example scripts: ``` python from transformers import CamembertModel, CamembertTokenizer # You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large". tokenizer = CamembertTokenizer.from_pretrained("camembert-base") camembert = CamembertModel.from_pretrained("camembert-base") camembert.eval() # disable dropout (or leave in train mode to finetune) import torch # Tokenize in sub-words with SentencePiece tokenized_sentence = tokenizer.tokenize("J'aime le camembert !") ``` ## To reproduce Steps to reproduce the behavior: 1. Install transformers 2. Run code I get a AttributeError: 'NoneType' object has no attribute 'tokenize' Error as the tokenizer is None when I load from pre trained.
05-20-2021 20:52:09
05-20-2021 20:52:09
Hi! Could you try installing `sentencepiece` to see if that solves the problem?<|||||>I got an error when sentencepiece wasn't installed and after installing it returned None. Trying it again now I don't see the error anymore though so I'll close the issue 🙂<|||||>If this was on colab it's possible that you needed the runtime to restart!
transformers
11,799
closed
ImportError: tokenizers>=0.10.1,<0.11 is required for a normal functioning of this module, but found tokenizers==0.8.1rc1.
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.6.1 - Platform: Linux Mint Tricia 19.3 (ubuntu 18.04) - Python version: 3.8.8 - PyTorch version (GPU?): 1.7.0, gpu yes - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help tokenizer: @LysandreJik ## Information Model I am using (Bert, XLNet ...): GPT2 The problem arises when using: * [ ] my own modified scripts: (give details below) * [ ] my own task or dataset: (give details below) text generation After upgrade to 4.6.1 (same error in 4.6.0), I have an error when I load tokenizer. ### What I have tried I searched for a similar issue and thought that this is a possible duplicate of [this issue](https://github.com/huggingface/transformers/issues/11713), but there was no change after I apply the solution. I uninstalled transformers and tokenizers package, reinstall those, and still there is the same issue. ## To reproduce Steps to reproduce the behavior: 1. Import tokenizer (like below) ``` from transformers import (PreTrainedTokenizerFast, GPT2Tokenizer,) ``` Error message ``` --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-5-dc540cd053e1> in <module> ----> 1 from transformers import (PreTrainedTokenizerFast, 2 PreTrainedTokenizer, 3 AutoTokenizer, 4 GPT2Tokenizer,) 5 /opt/conda/lib/python3.8/site-packages/transformers/__init__.py in <module> 41 42 # Check the dependencies satisfy the minimal versions required. ---> 43 from . import dependency_versions_check 44 from .file_utils import ( 45 _BaseLazyModule, /opt/conda/lib/python3.8/site-packages/transformers/dependency_versions_check.py in <module> 39 continue # not required, check version only if installed 40 ---> 41 require_version_core(deps[pkg]) 42 else: 43 raise ValueError(f"can't find {pkg} in {deps.keys()}, check dependency_versions_table.py") /opt/conda/lib/python3.8/site-packages/transformers/utils/versions.py in require_version_core(requirement) 118 """require_version wrapper which emits a core-specific hint on failure""" 119 hint = "Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git master" --> 120 return require_version(requirement, hint) 121 122 /opt/conda/lib/python3.8/site-packages/transformers/utils/versions.py in require_version(requirement, hint) 112 if want_ver is not None: 113 for op, want_ver in wanted.items(): --> 114 _compare_versions(op, got_ver, want_ver, requirement, pkg, hint) 115 116 /opt/conda/lib/python3.8/site-packages/transformers/utils/versions.py in _compare_versions(op, got_ver, want_ver, requirement, pkg, hint) 47 raise ValueError("want_ver is None") 48 if not ops[op](version.parse(got_ver), version.parse(want_ver)): ---> 49 raise ImportError( 50 f"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}" 51 ) ImportError: tokenizers>=0.10.1,<0.11 is required for a normal functioning of this module, but found tokenizers==0.8.1rc1. Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git master ``` ## Expected behavior Just work like before!
05-20-2021 20:37:44
05-20-2021 20:37:44
Hello, the error is pretty straightforward: your Python environment has the wrong `tokenizers` version. I would suggest you reinstall tokenizers *while making sure you are in the same environment as your python runtime*: `pip install -U tokenizers`<|||||>`pip install -U tokenizers` does not solve the problem. And after several trials, I could not help recreating the docker container to make this work. I guess It was due to creating a new conda environment inside of a docker container. Thank you for your reply! Will close the issue. <|||||>I have this identical issue. I am running python under WSL2, which is a docker container, I gather. <|||||>I'm having the same issue, using conda inside docker since I need to create a jupyter notebook server<|||||>I have the same problem. I fixed it by update python's version from 3.6 to 3.9. <|||||>I try to run `pip uninstall tokenizers` for 2 times, and solved. <img width="703" alt="image" src="https://user-images.githubusercontent.com/30597946/174764042-f25d97fc-45c5-4000-8f4f-7b94e65302d3.png"> <|||||>Re-install transformers with a proper version will be ok. I solve it by the command: `pip install transformers==4.11.3`.<|||||>it works on python 3.8 when transformers==4.11.3. So using `pip install transformers==4.11.3` for the proper installation version. 3.8 and above will need to upgrade the transformers to 4.2x.xx <|||||>More info at sister thread: https://github.com/CompVis/latent-diffusion/issues/207
transformers
11,798
closed
[Examples] create model with custom config on the fly
This PR is addressing a need to: 1. be able to quickly whip up a model of any desired size for the big-science experiments. 2. be able to activate gradient checkpointing (later addition) We already have the functionality to create a model instead of using a pretrained one, but there was no way to control its config - it would choose the defaults of the Config object, which is very doubtful is of any practical use. This PR: 1. adds a new `PretrainedConfig` method: `update_from_string` so one can update from a string. ``` config.update_from_string("n_embd=10,n_head=5,scale_attn_weights=false,summary_type=super_cls_index") ``` plus test. 2. adds a new `ModelArguments` arg: `--config_overrides="n_embd=1024,n_head=16,n_layer=48,n_positions=102"` which overrides the default config 3. auto-logs the resulting model size e.g.: ``` Training new model from scratch - Total size=626.69M params ``` Usage: ``` PYTHONPATH=src python examples/pytorch/language-modeling/run_clm.py --dataset_name \ "stas/openwebtext-10k" --output_dir output_dir --overwrite_output_dir --do_train --do_eval \ --max_train_samples 10000 --max_eval_samples 1000 --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 --num_train_epochs 1 --warmup_steps 8 --block_size 64 --fp16 \ --report_to none --model_type gpt2 --tokenizer_name gpt2 \ --config_overrides "n_embd=1024,n_head=16,n_layer=48,n_positions=1024" ``` Only `run_clm.py` for this experiment. @sgugger, @LysandreJik
05-20-2021 20:26:56
05-20-2021 20:26:56
So now can we activate activation checkpointing with: `--config_overrides "gradient_checkpointing=true,use_cache=False"` 1. Should we document this somewhere? maybe `examples/pytorch/README.md` once we port this to all other examples? 2. But it's only available for non-pretrained model, should I make `config_overrides` available to any model? i.e. this change: ``` --- a/examples/pytorch/language-modeling/run_clm.py +++ b/examples/pytorch/language-modeling/run_clm.py @@ -286,9 +286,10 @@ def main(): else: config = CONFIG_MAPPING[model_args.model_type]() logger.warning("You are instantiating a new config instance from scratch.") - if model_args.config_overrides is not None: - logger.info(f"Overriding config: {model_args.config_overrides}") - config.update_from_string(model_args.config_overrides) + + if model_args.config_overrides is not None: + logger.info(f"Overriding config: {model_args.config_overrides}") + config.update_from_string(model_args.config_overrides) ``` It could invite problems for config sections which have to match the pre-trained weights, but otherwise should give users more flexibility. e.g. allow turning caching off, grad checkpointing on and perhaps do other things that aren't impacted by pretrained weights.<|||||>This option does not make any sense for pretrained models: in the best case the user will get an error of weights shape mismatch, in the worst case it will just silently yield crappy results. Thus, the option does not make sense IMO for scripts not used for training models from scratch, have to check manually but I think it's just the scripts for language-modeling which offer that option, so in this case only document the option in their README.<|||||>> [...] have to check manually but I think it's just the scripts for language-modeling which offer that option, so in this case only document the option in their README. by "that option" do you mean "gradient_checkpointing"? If so it's available in 30 models out of 59: ``` $ grep -Irl gradient_checkpointing src/transformers/models/*/modeling* | wc -l 30 $ ls -l src/transformers/models/*/modeling* | egrep -v '(flax|tf)'| wc -l 59 ```<|||||>No I meant the option of training from scratch. I did double check, and it's only in the LM scripts.<|||||>Right, and I was talking about documenting ` --config_overrides "gradient_checkpointing=true,use_cache=False"` which could apply to any model. (but is not coded to support that at the moment). And you did mention elsewhere that this feature is on a todo list.<|||||>Yes, it will be a regular training argument in the future.<|||||>Excellent point, @LysandreJik. I did both. Though warning not, assert yes.
transformers
11,797
closed
[examples] add desc to `dataset.map` to improve tqdm bars
https://github.com/huggingface/datasets/pull/2374 has been merged - we should deploy this feature in our examples, so the user is told what is being processed and what each tqdm bar is for. Currently we get a bunch of bars that are meaningless and hard to interpret. See also: https://github.com/huggingface/datasets/issues/2330 The only issue is how to depend on the `datasets` dev version; we might have to wait for a new `datasets` release 1.6.3 to be able to merge such a PR. A new release should be made in the next few days I'm being told, so a PR can be made.
05-20-2021 19:27:57
05-20-2021 19:27:57
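A sketch of what deploying the feature in the examples could look like; the `desc` argument is the one added by the linked `datasets` PR, while the dataset, tokenizer and column names here are only illustrative.

```python
# Sketch of using the new `desc` argument so each tqdm bar says what it is doing;
# dataset, tokenizer and column names are illustrative.
from datasets import load_dataset
from transformers import AutoTokenizer

raw_datasets = load_dataset("stas/openwebtext-10k")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

tokenized_datasets = raw_datasets.map(
    lambda examples: tokenizer(examples["text"]),
    batched=True,
    remove_columns=["text"],
    desc="Running tokenizer on dataset",
)
```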
transformers
11,796
closed
[trainer] multi-node tweaks
As I'm using Trainer in a multi-node setup, I will use this issue to post the things that could be improved for that type of env. 1. Repeated logging for non-rank-0 process rank-0 machine: I gathered all these that get repeated 16 times on a 16 nodes machine: ``` [INFO|trainer.py:1145] 2021-05-20 20:16:39,037 >> ***** Running training ***** [INFO|trainer.py:1146] 2021-05-20 20:16:39,037 >> Num examples = 1000 [INFO|trainer.py:1147] 2021-05-20 20:16:39,037 >> Num Epochs = 1 [INFO|trainer.py:1148] 2021-05-20 20:16:39,037 >> Instantaneous batch size per device = 4 [INFO|trainer.py:1149] 2021-05-20 20:16:39,037 >> Total train batch size (w. parallel, distributed & accumulation) = 256 [INFO|trainer.py:1150] 2021-05-20 20:16:39,037 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1151] 2021-05-20 20:16:39,037 >> Total optimization steps = 4 100%|██████████| 4/4 [00:02<00:00, 1.95it/s][INFO|trainer.py:1341] 2021-05-20 20:16:41,214 >> {'train_runtime': 2.185, 'train_samples_per_second': 1.831, 'epoch': 1.0} Training completed. Do not forget to share your model on huggingface.co/models =) INFO:__main__:*** Evaluate *** [INFO|trainer.py:2115] 2021-05-20 20:16:41,690 >> ***** Running Evaluation ***** [INFO|trainer.py:2117] 2021-05-20 20:16:41,690 >> Num examples = 200 [INFO|trainer.py:2120] 2021-05-20 20:16:41,690 >> Batch size = 4 ``` Probably should check not only rank of the process, but also the rank of the machine, right? @sgugger
05-20-2021 19:03:14
05-20-2021 19:03:14
Mmm I guess there should be some argument controlling this: when I'm using multi-node I launch the command on two separate machines and have two separate terminals, so having both output the logs is helpful to know where each is at.<|||||>Absolutely agree for a few nodes! This becomes an issue on 64+ nodes ;) Let's have a flag that by default logs on each node, and can be turned off if wanted. This is all new, so I'm first just sharing the things that can be improved. One other thing to figure out is PyTorch error handling: when the launcher crashes it generates 64 interleaved tracebacks - impossible to understand what went wrong half the time... But that's not trainer-related...
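A rough sketch of how such a switch could be gated in user code today, using helpers the `Trainer` already exposes; the `log_on_each_node` flag name is only illustrative here, not an existing `TrainingArguments` option in this thread's version:

```python
from transformers import Trainer

def should_log(trainer: Trainer, log_on_each_node: bool = True) -> bool:
    # Hypothetical flag discussed above: log on the main process of every node,
    # or only on the main process of the whole job.
    if log_on_each_node:
        return trainer.is_local_process_zero()  # fine for a handful of nodes
    return trainer.is_world_process_zero()      # better for 64+ nodes
```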
transformers
11,795
closed
get_length_grouped_indices() uses slow list concat
Hi, get_length_grouped_indices() in LengthGroupedSampler and DistributedLengthGroupedSampler is prohibitively slow for large number of megabatches (in my case takes hours for ~270k megabatches with 100 items each) due to slow list concatenation with sum(megabatches, []). Concatenating the lists with sum() may be repeatedly reallocating memory with each successive concatenation (similar to performance issues with string concatenation). [item for sublist in megabatches for item in sublist] approach appears to significantly improve speed for large megabatch number, especially for megabatches with larger number of items. For example: # 50,000 megabatches with 3 items each: megabatches = [[1,2,3] for _ in range(50_000)] %timeit [item for sublist in megabatches for item in sublist]; 3.72 ms ± 75.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) %timeit sum(megabatches, []); 7.66 s ± 31.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ------------------------------------------ # 100,000 megabatches with 3 items each: megabatches = [[1,2,3] for _ in range(100_000)] %timeit [item for sublist in megabatches for item in sublist]; 8.03 ms ± 14.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) %timeit sum(megabatches, []); 29.6 s ± 36.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ------------------------------------------ # 100,000 megabatches with 100 items each: megabatches = [list(range(100)) for _ in range(100_000)] %timeit [item for sublist in megabatches for item in sublist]; 208 ms ± 44.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit -r1 -n1 sum(megabatches, []); 44min 3s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each) Thank you for your wonderful work and consideration of this edit. @sgugger
05-20-2021 18:25:28
05-20-2021 18:25:28
Thanks a lot for looking at this optimization. It does look like a nice speedup! Do you want to open a PR with the suggested changes since you're the one who designed it?
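For completeness, the standard library's `itertools.chain.from_iterable` is another linear-time way to flatten the megabatches; a quick sketch (this is just an alternative idiom, not necessarily what the eventual PR uses):

```python
from itertools import chain

megabatches = [list(range(100)) for _ in range(100_000)]

# List comprehension, as proposed above.
flat_a = [item for sublist in megabatches for item in sublist]

# Standard-library alternative with the same linear-time behaviour.
flat_b = list(chain.from_iterable(megabatches))

assert flat_a == flat_b
```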
transformers
11,794
closed
Bug in TokenClassificationPipeline
## Environment info - `transformers` version: 4.5.1 - Platform: Linux-5.4.0-42-generic-x86_64-with-debian-buster-sid - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @Narsil , @LysandreJik ## Information The problem is in TokenClassificationPipeline, in and [this](https://github.com/huggingface/transformers/blob/f4a0d6ff867e8a82a33d7a653e7d45372a463271/src/transformers/pipelines/token_classification.py#L269) and [that line](https://github.com/huggingface/transformers/blob/f4a0d6ff867e8a82a33d7a653e7d45372a463271/src/transformers/pipelines/token_classification.py#L273) . Here the aim is to determine if the original word for that token is tokenized into multiple subwords or just a single one. The problem is some tokenizers (such as Roberta or GPT-2) tokenize the whitespace together with the subsequent word which causes a mismatch between the original word and the reconstructed word. Since we reconstruct from the tokenized input ids, a single-word token also includes a whitespace (unles it is not the first word in the sequence). ## To reproduce Please, consider the following: ```python from transformers import AutoTokenizer word_ref = "Car" tokenizer = AutoTokenizer.from_pretrained("roberta-base") word = tokenizer.tokenize(" " + word_ref)[0] print(word) >>> ĠCar is_subword = len(word_ref) != len(word) print(is_subword) >>> True ``` The problem I simulated occurs in my custom pipeline that inherits from TokenClassificationPipeline when I use Roberta tokenizer. I checked the tests for that pipeline and observed that a small Bert tokenizer is used. This can explain why this bug could not be catched as the Bert model tokenizes the spaces differently. If I recall correctly, it splits the words on the whitespaces, then tokenizes the words. In any case, the following result shows why Bert tokenizer does not suffer from the mentioned problem: ```python from transformers import AutoTokenizer word_ref = "Car" tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") word = tokenizer.tokenize(" " + word_ref)[0] print(word) >>> Car is_subword = len(word_ref) != len(word) print(is_subword) >>> False ``` Finally, this problem might be affecting other pipelines (or inference scripts etc.) that depends on the reconstructed tokens as well.
05-20-2021 18:21:38
05-20-2021 18:21:38
That is true. `len(word_ref) != len(word)` is a heuristic that will work on tokenizers that use BPE `continuing_subword_prefix` concept. The reality is that there is no consistent notion of a "word" within arbitrary tokenizers. The `continuing_subword_prefix` in BPE that *can* be used makes the concept explicit but it's not in the case of GPT2 (and roberta-large) as they are supposed to be ByteLevel. (it is set to ''). Because of that, there cannot be any consistent manner to check for "is_subword" for these tokenizers. Let's take an example "Hello thereHello" with `roberta-large`. -> [ 0 31414 89 31414 2] We have twice the same token (31414), one is not a subword, the second one is. So there can't be a perfect output in any case. Token 89 is really " there" the space isn't treated that differently any other characters. Is that clearer on why it fails in this use-case ? That being said, if we can figure out a heuristic that works for both, it would be better indeed. <|||||>When I had a similar problem, I resolved it by checking the character before that word in the original string. In this case, if there is a space, we can include it to the word, tokenize it and join them into a single string. The `word_ref` would be changed like the follows: ```python if start_ind > 0 and sentence[start_ind-1] == " ": decoded_word_ref = "".join(self.tokenizer.tokenize(sentence[start_ind-1: end_ind])) else: decoded_word_ref = sentence[start_ind:end_ind] ``` (I am replacing the identifier `word_ref` with `decoded_word_ref` to emphasize that it is reconstructed from the token ids and may not correspond to a valid substring in the original text) Therefore, the related code segment would be updated as follows: ```python if start_ind > 0 and sentence[start_ind-1] == " ": decoded_word_ref = "".join(self.tokenizer.tokenize(sentence[start_ind-1: end_ind])) else: decoded_word_ref = sentence[start_ind:end_ind] word = self.tokenizer.convert_ids_to_tokens([int(input_ids[idx])])[0] is_subword = len(decoded_word_ref) != len(word) if int(input_ids[idx]) == self.tokenizer.unk_token_id: word = decoded_word_ref is_subword = False ``` Notice that, even if there were multiple whitespaces before the word, they should not cause an issue since each space would be tokenized as a separate token except the last one. Alternatively, we might use the decoded_word_ref only for determining the value of `is_subword`. After that, we can use the `word_ref` as before. <|||||>That wouldn't work because some byte-level tokenizers will use space as a postfix, not prefix for "word-separation". This is where we would like to avoid many if conditions for every possible tokenizer. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,793
closed
[trainer] the noisy tensorflow loaded when asked explicitly not to load it
Unrequested TF loading and its noisy disrespectful logging is back it seems: ``` USE_TF=0 python examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path $MODEL \ --dataset_name $DATASET \ --output_dir output_dir \ --overwrite_output_dir \ --do_train \ --do_eval \ --max_train_samples 1000 \ --max_eval_samples 200 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --num_train_epochs 1 \ --warmup_steps 8 \ --block_size 64 \ --fp16 \ --report_to none ``` ``` r10i6n8: 2021-05-20 19:52:04.357654: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: [...] r10i6n8: 2021-05-20 19:52:04.357677: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. ``` I am testing multinode setups so am I'm getting hundreds of these! 256 gpus - 512 of these warnings! How can we make sure that `USE_TF=0` is respected and `tensorflow` doesn't get loaded - I can't uninstall it since it's a shared environment. Thank you! @sgugger
05-20-2021 17:58:55
05-20-2021 17:58:55
Indeed, the `import Trainer` seems to be importing TensorFlow again. Let me try to see if I can remove that.<|||||>I messed up my branch and pushed directly on master by mistake, but I don't think it needs reverting and doing a PR since it's a short fix. Short story is that I locally have no tensorflow import with `USE_TF=0` after [this commit](https://github.com/huggingface/transformers/commit/b8697bc62216b9e2ca60811626c6a6ca992b0d34). Can you confirm?<|||||>I confirm. Apologies I missed that request. Thank you for fixing it, @sgugger!
transformers
11,792
closed
T5EncoderModel slower in half-precision
Hi, I am encountering troubles in understanding why the half-precision version of the T5Encoder infers slower than the full-precision one. ## To reproduce Starting with the `half`-precision. ```python import torch from transformers import T5EncoderModel, T5Tokenizer import time device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') seq="Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet." seq = ' '.join([seq] * 50) model = T5EncoderModel.from_pretrained('t5-large', cache_dir='model').to(device) model = model.half() model = model.eval() tokenizer = T5Tokenizer.from_pretrained('t5-large', cache_dir='model') token_timer = time.time() tokens = tokenizer.batch_encode_plus(seq, add_special_tokens=True, padding='longest', return_tensors='pt') end_token = time.time() input_ids = tokens['input_ids'].to(device) attention_mask = tokens['attention_mask'].to(device) model_timer = time.time() with torch.no_grad(): ignored = model(input_ids=input_ids,attention_mask=attention_mask) end_timer = time.time() print(f'Full process:\t{end_timer - token_timer}') print(f'Model only:\t{end_timer - model_timer}') print(f'Token only:\t{end_token - token_timer}') ``` To use the `full`-precision, just drop the `model = model.half()` line. ## The output The `half`-precision: ``` Full process: 10.929700136184692 Model only: 3.4116599559783936 Token only: 7.5169923305511475 ``` The `full`-precision: ``` Full process: 7.794144153594971 Model only: 0.23117947578430176 Token only: 7.562213897705078 ``` First, I would expect that the half-precision model is faster but secondly what is more confusing to me is the time difference in `Model only`, which measures the time needed to execute the the `torch.no_grad()`-part. Is there an implementation problem in the code snippet?
05-20-2021 16:51:49
05-20-2021 16:51:49
@stas00 or @sgugger can chime in if I'm wrong, but I believe half-precision performance improvement is strongly tied to hardware: even hardware that handles half-precision like Pascal GPUs may not see a speed increase with FP16 compared to FP32, and I believe it can have the opposite effect. Could you share your setup? You can check this thread for a similar question: https://github.com/huggingface/transformers/issues/9179<|||||>I don't think we have resolved this conundrum in https://github.com/huggingface/transformers/issues/9179 - it got closed w/o a resolution. Running your test on 2 cards: ``` rtx-3090 fp16: Full process: 7.10092830657959 Model only: 1.9809677600860596 Token only: 5.1195228099823 fp32: Full process: 5.614374399185181 Model only: 0.6039936542510986 Token only: 5.009963750839233 gtx-1070 fp16: Full process: 17.52509307861328 Model only: 12.342488050460815 Token only: 5.182169198989868 fp32: Full process: 5.362875461578369 Model only: 0.3538181781768799 Token only: 5.008580923080444 ``` This investigation most likely will require using the torch profiler to get to the root of it.<|||||>Thank you for answering and referring to the other issue; since it seems to be an ongoing mystery, I will close this issue w/o resolution for now.
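For anyone who wants to pick this up, a minimal sketch of the `torch.profiler` run suggested above (PyTorch >= 1.8.1, on a CUDA machine; the model and input mirror the snippet in the issue):

```python
import torch
from torch.profiler import ProfilerActivity, profile
from transformers import T5EncoderModel, T5Tokenizer

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = T5EncoderModel.from_pretrained("t5-large").to(device).half().eval()
tokenizer = T5Tokenizer.from_pretrained("t5-large")

tokens = tokenizer("Lorem ipsum dolor sit amet " * 200, return_tensors="pt").to(device)

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    with torch.no_grad():
        model(**tokens)

# Sort kernels by GPU time to see where the fp16 path spends its time.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=20))
```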
transformers
11,791
closed
LongformerForSequenceClassification: global_attention_mask=None
Hi, my question is: what happens if `global_attention_mask` in `LongformerForSequenceClassification` is not provided? Does that mean only local attention is used in this case? I haven't found anything about it in the docs. Thanks in advance!
05-20-2021 15:42:50
05-20-2021 15:42:50
Hello, thanks for opening an issue! We try to keep the GitHub issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
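For readers landing here, a short sketch of how a `global_attention_mask` is usually passed explicitly (global attention on the first/CLS token, as commonly done for classification); what happens when it is omitted is best checked in the model documentation for your version:

```python
import torch
from transformers import LongformerForSequenceClassification, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerForSequenceClassification.from_pretrained("allenai/longformer-base-4096")

inputs = tokenizer("A long document ...", return_tensors="pt")

# 0 = local (sliding-window) attention, 1 = global attention.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # global attention on the CLS token

outputs = model(**inputs, global_attention_mask=global_attention_mask)
print(outputs.logits.shape)
```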
transformers
11,790
closed
facebook/mbart-large-50-one-to-many-mmt fails on Swahili
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.6.0 - Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.1+cu101 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patil-suraj @patrickvonplaten <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - facebook/mbart-large-50-one-to-many-mmt - facebook/mbart-large-50-many-to-many-mmt Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run the below code ``` from transformers import MBartForConditionalGeneration, MBart50TokenizerFast` model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है" article_en = "Let's try this again..." article_sw = 'Mzozo wa Israeli na Palestina:Marekani imekuwa ikiilinda Israel na kuifanya kutogoopa kufanya lolote' # translate Hindi to Swahili tokenizer.src_lang = "hi_IN" encoded_hi = tokenizer(article_hi, return_tensors="pt") generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["sw_KE"]) output = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) print(output) # translate English to Swahili tokenizer.src_lang = "en_XX" encoded_en = tokenizer(article_en, return_tensors="pt") generated_tokens = model.generate(**encoded_en, forced_bos_token_id=tokenizer.lang_code_to_id["sw_KE"]) output = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) print(output) #translate Swahili to English tokenizer.src_lang = 'sw_KE' encoded_sw = tokenizer(article_sw, return_tensors="pt") generated_tokens = model.generate(**encoded_sw, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"]) output = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) print(output) ``` ## Expected behavior Output is: ['U. N. head says there is no military solution in Syria'] ["! 
Let's try this again..."] ['The Israeli Prime Minister in Palestine: He visited Israel and visited Israel on any day of the week. Read more'] The translation from Swahili to English works, but the translations to Swahili all end up in English.
05-20-2021 14:30:24
05-20-2021 14:30:24
Hi @DCNemesis Does this happen for this specific example or for all the examples that you tried? And this isn't really an issue with the implementation. As the many-to-many model is not trained on every single language pair, this does happen in some cases. It's likely that there's far less data for X to Swahili translation, which could be the reason for this.<|||||>@patil-suraj it fails every time I try English to Swahili. M2M100 does fine on the same tasks, so I'll probably just use that in this case, but it is hard to believe this is the intended behavior of mbart-large-50.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This issue still has not been fixed. Mbart-large-50-many-to-many-mmt has the same issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
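Since M2M100 is mentioned above as handling this pair, here is a minimal sketch of the English→Swahili call with that model (the 418M checkpoint is just one choice, and output quality is not guaranteed):

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

article_en = "Let's try this again..."

tokenizer.src_lang = "en"
encoded_en = tokenizer(article_en, return_tensors="pt")
generated_tokens = model.generate(
    **encoded_en, forced_bos_token_id=tokenizer.get_lang_id("sw")
)
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))
```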
transformers
11,789
closed
PegasusTokenizer returning None
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: Ubuntu 20.04 - Python version: Python 3.7.10 - PyTorch version (GPU?): 1.8.1+cu101 - Tensorflow version (GPU?): - Using GPU in script?: Problem in both CPU and GPU - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten @LysandreJik <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): Pegasus The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Go to https://huggingface.co/transformers/model_doc/pegasus.html#pegasusforconditionalgeneration 2. Run the summarization example in the section 3. PegasusTokenizer.from_pretrained('google/pegasus-xsum') returns None. PegasusTokenizer also returns None for 'google/pegasus-large' <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Should return a non None value. <!-- A clear and concise description of what you would expect to happen. -->
05-20-2021 14:22:47
05-20-2021 14:22:47
Hey @akashe, Think this error is analogous to this one: https://github.com/huggingface/transformers/issues/8864. Installing `sentencepiece` should solve the problem :-) https://github.com/huggingface/transformers/issues/8864<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> Hey @akashe, > > Think this error is analogous to this one: #8864. > > Installing `sentencepiece` should solve the problem :-) > > #8864 Still does not seem to work, even after installing sentencepiece<|||||>Same here ;(<|||||>Could you please update to the newest `transformers` version and check again? I cannot reproduce the error sadly<|||||>Hi @patrickvonplaten, checked with the newest transformers. Tokenizer is not returning None.<|||||>@akashe did you solve the problem later? I am having the same issue. <|||||>Update to the newest version. It worked after that.<|||||>I got the same issue first, of getting NoneType. To solve this, just install sentencepiece, and make sure to restart the runtime.
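A quick way to confirm the fix discussed in this thread (install `sentencepiece`, restart the runtime, then check that the tokenizer actually loads):

```python
# pip install sentencepiece  (restart the runtime/kernel before re-importing)
from transformers import PegasusTokenizer

tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")
assert tokenizer is not None
print(tokenizer("The tower is 324 metres tall.", return_tensors="pt").input_ids.shape)
```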
transformers
11,788
closed
EncoderDecoder Cross Attention Generation Output Shape does not match Documentation
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.1 - Platform: Linux-4.15.0-143-generic-x86_64-with-glibc2.27 - Python version: 3.9.4 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes (v100) - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. --> - encoderdecoder/text generation: @patrickvonplaten, @patil-suraj ## Information Model I am using: EncoderDecoder with BERT The problem arises when using: * [x] the official example scripts: slightly modified/extended, see below The tasks I am working on is: * [x] my own task or dataset: just an example sentence from the docs ## To reproduce Steps to reproduce the behavior: 1. Start with the example script from the [EncoderDecoder forward documenation](https://huggingface.co/transformers/model_doc/encoderdecoder.html#transformers.EncoderDecoderModel.forward) 2. Remove model training and model saving and loading steps (not relevant) and configure model to return attentions 3. Check shapes of cross attention outputs of generation and forward ```python from transformers import EncoderDecoderModel, BertTokenizer import torch tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased') # initialize Bert2Bert from pre-trained checkpoints # forward input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1 outputs = model(input_ids=input_ids, decoder_input_ids=input_ids, output_attentions=True) forward_cross_attentions = outputs.cross_attentions # As described in the docs the shapes are: # "Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)"" (here (1,12,8,8)) print(f"Elements in forward cross attention: {len(forward_cross_attentions)}") # Yields: Elements in forward cross attention: 12 print(f"Shapes in forward cross attention: {[fca.shape for fca in forward_cross_attentions]}") # Yields: Shapes in forward cross attention: [torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8]), torch.Size([1, 12, 8, 8])] # generation generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id, return_dict_in_generate=True, output_attentions=True) generated_cross_attentions = generated.cross_attentions # generated_cross_attentions contains 19 elements, maybe one for each generation step (generated.sequences has 20 elements)? 
print(f"Elements in generation cross attention: {len(generated_cross_attentions)}") # Yields: Elements in generation cross attention: 19 # All of the contained cross attentions have shape (1,12,1,8) for cross_attention in generated_cross_attentions: print(f"Shapes in generation cross attention: {[gca.shape for gca in cross_attention]}") # Yields: Shapes in generation cross attention: [torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8]), torch.Size([1, 12, 1, 8])] (repeated 19 times) ``` Furthermore, if `num_beams>1` all `num_beams*batch_size` cross attentions are returned even if `num_return_sequences == 1`. ```python # continued from above... # generation generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id, return_dict_in_generate=True, output_attentions=True, num_beams=10) generated_cross_attentions = generated.cross_attentions # generated_cross_attentions contains 19 elements, maybe one for each generation step (generated.sequences has 20 elements)? print(f"Elements in generation cross attention: {len(generated_cross_attentions)}") # All of the contained cross attentions have shape (10,12,1,8) for cross_attention in generated_cross_attentions: print(f"Shapes in generation cross attention: {[gca.shape for gca in cross_attention]}") print(f"Shape of the generated sequences: {generated.sequences.shape}") Yields: torch.Size([1, 20]) ``` ## Expected behavior A `Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, gen_sequence_length, sequence_length)`. Ideally the cross attentions batch size should match the batch size of the `generated.sequences`. ## Work arounds Stacking the tuples and then concatting along the dimension which is 1 like this: ```python torch.cat([torch.stack(ca) for ca in generated_cross_attentions], dim=-2) ``` yields such a tensor of the correct shape, is that the correct way to assemble it? For the batch size issue I haven't found a work around yet. Is it possible to retain the beam indices of the selected beams from `generate`? `output_scores` is no help, because it has the same shape as `generated.sequences`. Any help, ideas or pointers how to work around this are highly appreciated.
05-20-2021 13:48:32
05-20-2021 13:48:32
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,787
closed
GPT Neo past_key_values unexpected behaviour
I have been successfully using the GPT2LMHeadModel module for text generation for some time, and I recently tried to reuse the code to generate with GPTNeoForCausalLM. Though the documentation appears identical, I get the error "ValueError: not enough values to unpack (expected 2, got 1)" for the line `output, past = self.model(context, past_key_values=past, use_cache=True).values()` (which works fine for GPT-2). Is this a bug, or has the documentation been copied incorrectly? Would appreciate any tips for fixing. Many thanks
05-20-2021 13:47:24
05-20-2021 13:47:24
I encountered a similar problem when trying to use GPT-Neo with PPLM (https://github.com/uber-research/PPLM). Seems that Neo's `past_key_values` is returning and consuming key-value tensors as well as (I'm guessing) feed-forward tensors: ```python inputs = tokenizer(prompt, return_tensors='pt') outputs = model(**inputs) past = outputs.past_key_values for idx, p in enumerate(past): print(f'{idx}: {tuple(elem.shape for elem in p)}') # output # 0: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64])) # 1: (torch.Size([1, 3, 768]),) # 2: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64])) # 3: (torch.Size([1, 3, 768]),) # 4: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64])) # 5: (torch.Size([1, 3, 768]),) # 6: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64])) # 7: (torch.Size([1, 3, 768]),) # 8: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64])) # 9: (torch.Size([1, 3, 768]),) # 10: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64])) # 11: (torch.Size([1, 3, 768]),) ``` GPT-2 correctly returns just the key-value tensors: ```python # 0: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64])) # 1: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64])) # 2: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64])) # 3: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64])) # 4: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64])) # 5: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64])) # 6: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64])) # 7: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64])) # 8: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64])) # 9: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64])) # 10: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64])) # 11: (torch.Size([1, 12, 3, 64]), torch.Size([1, 12, 3, 64])) ```<|||||>After some more testing, the above seems to be because of local attention layers in GPT-Neo's default configuration. When specifying ```config = GPTNeoConfig(attention_types=[[["global"], 24]])```, I get similar `past_key_values` as in GPT-2: ```python # 0: (torch.Size([1, 16, 3, 128]), torch.Size([1, 16, 3, 128])) # 1: (torch.Size([1, 16, 3, 128]), torch.Size([1, 16, 3, 128])) # 2: (torch.Size([1, 16, 3, 128]), torch.Size([1, 16, 3, 128])) # 3: (torch.Size([1, 16, 3, 128]), torch.Size([1, 16, 3, 128])) # 4: (torch.Size([1, 16, 3, 128]), torch.Size([1, 16, 3, 128])) # ... ``` I do think the [documentation](https://huggingface.co/transformers/model_doc/gpt_neo.html#transformers.GPTNeoModel.forward) for `past_key_values` should be updated since it currently says: "with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)"<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @patil-suraj, just checking if there is any progress on this issue or pull request #11630? That PR seems to fix the problem related to my usecase.<|||||>The different shape for local attention layers is because of the folding going on in the current implementation.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,786
closed
[RFC] Laying down building stone for more flexible ONNX export capabilities
This PR aims at reworking the way the ONNX export tool work by introducing a static, checked description format to provide ONNX exporters (pt almost done, TF will follow) all the required knobs. More specifically this PR introduces the following concepts: - `OnnxConfig` dataclass which enforces a model to be supported to describe all the properties to generate proper export - `OnnxVariable` namedtuple which describe a variables w.r.t the name of the variable, shape and potentially how many time it's "repeated" => Useful for `past_keys` Test case was done initially for BART model, without `use_cache=True` supports. For the sake of completeness, dropping support for `use_cache=True` is currently needed because we have a double nested tuple at the core of the `past_keys` output structure which would require multiple level of dynamic axis, not currently supported by ONNX. This might be something we can work on in the future, potentially introducing a ONNX compatible output structure getting rid of the nested tuples layout and activable from a config property (_to be discussed further later on_). **Update 1:** - I managed to enable exporting with nested structures such as `past_key_values` for GPT2. - Need to work on enabling the same for using such values as inputs to the model Supported models: - [x] ALBERT - [x] BART (with & without past) - [x] BERT - [x] DistilBERT - [ ] Longformer => I've support for this, but the exporting fails because of missing ops ... need investigations. - [x] GPT2 (with & without past) - [x] Roberta - [x] T5 - [x] XLM-Roberta
05-20-2021 13:35:44
05-20-2021 13:35:44
Example of potential command line to export `bert-base-cased` => `python3 -m transformers.onnx -f pytorch --model=bert-base-cased --features=default --optimize --optimization-level=all onnx/bert-base-cased/`<|||||>See the contributed docs here https://235542-155220641-gh.circle-artifacts.com/0/docs/_build/html/serialization.html<|||||>Idea: Rename the `convert_pytorch` to `export` so we have the exact same hierarchy than PyTorch: - PyTorch: `torch.onnx.export` - Transformers: `transformers.onnx.export` wdyt? <|||||>That's a great idea!<|||||>@Narsil we moved forward on your suggestion, can you have a look _(one more time 😄)_ 🙏🏻 <|||||>Hello, when we can use the transformers.onnx?<|||||>You already can when installing from source: ``` pip install git+https://github.com/huggingface/transformers ``` We'll do a release this week (probably Thursday or Friday) and it will be in a pypi release then.<|||||>hi, this thread is super important. Is there support for bart text2text_generation export to onnx (more specifically for summarization tasks) ?
transformers
11,785
closed
Fix regression in regression
# What does this PR do? This PR fixes the regression introduced in #11012 for regression problems with only one label (like STS-B), see discussion on #11780. I checked both `run_glue` and `run_glue_no_trainer` on this branch and get the proper results for this task now. Fixes #11780 Fixes #11583
05-20-2021 13:23:19
05-20-2021 13:23:19
Thank you for fixing the issue!
transformers
11,784
closed
Fix release util pattern in conf.py
# What does this PR do? When we applied black to the conf.py style, the line with the version changed but the pattern in our release util script was not updated. This PR fixes that.
05-20-2021 13:05:25
05-20-2021 13:05:25
transformers
11,783
closed
PyInstaller Transformers runtime import error
Hi, I am getting the following error while creating executable with transformers using PyInstaller **PyInstaller: 4.3 Transformers Version: 4.6.0 736 INFO: Python: 3.8.5 (conda) 751 INFO: Platform: macOS-10.15.5** _413560 INFO: Packages required by datasets: ['dill', 'multiprocess', 'pandas', 'tqdm', 'tqdm', 'requests', 'xxhash', 'pyarrow', 'numpy'] 445137 INFO: Packages required by filelock: [] File "", line 2 import huggingface-hub as p ^ SyntaxError: invalid syntax Traceback (most recent call last): File "/Users/xxxxx/opt/anaconda3/envs/xxxxxx/lib/python3.8/site-packages/PyInstaller/utils/hooks/init.py", line 358, in get_module_file_attribute attr = loader.get_filename(package) AttributeError: 'NoneType' object has no attribute 'get_filename'_ transformers hook file as follows, ``` from PyInstaller.utils.hooks import collect_all def hook(hook_api): packages = [ 'transformers', # "Pillow", # "black==21.4b0", # "cookiecutter==1.7.2", "dataclasses", "datasets", # "deepspeed>=0.3.16", # "docutils==0.16.0", # "fairscale>0.3", # "faiss-cpu", # "fastapi", "filelock", # "flake8>=3.8.3", # "flax>=0.3.2", # "fugashi>=1.0", "huggingface-hub", "importlib_metadata", # "ipadic>=1.0.0,<2.0", # "isort>=5.5.4", # "jax>=0.2.8", # "jaxlib>=0.1.59", # "jieba", # "keras2onnx", # "nltk", "numpy", # "onnxconverter-common", # "onnxruntime-tools>=1.4.2", # "onnxruntime>=1.4.0", "packaging", # "parameterized", # "protobuf", # "psutil", # "pydantic", # "pytest", # "pytest-sugar", # "pytest-xdist", # "python>=3.6.0", # "recommonmark", "regex", "requests", # "rouge-score", # "sacrebleu>=1.4.12", "sacremoses", # "sagemaker>=2.31.0", # "scikit-learn", # "sentencepiece==0.1.91", # "soundfile", # "sphinx-copybutton", # "sphinx-markdown-tables", # "sphinx-rtd-theme==0.4.3", # sphinx-rtd-theme==0.5.0 introduced big changes in the style. # "sphinx==3.2.1", # "sphinxext-opengraph==0.4.1", # "starlette", # "tensorflow-cpu>=2.3", # "tensorflow>=2.3", # "timeout-decorator", "tokenizers", # "torch>=1.0", # "torchaudio", "tqdm", # "unidic>=1.0.2", # "unidic_lite>=1.0.7", # "uvicorn", ] for package in packages: datas, binaries, hiddenimports = collect_all(package) hook_api.add_datas(datas) hook_api.add_binaries(binaries) hook_api.add_imports(*hiddenimports) ```
05-20-2021 10:57:24
05-20-2021 10:57:24
Hi, the error seems to originate from PyInstaller rather than `transformers`, right? Have you reported it to the PyInstaller team?<|||||>> Hi, the error seems to originate from PyInstaller rather than `transformers`, right? Have you reported it to the PyInstaller team? Yes @LysandreJik, I posted the same question in PyInstaller as well. But PyInstaller is working with other libraries like torch and tensorflow. It's only failing with the Transformers library as it is checking the versions of all dependent libraries. Not sure of the exact reason.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
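One detail that stands out in the traceback above is `import huggingface-hub as p`, which can never work since a hyphen is not valid in a Python import; the importable module is `huggingface_hub`. A hedged guess at a fix for the custom hook, keeping the rest of the structure from the snippet above:

```python
from PyInstaller.utils.hooks import collect_all

def hook(hook_api):
    # Use importable module names (underscores), not PyPI distribution names
    # (hyphens): collect_all() imports each name internally.
    packages = [
        "transformers",
        "dataclasses",
        "datasets",
        "filelock",
        "huggingface_hub",   # was "huggingface-hub" -> SyntaxError at import time
        "importlib_metadata",
        "numpy",
        "packaging",
        "regex",
        "requests",
        "sacremoses",
        "tokenizers",
        "tqdm",
    ]
    for package in packages:
        datas, binaries, hiddenimports = collect_all(package)
        hook_api.add_datas(datas)
        hook_api.add_binaries(binaries)
        hook_api.add_imports(*hiddenimports)
```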
transformers
11,782
closed
[WIP] Expand `past_key_values` also during beam search in EncoderDecoder models
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #11781 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. **Not yet - issue was just submitted** - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). **Internal change** - [x] Did you write any new necessary tests? **No coverage branch added** ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
05-20-2021 05:44:40
05-20-2021 05:44:40
Changed to `WIP` because right now the PR does not account for cross attentions in `past_key_values` (indices 2 and 3). I could not be certain whether the per-layer entry in `past_key_values` is always a 4-tuple (self-attention plus cross-attention key/values) for all encoder-decoder models (maybe some model does not use cross-attention even though it is an encoder-decoder model..?). The docs do say the key/value indices 2 and 3 in `past_key_values` are optional.
transformers
11,781
closed
`generate` with `num_beam` > 1 does not work in EncoderDecoder models when `past` is supplied.
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.7.0.dev0 - Platform: Linux-5.4.0-72-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.8.1+cu102 (True) - Tensorflow version (GPU?): 2.5.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: Both ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. --> - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj ## Information Model I am using (Bert, XLNet ...): Bart, T5 The problem arises when using: * [ ] the official example scripts: (give details below) * [x ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x ] my own task or dataset: (give details below) ## To reproduce <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ```python import torch from transformers import BartForConditionalGeneration, BartConfig config = BartConfig.from_pretrained('facebook/bart-base') bart = BartForConditionalGeneration.from_pretrained('facebook/bart-base', config=config) batch_size = 4 input_ids = torch.zeros((batch_size, 1), dtype=torch.long) attention_mask = torch.ones((batch_size, 1)) # past_key_value: tuple of length config.n_layers with each tuple having 2 tuples each, # of which has 2 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head) embed_size_per_head = config.d_model // config.decoder_attention_heads keys = torch.ones(config.decoder_layers, batch_size, config.decoder_attention_heads, 1, embed_size_per_head) past = tuple((key, key) for key in keys) # Works. num_beams = 1 encoder_outputs = bart.get_encoder()(input_ids.repeat_interleave(num_beams, dim=0), return_dict=True) bart.generate(input_ids=input_ids, attention_mask=attention_mask, encoder_outputs=encoder_outputs,past=past, use_cahce=True, num_beams=num_beams) # Doesn't work. num_beams = 3 encoder_outputs = bart.get_encoder()(input_ids.repeat_interleave(num_beams, dim=0), return_dict=True) bart.generate(input_ids=input_ids, attention_mask=attention_mask, encoder_outputs=encoder_outputs, past=past, use_cahce=True, num_beams=num_beams) ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> In the code snippet above, second call to `generate` crashes because `past_key_values` are not supplied to all beams. This happened when `past` argument is passed to `generate` in models where `is_encoder_decoder` is `True` (issue seen in Bart and T5). To mitigate this issue, `past` should also be expanded in `_expand_inputs_for_generation` in `generation_utils.py`. (I've noticed that, at this point in the generation process, the script looks for `past` not `past_key_values` in `model_kwargs`.) I've submitted a pull request that applies the above mentioned patch.
05-20-2021 05:40:48
05-20-2021 05:40:48
Hey @seongminp, Thanks for the issue report. It's a rather specific use-case to pass `past_key_values` to `generate()`. Could you give me some more detail when you need to do so? <|||||>Hi @patrickvonplaten! My use-case for passing `past_key_values` to `generate` is to manipulate the encoder hidden states before passing them to decoder's cross attention. Specifically, I am using a encoder-decoder generative (as in modeling the latent space, like GAN or VAE) text model. Several existing works, like [Microsoft's Optimus](https://github.com/ChunyuanLI/Optimus) and [Fang et al.](https://arxiv.org/abs/2101.00828), adds custom manipulations for key/value of decoder's cross attention. Official implementations of Optimus and Fang et al. are both implemented with this wonderful library, but uses a custom `generate` function because right now restrictions mentioned in this issue exists while passing `past` to `generate`. Would love to hear your feedback!<|||||>Hey @seongminp, Thanks for the feedback! The problem is that the `past` variable strongly varies from model to model. *E.g.* Bart uses a different `past` tuple structure then `gpt2` does and `xlnet` uses a completely different structure. We would have to add a specific `prepare_cache` method to each model which seems would add to much complexity to the `generate()` method for quite a specific case IMO. Do you think we could instead solve it by just forcing the user to preprocess `past` correctly before passing it to `generate()`? E.g., the following code: ```python past = tuple( ( layer[0].index_select(0, expanded_return_idx).to(layer[0].device), layer[1].index_select(0, expanded_return_idx).to(layer[1].device), ) for layer in past ) ``` could be executed by the user before calling `model.generate(input_ids, past=past)` no? We could make a nice forum post about it so that people interested in the work mentioned above would have access to the correct pre-processing of `past` :-) What do you think? <|||||>Hi again. That makes more sense. Trying to encompass all uses of `past` in `generate_utils` seems to be more trouble than it is worth. I'll close the pull request. Feel free to close this issue also! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
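To make the pre-processing suggested above self-contained, here is a sketch of how the `expanded_return_idx` index can be built (mirroring what `_expand_inputs_for_generation` does for other tensors); the cache layout assumed here is the simple per-layer `(key, value)` pair, so models with a different `past` structure need a different reshuffle:

```python
import torch

batch_size, num_beams = 4, 3
num_layers, num_heads, seq_len, head_dim = 12, 12, 1, 64

# Simple GPT-2-style cache: one (key, value) pair per layer, filled with dummies.
past = tuple(
    (
        torch.zeros(batch_size, num_heads, seq_len, head_dim),
        torch.zeros(batch_size, num_heads, seq_len, head_dim),
    )
    for _ in range(num_layers)
)

# Same index generate() uses when expanding input_ids/attention_mask to beams.
expanded_return_idx = (
    torch.arange(batch_size).view(-1, 1).repeat(1, num_beams).view(-1)
)

past = tuple(
    (
        layer[0].index_select(0, expanded_return_idx),
        layer[1].index_select(0, expanded_return_idx),
    )
    for layer in past
)

print(past[0][0].shape)  # first dim is now batch_size * num_beams = 12
```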
transformers
11,780
closed
Unintentional(?) interface change on loss function in models didn't work well for single-column regression
The recent PR #11012 changed the interface of forward function for `labels` in regression tasks as it skips `.view(-1)` in loss function like [this](https://github.com/huggingface/transformers/pull/11012/files#diff-a48ba7f6444ca4954a58f1ac3e66c7941a2bbc4615649d56b182aeac8cc36d9cL1523). As shown below, that causes `UserWarning: Using a target size (torch.Size([32])) that is different to the input size (torch.Size([32, 1]))` with BERT in glue example code, and it looks like this change was applied **not only to BERT but also a lot of models in #11012** To resolve it, the current example glue code needs `if` statement that transform `labels` variable before `forward` function only for regression task. But if the interface change in the PR was unintentional, I think we should revert `.view(-1)` in loss function. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.6.0 - Platform: Google Colab - Python version: 3.7 - PyTorch version (GPU?): 1.8.1 - Tensorflow version (GPU?): - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @abhi1thakur @sgugger @LysandreJik from #11012 <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): `bert-base-uncased` The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run ``` mkdir /tmp/stsb/ -p python transformers/examples/pytorch/text-classification/run_glue_no_trainer.py \ --model_name_or_path bert-base-cased \ --task_name stsb \ --max_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir /tmp/stsb/ ``` 2. We will see `UserWarning: Using a target size (torch.Size([32])) that is different to the input size (torch.Size([32, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.` which didn't appear with the previous version of transformers like 2-3 weeks ago. 
``` 05/19/2021 22:39:45 - INFO - __main__ - ***** Running training ***** 05/19/2021 22:39:45 - INFO - __main__ - Num examples = 5749 05/19/2021 22:39:45 - INFO - __main__ - Num Epochs = 3 05/19/2021 22:39:45 - INFO - __main__ - Instantaneous batch size per device = 32 05/19/2021 22:39:45 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 32 05/19/2021 22:39:45 - INFO - __main__ - Gradient Accumulation steps = 1 05/19/2021 22:39:45 - INFO - __main__ - Total optimization steps = 540 0% 0/540 [00:00<?, ?it/s]/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py:528: UserWarning: Using a target size (torch.Size([32])) that is different to the input size (torch.Size([32, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. return F.mse_loss(input, target, reduction=self.reduction) 33% 178/540 [00:24<00:47, 7.66it/s]/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py:528: UserWarning: Using a target size (torch.Size([21])) that is different to the input size (torch.Size([21, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. return F.mse_loss(input, target, reduction=self.reduction) 33% 180/540 [00:24<00:43, 8.26it/s]/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py:528: UserWarning: Using a target size (torch.Size([8])) that is different to the input size (torch.Size([8, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. return F.mse_loss(input, target, reduction=self.reduction) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py:528: UserWarning: Using a target size (torch.Size([4])) that is different to the input size (torch.Size([4, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. return F.mse_loss(input, target, reduction=self.reduction) 05/19/2021 22:40:13 - INFO - /usr/local/lib/python3.7/dist-packages/datasets/metric.py - Removing /root/.cache/huggingface/metrics/glue/stsb/default_experiment-1-0.arrow 05/19/2021 22:40:13 - INFO - __main__ - epoch 0: {'pearson': 0.40341441213742524, 'spearmanr': 0.41749739006146336} 66% 359/540 [00:52<00:24, 7.26it/s]05/19/2021 22:40:40 - INFO - /usr/local/lib/python3.7/dist-packages/datasets/metric.py - Removing /root/.cache/huggingface/metrics/glue/stsb/default_experiment-1-0.arrow 05/19/2021 22:40:40 - INFO - __main__ - epoch 1: {'pearson': 0.4407148954008369, 'spearmanr': 0.4550002378117188} 100% 539/540 [01:20<00:00, 7.42it/s]05/19/2021 22:41:08 - INFO - /usr/local/lib/python3.7/dist-packages/datasets/metric.py - Removing /root/.cache/huggingface/metrics/glue/stsb/default_experiment-1-0.arrow 05/19/2021 22:41:08 - INFO - __main__ - epoch 2: {'pearson': 0.4408745967619131, 'spearmanr': 0.43830345183360847} Configuration saved in /tmp/stsb/config.json Model weights saved in /tmp/stsb/pytorch_model.bin 100% 540/540 [01:24<00:00, 6.42it/s] ``` As a result, this gave me a pretty bad performance `epoch 2: {'pearson': 0.4408745967619131, 'spearmanr': 0.43830345183360847}` while they were both around 0.87 with the previous version. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior The following warning (actually should be a bug) should not appear, and the validation performance pearson and spearmanr should be around 0.87 with the parameters given in the example command. `UserWarning: Using a target size (torch.Size([32])) that is different to the input size (torch.Size([32, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.` <!-- A clear and concise description of what you would expect to happen. -->
05-19-2021 23:21:35
05-19-2021 23:21:35
Yes, the interface change was an unintentional side effect of enabling multi-label regression. I think the old ``` loss = loss_fct(logits.view(-1), labels.view(-1)) ``` will work in the case of one or several labels, but it might not give a clear error message when there are multiple labels and a shape error (if we have 5 possible labels but the model was configured with 4, we would see, with a batch size of 8, an error saying there is a shape incompatibility between a tensor of size 32 and a tensor of size 40). Something that would give a nicer error message is probably: ``` if self.num_labels == 1: loss = loss_fct(logits.squeeze(), labels.squeeze()) else: loss = loss_fct(logits, labels) ``` which would take care of this problem and show a clear error message. I can implement that change quickly and we should do a patch release, but I want to check that the fix seems OK. What do you think @LysandreJik and @abhi1thakur ?<|||||>Thank you @sgugger for your prompt response! I also found that #11583 reports a weird result with STS-B that could be fixed by the patch.<|||||>Indeed, I don't know why I couldn't reproduce the bad results earlier, but this is definitely the same issue (I probably wasn't trying on the master branch.)
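For reference, a minimal PyTorch sketch (not part of the original thread) of why the shape mismatch silently degrades the STS-B results: `MSELoss` broadcasts a `(batch, 1)` prediction against a `(batch,)` target into a `(batch, batch)` matrix, so the loss is averaged over the wrong pairs. Aligning the shapes, as in the proposed patch, restores the intended per-example loss.

```python
import torch
from torch import nn

logits = torch.randn(32, 1)  # single-label regression head output
labels = torch.randn(32)     # regression targets for the batch

loss_fct = nn.MSELoss()
# (32, 1) vs (32) broadcasts to (32, 32): 1024 pairs are averaged instead of 32,
# which is exactly what the UserWarning above is pointing at.
broadcast_loss = loss_fct(logits, labels)

# Aligning the shapes (squeeze or view(-1)) gives the intended element-wise loss.
aligned_loss = loss_fct(logits.squeeze(-1), labels)
print(broadcast_loss.item(), aligned_loss.item())
```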
transformers
11,779
closed
Deprecate commands from the transformers-cli that are in the hf-cli
Commands that are both in the `transformers-cli` and in the `huggingface-cli` are deprecated here and will be quickly removed. I'm voting for deprecating them rather than removing them outright, even though better ways exist, as I suspect some users use the `transformers-cli` in bash scripts to automatically upload models to the hub. Context from @julien-c: > my thoughts is that we should deprecate the subset of transformers-cli command that are in huggingface-cli, as the commands are identical and having both is confusing. > > Transformers-specific commands (model conversion, new model templating) can stay in transformers-cli. > > What do you think?
05-19-2021 19:19:54
05-19-2021 19:19:54
My reason for removing (while still keeping a descriptive error, obviously) rather than deprecating is that I'd love to know whether people actually use those (and if they do, whether they use them in scripts or manually). But I will defer to the great transformers maintainers for the final decision 💖<|||||>Sounds good to me!
transformers
11,778
closed
[Flax] Align GLUE training script with mlm training script
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Currently running on TPUv3-8 to see if this leads to a speed-up ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-19-2021 17:25:04
05-19-2021 17:25:04
Ran the experiment again, but testing time stayed the same for me... I think it's better to have a consistent way of handling the random keys though - so merging.
transformers
11,777
closed
Flax Generate
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds the `generate()` method in Flax. An in-detail explanation of the design choices can be found here: https://www.notion.so/Flax-JAX-Generation-fe0c8d9807024d41a7ed4108f71a6f18 Example generate: https://colab.research.google.com/drive/1LiVLyjfTCGJtHldfFv1F3W3khkii5_Xp?usp=sharing ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-19-2021 17:21:03
05-19-2021 17:21:03
transformers
11,776
closed
upload
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-19-2021 17:14:43
05-19-2021 17:14:43
transformers
11,775
closed
Fix usage of head masks by TF encoder-decoder models' `generate()` function
TF counterpart to #11621 **Description:** It is necessary to fix head masking for LED and T5 models. Edit: Fix for T5 - #11857 <hr> **Reviewers:** @patrickvonplaten @Rocketknight1
05-19-2021 15:14:24
05-19-2021 15:14:24
Thanks for the addition @stancld ! I think once we fix the tests in led + T5 we can merge this one :-)<|||||>It also looks good to me!<|||||>Hey @patrickvonplaten, I haven't implemented head masking for the `generate` method for LED and T5 intentionally. The reason is that TF LED and T5 does not use head masks properly (there's an old glitch that the decoder uses encoder's `head_mask` instead of `cross_attn_head_mask`). Maybe, I can fix this issue in other PRs and then enable testing for these two models? :)<|||||>> Hey @patrickvonplaten, I haven't implemented head masking for the `generate` method for LED and T5 intentionally. The reason is that TF LED and T5 does not use head masks properly (there's an old glitch that the decoder uses encoder's `head_mask` instead of `cross_attn_head_mask`). Maybe, I can fix this issue in other PRs and then enable testing for these two models? :) Good for me!
transformers
11,774
closed
Finetune - Helsinki-NLP/opus-mt-fr-en
Hi all I am new to huggingface! I am trying to finetune the Helsinki-NLP/opus-mt-fr-en but I am getting the error: ``` 2021-05-19 14:20:33.882388: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dyn amic library libcudart.so.11.0 Traceback (most recent call last): File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 1205, in from_pretrained state_dict = torch.load(resolved_archive_file, map_location="cpu") File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 593, in load return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 762, in _legacy_load magic_number = pickle_module.load(f, **pickle_load_args) _pickle.UnpicklingError: invalid load key, 'v'. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/marian/examples/transformers/examples/research_projects/seq2seq-distillation/finetune.py", line 442, in <m odule> main(args) File "/marian/examples/transformers/examples/research_projects/seq2seq-distillation/finetune.py", line 381, in ma in model: SummarizationModule = SummarizationModule(args) File "/marian/examples/transformers/examples/research_projects/seq2seq-distillation/finetune.py", line 65, in __i nit__ super().__init__(hparams, num_labels=None, mode=self.mode, **kwargs) File "/marian/examples/transformers/examples/research_projects/seq2seq-distillation/lightning_base.py", line 109, in __init__ self.model = self.model_type.from_pretrained( File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/auto_factory.py", line 381, in from_pretrai ned return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 1207, in from_pretrained raise OSError( OSError: Unable to load weights from pytorch checkpoint file for '/marian/examples/test' at '/marian/examples/test/ pytorch_model.bin'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. ``` Could you tell me where I can set the from_tf=True? Also, how can I convert a pytorch_model.bin to tf model? Is there any step-by-step tutorial regarding this task? Best
05-19-2021 14:30:35
05-19-2021 14:30:35
Could you share your code so that we may help? I believe this is covered in the quicktour! https://huggingface.co/transformers/quicktour.html<|||||>here is my code `python3 /marian/examples/transformers/examples/research_projects/seq2seq-distillation/finetune.py \ --learning_rate=3e-5 \ --fp16 \ --gpus 1 \ --do_train \ --do_predict \ --n_val 1000 \ --val_check_interval 0.1 \ --src_lang "fr" \ --tgt_lang "en" \ --num_train_epochs 400 \ --warmup_steps 20 \ --train_batch_size 10 \ --eval_batch_size 10 \ --data_dir "/marian/examples/test/data" \ --output_dir "/marian/examples/test/out" \ --cache_dir "/marian/examples/test/cache" \ --max_source_length 128 \ --max_target_length 128 \ --val_max_target_length 128 \ --test_max_target_length 128 \ --model_name_or_path "/marian/examples/test" "$@"`<|||||>Ah, I believe this code has been deprecated for some time now. If you're looking to finetune a model on translation, may I recommend taking a look at our [translation examples](https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation) instead?<|||||>Thank you, I will give it a try<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
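To sketch an answer to the `from_tf=True` question (the directory paths below are the ones from the report and purely illustrative): `from_pretrained` accepts `from_tf`/`from_pt` flags, and re-saving the loaded model with `save_pretrained` converts the checkpoint to the other framework's format.

```python
from transformers import AutoModelForSeq2SeqLM, TFAutoModelForSeq2SeqLM

# Load a TF 2.0 checkpoint into a PyTorch model (directory is a placeholder).
pt_model = AutoModelForSeq2SeqLM.from_pretrained("/marian/examples/test", from_tf=True)
pt_model.save_pretrained("/marian/examples/test-pt")   # writes pytorch_model.bin

# Or the other direction: load a PyTorch checkpoint into a TF model and re-save it.
tf_model = TFAutoModelForSeq2SeqLM.from_pretrained("/marian/examples/test-pt", from_pt=True)
tf_model.save_pretrained("/marian/examples/test-tf")   # writes tf_model.h5
```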
transformers
11,773
closed
[Demo] Slow down in TPU training
@avital @marcvanzee - I wanted to align `run_mlm_flax.py` more with `run_glue_flax.py` and noticed that by doing the change as shown in this PR, training on TPU slows down very significantly by ca. ~40%. Currently, [`run_glue_flax.py`](https://github.com/huggingface/transformers/blob/master/examples/flax/text-classification/run_flax_glue.py) and [`run_mlm_flax.py`](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_flax_mlm.py) deal slightly differently with the PRNG key: `run_mlm_flax.py` splits the key inside the training step while `run_glue_flax.py` does so before the train step and shards it then before passing it to the train loop. It seems that `run_mlm_flax.py` is significantly faster on TPU. Do you by any chance have good explanations for that?
05-19-2021 11:22:30
05-19-2021 11:22:30
Basically, prior to this change you were running a single jitted function in each train step, and because of asynchronous dispatch it didn't have to wait until the previous step was complete before dispatching the program for the next step. But if you split an RNG in between, then JAX blocks until the previous step is complete, then dispatches and executes the split command, and only then dispatches the next training step. In short, the guideline is that each step in a training loop should be a single jitted function. If done right, this should lead to close to 100% device utilization. This is a common gotcha -- people hit this regularly, and we should help catch the slow patterns early, such that you could detect this even with a local run with no accelerator/unit test. @jheek is working on a library that would allow you to annotate code such that you'd get that kind of error or warning for this, and other cases. @jheek also said: "Yeah, this is an example of my number 1 most common and most hurtful JAX performance gotcha that I want to catch automatically. In this case it stands out, but there are more subtle variants where it's hard to spot in a review. This analysis is only true for TPU without async mode enabled, btw, because all other devices have a queue that is > 1."<|||||>Thanks a lot for this detailed explanation @avital! Also pinging @sgugger @stas00 @mfuntowicz - might be interesting to read :-) <|||||>(I guess really this means that `run_glue_flax.py` could be made faster? /cc @marcvanzee )<|||||>> (I guess really this means that `run_glue_flax.py` could be made faster? /cc @marcvanzee ) Yeah, I'm currently testing it actually, see here: https://github.com/huggingface/transformers/pull/11778 . Will report the results tomorrow.<|||||>Great that you discovered this! I actually didn't notice the bug, and since training was already fast enough I didn't look into it. Curious to see whether we will get even more speedup!<|||||>Reran the experiments - got a small speed-up on TPU. Here are the new numbers: https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification#runtime-evaluation
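To make the pattern concrete, here is a minimal JAX sketch (a toy loss, not the actual example scripts): keeping the key `split` inside the single jitted step means every iteration dispatches exactly one compiled program, so asynchronous dispatch is never blocked by host-side RNG handling.

```python
import jax
import jax.numpy as jnp

# Anti-pattern (what slows things down): splitting the key on the host between
# jitted steps forces JAX to wait for the previous step before dispatching the next one.
#
#   rng, dropout_rng = jax.random.split(rng)          # host-side, blocks async dispatch
#   state, loss = jitted_train_step(state, batch, dropout_rng)

# Preferred pattern: one jitted function per step, with the split inside it.
@jax.jit
def train_step(rng, x):
    rng, dropout_rng = jax.random.split(rng)
    # stand-in for the real forward/backward pass with dropout
    loss = jnp.mean(x * jax.random.bernoulli(dropout_rng, 0.9, x.shape))
    return rng, loss

rng = jax.random.PRNGKey(0)
for _ in range(3):
    rng, loss = train_step(rng, jnp.ones((8, 16)))
```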
transformers
11,772
closed
Different performance when training different transformers version
## Environment info - `transformers` version: 4.6 and 4.5 - Platform: - Python version: 3.7 - PyTorch version (GPU?): 1.8.0 GPU - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @LysandreJik @sgugger Models: PhoBERT (RoBERTa based) Model hub: https://huggingface.co/vinai/phobert-base/ ## Information Model I am using (PhoBERT ...): The problem arises when using: * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] my own task or dataset: (give details below) ## Expected behavior Training loss, dev loss, and dev F1 in each epoch are different when training the model with transformers versions 4.5 and 4.6. Has anyone met this same problem?
05-19-2021 10:37:13
05-19-2021 10:37:13
We do not guarantee the exact reproducibility of training between versions, only with the same version (PyTorch does the same by the way). Are you using the Trainer API? If this is the case, I believe it's the work done to ensure full reproducibility for checkpoints (e.g. you get to the same results training from scratch or resuming from a checkpoint) that is probably creating this difference, as the way the training data was shuffled has been changed.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
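As a small, hedged illustration of what is guaranteed: within one pinned `transformers` version, seeding everything through `set_seed` before building the model and `Trainer` should give matching runs; across versions (4.5 vs 4.6 here), differences such as the changed data shuffling mentioned above are expected.

```python
import torch
from transformers import set_seed

set_seed(42)          # seeds Python's random, NumPy and PyTorch (incl. CUDA) in one call
a = torch.randn(3)

set_seed(42)
b = torch.randn(3)

assert torch.equal(a, b)  # same seed + same library versions -> same draws
```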
transformers
11,771
closed
Add DOI badge to README
# What does this PR do? Add DOI badge to README, as explained in https://guides.github.com/activities/citable-code/
05-19-2021 10:23:09
05-19-2021 10:23:09
transformers
11,770
closed
[T5 failing CI] Fix generate test
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes wrong device placement as introduced in #11621 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-19-2021 09:09:47
05-19-2021 09:09:47
transformers
11,769
closed
Trainer removes newer checkpoints, not older.
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.6.0 - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help @LysandreJik @patrickvonplaten @stas00 @sgugger ## Information Model I am using (Bert, XLNet ...): DEBERTA, but which model is used is not important here. The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Train with trainer for more than 200k steps with save_total_limit to 50 for example, and logging steps to 200. 2. Observe how this bug makes you lose your most recent progress and it removes your newest checkpoints, which costs money, as you have been training without saving the newest checkpoints (it removes them just after saving them). ## Expected behavior It is expected that the trainer doesn't remove the newest checkpoints, but the oldest ones, when you set the save_total_limit. This happens over 200k steps.
05-19-2021 08:04:55
05-19-2021 08:04:55
Please give us a command that reproduces the bug as your indications are too vague to reproduce. Also make sure you are using a source install as there was a bug recently fixed with the checkpoints (though it was with `load_best_model_at_end=True` which I have no idea if you're using).<|||||>I'm using load_best_model_at_end=True, but this happens way before the end, so I think this is a separate issue. Here's the command I'm using: ``` python -u -m torch.distributed.launch --nproc_per_node=8 /home/ubuntu/transformers/examples/research_projects/mlm_wwm/run_mlm_wwm.py \ --model_name_or_path ./deberta_3004/checkpoint-274200 \ --config_name ./config_deberta/config.json \ --tokenizer_name ./deberta_tokenizer_1304 \ --train_file ./suc_cleaned_1805.txt \ --validation_file ./final_valid.txt \ --output_dir ./deberta_3004 \ --overwrite_output_dir \ --do_train \ --do_eval \ --evaluation_strategy steps \ --per_device_train_batch_size 24 \ --per_device_eval_batch_size 48 \ --gradient_accumulation_steps 11 \ --learning_rate 2e-4 \ --save_steps 200 \ --logging_steps 200 \ --overwrite_cache \ --max_seq_length 512 \ --eval_accumulation_steps 10 \ --load_best_model_at_end \ --run_name deberta_1404 \ --save_total_limit 50 --warmup_steps 7000 \ --adam_beta2 0.999 --adam_epsilon 1e-6 --weight_decay 0.01 --num_train_epochs 1 --max_steps 1000000 --preprocessing_num_workers 96 --fp16 --dataloader_num_workers 24 --ignore_data_skip ```<|||||>Please retry on a master branch then. As I said, the bug of deleting newer checkpoints with `load_best_model_at_end=True` has been fixed by #11748. The bug was happening before the end, so I think you are experimenting the same one.<|||||>Okay, I'll retry re-installing from master then :) Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
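For context, a hedged sketch (placeholder values) of the training arguments involved in this report; the interaction between checkpoint rotation via `save_total_limit` and `load_best_model_at_end` is exactly what the fix in #11748 addresses, since the best checkpoint has to survive the rotation.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./output",           # placeholder path
    evaluation_strategy="steps",
    save_steps=200,
    logging_steps=200,
    save_total_limit=50,             # keep at most 50 checkpoints, deleting the oldest first
    load_best_model_at_end=True,     # the best checkpoint must be preserved during rotation
)
```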
transformers
11,768
closed
DataCollatorForWholeWordMask only works for BERT, and nothing is said in the docstring.
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help @patrickvonplaten @LysandreJik @patil-suraj @sgugger ## Information Model I am using (Bert, XLNet ...): DBERTA (V1) BASE The problem arises when using: * [x] the official example scripts: (give details below): DataCollatorForWholeWordMask * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce The DataCollatorForWholeWordMask, that should be used for pre-training a Roberta model, or a Deberta model for example (as you don't have a SpanCollator), only works for BERT, and one needs to look the details in the collator code to notice this. I've been training a language model from scratch for weeks now, just to notice yesterday that your collator for WholeWordMask is wrong and only works for BERT. Steps to reproduce the behavior: 1. Try to use the DataCollatorForWholeWordMask with any model that is not BERT. ## Expected behavior A data collator that is included in your data collators should work generally for any model, not only for BERT. Or at least, in the Docstring it should be clear that one will waste huge amounts of money if using this collator for other models that are not BERT. This being said, I would like to know how could I use the word_ids from the tokenizer to do this, as with the TokenClassification example you provide here: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/text_classification.ipynb#scrollTo=MOsHUjgdIrIW In this example the extension of the token labels doesn't depend on the continuation token having "##" at the beginning, but uses the word ids from the FastTokenizer. I think the DataCollatorForWholeWordMask should work generally, at least for all fast tokenizers, not only for BERT. For my case, I would like to know what can I do to at least train a little bit more with the correct objective, not with normal MLM but with WWMLM.
05-19-2021 07:53:48
05-19-2021 07:53:48
Yes merging it was a mistake. It will be removed when we have something better in the future.<|||||>@sgugger Could you please tell me how could I adapt it for a general fast tokenizer? Or at least how would you do it for a ByteBPETokenizer like Roberta's or Deberta's?<|||||>I haven't dug into this, but it should probably leverage the `word_ids` the fast tokenizer provide to be more general.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I just ran into the same problem. Is somebody working on this? I need a language modeling data collator for RoBERTa-style tokenizers and might as well try my hand at providing an extensible, general implementation that issues proper warnings if used on yet-unsupported tokenizer classes, if there's interest.<|||||>The problem is that after passing through datasets, the objects are dicts, not BatchEncoding, therefore they don't have the word_ids() method, and without that we cannot generalize Whole Word Masking. One solution is to pre tokenize and pre process the dataset inside the function you put in the datasets map, however you disable dynamic batching which is a key improvement of Roberta with respect to Bert.<|||||>Thank you for elaborating! Similarly to the implementation for BERT tokenizers in the current `DataCollatorForWholeWordMasking`, it is possible to obtain a word start mask for RoBERTa tokenizers by decoding every token in the collator by using something like this: ```python def _word_starts(self, inputs: torch.Tensor) -> torch.Tensor: is_word_start = torch.full_like(inputs, fill_value=False) for i, example in enumerate(torch.split(inputs, split_size_or_sections=1, dim=0)): line_mask = torch.tensor([self.tokenizer.decode([t]).startswith(" ") for t in example.flatten().tolist() if t != self.tokenizer.pad_token_id]) is_word_start[i, 0:line_mask.shape[0]] = line_mask return is_word_start ``` I believe that this is accurate if the tokenizer is initialized with `add_prefix_space=True`, otherwise the first word is missing, which is probably acceptable in most circumstances. If this method is correct, it could be extended to BART tokenizers, where the condition for the first token of a word is `not tokenizer.decode([t]).startswith('##')`. I'm not sure whether this is a path one wants to take here, though.
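Following the `word_ids` suggestion above, here is a simplified, hedged sketch (not the library's collator) of whole-word masking that works with any fast tokenizer, since sub-tokens are grouped by word index instead of by BERT's `##` continuation prefix:

```python
import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def whole_word_mask(text, mask_prob=0.15):
    enc = tokenizer(text)
    word_ids = enc.word_ids()  # maps every token to the word it came from (None for specials)
    words = sorted({w for w in word_ids if w is not None})
    to_mask = {w for w in words if random.random() < mask_prob}

    input_ids = list(enc["input_ids"])
    labels = [-100] * len(input_ids)
    for i, w in enumerate(word_ids):
        if w is not None and w in to_mask:
            labels[i] = input_ids[i]                  # predict the original token
            input_ids[i] = tokenizer.mask_token_id    # mask every piece of the chosen word
    return input_ids, labels

print(whole_word_mask("Whole word masking without relying on ## prefixes"))
```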
transformers
11,767
closed
AttributeError when using EncoderDecoderModel.forward() with encoder_outputs and return_dict=True
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.6.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.9.4 - PyTorch version (GPU?): 1.8.1 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten, @patil-suraj ## Information Model I am using (Bert, XLNet ...): encoder=decoder="gpt2" The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce ```python3:code.py from transformers import EncoderDecoderModel, GPT2Tokenizer import torch tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = EncoderDecoderModel.from_encoder_decoder_pretrained("gpt2", "gpt2") enc_input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) dec_input_ids = torch.tensor([[model.config.decoder.eos_token_id]]) outputs = model(input_ids=enc_input_ids, decoder_input_ids=dec_input_ids, encoder_outputs=None, return_dict=True) _, _, enc_h = outputs.values() # (logits, past_key_values, encoder_last_hidden_states) enc_h = (enc_h, ) # *1(link below) requests that I should make tuple for "encoder_outputs" argument ↓ outputs = model(input_ids=enc_input_ids, decoder_input_ids=dec_input_ids, encoder_outputs=enc_h, return_dict=True)# Error occured @ this line. ``` [*1:Doc](https://huggingface.co/transformers/model_doc/encoderdecoder.html#transformers.EncoderDecoderModel.forward) <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## behavior ![image](https://user-images.githubusercontent.com/6253193/118772221-7455a900-b8be-11eb-8e36-125c7402ca4b.png) ## Expected behavior No Error at modeling_encoder_decoder.py line 463. # Cause of Error In [modeling_encoder_decoder.py line435](https://github.com/huggingface/transformers/blob/680d181ce80070f89f0ebd49bf93ca29b24cd56b/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L435), "encoder_outputs" need to behave as Iterable (and the encoder-decoder-model documentation request Tuple as argument). But, around [line 463](https://github.com/huggingface/transformers/blob/680d181ce80070f89f0ebd49bf93ca29b24cd56b/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L463), "encoder_outputs" need to behave something else.
05-19-2021 07:35:08
05-19-2021 07:35:08
Hey @aizawa-naoki, Thanks for your bug report here. The problem here is that the model expects the inputs and outputs to be of type `ModelOutput` by setting `return_dict=True`. However, `encoder_outputs` is passed as a tuple and not as a `ModelOutput` which leads to an error. You could fix your code as follows: ```python from transformers import EncoderDecoderModel, GPT2Tokenizer from transformers.modeling_outputs import BaseModelOutput import torch tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = EncoderDecoderModel.from_encoder_decoder_pretrained("gpt2", "gpt2") enc_input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) dec_input_ids = torch.tensor([[model.config.decoder.eos_token_id]]) outputs = model(input_ids=enc_input_ids, decoder_input_ids=dec_input_ids, encoder_outputs=None, return_dict=True) _, _, enc_h = outputs.values() # (logits, past_key_values, encoder_last_hidden_states) outputs = model(input_ids=enc_input_ids, decoder_input_ids=dec_input_ids, encoder_outputs=BaseModelOutput(last_hidden_state=enc_h), return_dict=True)# Error occured @ this line. ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,766
closed
Error when using IterableDataset as train_dataset for Trainer
Hi, I'm using large train data (parquet format) and want to pass this as `IterableDataset` to `Trainer`. I managed to make custom `IterableDataset`, but sadly it doesn't work. ```python import torch import pyarrow.parquet as pq from transformers import BatchEncoding class CustomIterableData(torch.utils.data.dataset.IterableDataset): def __init__(self, file_path, tokenizer, with_labels=False): super().__init__() self.file_path = file_path self.tokenizer = tokenizer self.with_labels = with_labels def process(self, row): inputs = str(row[2]) labels = str(row[4]) inputs = self.tokenizer(inputs, return_tensors="pt", padding=True, truncation=True) self.input_ids = [i.clone().detach() for i in inputs.input_ids] self.attention_mask = [i.clone().detach() for i in inputs.attention_mask] if self.with_labels: yield BatchEncoding({'input_ids': self.input_ids, 'attention_mask': self.attention_mask, 'labels': labels}) yield BatchEncoding({'input_ids': self.input_ids, 'attention_mask': self.attention_mask}) def __iter__(self): df = pq.read_table(source = self.file_path) for batch in df.to_batches(): return map(self.process, zip(*batch.columns)) def __len__(self): # yield one row at a time return 1 ``` This dataset gives me the error below. ``` File "main10m.py", line 128, in main trainer.train() File "/opt/conda/lib/python3.7/site-packages/transformers/trainer.py", line 1246, in train for step, inputs in enumerate(epoch_iterator): File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 517, in __next__ data = self._next_data() File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 557, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 35, in fetch return self.collate_fn(data) File "/opt/conda/lib/python3.7/site-packages/transformers/data/data_collator.py", line 54, in default_data_collator features = [vars(f) for f in features] File "/opt/conda/lib/python3.7/site-packages/transformers/data/data_collator.py", line 54, in <listcomp> features = [vars(f) for f in features] TypeError: vars() argument must have __dict__ attribute ``` I would appreciate any help! @sgugger
05-19-2021 07:10:15
05-19-2021 07:10:15
Can you print the elements you get when iterating through your dataset (and their types)? It seems like there is something wrong here. I'm not familiar with parquet but your iter is only going to return the result of the first of `df.to_batches()`, is that expected? Note that the `__len__` should not be implemented if possible as it will probably trigger other issues in the Trainer when it sees it.<|||||>If I run code below ```python3 ds = CustomIterableData(file_path, tokenizer, cat_info_path = cat_info_path) features = [] for i, result in enumerate(ds.__iter__()): features.append(result) if i >= 5: break ``` It gives this features ``` [{'input_ids': tensor([ 2, 12861, 10824, 12861, 2967, 8574, 4036, 4052, 7473, 3721, 12861, 23637, 12861, 3346, 11109, 2967, 11109, 10824, 11109, 3346, 3, 3813, 24928, 3346, 3, 3, 3, 3, 3425, 4431, 4109, 16853, 3, 3]), 'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 'label': 3408}, {'input_ids': tensor([ 2, 15991, 4051, 17692, 6399, 2967, 6426, 11620, 2720, 4104, 4183, 26227, 25, 3308, 3, 6426, 12527, 14794, 3, 26227, 3, 6701, 26227, 3, 6426, 26227, 3, 38, 4276, 4091, 3, 38, 4276, 4091, 3]), 'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 'label': 3318}, {'input_ids': tensor([ 2, 23687, 8027, 86, 15136, 15994, 7413, 11620, 4712, 9350, 15955, 31870, 11177, 16601, 18535, 3280, 3, 9477, 10532, 3, 2298, 4525, 4566, 16601, 3, 3, 3, 3, 3]), 'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 'label': 2742}, {'input_ids': tensor([ 2, 3213, 11761, 9853, 2290, 8103, 12854, 2136, 10359, 18847, 22156, 4009, 4036, 10456, 4273, 78, 4184, 4011, 81, 4020, 76, 4097, 71, 4012, 69, 4037, 3, 10439, 10921, 3, 8103, 12873, 4031, 2136, 3, 8103, 12854, 2136, 3, 3, 3, 3213, 11761, 3]), 'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 'label': 2560}, {'input_ids': tensor([ 2, 9204, 8006, 6988, 2744, 4162, 3283, 91, 7940, 16908, 9863, 18117, 16420, 16545, 12793, 25385, 28539, 25942, 8023, 4010, 11976, 27499, 6329, 70, 8193, 90, 16926, 13323, 23626, 4121, 87, 3, 7058, 6482, 3, 10921, 33648, 3, 8006, 16635, 10762, 3735, 3, 9420, 3, 3191, 4923, 4266, 14841, 3, 3191, 4923, 4266, 14841, 3]), 'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 'label': 2419}, {'input_ids': tensor([ 2, 8471, 24, 4060, 80, 15994, 11704, 9651, 7998, 7388, 11903, 15962, 8022, 9668, 3283, 18806, 6223, 2348, 5032, 3802, 4007, 3081, 33036, 4257, 7018, 9651, 7998, 3, 6677, 7084, 2114, 3, 6718, 6951, 8705, 3, 11791, 3, 3, 3, 3]), 'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 'label': 4097}] ``` I changed the code as following and passed a tokenizer to Trainer to use DataCollatorWithPadding. if I remove `__len__` method it gives this error `"train_dataset does not implement __len__, max_steps has to be specified"` Now Trainer works fine but it only trains with 1 sample, maybe the first one. I don't know why.. Why all the data is not getting read? 
```python import torch import pyarrow.parquet as pq from transformers import BatchEncoding class CustomIterableData(torch.utils.data.dataset.IterableDataset): def __init__(self, file_path, tokenizer, with_labels=False): super().__init__() self.file_path = file_path self.tokenizer = tokenizer self.with_labels = with_labels def process(self, row): inputs = str(row[2]) labels = self.str2label(str(row[4])) inputs = self.tokenizer(inputs, return_tensors="pt", padding=True, truncation=True) self.input_ids = [i.clone().detach() for i in inputs.input_ids] self.attention_mask = [i.clone().detach() for i in inputs.attention_mask] if self.with_labels: # indexing to squeeze(0) return BatchEncoding({'input_ids': self.input_ids[0], 'attention_mask': self.attention_mask[0], 'label': labels}) return BatchEncoding({'input_ids': self.input_ids[0], 'attention_mask': self.attention_mask[0]}) def __iter__(self): df = pq.read_table(source = self.file_path) for batch in df.to_batches(): for row in zip(*batch.columns): yield self.process(row) def __len__(self): # yield one row at a time return 1 def str2label(self, string): .... ``` <|||||> This code works fine. I referred to [this post](https://medium.com/speechmatics/how-to-build-a-streaming-dataloader-with-pytorch-a66dd891d9dd). `__len__` method wasn't necessary if positive `max_steps` is passed to `TrainingArguments` ```python3 class CustomIterableData(torch.utils.data.dataset.IterableDataset): def __init__(self, file_path, tokenizer, with_labels=False): super().__init__() self.file_path = file_path self.tokenizer = tokenizer self.with_labels = with_labels def parse_file(self): df = pq.read_table(source = self.file_path) for batch in df.to_batches(): for row in zip(*batch.columns): yield self.process(row) def process(self, row): inputs = str(row[2]) labels = self.str2label(str(row[4])) inputs = self.tokenizer(inputs, return_tensors="pt", padding=True, truncation=True) self.input_ids = [i.clone().detach() for i in inputs.input_ids] self.attention_mask = [i.clone().detach() for i in inputs.attention_mask] if self.with_labels: # indexing to squeeze(0) return BatchEncoding({'input_ids': self.input_ids[0], 'attention_mask': self.attention_mask[0], 'label': labels}) return BatchEncoding({'input_ids': self.input_ids[0], 'attention_mask': self.attention_mask[0]}) def get_stream(self): return cycle(self.parse_file()) def __iter__(self): return self.get_stream() def str2label(self, string): .... ```
transformers
11,765
closed
Unable to use fill-mask pipeline on gpt-neo model
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.0 - Platform: googleColab - Python version:3.7 Models: `GPT neo` Code : ``` #Import Hugging Face's Transformers from transformers import pipeline generator = pipeline('fill-mask', model='EleutherAI/gpt-neo-1.3B') ``` Error: ![Screenshot 2021-05-19 at 10 55 30 AM](https://user-images.githubusercontent.com/10946649/118760614-bbc54080-b890-11eb-9076-5fe010f249b7.png) Can someone help me know what could the reason be for not able to use the fill-mask on `gpt-neo` model?
05-19-2021 05:37:06
05-19-2021 05:37:06
Fill-mask is for encoder-only models like BERT and RoBERTa. The GPT-neo model is a decoder-only model that is capable of doing text generation. There's a `TextGenerationPipeline` available, so you might try that out. The documentation can be found [here](https://huggingface.co/transformers/main_classes/pipelines.html#transformers.TextGenerationPipeline).<|||||>I read through articles that this model can be used to do grammar checking? Please share relevant documentation for the same.<|||||>Yes it can do grammar checking similar to GPT-3, in a zero-shot manner. So you can for example try the following prompt: ``` Original: She no went to the market. Standard American English: ``` Normally, if GPT-neo is smart enough, it will then generate `She didn't go to the market.` These big generation models like GPT-3 and GPT-neo can learn in a zero-shot manner, just by giving a few examples, and then ask the model what comes next. So in this case, I didn't even give one example, I asked the model directly for an answer. You can also first provide several examples ("Original" and "Standard American English" pairs) to the model, and then ask it to predict what comes next. <|||||>Great, Can you please share some sample implementation on google collab notebook? <|||||>I just copied the code sample from the [model card](https://huggingface.co/EleutherAI/gpt-neo-1.3B): ``` from transformers import pipeline generator = pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B') generator("Original: She no went to the market. Standard American English: She didn't go to the market. Original: I loving eating pizza. Standard American English:", do_sample=True, min_length=50) ```<|||||>Isn't the text generation specific to generating new text with a given prompt? I tried using the same format as what you have provided and this was the response Input: `Original: She no went to the market. Standard American English:` Output: `Original: She no went to the market. Standard American English: No, I didn’t go to the market yesterday.` This completely changed the 3rd person to 1st person? is the format `Original:xxx Standard American English:` important and is this how it does the grammar correction?<|||||>Isn't the text generation specific to generating new text with a given prompt? => well, normally it is meant to generate new text given a prompt indeed. But as models like GPT-3 and GPT-neo are so powerful and are trained on a lot of data, they are capable of performing what the authors of GPT-3 call "in-context learning": this means that the model knows what to do just based on a given prompt. See the [GPT-3 paper](https://arxiv.org/abs/2005.14165) for more info. I've just tried it with GPT-3 and it works. However, GPT-neo doesn't seem as powerful. This is logical since GPT-3 has 175 billion parameters, whereas GPT-neo only has 1.3 billion (there's also a 2.7 billion variant available). Maybe you can try by giving more examples in the prompt. Sometimes it seems to work: ![image](https://user-images.githubusercontent.com/48327001/118779543-718d9080-b88b-11eb-8cd4-d556fb5f89a4.png) <|||||>@NielsRogge What other [task-specific pipelines](https://huggingface.co/transformers/main_classes/pipelines.html) can I use the gpt-neo model with?<|||||>I think the GPT-neo models only support the `TextGenerationPipeline`. But do not that they can be used for summarization, you can just provide a text followed by "TLDR:", and then the model will generate a summary. 
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,764
closed
[Wav2Vec2] SpecAugment Fast
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR refactors the SpecAugment implementation for Wav2Vec2 by fully relying on PyTorch instead of numpy. 1) The code is made more readable - `attention_mask` is dropped since it's not required to treat masked batch indices differently - Previously, every batch_idx was forced to have the same number of masked indices (overlapping masked indices can lead some batch indices to have fewer masked indices). This is also not enforced here since it would make the function very dependent on the batch_size, which is not good IMO. I don't see a reason why different batch_idx cannot have different numbers of masked indices. It was verified via training that the change does not lead to a performance drop. 2) Replacing a for loop with tensorized code led to a 1% speed-up in training (not really noticeable tbh) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-19-2021 00:59:27
05-19-2021 00:59:27
Noticed only a small speed-up when training (1-2%), and even slightly improved results. More importantly, I think the code is much more readable now.
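To illustrate the tensorized masking idea described in this PR (a simplified sketch with made-up parameter values, not the actual implementation): span starts are sampled per batch row and expanded with pure tensor ops, and overlapping spans naturally leave different rows with different numbers of masked positions.

```python
import torch

def time_mask(batch_size, seq_len, mask_prob=0.05, span=10):
    # Sample span start positions independently per row, then expand each start
    # into a contiguous span -- no Python loop over the batch dimension.
    num_spans = max(1, int(mask_prob * seq_len / span))
    starts = torch.randint(0, seq_len - span + 1, (batch_size, num_spans))
    span_idx = (starts.unsqueeze(-1) + torch.arange(span)).reshape(batch_size, -1)

    mask = torch.zeros(batch_size, seq_len, dtype=torch.bool)
    rows = torch.arange(batch_size).unsqueeze(-1)
    mask[rows, span_idx] = True
    # Overlapping spans simply collapse, so rows can end up with different
    # numbers of masked positions, as described above.
    return mask

print(time_mask(4, 100, mask_prob=0.2).sum(dim=1))
```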
transformers
11,763
closed
A cleaner and more scalable implementation of symbolic tracing
# What does this PR do? This PR provides a much cleaner and less hacky implementation of symbolic tracing for models of the library. It also provides support for more architectures: - ALBERT - DistilBERT - MobileBERT - MegatronBERT - GPT2 - GPT Neo
05-18-2021 18:00:21
05-18-2021 18:00:21
What do you think about `dtype` being hardcoded? While this is OK for now, hardcoding the dtype might be an issue down the road. For most NLP models the inputs are ints, but for wav2vec2, for example, they are floats. And would this have an impact if the final usage is in fp16 where you used `float`? We can't derive the dtype from the model in this context. Thoughts? This is not a showstopper for merging this, just something to consider - I'm sure we will cross that bridge if we encounter it.
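For readers unfamiliar with symbolic tracing, a toy `torch.fx` sketch (unrelated to the transformers-specific tracer in this PR) shows where the dtype question comes from: the traced graph itself is shape-agnostic, but any tracer that feeds concrete dummy tensors has to pick a dtype, e.g. integer token ids for NLP models versus float features for audio models like wav2vec2.

```python
import torch
from torch import fx

class Toy(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.Embedding(10, 4)
        self.lin = torch.nn.Linear(4, 2)

    def forward(self, input_ids):
        return self.lin(self.emb(input_ids))

traced = fx.symbolic_trace(Toy())
print(traced.graph)

# Running the traced module still needs real tensors with a concrete dtype:
# int64 ids here, but float32 (or float16) inputs for speech models.
out = traced(torch.randint(0, 10, (1, 6)))
```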
transformers
11,762
closed
Fix a bug in summarization example which did not load model from config properly
# What does this PR do? The current example script does not load the model properly when a config is supplied; this is just a small bug fix. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
05-18-2021 17:46:54
05-18-2021 17:46:54
transformers
11,761
closed
Add batching to pipelines
# Add batching to pipelines Are there any plans to add a batching option to existing pipelines? Currently, the model tries to process all the input simultaneously, which sometimes (if the input is large) leads to memory errors.
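Until native batching support lands, a common workaround is to chunk the inputs yourself and feed the pipeline one chunk at a time; a minimal sketch (the helper name is made up):

```python
from transformers import pipeline

def run_in_batches(pipe, texts, batch_size=32):
    results = []
    for start in range(0, len(texts), batch_size):
        # Each call only materializes `batch_size` inputs at once, bounding memory use.
        results.extend(pipe(texts[start:start + batch_size]))
    return results

classifier = pipeline("sentiment-analysis")
predictions = run_in_batches(classifier, ["great movie", "terrible plot"] * 100, batch_size=16)
print(len(predictions))
```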
05-18-2021 13:05:37
05-18-2021 13:05:37
Hi! You may find the discussion on this PR useful: https://github.com/huggingface/transformers/pull/11251<|||||>Thanks for explaining this
transformers
11,760
closed
add `dataset_name` to data_args and added accuracy metric
# What does this PR do? Added `dataset_name` and `dataset_config_name` to `DataTrainingArguments` to allow using a compatible dataset from the dataset hub. I tested it with `imdb`. Additionally, I resolved the `TODO` and added `load_metric('accuracy')`.
05-18-2021 12:21:37
05-18-2021 12:21:37
transformers
11,759
closed
error in load of tokenizer with add_token
Hi, regarding adding tokens to the BERT tokenizer: I tried to add 10k new tokens to my BERT model's tokenizer and saved the tokenizer. When I then want to load the tokenizer to use it, I get this error: AssertionError: Non-consecutive added token '#سلام' found. Should have index 100005 but has index 100006 in saved vocabulary. Any help?
05-18-2021 11:15:20
05-18-2021 11:15:20
Hello! Could you provide the code that you used, library version, etc (everything asked in the issue template) thanks!<|||||>here is the code to add tokens to tokenizer and then train on the corpus as a pretrained model: after training is finished when I want to load the tokenizer, I got Error. transformers version : 4.5.1 ubuntu: 16.04 python: 3.7 pytorch: 1.6.0+cu101 ..................................................................... ```py from transformers import AutoConfig, AutoTokenizer, AutoModel from transformers import BertTokenizer, BertForMaskedLM from transformers import Trainer, TrainingArguments from transformers import LineByLineTextDataset from transformers import DataCollatorForLanguageModeling import torch config = AutoConfig.from_pretrained("HooshvareLab/bert-fa-base-uncased") tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-fa-base-uncased",max_len=256) vocab=[] with open('vocab30k.txt', mode='r',encoding="utf8",errors='ignore') as file2: for line2 in file2: line2=line2.split('\n')[0] vocab.append(line2) vocab=vocab[:10000] tokenizer.add_tokens(vocab) tokenizer.save_pretrained("tokenizer/") model= BertForMaskedLM.from_pretrained("HooshvareLab/bert-fa-base-uncased") model.resize_token_embeddings(len(tokenizer)) print(" model load") dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path="fa_shuffeled.txt", block_size=128, ) print("data load") data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15 ) training_args = TrainingArguments( output_dir="fineTunedModel/", overwrite_output_dir=True, num_train_epochs=3, per_gpu_train_batch_size=16, save_steps=10_000, save_total_limit=2, prediction_loss_only=True, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, ) print("start train") trainer.train() trainer.save_model("fineTunedModel2/") ```<|||||>I don't have access to `vocab30k` so I tried locally by adding tokens that were not part of the initial vocabulary, saving the tokenizer, reoloading it; but I couldn't manage to have the same issue. If you could share a reproducible example in colab it would be easier to see what's going on.<|||||>> I don't have access to `vocab30k` so I tried locally by adding tokens that were not part of the initial vocabulary, saving the tokenizer, reoloading it; but I couldn't manage to have the same issue. If you could share a reproducible example in colab it would be easier to see what's going on. the problem was due to some new tokens that weren't in utf-8 encoding, so when I removed them the problem was solved.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
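Since the reporter traced the failure to tokens that were not valid UTF-8, a rough sketch of filtering such tokens before `add_tokens` might look like this (reusing `vocab` and `tokenizer` from the script above; the exact failure mode may differ, so treat this as an assumption):

```python
def encodable_utf8(token: str) -> bool:
    # Tokens containing e.g. unpaired surrogates cannot be serialized to UTF-8.
    try:
        token.encode("utf-8")
        return True
    except UnicodeEncodeError:
        return False

clean_vocab = [tok for tok in vocab if tok and encodable_utf8(tok)]
tokenizer.add_tokens(clean_vocab)
tokenizer.save_pretrained("tokenizer/")
```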
transformers
11,758
closed
Add more subsections to main doc
# What does this PR do? This PR adds a subsection right before the list of supported models & the big table of supported frameworks for each model. Merging this PR would change the "welcome" doc page as follows: ![pic1](https://user-images.githubusercontent.com/23423619/118638456-c4edd900-b7ce-11eb-8191-aa9bfc62248a.png) and ![pic2](https://user-images.githubusercontent.com/23423619/118638477-ca4b2380-b7ce-11eb-8dbe-f9d68b3c6908.png) The motivation for this PR is mainly to be able to better link to all supported models and framework. *E.g.* when asking which models are supported by Flax, it's nice to have a direct link instead of having to scroll down
05-18-2021 10:47:57
05-18-2021 10:47:57
transformers
11,757
closed
Fix incorrect newline in #11650
# What does this PR do? I found that I broke the link by accidentally adding a newline (probably by my formatter) in #11650. Here is a fix for that. Sorry for any inconvenience. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patil-suraj Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-18-2021 10:28:21
05-18-2021 10:28:21
transformers
11,756
closed
word_to_tokens method of XLNetTokenizerFast not behaving correctly
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.3 ### Who can help @LysandreJik ## Information The `word_to_tokens` method of `XLNetTokenizerFast` seems not behaving correctly. ## To reproduce Code below for example ```py batch_claim = [ ['Colin', 'Kaepernick', 'became', 'a'], ['Tilda', 'Swinton', 'is', 'a', 'vegan', '.'] ] batch_evidence = [ ['He', 'remained', 'the', 'team', "'s", 'starting', 'quarterback'], ['Katherine', 'Matilda', '`', '`', 'Tilda', "''", 'Swinton', '-LRB-', 'born', '5', 'November', '1960'] ] tokenizer = XLNetTokenizerFast.from_pretrained('xlnet-base-cased') # tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', add_prefix_space=True) tokenized = tokenizer( batch_claim, batch_evidence, padding=True, truncation='do_not_truncate', is_split_into_words=True, return_tensors='pt' ) print(tokenized) print(tokenized.word_to_tokens(0, 0, 0)) ``` gives None. (Maybe it's because that `XLNetTokenizer` pads on the front that causes this misbehavior?) Output: ``` {'input_ids': tensor([[ 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 8041, 2066, 93, 1371, 9797, 403, 24, 4, 69, 1493, 18, 230, 17, 26, 23, 1541, 6217, 4, 3], [15731, 1011, 22588, 577, 27, 24, 28629, 17, 9, 4, 17067, 6883, 902, 1011, 2651, 2651, 15731, 1011, 17, 12, 22588, 577, 17, 13, 1039, 12573, 13, 1094, 306, 704, 2726, 4, 3]]), 'token_type_ids': tensor([[3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2]]), 'attention_mask': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])} None ``` <!-- A clear and concise description of what you would expect to happen. -->
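As a possible workaround while `word_to_tokens` misbehaves for this tokenizer, the token-to-word alignment can be rebuilt from `word_ids()`/`sequence_ids()`, which fast tokenizers expose; a sketch (the helper name is made up):

```python
from transformers import XLNetTokenizerFast

tokenizer = XLNetTokenizerFast.from_pretrained("xlnet-base-cased")
encoded = tokenizer(
    [["Colin", "Kaepernick", "became", "a"]],
    [["He", "remained", "the", "team", "'s", "starting", "quarterback"]],
    is_split_into_words=True,
    padding=True,
)

def word_to_token_positions(encoding, batch_index, word_index, sequence_index=0):
    word_ids = encoding.word_ids(batch_index)
    sequence_ids = encoding.sequence_ids(batch_index)
    return [
        i
        for i, (w, s) in enumerate(zip(word_ids, sequence_ids))
        if w == word_index and s == sequence_index
    ]

print(word_to_token_positions(encoded, 0, 0))  # token positions of the first word of the claim
```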
05-18-2021 09:23:20
05-18-2021 09:23:20
Indeed, thanks for reporting. This is related to https://github.com/huggingface/tokenizers/issues/552<|||||>Thanks! Could you please indicate the time this could be fixed? I'll decide whether to align it locally haha..<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,755
closed
A problem of Ibert IntSoftmax
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - `transformers` version: 4.6.0.dev0 - Platform: Linux-4.15.0-122-generic-x86_64-with-glibc2.10 - Python version: 3.8.1 - PyTorch version (GPU?): 1.9.0a0+git3c87fe9 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: yes - ### Who can help @kssteven418 <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Hi, I have found a strange thing in the IntSoftmax class of Ibert. def forward(self, x, scaling_factor): if not self.quant_mode: return nn.Softmax(dim=-1)(x), None x_int = x / scaling_factor x_int_max, _ = x_int.max(dim=-1, keepdim=True) x_int = x_int - x_int_max exp_int, exp_scaling_factor = self.int_exp(x_int, scaling_factor) # Avoid overflow exp, exp_scaling_factor = self.act(exp_int, exp_scaling_factor) exp_int = exp / exp_scaling_factor exp_int_sum = exp_int.sum(dim=-1, keepdim=True) factor = floor_ste.apply(2 ** self.max_bit / exp_int_sum) exp_int = floor_ste.apply(exp_int * factor / 2 ** (self.max_bit - self.output_bit)) scaling_factor = 1 / 2 ** self.output_bit return exp_int * scaling_factor, scaling_factor The code above is the forward func of IntSoftmax. 
And the problem is that in `exp, exp_scaling_factor = self.act(exp_int, exp_scaling_factor)`, self.act is an instance of QuantAct, whose input should be a real-valued number, but exp_int is a quantized integer. Although the trained model works well, I think this is not right.
05-18-2021 05:57:00
05-18-2021 05:57:00
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,754
closed
Trainer accumulates GPU usage at the beginning of each step
Hello, My problem is that GPU usage gets increased at the beginning of each step. Although the usage gets decreased with the help of torch.cuda.empty_cache() and gc.collector() during training, OOM errors happened after a while. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.1 - Platform: colab - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.1+cu101 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @sgugger @patrickvonplaten <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Models I am using: wav2vec2 and MBart The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Build e2e_model with the following classes: wav2vec2_learn_repr and e2emodel 2. Feed audio and translation under the requirement of the following data_collator. 3. Model training with Trainer. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> The code for reproducing the error: ``` class wav2vec2_learn_repr(Wav2Vec2PreTrainedModel): def __init__(self, config): super().__init__(config) self.wav2vec2 = Wav2Vec2Model(config) self.dropout = nn.Dropout(config.final_dropout) self.collapse = collapse_layer self.init_weights = () def forward(self, input_values, attention_mask=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None): return_dict = return_dict if return_dict is not None else self.config.use_return_dict outputs = self.wav2vec2( input_values, attention_mask=attention_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) hidden_states = outputs[0] hidden_states = self.dropout(hidden_states) collapsed_embeddings, attention_masks=cal_collapse_embeddings(hidden_states) show_gpu(f'In wav2vec2_learn_repr before del') del outputs, hidden_states show_gpu(f'In wav2vec2_learn_repr after del') torch.cuda.empty_cache() show_gpu(f'In wav2vec2_learn_repr empty cache') gc.collect() show_gpu(f'In wav2vec2_learn_repr gc.collect()') return collapsed_embeddings, attention_masks ``` ``` class e2emodel(PreTrainedModel): def __init__(self, wav2vec2_name = "facebook/wav2vec2-large-xlsr-53", mbart_model_name = 'facebook/mbart-large-50-many-to-many-mmt', ): super().__init__(PretrainedConfig()) self.wav2vec2_repr_model = wav2vec2_learn_repr.from_pretrained(wav2vec2_name) self.mbart_model = MBartForConditionalGeneration.from_pretrained(mbart_model_name) self.wav2vec2_repr_model.to(device) self.mbart_model.to(device) def forward(self, input_ids, attention_mask=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None): show_gpu(f'At start') torch.cuda.empty_cache() show_gpu(f'empty cache') gc.collect() show_gpu(f'gc.collect()') input_ids.to(device) # print(f' inputs devices: {input_ids.device}, {labels.device}') show_gpu(f'load input_ids') collapsed_embeddings, attention_masks = self.wav2vec2_repr_model(input_ids) print('collapsed_embeddings, attention_masks', collapsed_embeddings.device, attention_masks.device) show_gpu(f'after wav2vec2') del input_ids show_gpu(f'delete input_ids') torch.cuda.empty_cache() show_gpu(f'empty cache') gc.collect() show_gpu(f'gc.collect()') labels.to(device) show_gpu(f'load mbart inputs') output = self.mbart_model(inputs_embeds = collapsed_embeddings, attention_mask = attention_masks, labels = labels) show_gpu(f'after mbart') del collapsed_embeddings, attention_masks, labels show_gpu(f'delete mbart inputs') torch.cuda.empty_cache() show_gpu(f'empty cache') return output ``` ``` def data_collator(data): translation = [d['translation'] for d in data] input_features = [{'input_values': get_inputs_values_from_audio_path(feature_extractor, d['path'])} for d in data] #TODO: remove empty audio and its translation wav2vec2_inputs = feature_extractor.pad(input_features, padding=True, max_length=None, pad_to_multiple_of=None, return_tensors="pt", ) batch={} batch['inputs_embeds'], batch['attention_mask'] = wav2vec2_learn_repr(wav2vec2_inputs['input_values']) # size [batch_size, nr_sample, 1024] with tokenizer.as_target_tokenizer(): batch['labels'] = tokenizer([d['translation']for d in data], return_tensors='pt', padding=True).input_ids return batch ``` ``` import torchaudio 
resampler = torchaudio.transforms.Resample(orig_freq=48000, new_freq=16000) def get_inputs_values_from_audio_path(processor, path: str): signal, sr = torchaudio.load(main_path + '{}/clips/'.format(src_lang) + path) signal = signal.squeeze(0) d = (signal.shape[0]/sr) resampler.orig_freq = sr signal=resampler.forward(signal).numpy() input_values = processor(signal, sampling_rate=resampler.new_freq).input_values return input_values.tolist()[0] ``` ``` import gc import subprocess def show_gpu(msg): """ ref: https://discuss.pytorch.org/t/access-gpu-memory-usage-in-pytorch/3192/4 """ def query(field): return(subprocess.check_output( ['nvidia-smi', f'--query-gpu={field}', '--format=csv,nounits,noheader'], encoding='utf-8')) def to_int(result): return int(result.strip().split('\n')[0]) used = to_int(query('memory.used')) total = to_int(query('memory.total')) pct = used/total print('\n' + msg, f'{100*pct:2.1f}% ({used} out of {total})') ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Here is the GPU usage history from step 7. **At start 86.9% (14149 out of 16280)** empty cache 54.3% (8835 out of 16280) gc.collect() 54.3% (8835 out of 16280) load input_ids 54.3% (8835 out of 16280) In wav2vec2_learn_repr before del 56.7% (9231 out of 16280) In wav2vec2_learn_repr after del 56.7% (9231 out of 16280) In wav2vec2_learn_repr empty cache 56.7% (9229 out of 16280) In wav2vec2_learn_repr gc.collect() 56.7% (9229 out of 16280) collapsed_embeddings, attention_masks cuda:0 cuda:0 after wav2vec2 56.7% (9229 out of 16280) delete input_ids 56.7% (9229 out of 16280) empty cache 56.7% (9229 out of 16280) gc.collect() 56.7% (9229 out of 16280) load mbart inputs 56.7% (9229 out of 16280) after mbart 64.3% (10473 out of 16280) delete mbart inputs 64.3% (10473 out of 16280) empty cache 64.3% (10473 out of 16280) **At start 85.5% (13925 out of 16280)** empty cache 54.3% (8835 out of 16280) gc.collect() 54.3% (8835 out of 16280) load input_ids 54.3% (8835 out of 16280) In wav2vec2_learn_repr before del 58.9% (9593 out of 16280) In wav2vec2_learn_repr after del 58.9% (9593 out of 16280) In wav2vec2_learn_repr empty cache 58.9% (9593 out of 16280) In wav2vec2_learn_repr gc.collect() 58.9% (9593 out of 16280) collapsed_embeddings, attention_masks cuda:0 cuda:0 after wav2vec2 58.9% (9593 out of 16280) delete input_ids 58.9% (9593 out of 16280) empty cache 58.9% (9593 out of 16280 gc.collect() 58.9% (9593 out of 16280) load mbart inputs 58.9% (9593 out of 16280) after mbart 66.7% (10853 out of 16280) delete mbart inputs 66.7% (10853 out of 16280) empty cache 66.7% (10853 out of 16280) **At start 95.4% (15529 out of 16280)** empty cache 94.6% (15393 out of 16280) gc.collect() 94.6% (15393 out of 16280) load input_ids 94.6% (15393 out of 16280) RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 15.90 GiB total capacity; 14.96 GiB already allocated; 21.75 MiB free; 15.00 GiB reserved in total by PyTorch)
05-17-2021 23:12:09
05-17-2021 23:12:09
I've experienced the same issue.
transformers
11,753
closed
Add Flax Examples and Cloud TPU README
# What does this PR do? Adds a Flax examples README. Pretty bare for now, but will include a link to Cloud TPU instructions once they are up. I hope my use of relative links works well, but looking for feedback. The main goal here is to have a canonical link we can point to. Perhaps later this should live on the proper docs page but I thought a README is a fine first step. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
05-17-2021 20:49:04
05-17-2021 20:49:04
cc @patrickvonplaten
transformers
11,752
closed
Fixed: Better names for nlp variables in pipelines' tests and docs.
# What does this PR do? Fixes #9455 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @Narsil @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-17-2021 19:26:59
05-17-2021 19:26:59
Could you take care of the merge conflicts and we should be good to merge? Thanks!<|||||>Thanks a lot for this !
transformers
11,751
closed
parallelize and deparallelize method for GPT-Neo series model
# 🚀 Feature request Parallelize and deparallelize methods for distributing attention modules across multiple GPUs. ## Motivation Fine-tuning the GPT-Neo 2.7B model on a 12 GB GPU gives an out-of-memory error. Having a parallelize method would allow us to train that model by splitting attention modules across multiple GPUs with smaller VRAM. ## Your contribution Considering [this line](https://github.com/huggingface/transformers/blob/daf0d6a97bb0225a2571a2612b8285e2c3913992/src/transformers/models/gpt2/modeling_gpt2.py#L522) in the GPT-2 code and the absence of documentation for the parallelize method in the [GPT2 documentation](https://huggingface.co/transformers/model_doc/gpt2.html#gpt2model), I wanted to know if these methods are still supported. If not, what is the recommended method for fine-tuning large transformer models like GPT-Neo? If they are still supported, I can take up this task and submit a PR for both methods as well as a documentation fix.
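For reference, this is roughly what the GPT-2-style API referred to above looks like (sketch only; whether and when the same methods exist for GPT-Neo is exactly what this issue asks about):

```python
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2-xl")

# gpt2-xl has 48 transformer blocks; assign half of them to each of two GPUs.
device_map = {0: list(range(0, 24)), 1: list(range(24, 48))}
model.parallelize(device_map)

input_ids = torch.tensor([[50256]]).to("cuda:0")  # inputs live on the first device
output_ids = model.generate(input_ids, max_length=20)
print(output_ids.shape)

model.deparallelize()  # move everything back to CPU
```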
05-17-2021 18:42:12
05-17-2021 18:42:12
This is answered in #11054. (I'm in a similar situation as you. I'm just going to go with the suggestion and use DeepSpeed instead of model parallelism.)<|||||>Thanks, I didn't see that. The parallelism notes are also awesome.
transformers
11,750
closed
Flax BERT fix token type init
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Token type ids are 0 by default not 1 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-17-2021 18:37:15
05-17-2021 18:37:15
transformers
11,749
closed
[deepspeed] supporting `--adafactor`
It was flagged that in this example https://github.com/huggingface/transformers/issues/11044 `--adafactor` is used, but DeepSpeed doesn't get it passed since the DS config's optimizer overrides it. So we need to sort this out.
05-17-2021 16:56:51
05-17-2021 16:56:51
transformers
11,748
closed
Fix checkpoint deletion
# What does this PR do? As pointed out on the [forums](https://discuss.huggingface.co/t/checkpoint-missing-optimizer-pt-how-to-resume/6138) there is a problem in the way checkpoints are deleted currently when `save_total_limit` is set and `load_best_model_at_end` is True. Since the best checkpoint is switched with the last checkpoint, we end up deleting the last checkpoint instead of the oldest available one. This PR fixes this issue and adds tests.
05-17-2021 16:17:56
05-17-2021 16:17:56
transformers
11,747
closed
mbart-large-cc25 tokenization_utils_fast.py TypeError
## Environment info Hi, I am trying to fine-tune a Dutch summarization model. I used the [following](https://github.com/huggingface/notebooks/blob/master/examples/summarization.ipynb) example notebook provided by huggingface.co. To prepare the targets for the model, we need to tokenize them inside the as_target_tokenizer context manager. This will make sure the tokenizer uses the special tokens corresponding to the targets. This is achieved by running the following code:
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")

with tokenizer.as_target_tokenizer():
    print(tokenizer(["Hello, this one sentence", "This is another sentence."]))
```
However, I get the following error:
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-24-0fc6af9091da> in <module>()
----> 1 with tokenizer.as_target_tokenizer():
      2     print(tokenizer(["Hello, this one sentence", "This is another sentence."]))
      3

3 frames
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_fast.py in convert_ids_to_tokens(self, ids, skip_special_tokens)
    293         tokens = []
    294         for index in ids:
--> 295             index = int(index)
    296             if skip_special_tokens and index in self.all_special_ids:
    297                 continue

TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
```
How can I work around this TypeError? I am not a professional and this is actually the first time submitting any question at all. Thanks in advance :)
05-17-2021 14:52:50
05-17-2021 14:52:50
Hi @lysa-n, For multilingual models you must define input language(src_lang) and target language(tgt_lang). Since you are using it for summarization for the Dutch language the src_lang and tgt_lang will be the same. This should work: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25", src_lang='nl_XX', tgt_lang='nl_XX') with tokenizer.as_target_tokenizer(): print(tokenizer(["Hello, this one sentence", "This is another sentence."])) ``` Note: Please cross-check the Dutch language code<|||||>> Hi @lysa-n, > For multilingual models you must define input language(src_lang) and target language(tgt_lang). Since you are using it for summarization for the Dutch language the src_lang and tgt_lang will be the same. > This should work: > > ```python > from transformers import AutoTokenizer, AutoModelForSeq2SeqLM > > tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25", src_lang='nl_XX', tgt_lang='nl_XX') > > with tokenizer.as_target_tokenizer(): > print(tokenizer(["Hello, this one sentence", "This is another sentence."])) > ``` > > Note: Please cross-check the Dutch language code Hi @vishal-burman, This seems to work. Thank you so much!
transformers
11,746
closed
Use new evaluation loop in TrainerQA
# What does this PR do? When writing the new evaluation loop, the code of the special `Trainer` or question answering was not updated, this PR fixes that. Fixes #11721
05-17-2021 13:56:35
05-17-2021 13:56:35
transformers
11,745
closed
[Flax MLM] Refactor run mlm with optax
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-17-2021 12:49:40
05-17-2021 12:49:40
transformers
11,744
closed
[BigBird Pegasus] Make tests faster
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> BigBird Pegasus Tests are faster now ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-17-2021 10:10:33
05-17-2021 10:10:33
transformers
11,743
closed
Wrong output used by RobertaForSequenceClassification classification head
Hi, According to the [documentation](https://huggingface.co/transformers/model_doc/roberta.html#transformers.RobertaForSequenceClassification) the classification head should work `on top of the pooled output`, which makes sense considering that RoBERTa, unlike BERT, wasn't trained on the Next Sentence Prediction task, so the `<s>` token (the equivalent of `CLS`) is not as useful. However, if you look at the code [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/roberta/modeling_roberta.py#L1166) and [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/roberta/modeling_roberta.py#L1394), you'll see that it doesn't appear to actually use the pooler output for sequence classification, as one would expect. Meanwhile, RobertaForMultipleChoice does use it. It's not clear to me whether this is intended or not; however, RoBERTa using a representation of `<s>` for classification may perform considerably _worse_ than a regular BERT on some tasks.
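For context, here is a paraphrased sketch of what the linked classification head does (not the library code verbatim; hidden size, dropout and label count are placeholders):

```python
import torch
from torch import nn

class RobertaStyleClassificationHead(nn.Module):
    def __init__(self, hidden_size=768, num_labels=2, dropout=0.1):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.dropout = nn.Dropout(dropout)
        self.out_proj = nn.Linear(hidden_size, num_labels)

    def forward(self, features):
        x = features[:, 0, :]  # first token (<s>) representation, not the pooler output
        x = self.dropout(x)
        x = torch.tanh(self.dense(x))
        x = self.dropout(x)
        return self.out_proj(x)

logits = RobertaStyleClassificationHead()(torch.randn(4, 16, 768))
print(logits.shape)  # (4, 2)
```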
05-17-2021 09:58:46
05-17-2021 09:58:46
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, I'd still like to get a comment on whether this is the intended behavior. If it is, then why it is done this way?<|||||>Hello! Sorry for getting back to this so late. When porting models over to the `transformers` library, we aim to keep them identical to their original implementation. The original RoBERTa implementation in fairseq uses the same classification head, hence why it was ported like this: https://github.com/pytorch/fairseq/blob/c2e8904b6072d8eddab362ac50b324e374b5951d/fairseq/models/roberta/model.py#L382 I recommend opening an issue over at fairseq if you have questions relative to how they designed their architecture. Thank you!<|||||>Ah, I see. Didn't realize it's done the same way in the original implementation. Thank you for your response!
transformers
11,742
closed
Issue with symbolic tracing for T5
# What does this PR do? This solves the issue for symbolic tracing with T5.
05-17-2021 09:55:32
05-17-2021 09:55:32
transformers
11,741
closed
Convert blenderbot checkpoint to tensorflow (TF)
Hi! Thank you for a great project. I wonder if I can convert a BlenderBot checkpoint to TensorFlow. If I can, how do I convert the checkpoint? Please give me some pointers.
05-17-2021 09:26:50
05-17-2021 09:26:50
@patrickvonplaten Can you help me?<|||||>I converted Parl-AI's checkpoint to huggingface using `convert_blenderbot_original_pytorch_checkpoint_to_pytorch.py`. And I converted the pytorch checkpoint to a tf checkpoint using `convert_pytorch_checkpoint_to_tf2.py`. If there is something wrong, please comment.<|||||>Hey @sooftware, Could you add a code snippet you are trying to execute here? E.g. which checkpoint do you want to convert exactly?<|||||>Hi @patrickvonplaten !! I want to convert Parl-AI's blenderbot (3B, 9B) models. I tried to convert the Parl-AI checkpoint to a huggingface checkpoint with `convert_blenderbot_original_pytorch_checkpoint_to_pytorch.py`. I changed some keys.
```python
def rename_layernorm_keys(sd):
    keys = [
        "model.encoder.layernorm_embedding.weight",
        "model.encoder.layernorm_embedding.bias",
        "model.decoder.layernorm_embedding.weight",
        "model.decoder.layernorm_embedding.bias",
    ]
```
Ex) `model.encoder.layernorm_embedding.weight` => `encoder.norm_embeddings.weight`. And I got a config file with `wget https://huggingface.co/facebook/blenderbot-3B/resolve/main/config.json`. Next, I tried to convert the huggingface pytorch checkpoint to a tensorflow checkpoint with huggingface's `convert_pytorch_to_tf.py`. I set `model_type` to `bart`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,740
closed
Add visual + link to Premium Support webpage
Close #11635
05-17-2021 09:08:18
05-17-2021 09:08:18
Built docs is at https://212403-155220641-gh.circle-artifacts.com/0/docs/_build/html/index.html <img width="1360" alt="Screenshot 2021-05-17 at 11 08 58" src="https://user-images.githubusercontent.com/326577/118463639-ff238180-b6cd-11eb-8c45-a6a1e6471b60.png"> <|||||>CI failure seems unrelated
transformers
11,739
closed
Remove tapas model card
the one in https://huggingface.co/google/tapas-base is slightly but not significantly different. cc @NielsRogge
05-17-2021 07:53:02
05-17-2021 07:53:02
transformers
11,738
closed
Remove extra self from _save_checkpoint call
Currently this code is completely broken with non-distributed training. I'm not clear on how it has ever worked:
```
  File "run.py", line 152, in <module>
    trainer.train() #resume_from_checkpoint=get_last_checkpoint("/opt/ml/checkpoints"))
  File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1105, in train
    self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
  File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1202, in _maybe_log_save_evaluate
    self._save_checkpoint(model, trial, metrics=metrics)
  File "/opt/conda/lib/python3.6/site-packages/transformers/sagemaker/trainer_sm.py", line 245, in _save_checkpoint
    super()._save_checkpoint(self, model, trial, metrics=metrics)
TypeError: _save_checkpoint() got multiple values for argument 'metrics'
```
This is because the `self` argument shouldn't be passed, so `trial` ends up as `metrics` via its position.
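A minimal sketch of the corrected call, assuming a Trainer subclass like the SageMaker one in the traceback (the class name here is hypothetical):

```python
from transformers import Trainer

class PatchedTrainer(Trainer):
    def _save_checkpoint(self, model, trial, metrics=None):
        # `self` is already bound by `super()`; passing it again shifts `trial` into `metrics`.
        super()._save_checkpoint(model, trial, metrics=metrics)
```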
05-16-2021 20:41:57
05-16-2021 20:41:57
This is a PR against an older version of Transformers, which we do not accept. This code has been completely removed since then and is now fully integrated into Trainer.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,737
closed
Add regression tests for slow sentencepiece tokenizers.
This PR adds regression tests for slow sentencepiece tokenizers. These tests are needed for a refactoring in PR #11716 ## Strange findings - s2t: `_convert_token_to_id` of `"<s>"` does not give 0 ## ToDo - <s>add test for `_convert_token_to_id`</s> - done, see `test_convert_token_and_id` - <s>add test for `_convert_id_to_token`</s> - done, see `test_convert_token_and_id` - <s>add test for `get_vocab`</s> - done - <s>add test for `vocab_size`</s> - done - <s>add test for `convert_tokens_to_string`</s> - done, see `test_sentencepiece_tokenize_and_convert_tokens_to_string` in `TokenizerTesterMixin` - <s>add test for pickle</s> - is tested in `test_pickle_subword_regularization_tokenizer` - <s>manual review</s> - done - <s>fix / add reformer integration test</s> - see https://github.com/huggingface/transformers/pull/11737#issuecomment-850769064 - done - <s>add typing</s> - done - <s>add docstrings</s> - done
05-16-2021 19:22:27
05-16-2021 19:22:27
rebased on master<|||||>This PR is ready for review please. @LysandreJik @sgugger The failing test is connected to #11731<|||||>> Cool, thanks a lot for working on these tests! I think that these are already somewhat covered by the common tests, but they're fast and should help identify issues faster. > > However, in order to make sure PR #11716 can be merged, I was mentioning integration tests, rather than regression/unit tests. For example the ALBERT integration test: > > https://github.com/huggingface/transformers/blob/b8344a274fe13b390fa60c74b76117f5ea8144cb/tests/test_tokenization_albert.py#L108-L152 > > Those are particularly important when doing refactors that may affect the encoding/decoding aspect of tokenizers. > > I think this is a bit of a larger work though, so we can post "Good first issues" for the SPM-based tokenizers in a first step so that the community may help. Ok @LysandreJik . So I will extend the PR and add integration tests for the `_tokenizer` function like the one you linked above to all sentencepiece tokenizers. Do you think the already written tests can stay as they are? What other steps are needed?<|||||>Hi @PhilipMay, thanks for offering to do it! Feel free to let us know if you would like us to offer some of these to the community, as it can be a bit of work to get every tokenizer tested. Other than the integration tests, I don't think anything is needed. Also, you might be interested in rebasing on the `master` branch - we've solved the issue regarding the `run_tests_torch` timing out yesterday so by rebasing you would have reliable CI feedback.<|||||>> Hi @PhilipMay, thanks for offering to do it! Feel free to let us know if you would like us to offer some of these to the community, as it can be a bit of work to get every tokenizer tested. I was thinking to add integration tests for the tokeinzers that I want to refactor (the sentencepiece). And not foll all tokenizers. **What about this:** In this PR I add integration tests for the **sentencepiece** tokeinzers only - a full list see here #11417 After that has been merged I (or you) will open an issue asking for similar tests for all tokenizers. @LysandreJik what do you think? <|||||>Yes, sentencepiece tokenizers only, definitely! But even so, that's quite a large number of tokenizers :)<|||||>> Also, you might be interested in rebasing on the master branch - we've solved the issue regarding the run_tests_torch timing out yesterday so by rebasing you would have reliable CI feedback. Rebased on master - CI is green again. :-)<|||||>@LysandreJik I refactored the tokenizer integration test of albert; https://github.com/German-NLP-Group/transformers/blob/1894bcc5d116d0107150f4659551e6e21111d736/tests/test_tokenization_albert.py#L127 By adding a util class to `TokenizerTesterMixin` https://github.com/German-NLP-Group/transformers/blob/1894bcc5d116d0107150f4659551e6e21111d736/tests/test_tokenization_common.py#L186 And also added an integration test to Barthez: https://github.com/German-NLP-Group/transformers/blob/1894bcc5d116d0107150f4659551e6e21111d736/tests/test_tokenization_barthez.py#L99 What do you think about this "pattern"? Should I continue in that direction and add the other tokenizers?<|||||>I like it, I find it very clean!<|||||>@LysandreJik the reformer tokenizer integration test somehow fails: ```text Expected :Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet...) 
for Natural Language Understanding (NLU) and Natural Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between Jax, PyTorch and TensorFlow. Actual :Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert provides general-purpose architectures (BERT, GPT-, RoBERTa, LM, DistilBert, LNet... for Natural Language nderstanding (NL and Natural Language Generation (NLG with over pretrained models in languages and deep interoperability between ax, PyTorch and TensorFlow. ``` Characters like ")" are missing from the vocab. They are converted to `0` or `<unk>`. @LysandreJik I just pass in an simpler test text to make the test succeed. Or should we investigate this stange error to discover a possible hidden bug?<|||||>@LysandreJik while we are here - can we remove this or is it some kind of "open todo"? https://github.com/huggingface/transformers/blob/8d171628fe84bdf92ee40b5375d7265278180f14/tests/test_tokenization_common.py#L178-L184<|||||>@LysandreJik @sgugger as discussed above the suggested integration tests are added to the sentencepiece tokenizers. CI is green, IMO this is done and ready for merge. Please have a look at the strange behavior of the reformer tokenizer: https://github.com/huggingface/transformers/pull/11737#issuecomment-850769064 And this question: https://github.com/huggingface/transformers/pull/11737#issuecomment-850776366 <|||||>Pinging @patrickvonplaten regarding the Reformer test. Regarding https://github.com/huggingface/transformers/pull/11737#issuecomment-850776366 we can just remove this<|||||>> Regarding #11737 (comment) we can just remove this Done.<|||||>Ok - so this should be ready to be merged so I can continue with #11716 ?<|||||>Thanks again for all your work!
transformers
11,736
closed
Support for running Gpt-Neo 2.7B with 6 GB vram for inference
# What does this PR do? It adds functionality to allow gpt-Neo 2.7B to run in 6gb vram. If it detects that some modules are on gpu and there is not enough vram, a dict called extrastorage is created which holds the data for model.transformer.h These weights are loaded from ram to vram one at a time, reducing vram usage. Expected speed is around 1 token/2s. (slower on the first run) ## Usage 1. Have between 5 and 9.5 Gb Vram 2. run - ``` model.eval().half().to("cpu") model.transformer.wte.to("cuda") model.transformer.wpe.to("cuda") model.transformer.ln_f.to("cuda") model.lm_head.to("cuda") torch.cuda.empty_cache() ``` 3. Use `model.generate() `or `model(**inputs) ` ## Motivation Will become faster as ram->vram (pcie) bandwidth increases Running larger models on consumer hardware is important ## Incomplete I need some help with the documentation, also I'm not sure if `import copy` should be inside an if statement or not (line 769). ## Before submitting * [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? * [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. * [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). * [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Models: gpt-neo: @patil-suraj
05-16-2021 12:05:51
05-16-2021 12:05:51
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,735
closed
Problem with mT5 and the official Summarization notebook
## Environment info - `transformers` version: 4.6.0 - Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.1+cu101 (True) - Tensorflow version (GPU?): 2.4.1 (True) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @patrickvonplaten @patil-suraj @sgugger ## Information I am using mT5-small on the [official summarization notebook](https://github.com/huggingface/transformers/tree/master/examples/pytorch). However when trained, the model gets nan loss values and outputs non-sense. I made some changes to speed up the training such as loading 5% of the data, changing max input length to 256 from 1024 and the batch size to 8 from 16 however my settings work perfectly fine with t5-small and I get a high rouge score with sensible outputs and loss values. The problem seems to be mT5. ## To reproduce Here is my [Colab notebook ](https://colab.research.google.com/drive/16-6yIHFQQ1Q8meVYqFn21Tw2eoG9wliU?usp=sharing)which you can see the output at the end. TrainOutput(global_step=1276, training_loss=nan, metrics={'train_runtime': 340.4428, 'train_samples_per_second': 3.748, 'total_flos': 1196714720985600.0, 'epoch': 1.0, 'init_mem_cpu_alloc_delta': 1543725056, 'init_mem_gpu_alloc_delta': 1200707584, 'init_mem_cpu_peaked_delta': 0, 'init_mem_gpu_peaked_delta': 0, 'train_mem_cpu_alloc_delta': 10055680, 'train_mem_gpu_alloc_delta': 1203914240, 'train_mem_cpu_peaked_delta': 65540096, 'train_mem_gpu_peaked_delta': 4225469440}) ## Expected behavior Training loss should not be nan.
05-16-2021 08:56:42
05-16-2021 08:56:42
disabling fp16 seems to solve the issue of nan loss, but I wouldn't call this issue closed because this doubles the training time :(<|||||>Hey @demegire, Sadly MT5 doesn't really work with fp16. There are a bunch of issues regarding this problem...see: - https://discuss.huggingface.co/t/t5-fp16-issue-is-fixed/3139/5 - https://github.com/huggingface/transformers/issues/10830<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
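For reference, a minimal sketch of the workaround discussed in the comments above — leaving fp16 disabled when fine-tuning mT5. The argument values are placeholders; the `fp16=False` flag is the only relevant part:

```python
from transformers import Seq2SeqTrainingArguments

# Placeholder hyperparameters; the relevant part is fp16=False,
# since mT5 activations overflow in float16 and produce NaN losses.
training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-finetuned",
    per_device_train_batch_size=8,
    learning_rate=5e-5,
    num_train_epochs=1,
    predict_with_generate=True,
    fp16=False,
)
```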
transformers
11,734
closed
Can't load google/reformer-enwik8
OSError: Can't load tokenizer for 'google/reformer-enwik8'. Make sure that: - 'google/reformer-enwik8' is a correct model identifier listed on 'https://huggingface.co/models' - or 'google/reformer-enwik8' is the correct path to a directory containing relevant tokenizer files https://huggingface.co/google/reformer-enwik8?text=My+name+is+Julien+and+I+like+to
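For anyone hitting this: `google/reformer-enwik8` is a character-level model, so no tokenizer files are hosted for it and `AutoTokenizer` cannot load one. A rough sketch of using the checkpoint without a tokenizer follows; the `+2` id offset mirrors the encode/decode helpers shown on the model card and should be verified there:

```python
import torch
from transformers import ReformerModelWithLMHead

model = ReformerModelWithLMHead.from_pretrained("google/reformer-enwik8")

def encode(text):
    # Character/byte-level ids; ids 0 and 1 are assumed to be reserved for special tokens.
    return torch.tensor([[b + 2 for b in text.encode("utf-8")]])

def decode(ids):
    return bytes(i - 2 for i in ids.tolist() if i > 1).decode("utf-8", errors="ignore")

input_ids = encode("In 1965, Brooks left IBM to found the Department of")
generated = model.generate(input_ids, max_length=80)
print(decode(generated[0]))
```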
05-15-2021 12:29:48
05-15-2021 12:29:48
Duplicate of #11649
transformers
11,733
closed
CPU Memory Leak when using RoBERTa for just word vector representation
Hi, I do not use the model for training or fine-tuning. I just want to feed it strings and get their representations. My dataset is Robust04 (about 2 GB) and the max length of each document is truncated to 1024 tokens. So, I break each document into pieces of 64 tokens, represent each piece, and then concatenate the representations of the 64-token pieces to get a word-vector representation of the document with length 1024 (1024*768). I use 64 GB of CPU RAM, but the process crashes after about 40% of the documents have been represented. The code used is the following: `tokenizer = AutoTokenizer.from_pretrained('roberta-base')` `model = AutoModel.from_pretrained('roberta-base')` `cl_text = tokenizer.encode("I am using RoBERTa")` `piece = tokenizer.decode(cl_text)` `piece = tokenizer(piece, return_tensors="pt", max_length=le, pad_to_max_length=True, truncation=True, return_attention_mask=True, return_token_type_ids=True, add_special_tokens=False)` `piece = model(**piece)` `piece = piece.last_hidden_state` `piece.detach()` Would you please guide me? Thanks in advance, Regards
05-15-2021 07:47:04
05-15-2021 07:47:04
Hi @ZahraGithub, 1) Please use `model.eval()` to reduce your memory consumption. 2) From what I understood, you're processing 64 tokens in one go and total tokens are 1024. This means you'll get vector representation of (16,768) for each document right? How are you storing these representations? You won't be able to load all of them in your RAM hence that CPU memory leak. Try saving these representations to disk for each (that can vary) document to avoid memory leak.<|||||>Hi @bhavitvyamalik 1. I did it but again the memory consumption is high. 2. I can not understand what is the amount of 16768?<|||||>For 2, I think what's happening is you're storing all your representations in a list or something. You should load them off your RAM and store in it your disk (you can save it as .npy file and later load them) so as to avoid 100% memory consumption.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
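Putting the advice from the comments together, here is a minimal sketch of chunked feature extraction that avoids holding every representation in RAM — `torch.no_grad()` for inference and one `.npy` file per document (paths, chunk size, and the helper name are placeholders):

```python
import os
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base").eval()

def save_representation(document, doc_id, chunk_len=64, max_len=1024, out_dir="reps"):
    os.makedirs(out_dir, exist_ok=True)
    ids = tokenizer.encode(document, add_special_tokens=False)[:max_len]
    pieces = []
    with torch.no_grad():  # no autograd graph, so activations are freed immediately
        for start in range(0, len(ids), chunk_len):
            chunk = torch.tensor([ids[start:start + chunk_len]])
            out = model(input_ids=chunk).last_hidden_state  # (1, <=64, 768)
            pieces.append(out.squeeze(0).numpy())
    representation = np.concatenate(pieces, axis=0)  # (<=1024, 768)
    np.save(os.path.join(out_dir, f"{doc_id}.npy"), representation)  # store on disk, not in RAM
```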
transformers
11,732
open
Import `SPIECE_UNDERLINE` from `file_utils` instead of WET definition
Many places in the code define `SPIECE_UNDERLINE` like this: `SPIECE_UNDERLINE = "▁"`. Instead, it should be imported: `from transformers.file_utils import SPIECE_UNDERLINE`. I can provide a PR...
05-15-2021 06:29:51
05-15-2021 06:29:51
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I am still planning to provide a PR later.
transformers
11,731
closed
`ci/circleci: run_tests_torch` reaches 10 min. time limit
`ci/circleci: run_tests_torch` reaches 10 min. time limit - see here: https://app.circleci.com/pipelines/github/huggingface/transformers/23426/workflows/349bd527-b66a-46ed-a168-365794da6856/jobs/211948
05-15-2021 05:38:20
05-15-2021 05:38:20
Another one here: https://app.circleci.com/pipelines/github/huggingface/transformers/23647/workflows/b0de5fa6-3f1d-446f-8ce9-11461ff1fb10/jobs/214869<|||||>@sgugger are you aware of this issue?<|||||>Yes, we are aware. This is something we will work on in the next few weeks; we're just wrapping up another project first.<|||||>Seems to be fixed now. Closing.
transformers
11,730
closed
Bert2bert on Swag with very low accuracy
Hello everyone, I try to build multiple choice QA system using Bert2Bert. I follow the model given for Swag using t5 in [https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb](url) My complete code is here.[https://colab.research.google.com/drive/1MAGCi5TC1S6GNW3CFEB0f2cMkQ5gpxdN?usp=sharing](url) To integrate bert2bert model, I follow this [https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing](url) notebook. I created a Bert2BertFineTuner class considering T5FineTuner class in [https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb](url) I add the following changes to T5FineTuner class for Bert2Bert consideration. I just add > EncoderDecoderModel.from_encoder_decoder_pretrained(.) and > BertTokenizer.from_pretrained(.) ``` class Bert2BertFineTuner(pl.LightningModule): def __init__(self, hparams): super(Bert2BertFineTuner, self).__init__() self.hparams = hparams #self.model = T5ForConditionalGeneration.from_pretrained(hparams.model_name_or_path) #self.tokenizer = T5Tokenizer.from_pretrained(hparams.tokenizer_name_or_path) self.tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") self.model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased") self.model.config.decoder_start_token_id = self.tokenizer.bos_token_id self.model.config.eos_token_id = self.tokenizer.eos_token_id self.model.config.pad_token_id = self.tokenizer.pad_token_id # sensible parameters for beam search self.model.config.vocab_size = self.model.config.decoder.vocab_size self.model.config.max_length = 142 self.model.config.min_length = 56 self.model.config.no_repeat_ngram_size = 3 self.model.config.early_stopping = True self.model.config.length_penalty = 2.0 self.model.config.num_beams = 4 def is_logger(self): return self.trainer.proc_rank <= 0 def forward( self, input_ids=None, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, lm_labels=None ): return self.model( input_ids=input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, labels=lm_labels, ) def _step(self, batch): lm_labels = batch["target_ids"] lm_labels[lm_labels[:, :] == self.tokenizer.pad_token_id] = -100 outputs = self( input_ids=batch["source_ids"], attention_mask=batch["source_mask"], lm_labels=lm_labels, decoder_attention_mask=batch['target_mask'], decoder_input_ids=batch['target_ids'] ) loss = outputs[0] return loss ``` As above, I have updated the model, config, and tokenizer for bert2bert model. Also, sample input and target encoded pairs are as: ``` data = dataset[6] print(tokenizer.decode(data['source_ids'])) print("**") print(tokenizer.decode(data['target_ids'])) ``` [CLS] context : in what spanish speaking north american country can you get a great cup of coffee? options : 1 : mildred's coffee shop 2 : mexico 3 : diner 4 : kitchen 5 : canteen < / s > [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] ** [CLS] 2 < / s > [SEP] ``` ``` In the above example, 2 is indicating the label. 
And I run the model with the following parameters: `{'output_dir': 't5_swag', 'model_name_or_path': 'bert2bert', 'tokenizer_name_or_path': 'bert-base', 'max_seq_length': 512, 'learning_rate': 3e-05, 'weight_decay': 0.0, 'adam_epsilon': 1e-08, 'warmup_steps': 0, 'train_batch_size': 8, 'eval_batch_size': 8, 'num_train_epochs': 4, 'gradient_accumulation_steps': 16, 'n_gpu': 1, 'early_stop_callback': False, 'fp_16': False, 'opt_level': 'O1', 'max_grad_norm': 1.0, 'seed': 42, 'data_dir': ''}` It finishes the execution with following loss values: ``` Validation sanity check: 100% 5/5 [00:03<00:00, 1.71it/s] INFO:__main__:LOOKING AT train INFO:__main__:hello Epoch 4: 100% 1370/1370 [35:51<00:00, 1.57s/it, loss=0.017, v_num=0, val_loss=0.268] Validating: 100% 153/153 [01:31<00:00, 1.67it/s] INFO:__main__:***** Validation results ***** INFO:__main__:avg_val_loss = tensor(0.2726, device='cuda:0') INFO:__main__:loss = tensor(0.2695, device='cuda:0') INFO:__main__:train_loss = tensor(0.2695, device='cuda:0') INFO:__main__:val_loss = tensor(0.2726, device='cuda:0') Validating: 100% 153/153 [01:31<00:00, 1.67it/s] INFO:__main__:***** Validation results ***** INFO:__main__:avg_train_loss = tensor(1.1325, device='cuda:0') INFO:__main__:avg_val_loss = tensor(0.2689, device='cuda:0') INFO:__main__:epoch = 0 INFO:__main__:loss = tensor(0.2677, device='cuda:0') INFO:__main__:train_loss = tensor(0.2677, device='cuda:0') INFO:__main__:val_loss = tensor(0.2689, device='cuda:0') Validating: 100% 153/153 [01:33<00:00, 1.64it/s] INFO:__main__:***** Validation results ***** INFO:__main__:avg_train_loss = tensor(0.2719, device='cuda:0') INFO:__main__:avg_val_loss = tensor(0.2686, device='cuda:0') INFO:__main__:epoch = 1 INFO:__main__:loss = tensor(0.2674, device='cuda:0') INFO:__main__:train_loss = tensor(0.2674, device='cuda:0') INFO:__main__:val_loss = tensor(0.2686, device='cuda:0') Validating: 100% 153/153 [01:33<00:00, 1.64it/s] INFO:__main__:***** Validation results ***** INFO:__main__:avg_train_loss = tensor(0.2702, device='cuda:0') INFO:__main__:avg_val_loss = tensor(0.2684, device='cuda:0') INFO:__main__:epoch = 2 INFO:__main__:loss = tensor(0.2623, device='cuda:0') INFO:__main__:train_loss = tensor(0.2623, device='cuda:0') INFO:__main__:val_loss = tensor(0.2684, device='cuda:0') ``` The validation part: ``` model.model.eval() outputs = [] targets = [] for batch in tqdm(loader): outs = model.model.generate(input_ids=batch['source_ids'].cuda(), attention_mask=batch['source_mask'].cuda()) dec = [tokenizer.decode(ids) for ids in outs] target = [tokenizer.decode(ids) for ids in batch["target_ids"]] outputs.extend(dec) targets.extend(target) ``` metrics.accuracy_score(targets1, outputs1) 0.20065520065520065 ``` The accuracy is too low. What can the reason be? Most probably I am missing something, but I could not find it.
05-14-2021 21:30:00
05-14-2021 21:30:00
Hey @helloworld123-lab, Thanks for the issue :-) Is there a specific reason to use Bert2bert for SWAG instead of just a BERT model?<|||||>I am sorry for the issue :) Actually, I am new to this field; I just started working on models using transformers. T5 is a text-to-text model, and I just wanted to see how it would perform with Bert2Bert. Is this the wrong approach to SWAG?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
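For reference, a minimal sketch of the alternative hinted at in the first comment — treating SWAG as multiple choice with a plain BERT head instead of generating the answer with an encoder-decoder. Shapes and the example label are illustrative, not taken from the notebooks above:

```python
import torch
from transformers import BertForMultipleChoice, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMultipleChoice.from_pretrained("bert-base-uncased")

context = "In what Spanish-speaking North American country can you get a great cup of coffee?"
choices = ["Mildred's coffee shop", "Mexico", "Diner", "Kitchen", "Canteen"]

# Encode one (context, choice) pair per option, then stack to (batch, num_choices, seq_len).
encoding = tokenizer([context] * len(choices), choices, padding=True, return_tensors="pt")
inputs = {name: tensor.unsqueeze(0) for name, tensor in encoding.items()}
labels = torch.tensor([1])  # index of the correct option ("Mexico")

outputs = model(**inputs, labels=labels)
print(outputs.loss, outputs.logits.argmax(dim=-1))
```

The multiple-choice head scores each option directly, so accuracy is computed over option indices rather than over generated strings, which sidesteps the decoding issues seen above.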
transformers
11,729
closed
[Benchmark]
# 🖥 Benchmarking `transformers` ## Benchmark Which part of `transformers` did you benchmark? ## Set-up What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use? ## Results Put your results here!
05-14-2021 17:51:34
05-14-2021 17:51:34
transformers
11,728
closed
ImportError: cannot import name 'load_dataset' from 'datasets'
## Environment info - `transformers` version: 4.6.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.3 - PyTorch version (GPU?): 1.7.1 (True) - Using GPU in script?: Possibly? - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik and @lhoestq helped on the other issues that I looked at so they might be able to help here. But I'll take anyone really. ## Information I am attempting to run finBERT and am having trouble with the datasets package. I looked at a couple of other issues from people who had similar problems but none of their solutions worked for me. I'm sorry if I didn't provide some information or missed something obvious, I'm new to programming and very new to machine learning so I don't quite know what/where everything is yet! The problem arises when using: * [ ] my own modified scripts: (give details below) I am using the first model in this [example script](https://github.com/yya518/FinBERT/blob/master/FinBert%20Model%20Example.ipynb) from the finBERT model developers. The tasks I am working on is: * [ ] my own task or dataset: (give details below) I am just trying to use transformers to run finBERT without using the API. ## To reproduce Steps to reproduce the behavior: 1. Import datasets That's as far as I'm able to get before I get this error: """"" ImportError Traceback (most recent call last) <ipython-input-5-f2837d51185d> in <module> 24 from sklearn.metrics import classification_report 25 import transformers ---> 26 from transformers import AutoModel, BertTokenizerFast 27 28 ~\anaconda3\lib\site-packages\transformers\__init__.py in __getattr__(self, name) 2485 if name == "__version__": 2486 return __version__ -> 2487 return super().__getattr__(name) 2488 2489 sys.modules[__name__] = _LazyModule(__name__, _import_structure) ~\anaconda3\lib\site-packages\transformers\file_utils.py in __getattr__(self, name) 1698 elif name in self._class_to_module.keys(): 1699 module = self._get_module(self._class_to_module[name]) -> 1700 value = getattr(module, name) 1701 else: 1702 raise AttributeError(f"module {self.__name__} has no attribute {name}") ~\anaconda3\lib\site-packages\transformers\file_utils.py in __getattr__(self, name) 1697 value = self._get_module(name) 1698 elif name in self._class_to_module.keys(): -> 1699 module = self._get_module(self._class_to_module[name]) 1700 value = getattr(module, name) 1701 else: ~\anaconda3\lib\site-packages\transformers\models\auto\__init__.py in _get_module(self, module_name) 196 197 def _get_module(self, module_name: str): --> 198 return importlib.import_module("." 
+ module_name, self.__name__) 199 200 sys.modules[__name__] = _LazyModule(__name__, _import_structure) ~\anaconda3\lib\importlib\__init__.py in import_module(name, package) 125 break 126 level += 1 --> 127 return _bootstrap._gcd_import(name[level:], package, level) 128 129 ~\anaconda3\lib\site-packages\transformers\models\auto\modeling_auto.py in <module> 197 from ..pegasus.modeling_pegasus import PegasusForCausalLM, PegasusForConditionalGeneration, PegasusModel 198 from ..prophetnet.modeling_prophetnet import ProphetNetForCausalLM, ProphetNetForConditionalGeneration, ProphetNetModel --> 199 from ..rag.modeling_rag import ( # noqa: F401 - need to import all RagModels to be in globals() function 200 RagModel, 201 RagSequenceForGeneration, ~\anaconda3\lib\site-packages\transformers\models\rag\modeling_rag.py in <module> 27 from ...utils import logging 28 from .configuration_rag import RagConfig ---> 29 from .retrieval_rag import RagRetriever 30 31 ~\anaconda3\lib\site-packages\transformers\models\rag\retrieval_rag.py in <module> 37 38 if is_datasets_available(): ---> 39 from datasets import Dataset, load_dataset, load_from_disk 40 41 if is_faiss_available(): ImportError: cannot import name 'load_dataset' from 'datasets' (C:\Users\bookw\Dropbox\Equity-Analyst-Project\equity-analysts-sentiment\datasets.py) """'" ## Expected behavior I would expect the package to import correctly.
05-14-2021 16:27:23
05-14-2021 16:27:23
Hi ! When you `import datasets`, python looks at your installed packages, but also at the modules defined in the directory from which you run your code. It is the case because the current working directory is added to your python path when you run your code. In your case I think it tries to load your `datasets.py` in the `equity-analysts-sentiment` folder, since the name is conflicting. If you rename this file you should be good.<|||||>Ok so I renamed the file and it still wouldn't run. I also tried moving it around to run it in other directories and see if I had better luck but I still got this same error everywhere I tried it.<|||||>If you're still having this error: ``` ImportError: cannot import name 'load_dataset' from 'datasets' (C:\Users\bookw\Dropbox\Equity-Analyst-Project\equity-analysts-sentiment\datasets.py) ``` Then it probably means that `C:\Users\bookw\Dropbox\Equity-Analyst-Project\equity-analysts-sentiment` is still in your python path. Can you check that you didn't add this path to your python path via environment variables or via your IDE ? I know that some of them like PyCharm add project directories to the python path automatically for example.<|||||>I don't think I'm using a virtual enviroment or IDE, just Jupyter Notebooks. I'll paste my python path below but I don't see that in there. C:\Users\bookw\anaconda3;C:\Users\bookw\anaconda3\Library\mingw-w64\bin;C:\Users\bookw\anaconda3\Library\usr\bin;C:\Users\book w\anaconda3\Library\bin;C:\Users\bookw\anaconda3\Scripts;C:\Users\bookw\anaconda3\bin;C:\Users\bookw\anaconda3\condabin;C:\Use rs\bookw\anaconda3;C:\Users\bookw\anaconda3\Library\mingw-w64\bin;C:\Users\bookw\anaconda3\Library\usr\bin;C:\Users\bookw\anac onda3\Library\bin;C:\Users\bookw\anaconda3\Scripts;C:\Program Files\Common Files\Oracle\Java\javapath;C:\Windows\system32;C:\W indows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0;C:\Windows\System32\OpenSSH;C:\Program Files (x86)\ NVIDIA Corporation\PhysX\Common;C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR;C:\Program Files\Git\cmd;C:\Program Files\P uTTY;C:\Program Files\dotnet;C:\Program Files\Microsoft SQL Server\130\Tools\Binn;C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0;C: \WINDOWS\System32\OpenSSH;C:\Users\bookw\AppData\Local\Microsoft\WindowsApps;C:\Users\bookw\AppData\Local\Programs\MiKTeX\mikt ex\bin\x64;C:\Users\bookw\.dotnet\tools;.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I am facing the same issue when trying to follow the datasets tutorial from the Huggingface course. The line `from datasets import load_dataset` causes the following error: `ImportError: cannot import name 'load_dataset' from 'datasets' (unknown location)`. My environment: - macOS Big Sur 11.6. on M1 Macbook - python 3.8.0 - conda 4.11.0 - transformers 4.16.2 - datasets 1.18.3 (installed with `conda install -c huggingface -c conda-forge datasets`) - torch 1.10.2 The Colab notebook provided by the course works fine. This error occurs only locally. Could this be an M1 related issue on the Macbook? 
I have had problems in the past with conda installations and also with tensorflow on the M1. @eadsa1998 Did you manage to resolve the problem?<|||||>the same issue also<|||||>I had the same issue and solved it by reinstalling the datasets package.<|||||>the same issue also now<|||||>I'm having the same issue, and it still doesn't work after reinstalling the datasets package.<|||||>Can you check that you don't have a directory named "datasets" or a file "datasets.py" in your working directory or in directories in your python path (including the ones that your IDE may be adding)? The ImportError can also show the location of the directory/file that is imported instead of the `datasets` package
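A quick diagnostic sketch for the shadowing problem described in this thread — check where Python actually resolves `datasets` from before importing anything else:

```python
import datasets

# If this prints a path inside your project (e.g. .../equity-analysts-sentiment/datasets.py)
# instead of .../site-packages/datasets/__init__.py, a local file or folder is
# shadowing the installed package: rename or remove it, or start Python elsewhere.
print(datasets.__file__)

from datasets import load_dataset  # should resolve once the shadowing module is gone
```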
transformers
11,727
closed
Improvements to Flax finetuning script
# What does this PR do? - Ensures we actually use the `weight_decay` command-line argument - Simplified `jax.value_and_grad` by removing the auxiliary (which we don't use) - Simplified replication logic in eval step - Fixes a bug in RNG handling. We weren’t splitting them appropriately before sharding them during training, which is not good practice, RNGs should always be split and not re-used, see: https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#jax-prng Note the new RNG handling affects the training accuracy, so I reran all experiments and report the new numbers, which aren't much different from the previous ones. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
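A minimal sketch of the RNG handling referred to in the changelist above — split the key before sharding so each device and each step gets a fresh key (variable names are illustrative):

```python
import jax

rng = jax.random.PRNGKey(0)

# Never re-use a PRNG key: split off a fresh key for dropout each step,
# then give every local device its own key before sharding the batch.
rng, step_rng = jax.random.split(rng)
dropout_rngs = jax.random.split(step_rng, jax.local_device_count())
```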
05-14-2021 14:22:58
05-14-2021 14:22:58
transformers
11,726
closed
[Flax] Correct example script
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Remove useless arg & make sure that state is not replicated 2 times in a row. Thanks for spotting it @marcvanzee ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-14-2021 11:00:46
05-14-2021 11:00:46