| column | dtype | range |
|:--|:--|:--|
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
15,546
closed
GPT-J doc tied weights mistake?
https://github.com/huggingface/transformers/blob/5f1918a4a8ed893822aa7dd2b75acf83f255ad79/src/transformers/models/gptj/modeling_gptj.py#L686 ``` The GPT-J Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). ``` The LM head isn't actually tied to the input embeddings for GPT-J.
02-07-2022 17:02:34
02-07-2022 17:02:34
Good catch!
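For completeness, one way to verify that the docstring is indeed wrong is to compare the storage of the two weight tensors. This is only an illustrative sketch: the tiny config values are arbitrary (they just avoid downloading the full checkpoint), and the `transformer.wte` / `lm_head` attribute names are taken from the modeling file linked above.

```python
from transformers import GPTJConfig, GPTJForCausalLM

# A small randomly initialised GPT-J is enough to inspect the module wiring;
# the sizes below are arbitrary and only keep the check fast.
config = GPTJConfig(n_embd=256, n_layer=2, n_head=4, vocab_size=512)
model = GPTJForCausalLM(config)

# Tied weights share the same underlying storage, so their data pointers match.
tied = model.lm_head.weight.data_ptr() == model.transformer.wte.weight.data_ptr()
print(f"lm_head tied to input embeddings: {tied}")  # prints False if the head is untied, as the issue reports
```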
transformers
15,545
closed
Not able to load CharacterBERT
@LysandreJik @helboukkouri I faced a problem when I tried to load CharacterBERT using the following code: ```python from transformers import AutoTokenizer, AutoConfig from transformers import AutoModel config = AutoConfig.from_pretrained("helboukkouri/character-bert") tokenizer = AutoTokenizer.from_pretrained("helboukkouri/character-bert") model = AutoModel.from_pretrained("helboukkouri/character-bert") ``` I got the following error because the `model_type` was not found: ``` KeyError Traceback (most recent call last) /tmp/ipykernel_208703/3530088082.py in <module> 38 39 ---> 40 config = AutoConfig.from_pretrained("helboukkouri/character-bert") 41 tokenizer = AutoTokenizer.from_pretrained("helboukkouri/character-bert") 42 model = AutoModel.from_pretrained("helboukkouri/character-bert") ~/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 630 return config_class.from_pretrained(pretrained_model_name_or_path, **kwargs) 631 elif "model_type" in config_dict: --> 632 config_class = CONFIG_MAPPING[config_dict["model_type"]] 633 return config_class.from_dict(config_dict, **kwargs) 634 else: ~/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py in __getitem__(self, key) 345 return self._extra_content[key] 346 if key not in self._mapping: --> 347 raise KeyError(key) 348 value = self._mapping[key] 349 module_name = model_type_to_module_name(key) KeyError: 'character_bert' ``` The same issue happens when I try any variant of CharacterBERT. Thanks for your help! ## Environment info - `transformers` version: 4.16.2 - Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.10 - Python version: 3.8.12 - PyTorch version (GPU?): 1.10.1 (True) - Tensorflow version (GPU?): 2.8.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
02-07-2022 15:43:48
02-07-2022 15:43:48
Hi, as far as I know, the model has not yet been added to the library (see #15061). If you're interested in character/byte-level models, we do have 3 models available in the library right now: * [CANINE](https://huggingface.co/docs/transformers/model_doc/canine) * [Perceiver](https://huggingface.co/docs/transformers/model_doc/perceiver) * [ByT5](https://huggingface.co/docs/transformers/model_doc/byt5) <|||||>Hi all, so indeed the model has not been integrated into the library yet 😢, which is unfortunate, but at the same time I can't seem to find the time to work on that lately. If you're interested, @hadi-abdine, I can ping you whenever I make progress on this. And by the way, there is also [my own code](https://github.com/helboukkouri/character-bert/) which could be enough for you to start using the model. I would just add one small comment regarding what @NielsRogge said: CharacterBERT, despite its name, is actually a word-level model (i.e. it embeds pre-tokenized pieces of strings or "words"). However, to do so, it aggregates character-level (actually, byte-level) information at the level of each of these words. Anyway, thank you for your interest in my work! 😊<|||||>> Hi all, so indeed the model has not been integrated into the library yet 😢, which is unfortunate, but at the same time I can't seem to find the time to work on that lately. If you're interested, @hadi-abdine, I can ping you whenever I make progress on this. And by the way, there is also [my own code](https://github.com/helboukkouri/character-bert/) which could be enough for you to start using the model. > > I would just add one small comment regarding what @NielsRogge said: CharacterBERT, despite its name, is actually a word-level model (i.e. it embeds pre-tokenized pieces of strings or "words"). However, to do so, it aggregates character-level (actually, byte-level) information at the level of each of these words. > > Anyway, thank you for your interest in my work! 😊 Thanks @NielsRogge and @helboukkouri for your replies. @helboukkouri, I just checked your code; I think it's enough for me to use the model. Thank you! But of course, I am also interested to know when CharacterBERT will be added to Hugging Face.
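For readers who land here looking for a working character-level alternative, here is a minimal sketch of loading one of the models mentioned above (CANINE). The checkpoint name `google/canine-s` is assumed here for illustration; it is one of the published CANINE checkpoints.

```python
from transformers import AutoTokenizer, AutoModel

# CANINE operates directly on Unicode code points, so it loads through the
# Auto classes without the model_type lookup problem seen above.
tokenizer = AutoTokenizer.from_pretrained("google/canine-s")
model = AutoModel.from_pretrained("google/canine-s")

inputs = tokenizer("CharacterBERT-style inputs, but at the character level.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # one hidden state per input character (plus special tokens)
```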
transformers
15,544
closed
[SpeechRecognition Seq2Seq] CUDA out of memory when training on GPU
When training a [`wav2vec2-2-bert-large`](https://huggingface.co/sanchit-gandhi/wav2vec2-2-bert-large/blob/main/create_model.py) model on the LibriSpeech ASR corpus and on an NVIDIA Tesla V100 GPU with the following training hyperparameters: * `per_device_train_batch_size=4` * `per_device_eval_batch_size=4` * `gradient_accumulation_steps=2` * `generation_num_beams=1` the GPU memory is exhausted and an out-of-memory error is returned: `RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 15.78 GiB total capacity; 13.46 GiB already allocated; 5.25 MiB free; 13.93 GiB reserved in total by PyTorch)` Reducing the training batch size to 1 and increasing the number of gradient accumulation steps still returns an out-of-memory error. What measures can be taken to effectively reduce memory usage?
02-07-2022 15:36:34
02-07-2022 15:36:34
Hey @sanchit-gandhi, Could you provide the exactly training command that you used as well as your environment info so that I can verify on a V100 from my side? <|||||>Model script: ```py # checkpoints to leverage encoder_id = "facebook/wav2vec2-large-lv60" decoder_id = "bert-large-uncased" feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id) feature_extractor.save_pretrained("./") tokenizer = AutoTokenizer.from_pretrained(decoder_id) tokenizer.save_pretrained("./") model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id, encoder_add_adapter=True) model.config.encoder.feat_proj_dropout = 0.0 model.config.encoder.final_dropout = 0.0 model.config.encoder.mask_time_prob = 0.1 model.config.decoder_start_token_id = tokenizer.cls_token_id model.config.pad_token_id = tokenizer.pad_token_id model.config.eos_token_id = tokenizer.sep_token_id model.config.max_length = 50 model.config.num_beams = 1 model.config.encoder.layerdrop = 0.0 model.config.use_cache = False model.config.decoder.use_cache = False model.config.processor_class = "Wav2Vec2Processor" # check if generation works out = model.generate(torch.ones((1, 2000))) model.save_pretrained("./") ``` Bash script: ``` #!/usr/bin/env bash CUDA_AVAILABLE_DEVICES=0 python run_speech_recognition_seq2seq.py \ --dataset_name="librispeech_asr" \ --model_name_or_path="./" \ --dataset_config_name="clean" \ --train_split_name="train.100" \ --eval_split_name="validation" \ --output_dir="./" \ --preprocessing_num_workers="1" \ --length_column_name="input_length" \ --overwrite_output_dir \ --num_train_epochs="1" \ --per_device_train_batch_size="4" \ --per_device_eval_batch_size="4" \ --gradient_accumulation_steps="2" \ --generation_max_length="40" \ --generation_num_beams="1" \ --learning_rate="3e-4" \ --warmup_steps="500" \ --evaluation_strategy="steps" \ --text_column_name="text" \ --save_steps="500" \ --eval_steps="500" \ --logging_steps="1" \ --save_total_limit="1" \ --freeze_feature_encoder \ --gradient_checkpointing \ --fp16 \ --group_by_length \ --predict_with_generate \ --do_lower_case \ --do_eval --do_train \ --push_to_hub \ --use_auth_token ``` Environment: ``` - `transformers` version: 4.17.0.dev0 - Platform: Linux-5.11.0-1028-gcp-x86_64-with-glibc2.33 - Python version: 3.9.5 - PyTorch version (GPU?): 1.10.2+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.4.0 (gpu) - Jax version: 0.2.28 - JaxLib version: 0.1.76 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ``` <|||||>Great, thanks for the repro! In this case, it looks like you are using the default Adam optimizer which can be quite heavy (it uses 3x the model parameters for it's optimizer state). In a first step, I would try to replace torch's native Adam by https://github.com/facebookresearch/bitsandbytes' Adam as shown here: https://github.com/huggingface/transformers/blob/552f8d30917cabd738d1c32a9e047f2da3ae1b28/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py#L678 If this still uses to much memory, I'd probably start looking into using adafactor instead. This should be as simlpe as adding a `--adafactor` flag to the command above.<|||||>Thanks for the reply, Patrick! I first tried using the 8-bit implementation of Adam from 'bits and bytes' that you cited. Even with a batch size of 1, this throws the CUDA out of memory error on the GPU. 
Upon inspection of the codebase at https://github.com/facebookresearch/bitsandbytes, `Adam8bit` appears not to support the option of Adafactor. Instead, I tried using the Hugging Face Adafactor optimizer at https://huggingface.co/docs/transformers/main_classes/optimizer_schedules#transformers.Adafactor. However, despite this change, a batch size of 1 still exceeds the GPU memory limit. <details> <summary> `run_speech_recognition_seq2seq.py` with 8bit </summary> ``` #!/usr/bin/env python # coding=utf-8 # Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Fine-tuning the library models for sequence to sequence speech recognition. """ # You can also adapt this script on your own sequence to sequence speech # recognition task. Pointers for this are left as comments. import logging import os import sys from dataclasses import dataclass, field from typing import Any, Dict, List, Optional, Union import datasets import torch from datasets import DatasetDict, load_dataset, load_metric import bitsandbytes as bnb import transformers from transformers import ( AutoConfig, AutoFeatureExtractor, AutoModelForSpeechSeq2Seq, AutoProcessor, AutoTokenizer, HfArgumentParser, Seq2SeqTrainer, Seq2SeqTrainingArguments, set_seed, ) from transformers.trainer_pt_utils import get_parameter_names from transformers.trainer_utils import get_last_checkpoint, is_main_process from transformers.utils import check_min_version from transformers.utils.versions import require_version # Will error if the minimal version of Transformers is not installed. Remove at your own risks. check_min_version("4.17.0.dev0") require_version("datasets>=1.18.0", "To fix: pip install -r examples/pytorch/speech-recognition/requirements.txt") logger = logging.getLogger(__name__) @dataclass class ModelArguments: """ Arguments pertaining to which model/config/tokenizer we are going to fine-tune from. 
""" model_name_or_path: str = field( metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"} ) config_name: Optional[str] = field( default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"} ) tokenizer_name: Optional[str] = field( default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"} ) feature_extractor_name: Optional[str] = field( default=None, metadata={"help": "feature extractor name or path if not the same as model_name"} ) cache_dir: Optional[str] = field( default=None, metadata={"help": "Where to store the pretrained models downloaded from huggingface.co"}, ) use_fast_tokenizer: bool = field( default=True, metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."}, ) model_revision: str = field( default="main", metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."}, ) use_auth_token: bool = field( default=False, metadata={ "help": "Will use the token generated when running `transformers-cli login` (necessary to use this script " "with private models)." }, ) freeze_feature_encoder: bool = field( default=True, metadata={"help": "Whether to freeze the feature encoder layers of the model."} ) @dataclass class DataTrainingArguments: """ Arguments pertaining to what data we are going to input our model for training and eval. """ dataset_name: str = field( default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."} ) dataset_config_name: Optional[str] = field( default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."} ) text_column: Optional[str] = field( default=None, metadata={"help": "The name of the column in the datasets containing the full texts (for summarization)."}, ) overwrite_cache: bool = field( default=False, metadata={"help": "Overwrite the cached training and evaluation sets"} ) preprocessing_num_workers: Optional[int] = field( default=None, metadata={"help": "The number of processes to use for the preprocessing."}, ) max_train_samples: Optional[int] = field( default=None, metadata={ "help": "For debugging purposes or quicker training, truncate the number of training examples to this " "value if set." }, ) max_eval_samples: Optional[int] = field( default=None, metadata={ "help": "For debugging purposes or quicker training, truncate the number of evaluation examples to this " "value if set." }, ) audio_column_name: str = field( default="audio", metadata={"help": "The name of the dataset column containing the audio data. Defaults to 'audio'"}, ) text_column_name: str = field( default="text", metadata={"help": "The name of the dataset column containing the text data. Defaults to 'text'"}, ) max_duration_in_seconds: float = field( default=20.0, metadata={ "help": "Truncate audio files that are longer than `max_duration_in_seconds` seconds to 'max_duration_in_seconds`" }, ) min_duration_in_seconds: float = field( default=0.0, metadata={"help": "Filter audio files that are shorter than `min_duration_in_seconds` seconds"} ) preprocessing_only: bool = field( default=False, metadata={ "help": "Whether to only do data preprocessing and skip training. " "This is especially useful when data preprocessing errors out in distributed training due to timeout. 
" "In this case, one should run the preprocessing in a non-distributed setup with `preprocessing_only=True` " "so that the cached datasets can consequently be loaded in distributed training" }, ) train_split_name: str = field( default="train", metadata={ "help": "The name of the training data set split to use (via the datasets library). Defaults to 'train'" }, ) eval_split_name: str = field( default="test", metadata={ "help": "The name of the training data set split to use (via the datasets library). Defaults to 'train'" }, ) do_lower_case: bool = field( default=True, metadata={"help": "Whether the target text should be lower cased."}, ) @dataclass class DataCollatorSpeechSeq2SeqWithPadding: """ Data collator that will dynamically pad the inputs received. Args: processor ([`Wav2Vec2Processor`]) The processor used for proccessing the data. decoder_start_token_id (`int`) The begin-of-sentence of the decoder. """ processor: Any decoder_start_token_id: int def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: # split inputs and labels since they have to be of different lenghts and need # different padding methods input_features = [{"input_values": feature["input_values"]} for feature in features] label_features = [{"input_ids": feature["labels"]} for feature in features] batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt") labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt") # replace padding with -100 to ignore loss correctly labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100) # if bos token is appended in previous tokenization step, # cut bos token here as it's append later anyways if (labels[:, 0] == self.decoder_start_token_id).all().cpu().item(): labels = labels[:, 1:] batch["labels"] = labels return batch def main(): # 1. Parse input arguments # See all possible arguments in src/transformers/training_args.py # or by passing the --help flag to this script. # We now keep distinct sets of args, for a cleaner separation of concerns. parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments)) if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): # If we pass only one argument to the script and it's the path to a json file, # let's parse it to get our arguments. model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1])) else: model_args, data_args, training_args = parser.parse_args_into_dataclasses() # 2. 
Setup logging logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) log_level = training_args.get_process_log_level() logger.setLevel(log_level) datasets.utils.logging.set_verbosity(log_level) transformers.utils.logging.set_verbosity(log_level) transformers.utils.logging.enable_default_handler() transformers.utils.logging.enable_explicit_format() logger.setLevel(logging.INFO if is_main_process(training_args.local_rank) else logging.WARN) # Log on each process the small summary: logger.warning( f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}" f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}" ) logger.info(f"Training/evaluation parameters {training_args}") # Set the verbosity to info of the Transformers logger (on main process only): if is_main_process(training_args.local_rank): transformers.utils.logging.set_verbosity_info() logger.info("Training/evaluation parameters %s", training_args) # 3. Detecting last checkpoint and eventualy continue from last checkpoint last_checkpoint = None if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir: last_checkpoint = get_last_checkpoint(training_args.output_dir) if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0: raise ValueError( f"Output directory ({training_args.output_dir}) already exists and is not empty. " "Use --overwrite_output_dir to overcome." ) elif last_checkpoint is not None and training_args.resume_from_checkpoint is None: logger.info( f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change " "the `--output_dir` or add `--overwrite_output_dir` to train from scratch." ) # Set seed before initializing model. set_seed(training_args.seed) # 4. Load dataset raw_datasets = DatasetDict() if training_args.do_train: raw_datasets["train"] = load_dataset( data_args.dataset_name, data_args.dataset_config_name, split=data_args.train_split_name ) if training_args.do_eval: raw_datasets["eval"] = load_dataset( data_args.dataset_name, data_args.dataset_config_name, split=data_args.eval_split_name ) if data_args.audio_column_name not in next(iter(raw_datasets.values())).column_names: raise ValueError( f"--audio_column_name '{data_args.audio_column_name}' not found in dataset '{data_args.dataset_name}'. " "Make sure to set `--audio_column_name` to the correct audio column - one of " f"{', '.join(next(iter(raw_datasets.values())).column_names)}." ) if data_args.text_column_name not in next(iter(raw_datasets.values())).column_names: raise ValueError( f"--text_column_name {data_args.text_column_name} not found in dataset '{data_args.dataset_name}'. " "Make sure to set `--text_column_name` to the correct text column - one of " f"{', '.join(next(iter(raw_datasets.values())).column_names)}." ) # 5. 
Load pretrained model, tokenizer, and feature extractor # # Distributed training: # The .from_pretrained methods guarantee that only one local process can concurrently config = AutoConfig.from_pretrained( model_args.config_name if model_args.config_name else model_args.model_name_or_path, cache_dir=model_args.cache_dir, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) feature_extractor = AutoFeatureExtractor.from_pretrained( model_args.feature_extractor_name if model_args.feature_extractor_name else model_args.model_name_or_path, cache_dir=model_args.cache_dir, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) tokenizer = AutoTokenizer.from_pretrained( model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path, cache_dir=model_args.cache_dir, use_fast=model_args.use_fast_tokenizer, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) model = AutoModelForSpeechSeq2Seq.from_pretrained( model_args.model_name_or_path, config=config, cache_dir=model_args.cache_dir, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) if model.config.decoder_start_token_id is None: raise ValueError("Make sure that `config.decoder_start_token_id` is correctly defined") if model_args.freeze_feature_encoder: model.freeze_feature_encoder() # 6. Resample speech dataset if necassary dataset_sampling_rate = next(iter(raw_datasets.values())).features[data_args.audio_column_name].sampling_rate if dataset_sampling_rate != feature_extractor.sampling_rate: raw_datasets = raw_datasets.cast_column( data_args.audio_column_name, datasets.features.Audio(sampling_rate=feature_extractor.sampling_rate) ) # 7. Preprocessing the datasets. # We need to read the audio files as arrays and tokenize the targets. 
max_input_length = data_args.max_duration_in_seconds * feature_extractor.sampling_rate min_input_length = data_args.min_duration_in_seconds * feature_extractor.sampling_rate audio_column_name = data_args.audio_column_name num_workers = data_args.preprocessing_num_workers text_column_name = data_args.text_column_name model_input_name = feature_extractor.model_input_names[0] do_lower_case = data_args.do_lower_case if data_args.max_train_samples is not None: raw_datasets["train"] = raw_datasets["train"].select(range(data_args.max_train_samples)) if data_args.max_eval_samples is not None: raw_datasets["eval"] = raw_datasets["eval"].select(range(data_args.max_eval_samples)) def prepare_dataset(batch): # process audio sample = batch[audio_column_name] inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"]) # process audio length batch[model_input_name] = inputs.input_values[0] batch["input_length"] = len(batch["input_values"]) # process targets input_str = batch[text_column_name].lower() if do_lower_case else batch[text_column_name] batch["labels"] = tokenizer(input_str).input_ids return batch with training_args.main_process_first(desc="dataset map pre-processing"): vectorized_datasets = raw_datasets.map( prepare_dataset, remove_columns=next(iter(raw_datasets.values())).column_names, num_proc=data_args.preprocessing_num_workers, desc="preprocess train dataset", ) # filter data that is shorter than min_input_length or longer than # max_input_length def is_audio_in_length_range(length): return length > min_input_length and length < max_input_length vectorized_datasets = vectorized_datasets.filter( is_audio_in_length_range, num_proc=num_workers, input_columns=["input_length"], ) # for large datasets it is advised to run the preprocessing on a # single machine first with `args.preprocessing_only` since there will mostly likely # be a timeout when running the script in distributed mode. # In a second step `args.preprocessing_only` can then be set to `False` to load the # cached dataset if data_args.preprocessing_only: cache = {k: v.cache_files for k, v in vectorized_datasets.items()} logger.info(f"Data preprocessing finished. Files cached at {cache}.") return # 8. Load Metric metric = load_metric("wer") def compute_metrics(pred): pred_ids = pred.predictions pred.label_ids[pred.label_ids == -100] = tokenizer.pad_token_id pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True) # we do not want to group tokens when computing the metrics label_str = tokenizer.batch_decode(pred.label_ids, skip_special_tokens=True) wer = metric.compute(predictions=pred_str, references=label_str) return {"wer": wer} # 9. Create a single speech processor if is_main_process(training_args.local_rank): # save feature extractor, tokenizer and config feature_extractor.save_pretrained(training_args.output_dir) tokenizer.save_pretrained(training_args.output_dir) config.save_pretrained(training_args.output_dir) processor = AutoProcessor.from_pretrained(training_args.output_dir) # 10. 
Define data collator data_collator = DataCollatorSpeechSeq2SeqWithPadding( processor=processor, decoder_start_token_id=model.config.decoder_start_token_id ) decay_parameters = get_parameter_names(model, [torch.nn.LayerNorm]) decay_parameters = [name for name in decay_parameters if "bias" not in name] optimizer_grouped_parameters = [ { "params": [p for n, p in model.named_parameters() if n in decay_parameters], "weight_decay": training_args.weight_decay, }, { "params": [p for n, p in model.named_parameters() if n not in decay_parameters], "weight_decay": 0.0, }, ] optimizer = bnb.optim.Adam8bit( params=optimizer_grouped_parameters, lr=training_args.learning_rate, betas=(training_args.adam_beta1, training_args.adam_beta2), eps=training_args.adam_epsilon, ) optimizers = (optimizer, None) # 11. Initialize Trainer trainer = Seq2SeqTrainer( model=model, args=training_args, train_dataset=vectorized_datasets["train"] if training_args.do_train else None, eval_dataset=vectorized_datasets["eval"] if training_args.do_eval else None, tokenizer=feature_extractor, data_collator=data_collator, compute_metrics=compute_metrics if training_args.predict_with_generate else None, ) # 12. Training if training_args.do_train: checkpoint = None if training_args.resume_from_checkpoint is not None: checkpoint = training_args.resume_from_checkpoint elif last_checkpoint is not None: checkpoint = last_checkpoint train_result = trainer.train(resume_from_checkpoint=checkpoint) trainer.save_model() # Saves the feature extractor too for easy upload metrics = train_result.metrics max_train_samples = ( data_args.max_train_samples if data_args.max_train_samples is not None else len(vectorized_datasets["train"]) ) metrics["train_samples"] = min(max_train_samples, len(vectorized_datasets["train"])) trainer.log_metrics("train", metrics) trainer.save_metrics("train", metrics) trainer.save_state() # 13. Evaluation results = {} if training_args.do_eval: logger.info("*** Evaluate ***") metrics = trainer.evaluate( metric_key_prefix="eval", max_length=model.config.max_length, num_beams=model.config.num_beams ) max_eval_samples = ( data_args.max_eval_samples if data_args.max_eval_samples is not None else len(vectorized_datasets["eval"]) ) metrics["eval_samples"] = min(max_eval_samples, len(vectorized_datasets["eval"])) trainer.log_metrics("eval", metrics) trainer.save_metrics("eval", metrics) # 14. Write Training Stats kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "speech recognition"} if data_args.dataset_name is not None: kwargs["dataset_tags"] = data_args.dataset_name if data_args.dataset_config_name is not None: kwargs["dataset_args"] = data_args.dataset_config_name kwargs["dataset"] = f"{data_args.dataset_name} {data_args.dataset_config_name}" else: kwargs["dataset"] = data_args.dataset_name if training_args.push_to_hub: trainer.push_to_hub(**kwargs) else: trainer.create_model_card(**kwargs) return results if __name__ == "__main__": main() ``` </details> <details> <summary> `run_speech_recognition_seq2seq.py` with Adafactor </summary> ``` #!/usr/bin/env python # coding=utf-8 # Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Fine-tuning the library models for sequence to sequence speech recognition. """ # You can also adapt this script on your own sequence to sequence speech # recognition task. Pointers for this are left as comments. import logging import os import sys from dataclasses import dataclass, field from typing import Any, Dict, List, Optional, Union import datasets import torch from datasets import DatasetDict, load_dataset, load_metric import transformers from transformers import ( AutoConfig, AutoFeatureExtractor, AutoModelForSpeechSeq2Seq, AutoProcessor, AutoTokenizer, HfArgumentParser, Seq2SeqTrainer, Seq2SeqTrainingArguments, set_seed, ) from transformers.trainer_utils import get_last_checkpoint, is_main_process from transformers.utils import check_min_version from transformers.utils.versions import require_version from transformers.optimization import Adafactor # Will error if the minimal version of Transformers is not installed. Remove at your own risks. check_min_version("4.17.0.dev0") require_version("datasets>=1.18.0", "To fix: pip install -r examples/pytorch/speech-recognition/requirements.txt") logger = logging.getLogger(__name__) @dataclass class ModelArguments: """ Arguments pertaining to which model/config/tokenizer we are going to fine-tune from. """ model_name_or_path: str = field( metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"} ) config_name: Optional[str] = field( default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"} ) tokenizer_name: Optional[str] = field( default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"} ) feature_extractor_name: Optional[str] = field( default=None, metadata={"help": "feature extractor name or path if not the same as model_name"} ) cache_dir: Optional[str] = field( default=None, metadata={"help": "Where to store the pretrained models downloaded from huggingface.co"}, ) use_fast_tokenizer: bool = field( default=True, metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."}, ) model_revision: str = field( default="main", metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."}, ) use_auth_token: bool = field( default=False, metadata={ "help": "Will use the token generated when running `transformers-cli login` (necessary to use this script " "with private models)." }, ) freeze_feature_encoder: bool = field( default=True, metadata={"help": "Whether to freeze the feature encoder layers of the model."} ) @dataclass class DataTrainingArguments: """ Arguments pertaining to what data we are going to input our model for training and eval. 
""" dataset_name: str = field( default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."} ) dataset_config_name: Optional[str] = field( default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."} ) text_column: Optional[str] = field( default=None, metadata={"help": "The name of the column in the datasets containing the full texts (for summarization)."}, ) overwrite_cache: bool = field( default=False, metadata={"help": "Overwrite the cached training and evaluation sets"} ) preprocessing_num_workers: Optional[int] = field( default=None, metadata={"help": "The number of processes to use for the preprocessing."}, ) max_train_samples: Optional[int] = field( default=None, metadata={ "help": "For debugging purposes or quicker training, truncate the number of training examples to this " "value if set." }, ) max_eval_samples: Optional[int] = field( default=None, metadata={ "help": "For debugging purposes or quicker training, truncate the number of evaluation examples to this " "value if set." }, ) audio_column_name: str = field( default="audio", metadata={"help": "The name of the dataset column containing the audio data. Defaults to 'audio'"}, ) text_column_name: str = field( default="text", metadata={"help": "The name of the dataset column containing the text data. Defaults to 'text'"}, ) max_duration_in_seconds: float = field( default=20.0, metadata={ "help": "Truncate audio files that are longer than `max_duration_in_seconds` seconds to 'max_duration_in_seconds`" }, ) min_duration_in_seconds: float = field( default=0.0, metadata={"help": "Filter audio files that are shorter than `min_duration_in_seconds` seconds"} ) preprocessing_only: bool = field( default=False, metadata={ "help": "Whether to only do data preprocessing and skip training. " "This is especially useful when data preprocessing errors out in distributed training due to timeout. " "In this case, one should run the preprocessing in a non-distributed setup with `preprocessing_only=True` " "so that the cached datasets can consequently be loaded in distributed training" }, ) train_split_name: str = field( default="train", metadata={ "help": "The name of the training data set split to use (via the datasets library). Defaults to 'train'" }, ) eval_split_name: str = field( default="test", metadata={ "help": "The name of the training data set split to use (via the datasets library). Defaults to 'train'" }, ) do_lower_case: bool = field( default=True, metadata={"help": "Whether the target text should be lower cased."}, ) @dataclass class DataCollatorSpeechSeq2SeqWithPadding: """ Data collator that will dynamically pad the inputs received. Args: processor ([`Wav2Vec2Processor`]) The processor used for proccessing the data. decoder_start_token_id (`int`) The begin-of-sentence of the decoder. 
""" processor: Any decoder_start_token_id: int def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: # split inputs and labels since they have to be of different lenghts and need # different padding methods input_features = [{"input_values": feature["input_values"]} for feature in features] label_features = [{"input_ids": feature["labels"]} for feature in features] batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt") labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt") # replace padding with -100 to ignore loss correctly labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100) # if bos token is appended in previous tokenization step, # cut bos token here as it's append later anyways if (labels[:, 0] == self.decoder_start_token_id).all().cpu().item(): labels = labels[:, 1:] batch["labels"] = labels return batch def main(): # 1. Parse input arguments # See all possible arguments in src/transformers/training_args.py # or by passing the --help flag to this script. # We now keep distinct sets of args, for a cleaner separation of concerns. parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments)) if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): # If we pass only one argument to the script and it's the path to a json file, # let's parse it to get our arguments. model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1])) else: model_args, data_args, training_args = parser.parse_args_into_dataclasses() # 2. Setup logging logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) log_level = training_args.get_process_log_level() logger.setLevel(log_level) datasets.utils.logging.set_verbosity(log_level) transformers.utils.logging.set_verbosity(log_level) transformers.utils.logging.enable_default_handler() transformers.utils.logging.enable_explicit_format() logger.setLevel(logging.INFO if is_main_process(training_args.local_rank) else logging.WARN) # Log on each process the small summary: logger.warning( f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}" f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}" ) logger.info(f"Training/evaluation parameters {training_args}") # Set the verbosity to info of the Transformers logger (on main process only): if is_main_process(training_args.local_rank): transformers.utils.logging.set_verbosity_info() logger.info("Training/evaluation parameters %s", training_args) # 3. Detecting last checkpoint and eventualy continue from last checkpoint last_checkpoint = None if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir: last_checkpoint = get_last_checkpoint(training_args.output_dir) if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0: raise ValueError( f"Output directory ({training_args.output_dir}) already exists and is not empty. " "Use --overwrite_output_dir to overcome." ) elif last_checkpoint is not None and training_args.resume_from_checkpoint is None: logger.info( f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change " "the `--output_dir` or add `--overwrite_output_dir` to train from scratch." 
) # Set seed before initializing model. set_seed(training_args.seed) # 4. Load dataset raw_datasets = DatasetDict() if training_args.do_train: raw_datasets["train"] = load_dataset( data_args.dataset_name, data_args.dataset_config_name, split=data_args.train_split_name ) if training_args.do_eval: raw_datasets["eval"] = load_dataset( data_args.dataset_name, data_args.dataset_config_name, split=data_args.eval_split_name ) if data_args.audio_column_name not in next(iter(raw_datasets.values())).column_names: raise ValueError( f"--audio_column_name '{data_args.audio_column_name}' not found in dataset '{data_args.dataset_name}'. " "Make sure to set `--audio_column_name` to the correct audio column - one of " f"{', '.join(next(iter(raw_datasets.values())).column_names)}." ) if data_args.text_column_name not in next(iter(raw_datasets.values())).column_names: raise ValueError( f"--text_column_name {data_args.text_column_name} not found in dataset '{data_args.dataset_name}'. " "Make sure to set `--text_column_name` to the correct text column - one of " f"{', '.join(next(iter(raw_datasets.values())).column_names)}." ) # 5. Load pretrained model, tokenizer, and feature extractor # # Distributed training: # The .from_pretrained methods guarantee that only one local process can concurrently config = AutoConfig.from_pretrained( model_args.config_name if model_args.config_name else model_args.model_name_or_path, cache_dir=model_args.cache_dir, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) feature_extractor = AutoFeatureExtractor.from_pretrained( model_args.feature_extractor_name if model_args.feature_extractor_name else model_args.model_name_or_path, cache_dir=model_args.cache_dir, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) tokenizer = AutoTokenizer.from_pretrained( model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path, cache_dir=model_args.cache_dir, use_fast=model_args.use_fast_tokenizer, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) model = AutoModelForSpeechSeq2Seq.from_pretrained( model_args.model_name_or_path, config=config, cache_dir=model_args.cache_dir, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) if model.config.decoder_start_token_id is None: raise ValueError("Make sure that `config.decoder_start_token_id` is correctly defined") if model_args.freeze_feature_encoder: model.freeze_feature_encoder() # 6. Resample speech dataset if necassary dataset_sampling_rate = next(iter(raw_datasets.values())).features[data_args.audio_column_name].sampling_rate if dataset_sampling_rate != feature_extractor.sampling_rate: raw_datasets = raw_datasets.cast_column( data_args.audio_column_name, datasets.features.Audio(sampling_rate=feature_extractor.sampling_rate) ) # 7. Preprocessing the datasets. # We need to read the audio files as arrays and tokenize the targets. 
max_input_length = data_args.max_duration_in_seconds * feature_extractor.sampling_rate min_input_length = data_args.min_duration_in_seconds * feature_extractor.sampling_rate audio_column_name = data_args.audio_column_name num_workers = data_args.preprocessing_num_workers text_column_name = data_args.text_column_name model_input_name = feature_extractor.model_input_names[0] do_lower_case = data_args.do_lower_case if data_args.max_train_samples is not None: raw_datasets["train"] = raw_datasets["train"].select(range(data_args.max_train_samples)) if data_args.max_eval_samples is not None: raw_datasets["eval"] = raw_datasets["eval"].select(range(data_args.max_eval_samples)) def prepare_dataset(batch): # process audio sample = batch[audio_column_name] inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"]) # process audio length batch[model_input_name] = inputs.input_values[0] batch["input_length"] = len(batch["input_values"]) # process targets input_str = batch[text_column_name].lower() if do_lower_case else batch[text_column_name] batch["labels"] = tokenizer(input_str).input_ids return batch with training_args.main_process_first(desc="dataset map pre-processing"): vectorized_datasets = raw_datasets.map( prepare_dataset, remove_columns=next(iter(raw_datasets.values())).column_names, num_proc=data_args.preprocessing_num_workers, desc="preprocess train dataset", ) # filter data that is shorter than min_input_length or longer than # max_input_length def is_audio_in_length_range(length): return length > min_input_length and length < max_input_length vectorized_datasets = vectorized_datasets.filter( is_audio_in_length_range, num_proc=num_workers, input_columns=["input_length"], ) # for large datasets it is advised to run the preprocessing on a # single machine first with `args.preprocessing_only` since there will mostly likely # be a timeout when running the script in distributed mode. # In a second step `args.preprocessing_only` can then be set to `False` to load the # cached dataset if data_args.preprocessing_only: cache = {k: v.cache_files for k, v in vectorized_datasets.items()} logger.info(f"Data preprocessing finished. Files cached at {cache}.") return # 8. Load Metric metric = load_metric("wer") def compute_metrics(pred): pred_ids = pred.predictions pred.label_ids[pred.label_ids == -100] = tokenizer.pad_token_id pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True) # we do not want to group tokens when computing the metrics label_str = tokenizer.batch_decode(pred.label_ids, skip_special_tokens=True) wer = metric.compute(predictions=pred_str, references=label_str) return {"wer": wer} # 9. Create a single speech processor if is_main_process(training_args.local_rank): # save feature extractor, tokenizer and config feature_extractor.save_pretrained(training_args.output_dir) tokenizer.save_pretrained(training_args.output_dir) config.save_pretrained(training_args.output_dir) processor = AutoProcessor.from_pretrained(training_args.output_dir) # 10. 
Define data collator data_collator = DataCollatorSpeechSeq2SeqWithPadding( processor=processor, decoder_start_token_id=model.config.decoder_start_token_id ) decay_parameters = get_parameter_names(model, [torch.nn.LayerNorm]) decay_parameters = [name for name in decay_parameters if "bias" not in name] optimizer_grouped_parameters = [ { "params": [p for n, p in model.named_parameters() if n in decay_parameters], "weight_decay": training_args.weight_decay, }, { "params": [p for n, p in model.named_parameters() if n not in decay_parameters], "weight_decay": 0.0, }, ] optimizer = Adafactor( params=optimizer_grouped_parameters, lr=training_args.learning_rate, beta1=training_args.adam_beta1, eps=training_args.adam_epsilon, relative_step=False, ) optimizers = (optimizer, None) # 11. Initialize Trainer trainer = Seq2SeqTrainer( model=model, args=training_args, train_dataset=vectorized_datasets["train"] if training_args.do_train else None, eval_dataset=vectorized_datasets["eval"] if training_args.do_eval else None, tokenizer=feature_extractor, data_collator=data_collator, compute_metrics=compute_metrics if training_args.predict_with_generate else None, ) # 12. Training if training_args.do_train: checkpoint = None if training_args.resume_from_checkpoint is not None: checkpoint = training_args.resume_from_checkpoint elif last_checkpoint is not None: checkpoint = last_checkpoint train_result = trainer.train(resume_from_checkpoint=checkpoint) trainer.save_model() # Saves the feature extractor too for easy upload metrics = train_result.metrics max_train_samples = ( data_args.max_train_samples if data_args.max_train_samples is not None else len(vectorized_datasets["train"]) ) metrics["train_samples"] = min(max_train_samples, len(vectorized_datasets["train"])) trainer.log_metrics("train", metrics) trainer.save_metrics("train", metrics) trainer.save_state() # 13. Evaluation results = {} if training_args.do_eval: logger.info("*** Evaluate ***") metrics = trainer.evaluate( metric_key_prefix="eval", max_length=model.config.max_length, num_beams=model.config.num_beams ) max_eval_samples = ( data_args.max_eval_samples if data_args.max_eval_samples is not None else len(vectorized_datasets["eval"]) ) metrics["eval_samples"] = min(max_eval_samples, len(vectorized_datasets["eval"])) trainer.log_metrics("eval", metrics) trainer.save_metrics("eval", metrics) # 14. 
Write Training Stats kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "speech recognition"} if data_args.dataset_name is not None: kwargs["dataset_tags"] = data_args.dataset_name if data_args.dataset_config_name is not None: kwargs["dataset_args"] = data_args.dataset_config_name kwargs["dataset"] = f"{data_args.dataset_name} {data_args.dataset_config_name}" else: kwargs["dataset"] = data_args.dataset_name if training_args.push_to_hub: trainer.push_to_hub(**kwargs) else: trainer.create_model_card(**kwargs) return results if __name__ == "__main__": main() ``` </details> <|||||>> ```python > # checkpoints to leverage > encoder_id = "facebook/wav2vec2-large-lv60" > decoder_id = "bert-large-uncased" > > feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id) > feature_extractor.save_pretrained("./") > tokenizer = AutoTokenizer.from_pretrained(decoder_id) > tokenizer.save_pretrained("./") > > model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id, encoder_add_adapter=True) > model.config.encoder.feat_proj_dropout = 0.0 > model.config.encoder.final_dropout = 0.0 > model.config.encoder.mask_time_prob = 0.1 > model.config.decoder_start_token_id = tokenizer.cls_token_id > model.config.pad_token_id = tokenizer.pad_token_id > model.config.eos_token_id = tokenizer.sep_token_id > model.config.max_length = 50 > model.config.num_beams = 1 > model.config.encoder.layerdrop = 0.0 > model.config.use_cache = False > model.config.decoder.use_cache = False > model.config.processor_class = "Wav2Vec2Processor" > > # check if generation works > out = model.generate(torch.ones((1, 2000))) > > model.save_pretrained("./") > ``` Small tip here. I cannot copy-paste and re-run the command. It says `"NameError: name 'AutoFeatureExtractor' is not defined"`. It saves a lot of time if every script can directly be re-run without missing imports :-)<|||||>I see what the error probably is. It's not `"CUDA_AVAILABLE_DEVICES"`, but `CUDA_VISIBLE_DEVICES` (sorry I might have given you that non-existing command :D). Just tried it out on a dummy dataset and it works fine with `CUDA_VISIBLE_DEVICES=0` even with normal Adam and `batch_size=4` Could you try again? Pretty sure it should work at least with `bnb` this time on the larger dataset.<|||||>Some more explanation on what happened and how one could have debugged this. Since `CUDA_AVAILABLE_DEVICES` doesn't exist, adding the bash variable didn't have any effect, which then meant that you used PyTorch's Data Parallelism by default: https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html . This is not really maintained anymore by PyTorch actually and it's strongly recommended to switch to DDP: https://pytorch.org/docs/stable/notes/ddp.html instead. However what we want to do here is simply use one GPU so by adding the (correct) bash env `CUDA_VISIBLE_DEVICES=0` we can run two trainings (one on each GPU at the same time).<|||||>For debugging tips: It's often a good idea to monitor the GPUs when starting to train. This can be done by having a window that runs `watch -n 0.1 nvidia-smi` and should monitor GPU usage. Here it quickly became obvious that both GPUs are used instead of just one meaning that there was a problem with the bash command<|||||>@sgugger - do you think it makes sense to throw a warning when a user is using PyTorch's DP with the Trainer? 
I don't really see a use case where DP is preferred over DDP<|||||>A warning seems a bit strong; it's not something PyTorch has deprecated, but we can certainly show an info message.<|||||>Correcting `CUDA_AVAILABLE_DEVICES` to `CUDA_VISIBLE_DEVICES` rectified the issue! On the full LibriSpeech dataset, I am able to use a `batch_size=8` and the 8-bit `bnb` optimizer to run training at ~15GB memory usage on a single GPU. Thanks Patrick!
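As a small addition to the debugging tips above, the device-visibility mix-up can also be caught from inside Python before training starts; a minimal sketch:

```python
import os
import torch

# With CUDA_VISIBLE_DEVICES=0 exported (not CUDA_AVAILABLE_DEVICES), PyTorch
# should report exactly one device. If more than one is visible, the Trainer
# falls back to DataParallel and replicates the model on every GPU.
print("CUDA_VISIBLE_DEVICES:", os.environ.get("CUDA_VISIBLE_DEVICES"))
print("visible GPUs:", torch.cuda.device_count())
```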
transformers
15,543
closed
Move generic PyTorch utils functions from modeling_utils.py to pytorch_utils
# 🚀 Feature request > Since we're creating a new module file, can we maybe move some other functions there? Like the no_init_weight context manager, and: > > all pruning stuff > apply_chunking_to_forward > get_parameter_device > get_parameter_dtype > so that the modeling utils file stays focused on PreTrainedModel and the layers it defines? Taken from comment here: https://github.com/huggingface/transformers/pull/15498#pullrequestreview-874783288
02-07-2022 15:03:51
02-07-2022 15:03:51
May I work on this issue?<|||||>Yes please :-)<|||||>Great job @davidegavio! Really nice clean-up :-)
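For context, here are simplified sketches of two of the helpers mentioned in the request. These are illustrative only, not the exact implementations that live in `transformers` (the real versions handle modules without parameters, buffers, and mixed dtypes).

```python
import torch
from torch import nn

def get_parameter_device(module: nn.Module) -> torch.device:
    """Device of the first parameter found in the module (simplified sketch)."""
    return next(module.parameters()).device

def get_parameter_dtype(module: nn.Module) -> torch.dtype:
    """Dtype of the first parameter found in the module (simplified sketch)."""
    return next(module.parameters()).dtype

linear = nn.Linear(4, 4)
print(get_parameter_device(linear), get_parameter_dtype(linear))
```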
transformers
15,542
closed
Save DistilBert model and convert to TensorFlow Lite
Hello, I want to save a transformer model to [TensorFlow Lite](https://www.tensorflow.org/lite), in order to put it on a mobile device. ``` from transformers import AutoModelForTokenClassification, AutoTokenizer model = AutoModelForTokenClassification.from_pretrained("elastic/distilbert-base-uncased-finetuned-conll03-english") type(model) ``` transformers.models.distilbert.modeling_distilbert.DistilBertForTokenClassification Then I tried to save the model in TensorFlow format: ```tf.saved_model.save(model, saved_model_dir)``` @jplu https://github.com/huggingface/transformers/issues/6864#issuecomment-690086932 Here I get the following error: ValueError: Expected an object of type `Trackable`, such as `tf.Module` or a subclass of the `Trackable` class, for export. Got DistilBertForTokenClassification( (distilbert): DistilBertModel ... As a second step I was planning to convert it to TFLite: ``` converter = tf.lite.TFLiteConverter.from_saved_model(model_path) tflite_model = converter.convert() ``` What am I doing wrong, and is there an easier way to do that?
02-07-2022 14:19:18
02-07-2022 14:19:18
Hi, you are using a pytorch model here. `AutoModelForTokenClassification` will load the pt version of `DistilBert`. For TF one should use `TFAutoModelForTokenClassification`.<|||||>> thank you alot @patil-suraj 👍 , i am using the following now, which works fine: ``` model = TFDistilBertForTokenClassification.from_pretrained(model_path, from_pt=True) inputs_1 = tokenizer("Hugging Face Inc. is a company based in New York City.", return_tensors="tf") outputs = model(inputs_1["input_ids"]) tf.saved_model.save(model, "/path/to/file") ``` The import works as in the following, but ... ``` loaded = tf.saved_model.load("/path/to/file") inference_func = loaded.signatures["serving_default"] inputs_2 = tokenizer("Hugging Face Inc", return_tensors="tf") outputs_1 = inference_func(input_ids=inputs_1["input_ids"]) outputs_2 = inference_func(input_ids=inputs_2["input_ids"]) ``` i can only do infer for inputs_2 not for inputs_1 I get the following error: ``` InvalidArgumentError: Incompatible shapes: [1,15,768] vs. [1,5,768] [[node tf_distil_bert_for_token_classification_3/distilbert/embeddings/add (defined at /tmp/ipykernel_377675/508456377.py:1) ]] [Op:__inference_signature_wrapper_541974] Errors may have originated from an input operation. Input Source operations connected to node tf_distil_bert_for_token_classification_3/distilbert/embeddings/add: In[0] tf_distil_bert_for_token_classification_3/distilbert/embeddings/Identity: In[1] tf_distil_bert_for_token_classification_3/distilbert/embeddings/Identity_1: ``` how can this be fixed and what is the reason for this?<|||||>Hi @Schnittchenkraus! The issue is that, in essence, `loaded` (the TF loaded object) is a constrained version of `model` (an instance of a Hugging Face class). `loaded` stores the computation graph, which is generated with the first call, while `model` is a bunch of TF operations that can accept variable-length tensors. Now, to the solution. Typically, the correct solution revolves around defining signatures (see [this doc](https://www.tensorflow.org/api_docs/python/tf/saved_model/save)), where you can define inputs with variable length. However, our code does not fit the requirements for this, so it is not doable. The less correct solution is to make sure the computation graph and the inputs are constrained to a fixed length (for instance, the maximum sequence length, 512). To do that, you have to pad the inputs and override the first input to the model. Without getting into too many details, if you run the script below while changing [this line](https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py#L312) to `DUMMY_INPUTS = [[1]*512]`, you should be able to get what you want. This is overly convoluted, but we are working on making it easier :) <details><summary>Working script</summary> ```python from functools import partial import tensorflow as tf from transformers import TFDistilBertForTokenClassification, AutoTokenizer model_path = "elastic/distilbert-base-uncased-finetuned-conll03-english" model = TFDistilBertForTokenClassification.from_pretrained(model_path, from_pt=True) tokenizer = AutoTokenizer.from_pretrained(model_path) tokenize_fn = partial(tokenizer, return_tensors="tf", padding="max_length", truncation=True) inputs_1 = tokenize_fn("Hugging Face Inc. 
is a company based in New York City.") inputs_2 = tokenize_fn("Hugging Face Inc.") outputs_2 = model(inputs_2) outputs_1 = model(inputs_1) tf.saved_model.save(model, "~/test_model") loaded = tf.saved_model.load("~/test_model") inference_func = loaded.signatures["serving_default"] outputs_1 = inference_func(input_ids=inputs_1["input_ids"]) outputs_2 = inference_func(input_ids=inputs_2["input_ids"]) ``` </details><|||||>This works nicely, thank you a lot @gante!
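To close the loop on the original TensorFlow Lite question, here is a hedged sketch of the remaining conversion step. The path is a placeholder, and enabling `SELECT_TF_OPS` is an assumption (transformer graphs often contain ops the converter cannot lower to native TFLite kernels), so treat this as a starting point rather than a verified recipe.

```python
import tensorflow as tf

# Convert the fixed-length SavedModel exported above into a .tflite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model("~/test_model")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional post-training quantization
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # prefer native TFLite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TensorFlow ops where needed
]
tflite_model = converter.convert()

with open("distilbert_token_classifier.tflite", "wb") as f:
    f.write(tflite_model)
```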
transformers
15,541
closed
[ASR pipeline] correct asr pipeline for seq2seq models
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes the following daily tests: ``` FAILED tests/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_speech_to_text_leveraged FAILED tests/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_torch_speech_encoder_decoder FAILED tests/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_xls_r_from_en FAILED tests/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_xls_r_to_en ``` and adds a fast test for asr pipeline for `wav2vec2-2-...` model. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
02-07-2022 12:24:45
02-07-2022 12:24:45
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @Narsil - will merge since you're on holiday at the moment.
transformers
15,540
closed
Revert "Handle PyTorch to Flax conversion of 1D convolutions"
Reverts huggingface/transformers#15519 @patil-suraj @sanchit-gandhi - please see comment on PR. Let me know if I'm mistaken there and we actually do need this new weight conversion, but I really don't see why this would be the case.
02-07-2022 10:30:17
02-07-2022 10:30:17
_The documentation is not available anymore as the PR was closed or merged._
transformers
15,539
closed
[Trainer] Deeper length checks for IterableDatasetShard
# What does this PR do? As discussed in https://github.com/huggingface/transformers/pull/15309#discussion_r795553017 , in cases where `IterableDatasetShard` wraps a non-sized dataset, the check for `isinstance(dataset, Sized)` passes, while `len(dataset)` fails. This PR adds a universal `has_length(dataset)` that tries to get the dataset's length explicitly without relying on type checking.
02-07-2022 09:56:37
02-07-2022 09:56:37
_The documentation is not available anymore as the PR was closed or merged._
transformers
15,538
closed
Keep waiting for push command to finish at the end of running run_speech_recognition_ctc.py
## Environment info - `transformers` version: 4.17.0.dev0 - Platform: Linux-5.11.0-37-generic-x86_64-with-glibc2.10 ( running at OVH Cloud 's AI Training Jobs) - Python version: 3.8.8 - PyTorch version (GPU?): 1.10.2+cu102 (True) - Tensorflow version (GPU?): not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten @anton-l Models: facebook/wav2vec2-xls-r-300m Library: - Speech: @patrickvonplaten, @anton-l ## Information Model I am using facebook/wav2vec2-xls-r-300m: The problem arises when using: * [ ] the official example scripts: https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py The tasks I am working on is: * [ ] dataset: mozilla-foundation/common_voice_8_0 ## Problem Description I ran training script `run_speech_recognition_ctc.py` in 100 epochs. After finishing training and evaluation at last step, the program yet shutting down and keeps waiting for the push command to finish: ![image](https://user-images.githubusercontent.com/23013350/152737186-81602a1c-9c54-4324-9a00-16c44291aa3f.png) However, everything is pushed to hub successfully. So, at the end, I `Ctrl + C` to quit the program manually. ## To reproduce I ran the official training script `run_speech_recognition_ctc.py` and the command is shown below: `python run_speech_recognition_ctc.py \ --dataset_name="mozilla-foundation/common_voice_8_0" \ --model_name_or_path="facebook/wav2vec2-xls-r-300m" \ --dataset_config_name="zh-HK" \ --output_dir="./" \ --num_train_epochs="100" \ --per_device_train_batch_size="32" \ --per_device_eval_batch_size="16" \ --gradient_accumulation_steps="2" \ --learning_rate="3e-4" \ --warmup_steps="500" \ --text_column_name="sentence" \ --length_column_name="input_length" \ --layerdrop="0.0" \ --save_total_limit="3" \ --freeze_feature_encoder \ --gradient_checkpointing \ --fp16 \ --group_by_length \ --use_auth_token \ --push_to_hub \ --do_train \ --do_eval \ --max_duration_in_seconds="6" \ --evaluation_strategy='epoch' \ --save_strategy='epoch' \` Notes: You may change `--num_train_epochs="100"` to shorter runn as the behaviour still persist. ## Expected behavior The program itself should shut down gracefully after finishing the training, evaluation and pushing to hub steps.
02-07-2022 06:39:58
02-07-2022 06:39:58
Thanks for opening this issue! I've actually never such an issue before - @sgugger any idea what the problem could be here? <|||||>Not really. It's coming from `huggingface_hub` async push, so deferring to @LysandreJik :-)<|||||>Hey @IvanLauLinTiong, have you verified the push has finished, and the weights were updated on the hub? It should tell you that as long as the push is unfinished. If the program were to terminate early, then the push would be interrupted.<|||||>Hi @LysandreJik , May I know what is the last default commit message for the weights file if uploaded properly to the hub? I just ran the script `run_speech_recognition_ctc.py` with everything default in 100 epochs, and my `pytorch_model.bin` latest commit message is `Training in progress, epoch 100`. What is the default commit message if it is uploaded successfully? Can you help me to check? Here is my repo link: https://huggingface.co/ivanlau/wav2vec2-large-xls-r-300m-cantonese/tree/main That's because some people also encountered this issue during the Robust Speech Event (screenshot below) and prolly I was too fast to assume this is a HF bug. ![image](https://user-images.githubusercontent.com/23013350/152911424-05353b61-ddc7-4838-8979-afde242e1791.png) ![image](https://user-images.githubusercontent.com/23013350/152911475-9ec4f967-f71d-4d5f-84af-065d0dd023b0.png) If this is not a bug, hmmm, shouldn't the warning message be more descriptive a bit ? like 'currently pushing file A....' or some progress bar. :) Thanks.<|||||>Thanks for bringing it up! I'm not certain it's a bug from one of our libraries, I'd say it's probably linked to a very slow upload speed. I agree that the message could be better. However, you shouldn't be afraid of losing anything: the files are committed, so you can feel free to Ctrl+C, and check in the local folder to see if the files have been pushed. You can do so by doing the `git status` command, which should tell you if your local clone is up to date with the remote, or if some commits have not been pushed. If some commits have not been pushed, simply do `git push` :) Thanks for your issue!<|||||>@LysandreJik ok thanks 👍 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
15,537
closed
Fix LongformerModel hidden states
# What does this PR do? `LongformerModel` currently returns `hidden_states` that are padded, while `TFLongformerModel` and `LEDEncoder` return the un-padded version. This PR makes `LongformerModel` return the un-padded `hidden_states`. ---- `LEDEncoder` also returns un-padded `attentions`. Maybe the same logic should be applied to `LongformerModel` too.
02-06-2022 13:38:08
02-06-2022 13:38:08
_The documentation is not available anymore as the PR was closed or merged._<|||||>The test needs to be adjusted due to the change. But I would like to have some feedback first before proceeding.<|||||>@patrickvonplaten could you take a look here ?<|||||>Thanks for reviewing! I will finalize this PR by adjusting the test.<|||||>@patrickvonplaten @patil-suraj (update) - unpad `attentions` too for both `LongformerModel` / `TFLongformerModel`. (see below for the reasons) - move unpadding logic in `(TF)LongformerModel` to `(TF)LongformerEncoder` - update the tests ---- # unpad `attentions` Current `LongformerModel` / `TFLongformerModel` don't unpad `attentions`. I am going to unpad the `attentions` just like what have been done for `LEDModel` & `TFLEDModel`. This is required to make `TFLongformerModelTest` pass (in an easy way). (I don't think there is particular reason not to unpad `attentions` in `LongformerModel` / `TFLongformerModel`, but want to make sure) ## More details In `LEDModel` & `TFLEDModel`, the `attentions` are also un-padded: https://github.com/huggingface/transformers/blob/57882177becb85560f1ff931abb1b0b75d67e70d/src/transformers/models/led/modeling_led.py#L1871-L1872 https://github.com/huggingface/transformers/blob/57882177becb85560f1ff931abb1b0b75d67e70d/src/transformers/models/led/modeling_tf_led.py#L1814-L1819 However, `LongformerModel` / `TFLongformerModel` don't perform unpadding for `attentions` (but do unpadding for `hidden_states`). This causes a problem for `TFLongformerModelTester`: https://github.com/huggingface/transformers/blob/57882177becb85560f1ff931abb1b0b75d67e70d/tests/test_modeling_tf_longformer.py#L77-L81 The testings for both `attentions` and `hidden_states` are controlled by `self.encoder_seq_length` => so we need to unpad `attentions` too (because `hidden_states` are unpadded).
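To make the unpadding discussed above concrete, here is a minimal sketch with illustrative shapes (not the actual Longformer code): the model pads the sequence to a multiple of the attention window, and the fix slices that padding off before returning the hidden states.

```python
import torch

# hypothetical values: 16 real positions padded to 20 to reach a multiple of the window size
padding_len = 4
padded_hidden_state = torch.randn(2, 20, 768)  # (batch_size, padded_seq_len, hidden_size)

if padding_len > 0:
    # drop the trailing padded positions so callers get length == input_ids.size(1)
    padded_hidden_state = padded_hidden_state[:, :-padding_len]

print(padded_hidden_state.shape)  # torch.Size([2, 16, 768])
```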
transformers
15,536
closed
Error when passing encoder_outputs as tuple to EncoderDecoder models
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.17.0.dev0 - Platform: Linux-5.13.0-27-generic-x86_64-with-glibc2.34 - Python version: 3.9.7 - PyTorch version (GPU?): 1.10.1+cu102 (True) - Tensorflow version (GPU?): 2.7.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.3.6 (cpu) - Jax version: 0.2.26 - JaxLib version: 0.1.75 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @patrickvonplaten ## Information In EncoderDecoder models one can pass `encoder_outputs` [as a tuple of Tensors ](https://github.com/jsnfly/transformers/blob/8ce133063120683018b214fe10d1449e4c2401da/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L106). However, if you do that [this line](https://github.com/jsnfly/transformers/blob/8ce133063120683018b214fe10d1449e4c2401da/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L549) will fail with ```python AttributeError: 'tuple' object has no attribute 'last_hidden_state' ``` since the tuple isn't modified in the `forward` method. So if it is a tuple, `encoder_outputs` could maybe wrapped in a `ModelOutput` class or something similar. Or handle the tuple somehow explicitly. ## On a slight tangent I made a `SpeechEncoderDecoderModel` for the robust speech challenge: https://huggingface.co/jsnfly/wav2vec2-large-xlsr-53-german-gpt2. I found that adding the position embeddings of the decoder model to the outputs of the encoder model improved performance significantly (basically didn't work without it). This needs [small modifications](https://huggingface.co/jsnfly/wav2vec2-large-xlsr-53-german-gpt2/blob/main/training/model.py#L8) to the `__init__` and `forward` methods of the `SpeechEncoderDecoderModel`. At the moment this seems to me too much of a "hack" to add it to the `SpeechEncoderDecoderModel` class generally (for example via a flag), because it may differ for different `decoder` models and probably also needs more verification. @patrickvonplaten showed some interest that this could be included in Transformers nonetheless. What do you think?
02-06-2022 11:29:30
02-06-2022 11:29:30
Hey @jsnfly, Regarding the first point - agree, it'd be good to check if the input is a tuple and if it is we can wrap it into a `ModelOutput` object. Would you be interested in opening a PR for this? :-) Regarding the 2nd point - that's very interesting (cc @sanchit-gandhi). Also makes a lot of sense since ASR by itself is monotonic so knowing the order of words to transcribe together with the encoder speech frames seems like a sensible design architecture. Thanks a lot for sharing this here!<|||||>The embedding hack is a really neat find - nice one @jsnfly! It's something we're going to take a look into in our ASR experiments! It seems like it could help with alignment in a much cleaner and more compact way than the encoder-decoder cross-attention mechanism.<|||||>> Regarding the first point - agree, it'd be good to check if the input is a tuple and if it is we can wrap it into a `ModelOutput` object. Would you be interested in opening a PR for this? :-) I have opened one - feel free to take a look. > Regarding the 2nd point - that's very interesting (cc @sanchit-gandhi). Also makes a lot of sense since ASR by itself is monotonic so knowing the order of words to transcribe together with the encoder speech frames seems like a sensible design architecture. Thanks a lot for sharing this here! > The embedding hack is a really neat find - nice one @jsnfly! It's something we're going to take a look into in our ASR experiments! It seems like it could help with alignment in a much cleaner and more compact way than the encoder-decoder cross-attention mechanism. Thanks for your feedback :) I will also try to experiment with this a bit more and let you know if I get some more results. <|||||>@jsnfly , thank you for this PR... Is it possible to do this fix for a T5 model as well.. It is also a sequence to sequence model and sometime we may want to pass a tuple to the decoder. If you guys don't see any issue I can do that. For context, I am playing with the[ Fusion In Decoder, ](https://github.com/facebookresearch/FiD) which is a version of the T5 model. The encoder, a tuple which is the hidden state of all encoder blocks concatenated as one vector, but the code is failing because it is expecting a tuple. I am going to apply this fix to the T5 model locally and see how it behaves.. @patrickvonplaten, let me know what you think .. <|||||>@espoirMur FID's requirement is transformers 3.0.2 so, this version's model output is formed as tuple. you can fix this issue if add input variable 'return_dict=False' at model input. (on transformers latest version) as follow ` train_loss = model( input_ids=context_ids.cuda(), attention_mask=context_mask.cuda(), labels=labels.cuda(), return_dict=False )[0] ` <|||||>> @espoirMur FID's requirement is transformers 3.0.2 so, this version's model output is formed as tuple. you can fix this issue if add input variable 'return_dict=False' at model input. (on transformers latest version) as follow `train_loss = model( input_ids=context_ids.cuda(), attention_mask=context_mask.cuda(), labels=labels.cuda(), return_dict=False )[0]` Thanks for your response and it helps a lot~ <|||||>> @espoirMur FID's requirement is transformers 3.0.2 so, this version's model output is formed as tuple. you can fix this issue if add input variable 'return_dict=False' at model input. (on transformers latest version) as follow `train_loss = model( input_ids=context_ids.cuda(), attention_mask=context_mask.cuda(), labels=labels.cuda(), return_dict=False )[0]` Thanks! helps me a lot!!
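A minimal sketch of the wrapping fix discussed in this thread, assuming the first element of the tuple is the encoder's last hidden state (the tensors below are illustrative):

```python
import torch
from transformers.modeling_outputs import BaseModelOutput

# encoder_outputs passed as a plain tuple, as the docstring allows
encoder_outputs = (torch.randn(1, 10, 768),)

if not isinstance(encoder_outputs, BaseModelOutput):
    encoder_outputs = BaseModelOutput(
        last_hidden_state=encoder_outputs[0],
        hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
        attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
    )

# encoder_outputs.last_hidden_state is now available to the decoder code path
```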
transformers
15,535
open
vits support?
Add support for VITS, a TTS transformer-based end-to-end model: https://github.com/jaywalnut310/vits
02-06-2022 07:57:52
02-06-2022 07:57:52
@jinfagang can I work on this issue? Can you briefly explain what has to be and from where can I start?
transformers
15,534
closed
Cannot set deterministic=False for FlaxRobertaPreTrainedModel, therefore dropout doesn't work?
Looking at FlaxRobertaPreTrainedModel's [call function](https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/models/roberta/modeling_flax_roberta.py#L610), there doesn't seem to be a way to set deterministic=False. Therefore, when it calls self.module.apply it leaves deterministic on its default (True). This isn't necessarily a problem if you just want to do inference with the model, but if we want to fine tune then we cannot use dropout. Is there some other place to set this argument that I am missing?
02-06-2022 00:23:43
02-06-2022 00:23:43
Actually as far as I can tell, the argument train: bool is acting as the deterministic switch. So if I set train=True, then dropout is used. The documentation could use some explanation in my opinion, and also it needs to be explained how if this is true then we need to provide a dropout seed.<|||||>Hi, you are right. `train` is used to control the `deterministic` arg, which is only used in internal modules. All public-facing models expose `train` arg to prevent this. And yes, this should be documented. > also it needs to be explained how if this is true then we need to provide a dropout seed. The seed is provided when initialising/loading the model, and the default is `True`. And the dropout seed is created by splitting this main seed. Would you like to open a PR to add the docstring for `train` arg ? We'll need to update this for all models in the `..._INPUTS_DOCSTRING` variable.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
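A minimal sketch of how the `train` flag controls dropout in practice, using a standard `roberta-base` checkpoint; the dropout PRNG key is split from a main key as described above.

```python
import jax
from transformers import FlaxRobertaModel, RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = FlaxRobertaModel.from_pretrained("roberta-base")

inputs = tokenizer("Dropout only fires when train=True", return_tensors="np")
main_rng, dropout_rng = jax.random.split(jax.random.PRNGKey(0))

# inference: deterministic forward pass, dropout disabled (train defaults to False)
eval_outputs = model(**inputs)

# fine-tuning style forward pass: dropout enabled, so a dropout rng must be provided
train_outputs = model(**inputs, train=True, dropout_rng=dropout_rng)
```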
transformers
15,533
open
Porting Compressive Transformer to Huggingface
# 🌟 New model addition As part of my Master's thesis I will implement or adapt different transformer architectures for LM that are specifically designed for long-context situations. As part of this I started with porting the Compressive Transformer to the Hugging Face interface, and will probably do the same for other architectures in the future. Let me know if you're interested in a pull request. ## Model description [Paper](https://arxiv.org/pdf/1911.05507.pdf) The Compressive Transformer is an extension of the Transformer-XL architecture with an additional compressed memory. Memories that would get discarded in the Transformer-XL get compressed and added to the compressed memory. The compression function can take different forms, but the best performance on word-level LM is achieved with a Conv1d compression. Training of the Compressive Transformer happens the same way the Transformer-XL architecture is trained. In addition, there is an "attention-reconstruction loss" which compares the attention that we get from using the memory with the attention we get from its compressed counterpart. Using the MSE loss we can perform gradient updates on the compression function. ## Open source status * [x] the model implementation is available: https://nn.labml.ai/transformers/compressive/index.html is an open-source implementation under the MIT license. https://github.com/deepmind/pg19 is the dataset used in parts of the experiments by the authors. https://github.com/vilmarzti/long_context_transformers/blob/main/longcontext/transformers/compressive_transformer.py is my humble start of porting the architecture to the Hugging Face format. * [x] the model weights are available: None that I could find. Weights (for WikiText-2 and 103) might become available as my thesis progresses and I start training. * [x] who are the authors: Jack W. Rae (https://github.com/dm-jrae) Anna Potapenko (https://github.com/AnyaP) Siddhant M. Jayakumar (GitHub profile not found) Chloe Hillier (GitHub profile not found) Timothy P. Lillicrap (GitHub profile not found)
02-05-2022 16:40:20
02-05-2022 16:40:20
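As a rough illustration of the Conv1d compression described above (a sketch with made-up shapes and a compression rate of 4, not the thesis implementation), the compression function reduces the time dimension of the memories that would otherwise be discarded:

```python
import torch
import torch.nn as nn

d_model, compression_rate = 512, 4
old_memories = torch.randn(8, 64, d_model)  # (batch, mem_len, d_model) about to be discarded

# a Conv1d over the time axis compresses mem_len by the compression rate
compress = nn.Conv1d(d_model, d_model, kernel_size=compression_rate, stride=compression_rate)
compressed_memories = compress(old_memories.transpose(1, 2)).transpose(1, 2)

print(compressed_memories.shape)  # torch.Size([8, 16, 512])
```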
transformers
15,532
closed
Error converting fine-tuned GPTNeoForCausalLM model to ONNX
Hi, I am trying to convert a fine-tuned GPT-Neo (125M) model to ONNX using the code below: ``` from transformers import pipeline, convert_graph_to_onnx, GPTNeoForCausalLM, GPT2Tokenizer from pathlib import Path import torch model_name = "EleutherAI/gpt-neo-125M" pipeline_name = "text-generation" tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-125M", bos_token='<|startoftext|>', eos_token='<|endoftext|>', pad_token='<|pad|>') nlp = pipeline(pipeline_name, model=model_name, tokenizer=tokenizer) with torch.no_grad(): ( input_names, output_names, dynamic_axes, tokens, ) = convert_graph_to_onnx.infer_shapes(nlp, "pt") ordered_input_names, model_args = convert_graph_to_onnx.ensure_valid_input( nlp.model, tokens, input_names ) model_name = 'gpt-neo' predictor_path = './' + model_name model2 = GPTNeoForCausalLM.from_pretrained(predictor_path) text = "I feel happy and " prompt = f'<|startoftext|>Review: {text} Sentiment: <|endoftext|>' encodings_dict = nlp.tokenizer(prompt, truncation=True, max_length=300, padding="max_length", return_tensors="pt") torch.onnx.export( model2, (encodings_dict['input_ids'], encodings_dict['attention_mask']), 'model_test.onnx', input_names=input_names, output_names=output_names, dynamic_axes=dynamic_axes, do_constant_folding=True, use_external_data_format=True, # Needed because of model size enable_onnx_checker=True, opset_version=13 ) ``` But I get this error: ``` ValueError Traceback (most recent call last) <ipython-input-15-024e093371a4> in <module>() 43 use_external_data_format=True, # Needed because of model size 44 enable_onnx_checker=True, ---> 45 opset_version=13 46 ) 3 frames /usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py in _validate_dynamic_axes(dynamic_axes, model, input_names, output_names) 1300 for i, x in enumerate(value): 1301 if not isinstance(x, int): -> 1302 raise ValueError("The type of axis index is expected to be an integer") 1303 if x in value_dict: 1304 warnings.warn("Duplicate dynamic axis index {} was provided for input {}." ValueError: The type of axis index is expected to be an integer ``` But if I remove dynamic_axes, I get this: ``` IndexError Traceback (most recent call last) <ipython-input-16-89ddee8c10f8> in <module>() 43 use_external_data_format=True, # Needed because of model size 44 enable_onnx_checker=True, ---> 45 opset_version=13 46 ) 15 frames /usr/local/lib/python3.7/dist-packages/transformers/models/gpt_neo/modeling_gpt_neo.py in forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict) 545 past_key_values = tuple([None] * len(self.h)) 546 else: --> 547 past_length = past_key_values[0][0].size(-2) 548 549 device = input_ids.device if input_ids is not None else inputs_embeds.device IndexError: dimension specified as -2 but tensor has no dimensions ``` Can someone help me please?
02-05-2022 13:41:39
02-05-2022 13:41:39
cc @lewtun @michaelbenayoun <|||||>Hey @toby-htx thanks for raising the issue! It looks like you're using the `convert_graph_to_onnx` package which has been deprecated in favour of the `transformers.onnx` package. You can export that checkpoint using the following command: ```bash # We need to increase the tolerance from the default for this model head python -m transformers.onnx --model=EleutherAI/gpt-neo-125M --feature=causal-lm --atol=5e-4 ./onnx/ ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
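As a hedged follow-up, a minimal sketch of running the exported model with ONNX Runtime, assuming the command above produced `./onnx/model.onnx` and that the exported graph takes `input_ids` and `attention_mask` as inputs:

```python
import onnxruntime as ort
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
session = ort.InferenceSession("onnx/model.onnx")

encodings = tokenizer("I feel happy and", return_tensors="np")
logits = session.run(
    None,
    {"input_ids": encodings["input_ids"], "attention_mask": encodings["attention_mask"]},
)[0]
print(logits.shape)  # (batch_size, sequence_length, vocab_size)
```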
transformers
15,531
closed
Add PoolFormer
# What does this PR do? This PR adds the PoolFormer model to the 🤗 repository. I also opened an Issue for adding the model https://github.com/huggingface/transformers/issues/14584 # Who can review? @NielsRogge
02-05-2022 13:22:18
02-05-2022 13:22:18
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for all your work on this, merging!
transformers
15,530
closed
Make TF Wav2Vec2 outputs the same as PT's version
# What does this PR do? Current TF Wav2Vec2 doesn't return `extract_features`, unlike in PT's version. This PR adds this to TF Wav2Vec2 outputs. @patrickvonplaten
02-05-2022 08:41:34
02-05-2022 08:41:34
_The documentation is not available anymore as the PR was closed or merged._
transformers
15,529
closed
feat: add debertav2 fast tokenizer
# What does this PR do? I think the changes made by @alcinos in the current PR #14928 are ready to be merged into main. Please let me know if this is alright, but the PR has gone stale and I'm just helping to move the needle here by collating all the changes so sorry for the one 300 LoC commit - Full credits should go to @alcinos for his great work! This PR implements a fast tokenizer for DeBERTaV2 and all related models: - DeBERTav2 - DeBERTav3 - mDeBERTav3 Fixes # (issue) #11529 #14712 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? There was one initial test `test_right_and_left_truncation` that failed initially on my local when I was running tests on Tokenizers 10.3. https://github.com/huggingface/transformers/blob/39b5d1a63a07d60e496a6bd98c3a60d32e8b9e6d/tests/test_tokenization_common.py#L1521 This test failure wasn’t present in the previous PR when it was still transformers 4.16.0[dev] - I received 'Ignored unknown kwarg option direction' in my logs which appears to be related to https://github.com/huggingface/transformers/pull/15266 Updating my Tokenizers to 11.4 fixed the test locally, and it seems that it passes all tests now - but not super clear to me why this fixes the failing test and it is now able to pass: Should anything be done regarding this issue like introduce some form of backward compatibility to Tokenizers < 11.0? 
``` from transformers import DebertaV2Tokenizer, DebertaV2TokenizerFast tokenizer = DebertaV2Tokenizer('./tests/fixtures/spiece.model') tokenizer_fast = DebertaV2TokenizerFast('./tests/fixtures/spiece.model') def test_right_and_left_truncation(tokenizer, sequence="This is a test sequence"): truncation_size = 3 # RIGHT PADDING tokenizer.truncation_side = "right" encoded_sequence = tokenizer.encode(sequence, add_special_tokens=False) sequence_length = len(encoded_sequence) truncated_sequence = tokenizer.encode( sequence, max_length = sequence_length - truncation_size, truncation=True, add_special_tokens=False ) truncated_sequence_length = len(truncated_sequence) print(f"sequence length: {sequence_length}, {truncated_sequence_length + truncation_size}") print(f"encoded sequence: {encoded_sequence[:-truncation_size]}, {truncated_sequence}") # LEFT PADDING tokenizer.truncation_side = "left" sequence_length = len(encoded_sequence) truncated_sequence = tokenizer.encode( sequence, max_length=sequence_length - truncation_size, truncation=True, add_special_tokens=False ) truncated_sequence_length = len(truncated_sequence) print(f"sequence length: {sequence_length}, {truncated_sequence_length + truncation_size}") print(f"encoded sequence: {encoded_sequence[truncation_size:]}, {truncated_sequence}") ``` In Tokenizers 11.4: ``` >>> test_right_and_left_truncation(tokenizer) sequence length: 7, 7 encoded sequence: [13, 1, 4398, 25], [13, 1, 4398, 25] sequence length: 7, 7 encoded sequence: [25, 21, 1289, 4030], [25, 21, 1289, 4030] >>> test_right_and_left_truncation(tokenizer_fast) sequence length: 7, 7 encoded sequence: [13, 1, 4398, 25], [13, 1, 4398, 25] sequence length: 7, 7 encoded sequence: [25, 21, 1289, 4030], [25, 21, 1289, 4030] ``` In Tokenizers 10.3: ``` >>> test_right_and_left_truncation(tokenizer) sequence length: 7, 7 encoded sequence: [13, 1, 4398, 25], [13, 1, 4398, 25] sequence length: 7, 7 encoded sequence: [25, 21, 1289, 4030], [25, 21, 1289, 4030] >>> test_right_and_left_truncation(tokenizer_fast) Ignored unknown kwarg option direction sequence length: 7, 7 encoded sequence: [13, 1, 4398, 25], [13, 1, 4398, 25] Ignored unknown kwarg option direction sequence length: 7, 7 encoded sequence: [25, 21, 1289, 4030], [13, 1, 4398, 25] ``` ## Who can review? @SaulLu @stefan-it
02-05-2022 08:00:51
02-05-2022 08:00:51
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15529). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you so much for taking the relay on this PR! It seems to me that [this point](https://github.com/huggingface/transformers/pull/14928#discussion_r775445739) of the PR was the last point that remained to be addressed. What do you think? Also, I see that you no longer have the commits from @alcinos's original PR. Is there any particular reason? Otherwise, I think it would be nice to at least put all the co-authors of the PR back in the commit message. If it's easier, I could do it in the merge message of this PR on master when it's ready. About the error, you raise a good point we should not allow versions of tokenizers before v.11 (at least) because they did not allow truncate left. So the error is not related to this PR. :blush: <|||||>> Thank you so much for taking the relay on this PR! > > It seems to me that [this point](https://github.com/huggingface/transformers/pull/14928#discussion_r775445739) of the PR was the last point that remained to be addressed. What do you think? > > Also, I see that you no longer have the commits from @alcinos's original PR. Is there any particular reason? Otherwise, I think it would be nice to at least put all the co-authors of the PR back in the commit message. If it's easier, I could do it in the merge message of this PR on master when it's ready. > > About the error, you raise a good point we should not allow versions of tokenizers before v.11 (at least) because they did not allow truncate left. So the error is not related to this PR. 😊 @SaulLu No I agree with you, it wasn't my intention to create a new PR for this - but I've tried and couldn't push to the original branch in the original PR created... so I just ended up pushing to my own repo instead, is there a way for me to push to the original PR without write access to @alcinos repo? Or I can modify my commit message - do I need to follow any convention for the commit message when I add the co-authors? And I've missed that point in the original PR, can I just confirm there needs to be: - an additional test to check the behavior of backend tokenizer and fast tokenizer for all combinations of arguments (`do_lower_case`, `split_by_punct`) - If behaviors are differing: for example if the spilt_by_punct behavior differs, then modify the behavior of the Slow Tokenizer to match that of the Fast Tokenizer<|||||>Thanks a lot for your kindness @mingboiz ! Concerning the mention of co-authors, don't worry too much about it, I'll take care of it when I merge the PR on main :slightly_smiling_face: - but indeed it is necessary to follow a template cf the corresponding documentation here. Concerning the `do_lower_case` and `split_by_punct` arguments, you are right, I also think that the best thing to do is to add a test first! However, this test will most probably fail and you will have to add some code in the `__init__` method of `DebertaV2TokenizerFast` (as explained briefly [here](https://github.com/huggingface/transformers/pull/14928#discussion_r777623922)). Then, if behaviors are differing between the slow and the fast version, the tokenizer we need to modify is the fast one :relaxed: . Don't hesitate to ping me if you have any difficulty, these last 2 points are not the easiest to handle. <|||||>Hi @SaulLu, CI has been failing after the addition of `split_by_punct` and `do_lower_case` arguments and tests. 
The failed jobs seem unrelated to the PR, requesting your help in pointing out why the CI is failing and how I can fix it, thank you!<|||||>Thanks for your last additions. Regarding the failing CI, I think I have a little idea where it might have come from. Do you remember the last time you rebased your branch on master? If it's been a long time, I think that the quicker fix might be to rebase your branch on master: :relaxed: <|||||>Hey @mingboiz and @SaulLu! What is the status of this PR? Do you plan to merge this? The tests seem to pass on the CI.<|||||>Thanks for the reminder @bogdankostic ! Indeed, it would be nice if we could merge @mingboiz and @alcinos' amazing work! We'd just have to do a new PR review with the latest changes - I think the tests for `split_by_punct` and `do_lower_case` arguments have been added. To review this PR, it would be really great if we could rebase this branch on main (or merge main into this branch). Let me know if you have any difficulty to do that @mingboiz (I've tried to do it for you [here](https://github.com/SaulLu/transformers/tree/deberta-v2-fast-tokenizer) but I can't open a PR to your fork).<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Sorry for the lack of updates @bogdankostic it has been hectic for me recently (quarterly releases 🥲 ) Thank you for catching it @nbroad1881, I wasn't as careful with my tests as I would have liked - maybe would you and @SaulLu let me know what you think of the new change and this PR? 😄 <|||||>FYI, I just used this PR to train `DebertaV2ForTokenClassification` via `microsoft/deberta-v3-base` on the `conll2003` dataset (didn't play with the training arguments, just a simple test). Previously, this wasn't possible due to a missing fast tokenizer (maybe it was, but my script is written around this feature). Here are the results and they look great: Dev ``` {'eval_loss': 0.039761029183864594, 'eval_precision': 0.9413465012646032, 'eval_recall': 0.954218044194848, 'eval_f1': 0.9477385716017946, 'eval_accuracy': 0.9896374761497467, 'eval_runtime': 8.4867, 'eval_samples_per_second': 382.952, 'eval_steps_per_second': 3.064, 'epoch': 5.0} ``` Test ``` {'LOC': {'precision': 0.9063653136531366, 'recall': 0.9317211948790897, 'f1': 0.9188683656768764, 'number': 2109}, 'MISC': {'precision': 0.7502590673575129, 'recall': 0.7479338842975206, 'f1': 0.7490946714950854, 'number': 968}, 'ORG': {'precision': 0.8810440735986307, 'recall': 0.9216651745747538, 'f1': 0.9008969590899146, 'number': 2234}, 'PER': {'precision': 0.9716033202271734, 'recall': 0.9590340664079344, 'f1': 0.9652777777777778, 'number': 2319}, 'overall_precision': 0.8985694032736178, 'overall_recall': 0.9137614678899083, 'overall_f1': 0.9061017609981157, 'overall_accuracy': 0.9797932835020207} ``` Thanks a lot! @mingboiz 🙏
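For reference, a minimal sketch of what this PR enables once merged, assuming the class is exposed as `DebertaV2TokenizerFast` (the checkpoint is the one used in the token-classification run above):

```python
from transformers import DebertaV2TokenizerFast

tokenizer = DebertaV2TokenizerFast.from_pretrained("microsoft/deberta-v3-base")
encoding = tokenizer(
    "DeBERTa-v3 now has a fast tokenizer.",
    return_offsets_mapping=True,  # offset mappings are only available on fast tokenizers
)
print(encoding["offset_mapping"])
```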
transformers
15,528
closed
How to use pipeline with Pytorch framework on Windows?
I used the framework parameter with "pt" in the pipeline constructor, but after running I get: 2022-02-05 13:48:25.332617: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll Why doesn't transformers detect the PyTorch installation?
02-05-2022 07:52:54
02-05-2022 07:52:54
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
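A minimal sketch of explicitly selecting the PyTorch backend; note that the `cudart64_110.dll` log line above is most likely printed by TensorFlow at import time simply because it is installed, independent of which framework the pipeline ends up using (an assumption about the reported setup, not a confirmed diagnosis):

```python
from transformers import pipeline

# framework="pt" forces the PyTorch weights and backend even if TensorFlow is also installed
classifier = pipeline("sentiment-analysis", framework="pt")
print(classifier("Transformers picked the PyTorch backend for this pipeline."))
```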
transformers
15,527
closed
Make Swin work with VisionEncoderDecoderModel
# What does this PR do? This PR sets the `hidden_size` attribute of the config of Swin Transformer, allowing it to be used with the `VisionEncoderDecoderModel` framework. It also adds an attribute_map to the config of Swin Transformer. Fixes #15526
02-05-2022 07:32:56
02-05-2022 07:32:56
_The documentation is not available anymore as the PR was closed or merged._
transformers
15,526
closed
SwinTransformer as encoder and Bart as decoder
## Environment info - `transformers` version: 4.16.2 - Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyTorch version (GPU?): 1.10.2+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help @NielsRogge ## Information I wanted to use an encoder decoder model with ```SwinTransformer``` as an encoder and ```bart-large``` as a decoder. I used ```VisionEncoderDecoderModel.from_encoder_decoder_pretrained("microsoft/swin-base-patch4-window12-384", "facebook/bart-large")``` command and it results in the following. ``` File "train_bart.py", line 158, in <module> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained("microsoft/swin-base-patch4-window12-384", "facebook/bart-large") File "/private/home/rbh/anaconda3/lib/python3.8/site-packages/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py", line 399, in from_encoder_decoder_pretrained return cls(encoder=encoder, decoder=decoder, config=config) File "/private/home/rbh/anaconda3/lib/python3.8/site-packages/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py", line 213, in __init__ self.encoder.config.hidden_size != self.decoder.config.hidden_size File "/private/home/rbh/anaconda3/lib/python3.8/site-packages/transformers/configuration_utils.py", line 250, in __getattribute__ return super().__getattribute__(key) AttributeError: 'SwinConfig' object has no attribute 'hidden_size' ``` Can the SwinTransformer with a Bart decoder be initialized using ```VisionEncoderDecoderModel``` or do I need to write my own model with SwinTransformer as enocder and Bart as decoder ?
02-04-2022 23:36:06
02-04-2022 23:36:06
Hi @NielsRogge I did the changes mentioned in the PR above. But then I get the following error. ``` Traceback (most recent call last): File "train_donut.py", line 260, in <module> trainer.train() File "/private/home/rbh/trocr/transformers/src/transformers/trainer.py", line 1398, in train tr_loss_step = self.training_step(model, inputs) File "/private/home/rbh/trocr/transformers/src/transformers/trainer.py", line 1980, in training_step loss = self.compute_loss(model, inputs) File "/private/home/rbh/trocr/transformers/src/transformers/trainer.py", line 2012, in compute_loss outputs = model(**inputs) File "/private/home/rbh/anaconda3/envs/donut/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/private/home/rbh/anaconda3/envs/donut/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/private/home/rbh/anaconda3/envs/donut/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/private/home/rbh/anaconda3/envs/donut/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/private/home/rbh/anaconda3/envs/donut/lib/python3.7/site-packages/torch/_utils.py", line 434, in reraise raise exception RuntimeError: Caught RuntimeError in replica 0 on device 0. Original Traceback (most recent call last): File "/private/home/rbh/anaconda3/envs/donut/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/private/home/rbh/anaconda3/envs/donut/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/private/home/rbh/trocr/transformers/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py", line 478, in forward encoder_hidden_states = self.enc_to_dec_proj(encoder_hidden_states) File "/private/home/rbh/anaconda3/envs/donut/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/private/home/rbh/anaconda3/envs/donut/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 103, in forward return F.linear(input, self.weight, self.bias) File "/private/home/rbh/anaconda3/envs/donut/lib/python3.7/site-packages/torch/nn/functional.py", line 1848, in linear return torch._C._nn.linear(input, weight, bias) RuntimeError: mat1 and mat2 shapes cannot be multiplied (576x1024 and 128x1024) ``` Do we need to add another projection layer?<|||||>The way `VisionEncoderDecoderModel` works is as follows: First, the image (`pixel_values`) is sent through the encoder to obtain some `last_hidden_state`. These are the features at the last layer of the model. In case of `microsoft/swin-base-patch4-window12-384`, this is a tensor of shape (1, 144, 1024) in case of a single image, as can be seen as follows: ``` import torch from transformers import SwinModel model = SwinModel.from_pretrained("microsoft/swin-base-patch4-window12-384") pixel_values = torch.randn(1,3,384,384) outputs = model(pixel_values) print(outputs.last_hidden_state.shape) ``` Next, the decoder will use these hidden states to perform cross-attention. 
If you're using `facebook/bart-large` to initialize the weights of the decoder, the decoder will have a hidden size (embedding dimension) of 1024, as can be seen below: ``` from transformers import BartConfig config = BartConfig.from_pretrained("facebook/bart-large") print(config.hidden_size) ``` Hence, no projection is needed as both have 1024 as hidden dimension (making cross-attention possible). For now, I'd advise you to set the `hidden_size` attribute of the encoder to 1024, in which case no projection will be added. ``` from transformers import SwinModel, BartForCausalLM, VisionEncoderDecoderModel encoder = SwinModel.from_pretrained("microsoft/swin-base-patch4-window12-384") decoder = BartForCausalLM.from_pretrained("facebook/bart-large") # Set encoder config hidden size (which will make sure no projection layer is added) setattr(encoder.config, "hidden_size", 1024) # Initializing a model with a pretrained Swin as encoder & a pretrained BART-large as decoder model = VisionEncoderDecoderModel(encoder=encoder, decoder=decoder) ``` Looking into it, the PR above will not solve the issue appropriately. Swin Transformer is a bit special in the sense that the shapes of the hidden states are not identical after every layer: the channel dimension is increased by a factor of 2 after each stage.<|||||>I'll update my PR accordingly to set the `hidden_size` attribute. <|||||>If you install transformers from my PR, this now works: ``` from transformers import VisionEncoderDecoderModel # Initializing a model with a pretrained Swin as encoder & a pretrained BART-large as decoder model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained("microsoft/swin-base-patch4-window12-384", "facebook/bart-large") ```<|||||>@RishabhMaheshwary let me know how it goes, then I'll merge the PR!<|||||>Hi, I am getting this error now (during validation). 
``` ***** Running Evaluation ***** Num examples = 3124 Batch size = 8 Traceback (most recent call last): File "train_donut.py", line 261, in <module> trainer.train() File "/private/home/rbh/trocr/transformers/src/transformers/trainer.py", line 1473, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/private/home/rbh/trocr/transformers/src/transformers/trainer.py", line 1598, in _maybe_log_save_evaluate metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) File "/private/home/rbh/trocr/transformers/src/transformers/trainer_seq2seq.py", line 70, in evaluate return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) File "/private/home/rbh/trocr/transformers/src/transformers/trainer.py", line 2260, in evaluate metric_key_prefix=metric_key_prefix, File "/private/home/rbh/trocr/transformers/src/transformers/trainer.py", line 2427, in evaluation_loop loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys) File "/private/home/rbh/trocr/transformers/src/transformers/trainer_seq2seq.py", line 175, in prediction_step **gen_kwargs, File "/private/home/rbh/anaconda3/envs/donut/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context return func(*args, **kwargs) File "/private/home/rbh/trocr/transformers/src/transformers/generation_utils.py", line 1095, in generate inputs_tensor, model_kwargs, model_input_name File "/private/home/rbh/trocr/transformers/src/transformers/generation_utils.py", line 508, in _prepare_encoder_decoder_kwargs_for_generation model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs) File "/private/home/rbh/anaconda3/envs/donut/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) TypeError: forward() got an unexpected keyword argument 'attention_mask' ``` Is this similar to https://github.com/huggingface/transformers/issues/13812 <|||||>Can you provide a notebook to reproduce? Swin Transformer (as any other vision encoder) doesn't take `attention_mask` as input. <|||||>Here is the code I am using. https://colab.research.google.com/drive/1x0ARdnAk0XlchxktcRHI7s21FvaCbCWC?usp=sharing It is basically the TrOCR demo. <|||||>Here https://github.com/huggingface/transformers/blob/fcb4f11c9232cee2adce8140a3a7689578ea97de/src/transformers/trainer_seq2seq.py#L172 It passes ```attention_mask``` as model params. When I remove the ```attention_mask ``` I am able to run the above [script](https://colab.research.google.com/drive/1x0ARdnAk0XlchxktcRHI7s21FvaCbCWC?usp=sharing) without any error.<|||||>@NielsRogge Is it possible to do the same thing using Convnext as encoder ? thank you !<|||||>Hi, The `VisionEncoderDecoderModel` class doesn't support it at the moment, but it's possible technically. As Convnext is not Transformer-based, one would need a custom strategy to turn the final feature map of ConvNext into a sequence of "tokens", which can be used for cross-attention with the language decoder. A typical strategy here would be to flatten the 2D feature map into a sequence of tokens, and use a linear projection layer to make sure the dimensions between the encoder and decoder match. This is used in ViT hybrid for instance, see [here](https://github.com/huggingface/transformers/blob/9a6c6ef97fa5df4b1fb8dbc9e8c10ee3a9ed7e2a/src/transformers/models/vit_hybrid/modeling_vit_hybrid.py#L200-L201). 
For models like ViT, BEiT, DeiT and Swin Transformer, this is straightforward as they output a sequence of "tokens" by default.
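A minimal sketch of the flatten-and-project idea described above, with illustrative shapes (this is not the `VisionEncoderDecoderModel` implementation):

```python
import torch
import torch.nn as nn

batch_size, channels, height, width = 1, 768, 7, 7  # hypothetical final CNN feature map
decoder_hidden_size = 1024                           # e.g. a BART-large decoder

feature_map = torch.randn(batch_size, channels, height, width)
tokens = feature_map.flatten(2).transpose(1, 2)      # (batch, height*width, channels)
projection = nn.Linear(channels, decoder_hidden_size)
encoder_hidden_states = projection(tokens)           # (batch, 49, 1024) for cross-attention

print(encoder_hidden_states.shape)
```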
transformers
15,525
closed
Wav2Vec2 for long audio with N-gram Language model
I'm trying to use Wav2Vec2 to transcribe a long audio file, following this blog post (https://huggingface.co/blog/asr-chunking). The example given there is: ``` from transformers import pipeline pipe = pipeline(model="facebook/wav2vec2-base-960h") output = pipe("very_long_file.mp3", chunk_length_s=10, stride_length_s=(4, 2)) ``` The author said that it is possible to use an n-gram LM, but it is not clear to me how, as the pipeline produces only the "text", not the logits. So my question is: **how can I make the pipeline produce logits instead of plain text**?
02-04-2022 22:07:52
02-04-2022 22:07:52
According to this (https://huggingface.co/blog/wav2vec2-with-ngram) and the pipeline source, if the model has an n-gram LM as its decoder it should automatically be used for decoding, provided pyctcdecode and kenlm are installed<|||||>As @Dharisd said, I will help you out with a small code snippet. The first thing to do is to install pyctcdecode and kenlm: `!pip install pyctcdecode` `!pip install https://github.com/kpu/kenlm/archive/master.zip` Next, make a change in the file _transformers/pipelines/automatic_speech_recognition.py_: add `return logits` after line 315 (this will return the logits instead of text when you use the pipeline). Finally, make sure you are using the n-gram LM model instead of _"facebook/wav2vec2-base-960h"_. The following snippet will give you the desired output: ``` from transformers import pipeline pipe = pipeline(model="patrickvonplaten/wav2vec2-base-100h-with-lm") output = pipe("very_long_file.mp3", chunk_length_s=10, stride_length_s=(4, 2)) ``` Hope this helps! :D <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
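As an alternative to patching the pipeline file, here is a minimal sketch of computing the logits for one chunk directly with the model and processor (the audio array is a placeholder for 10 seconds of 16 kHz samples); these logits can then be fed to pyctcdecode:

```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

speech = np.random.randn(16000 * 10).astype(np.float32)  # placeholder audio chunk
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits  # (batch, time, vocab_size)
print(logits.shape)
```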
transformers
15,524
closed
Unable to Import KeyDataset for Pipeline Iterator in 4.16.2
## Environment info - `transformers` version: 4.16.2 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyTorch version (GPU?): 1.10.0+cu111 (False) - Tensorflow version (GPU?): 2.7.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: NA - Using distributed or parallel set-up in script?: NA ### Who can help - Pipelines: @Narsil - Documentation: @sgugger ## To reproduce Steps to reproduce the behavior: Follow the commands in this google colab: https://colab.research.google.com/drive/1BkvrzISZbt_N0S6ntgG7_fN3LdfJE2DH?usp=sharing Attempting to import KeyDataset ``` from transformers.pipelines.base import KeyDataset ``` Produces the following error: ``` ImportError: cannot import name 'KeyDataset' from 'transformers.pipelines.base' ``` ## Expected behavior Successfully import KeyDataset to perform the example in the documentation for batch processing pipeline predictions (https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/pipelines#pipeline-batching): ``` from transformers import pipeline from transformers.pipelines.base import KeyDataset import datasets dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised") pipe = pipeline("text-classification", device=0) for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"): print(out) ```
02-04-2022 20:33:53
02-04-2022 20:33:53
Everything works fine in 4.15.0, so not sure if this was deprecated/removed on purpose and the documentation is outdated or what, but any help is greatly appreciated!<|||||>Hi, thanks for reporting! 🤗 The KeyDataset class is now in `transformers.pipelines.pt_utils` and #15607 should update this part of the docs.<|||||>We should do the re-export from within `transformers.pipelines.__init__.py`; this is a breaking change which shouldn't be there.<|||||>We should also fix the doc example to show the proper import path. Mostly, if it's an object we expect users to import in common cases, it should be in the main init, so it can be accessible at the root level, and we can change our internals without worry of breaking changes.<|||||>Update the doc too (kept the re-export for now)
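For reference, a minimal sketch of the docs example with the import path mentioned above (same dataset and pipeline as the original snippet):

```python
import datasets
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset

dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised")
pipe = pipeline("text-classification", device=0)

for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"):
    print(out)
```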
transformers
15,523
closed
Various issues with accelerate launch command for Large example
## Environment info https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-pretraining Python 3.6.9 ## Information I'm trying to run this command: https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-pretraining#large and found various issues with it: 1. first line contains an erroneous space after the escape character: `accelerate launch run_wav2vec2_pretraining_no_trainer.py \ ` 2. The following line is missing the newline escape: `--model_name_or_path=./ ` After fixing both I found another problem: 3. The value of --model_name_or_path is invalid: `./` ## Suggested Fix: I fixed this with ` --model_name_or_path=patrickvonplaten/wav2vec2-large-xlsr-53-spanish-with-lm` This command works now: ``` accelerate launch run_wav2vec2_pretraining_no_trainer.py \ --dataset_name=librispeech_asr \ --dataset_config_names clean clean other \ --dataset_split_names train.100 train.360 train.500 \ --output_dir=./test \ --max_train_steps=200000 \ --num_warmup_steps=32000 \ --gradient_accumulation_steps=8 \ --learning_rate=0.001 \ --weight_decay=0.01 \ --max_duration_in_seconds=20.0 \ --min_duration_in_seconds=2.0 \ --model_name_or_path=patrickvonplaten/wav2vec2-large-xlsr-53-spanish-with-lm \ --logging_steps=1 \ --saving_steps=10000 \ --per_device_train_batch_size=2 \ --per_device_eval_batch_size=4 \ --adam_beta1=0.9 \ --adam_beta2=0.98 \ --adam_epsilon=1e-06 \ --gradient_checkpointing ```
02-04-2022 19:26:14
02-04-2022 19:26:14
cc @patrickvonplaten <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @ThomasGmeinder, Thanks for spotting `1.` and `2.` - feel free to open a PR to correct the docs :-) The reason why I put `--model_name_or_path=./` is because this way you can train from your local directory, but the docs should have been cleaner here indeed<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
15,522
closed
Getting Error
Hello, I'm getting this error when running it on Colab: https://prnt.sc/26osvuh
02-04-2022 18:31:04
02-04-2022 18:31:04
Hello, please follow the issue template to report bugs, thank you.
transformers
15,521
closed
Add Wav2Vec2 Adapter Weights to Flax
# What does this PR do?

Fixes #15476

- Adds an adapter to the Flax Wav2Vec2 model to reduce the time dimension of the extracted feature vectors beyond that of the standard Wav2Vec2 model. The encoder's output hidden states thus have a time context window closer to that of a subword token instead of just a character.
- Shape and values of the Flax output logits match those of the PyTorch model.
- The Flax model uses all PyTorch model weights, including those of the adapter. Running the script in #15476 confirmed that both models yield identical results (within a 4e-2 threshold).
02-04-2022 17:36:51
02-04-2022 17:36:51
_The documentation is not available anymore as the PR was closed or merged._<|||||>I think the commit history is messed up here - the best is usually to just reopen a new PR and to just extract your changes from this PR.
transformers
15,520
closed
Warn if using a CTC+LM model with no decoder (asr pipeline)
When feeding a model to the automatic-speech-recognition pipeline, if the decoder is forgotten, the model is used as normal CTC, without the language model applied. This is likely to be accidental, and might go unnoticed. To address this, I propose to add a warning. cc @LysandreJik
02-04-2022 16:54:00
02-04-2022 16:54:00
_The documentation is not available anymore as the PR was closed or merged._<|||||>It would be nice to document the `decoder` parameter; I did not include it because it is not documented anywhere, but that is a separate issue I guess. Closing this pull request based on review feedback.
transformers
15,519
closed
Handle PyTorch to Flax conversion of 1D convolutions
# What does this PR do? Currently, only 2-dimensional convolutional layers are renamed and reshaped in the PyTorch to Flax conversion script. This PR handles the case of 1-dimensional convolutions layers, in an entirely equivalent way to their 2-dimensional counterparts.
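For illustration, a minimal sketch of the kind of branch this adds (variable names and surrounding logic are hypothetical, not the exact code of the conversion script): a PyTorch `Conv1d` weight has shape `(out_channels, in_channels, kernel)`, while Flax expects a `kernel` of shape `(kernel, in_channels, out_channels)`, so the parameter is renamed and transposed, mirroring the existing 2D case:

```python
# hypothetical sketch of the 1D-conv branch in a PT -> Flax weight-conversion loop,
# assuming `pt_tensor` is a NumPy array and `is_conv_layer` was detected from the module
if pt_tuple_key[-1] == "weight" and pt_tensor.ndim == 3 and is_conv_layer:
    # (out_channels, in_channels, kernel) -> (kernel, in_channels, out_channels)
    flax_key = pt_tuple_key[:-1] + ("kernel",)
    flax_tensor = pt_tensor.transpose(2, 1, 0)
```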
02-04-2022 15:17:22
02-04-2022 15:17:22
_The documentation is not available anymore as the PR was closed or merged._<|||||>I'm a bit surprised that we needed that. We already had 1D Conv layers in Flax in Wav2Vec2 and the conversion worked
transformers
15,518
closed
How to load bert_base-augmented-batch_size=128-lr=2e-5-max_gloss=6 model in offline mode in jupyter nb
I have downloaded "bert_base-augmented-batch_size=128-lr=2e-5-max_gloss=6" model locally but while trying to use it in jupyter notebook getting error: ``` HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models//bert_base-augmented-batch_size=128-lr=2e-5-max_gloss=6 ``` How to fix this? code I am using to use the model is as below: ``` import torch import math from transformers import BertModel, BertConfig, BertPreTrainedModel, BertTokenizer class BertWSD(BertPreTrainedModel): def __init__(self, config): super().__init__(config) self.bert = BertModel(config) self.dropout = torch.nn.Dropout(config.hidden_dropout_prob) self.ranking_linear = torch.nn.Linear(config.hidden_size, 1) self.init_weights() DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model_dir = "/bert_base-augmented-batch_size=128-lr=2e-5-max_gloss=6" model = BertWSD.from_pretrained(model_dir) tokenizer = BertTokenizer.from_pretrained(model_dir) # add new special token if '[TGT]' not in tokenizer.additional_special_tokens: tokenizer.add_special_tokens({'additional_special_tokens': ['[TGT]']}) assert '[TGT]' in tokenizer.additional_special_tokens model.resize_token_embeddings(len(tokenizer)) model.to(DEVICE) model.eval() ```
02-04-2022 14:00:53
02-04-2022 14:00:53
transformers
15,517
closed
* How to convert pytorch.bin to *.ckpt files
Model I am using: MuRIL BERT
Language I am using the model on: Indian languages

The BERT model folder contains these files:
- config.json
- tf_model.h5
- tokenizer_config.json
- tokenizer.json
- vocab.txt

Instead of these, I need the following files:
- bert_config.json
- bert_model.ckpt.data-00000-of-00001
- bert_model.ckpt.index
- bert_model.ckpt.meta
- vocab.txt

How can I do this? The problem is: how to convert pytorch.bin to *.ckpt files.
02-04-2022 13:00:21
02-04-2022 13:00:21
I need the .ckpt files because my code expects them; otherwise I would have to change my whole code, and that would take a lot of time.<|||||>(already replied to in #15490)
transformers
15,516
closed
T5Tokenizer loses most special tokens after I add a new special token.
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.15.0 - Platform: CentOS - Python version: 3.8.8 - PyTorch version (GPU?): 1.7.0+cu110 - Tensorflow version (GPU?): No - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten - Blenderbot, MBART: @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten @narsil - Tokenizers: @SaulLu - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. 
For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information I am using T5 and find a bug of T5Tokenizer.@patrickvonplaten The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Just run my code: ```python from transformers import T5Tokenizer tokenizer = T5Tokenizer.from_pretrained("t5-base") print(tokenizer.all_special_ids) tokenizer.add_special_tokens({"additional_special_tokens": ["[EVENT]"]}) print(tokenizer.all_special_ids) ``` ## Expected behavior I think the output should be: [1, 2, 0, 32099, 32098, 32097, 32096, 32095, 32094, 32093, 32092, 32091, 32090, 32089, 32088, 32087, 32086, 32085, 32084, 32083, 32082, 32081, 32080, 32079, 32078, 32077, 32076, 32075, 32074, 32073, 32072, 32071, 32070, 32069, 32068, 32067, 32066, 32065, 32064, 32063, 32062, 32061, 32060, 32059, 32058, 32057, 32056, 32055, 32054, 32053, 32052, 32051, 32050, 32049, 32048, 32047, 32046, 32045, 32044, 32043, 32042, 32041, 32040, 32039, 32038, 32037, 32036, 32035, 32034, 32033, 32032, 32031, 32030, 32029, 32028, 32027, 32026, 32025, 32024, 32023, 32022, 32021, 32020, 32019, 32018, 32017, 32016, 32015, 32014, 32013, 32012, 32011, 32010, 32009, 32008, 32007, 32006, 32005, 32004, 32003, 32002, 32001, 32000] [1, 2, 0, 32099, 32098, 32097, 32096, 32095, 32094, 32093, 32092, 32091, 32090, 32089, 32088, 32087, 32086, 32085, 32084, 32083, 32082, 32081, 32080, 32079, 32078, 32077, 32076, 32075, 32074, 32073, 32072, 32071, 32070, 32069, 32068, 32067, 32066, 32065, 32064, 32063, 32062, 32061, 32060, 32059, 32058, 32057, 32056, 32055, 32054, 32053, 32052, 32051, 32050, 32049, 32048, 32047, 32046, 32045, 32044, 32043, 32042, 32041, 32040, 32039, 32038, 32037, 32036, 32035, 32034, 32033, 32032, 32031, 32030, 32029, 32028, 32027, 32026, 32025, 32024, 32023, 32022, 32021, 32020, 32019, 32018, 32017, 32016, 32015, 32014, 32013, 32012, 32011, 32010, 32009, 32008, 32007, 32006, 32005, 32004, 32003, 32002, 32001, 32000, 32100] However, the current output is: [1, 2, 0, 32099, 32098, 32097, 32096, 32095, 32094, 32093, 32092, 32091, 32090, 32089, 32088, 32087, 32086, 32085, 32084, 32083, 32082, 32081, 32080, 32079, 32078, 32077, 32076, 32075, 32074, 32073, 32072, 32071, 32070, 32069, 32068, 32067, 32066, 32065, 32064, 32063, 32062, 32061, 32060, 32059, 32058, 32057, 32056, 32055, 32054, 32053, 32052, 32051, 32050, 32049, 32048, 32047, 32046, 32045, 32044, 32043, 32042, 32041, 32040, 32039, 32038, 32037, 32036, 32035, 32034, 32033, 32032, 32031, 32030, 32029, 32028, 32027, 32026, 32025, 32024, 32023, 32022, 32021, 32020, 32019, 32018, 32017, 32016, 32015, 32014, 32013, 32012, 32011, 32010, 32009, 32008, 32007, 32006, 32005, 32004, 32003, 32002, 32001, 32000] [1, 2, 0, 32100] So, it seems that a lot of special tokens get lost after I add a new special token.
02-04-2022 12:50:07
02-04-2022 12:50:07
Hey @zhaowei-wang98,

The reason why your code sample behaves the way it does is because you completely overwrite the list of `"additional_special_tokens"`. If you just want to add a single token, you should rather use the `add_tokens(...)` function. Think the following code-snippet should make things clearer:

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")

print("HEY")
print({k: v for k, v in tokenizer.get_vocab().items() if int(v) > 32000})

tokenizer.add_tokens(["[EVENT]"])

print("HEY")
print({k: v for k, v in tokenizer.get_vocab().items() if int(v) > 32000})
```<|||||>Thanks a lot. Sorry for my misunderstanding.<|||||>No worries!
transformers
15,515
closed
Add attention_scores as output in RoBERTa
# 🚀 Feature request

Hi, I propose to add `attention_scores`, a local variable in `RobertaSelfAttention`'s `forward` method, to the model output on user request, by adding an `output_attention_scores` keyword argument to the RoBERTa model.

## Motivation

Currently, there is an `output_attentions` keyword argument which makes the model output `attention_probs`, but because softmax has already been applied to it, it cannot be used well when computing losses: softmax-related numerical instabilities might arise.

## Your contribution

I would be happy to do the work. RoBERTa could be the starting point, and then we could do the same for other models.
02-04-2022 11:12:02
02-04-2022 11:12:02
Interesting request -> to enable this, we would ideally enable it across all models which would be quite a bit of work. In which situations do you need the `attention_scores`? cc @patrickvonplaten @patil-suraj @sgugger <|||||>In my case, I need to change attention distribution of a part of the sentence toward another part of it, using `torch.nn.CrossEntropyLoss` which expects input to be raw unnormalized scores (the `attention_scores` here) to internally compute softmax and log in a stable manner. Generally, current attention output, namely `attention_probs` seems to serve almost just in visualization purposes. Its precomputed softmax does not let it to be used in certain loss functions.<|||||>Can't you just revert the softmax operation? Think PyTorch's softmax implementation is pretty stable<|||||>Yes, it can be reverted, but introduces extra computation in certain use cases like Cross Entropy. By the way, I see that the `attention_scores` requirement might not worth needed effort and added complexity to models' inputs.<|||||>Hmm, I think given that it's quite an edge-case and that it can be achieved by reverting the softmax operation locally, I don't think it's worth adding more outputs here to be honest. It would create a lot of work and the users would have to get used to another output for every model.
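For reference, a small sketch of the "revert the softmax" idea mentioned above (`model` and `inputs` are placeholders for any RoBERTa-like setup): since `attention_probs = softmax(attention_scores)`, taking a log recovers the scores up to an additive constant per row, which is often enough for a custom loss:

```python
import torch

outputs = model(**inputs, output_attentions=True)
attention_probs = outputs.attentions[-1]  # (batch, num_heads, seq_len, seq_len)

# log(softmax(s)) = s - logsumexp(s), i.e. the raw scores shifted by a per-row constant
approx_scores = torch.log(attention_probs.clamp(min=1e-9))
```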
transformers
15,514
closed
Download models from a private hub
In the context of a private hub deployment, customers would like to use from_pretrained() to load models from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature. The obvious workaround is to clone the repo first and then load it from local storage, but this adds an extra step. It'd be great to have the same experience regardless of where the hub is hosted. The same issue exists with the datasets library and the CLI. I'm going to create issues there as well, and I'll reference them below.
02-04-2022 10:58:13
02-04-2022 10:58:13
For reference: https://github.com/huggingface/datasets/issues/3679 https://github.com/huggingface/huggingface_hub/issues/650<|||||>Correct me if I'm wrong, but this is already supported. You only need to pass an additional argument `use_auth_token=True` for private models. This authentication token can be obtained by either doing the following in a notebook: ``` from huggingface_hub import notebook_login notebook_login() ``` Note that you don't need to install `huggingface_hub`, it's included in the Transformers library. Or, in case you're working with a terminal, by running the following command: ``` huggingface-cli login ```<|||||>no @NielsRogge here Julien would like to switch the endpoint to hit a private alternative to the hf.co hub (not just a private repo) e.g. like passing a custom `HF_ENDPOINT` in `dataset` (i think we may have the same here) Maybe it can also a a param in the `from_pretrained()` calls for ease of use (we used to have something like that for mirrors at some point)<|||||>I looked around, and what we'd basically need to do is to check for an environment variable (this already works for datasets and the hub CLI): ``` hub_url = os.getenv("HF_ENDPOINT", "https://huggingface.co") ``` What would be the best place to do this once and once only? Is there a common data structure that is globally visible in the lib? Then, we have to update all urls for config files, tokenizers and models, e.g. ``` DISTILBERT_PRETRAINED_CONFIG_ARCHIVE_MAP = { "distilbert-base-uncased": f"{hub_url}/distilbert-base-uncased/resolve/main/config.json", "distilbert-base-uncased-distilled-squad": f"{hub_url}/distilbert-base-uncased-distilled-squad/resolve/main/config.json", "distilbert-base-cased": f"{hub_url}/distilbert-base-cased/resolve/main/config.json", "distilbert-base-cased-distilled-squad": f"{hub_url}/distilbert-base-cased-distilled-squad/resolve/main/config.json", "distilbert-base-german-cased": f"{hub_url}/distilbert-base-german-cased/resolve/main/config.json", "distilbert-base-multilingual-cased": f"{hub_url}/distilbert-base-multilingual-cased/resolve/main/config.json", "distilbert-base-uncased-finetuned-sst-2-english": f"{hub_url}/distilbert-base-uncased-finetuned-sst-2-english/resolve/main/config.json", } ``` What do you think @julien-c @sgugger @LysandreJik ?<|||||>Those urls are not used anywhere except internal tests, so there is no need to update them. The endpoint just needs to be set via the env variable `HUGGINGFACE_CO_RESOLVE_ENDPOINT`, I think this is the only thing to do to use a custom endpoint.<|||||>Yep i think that's what I was alluding to when I wrote > (i think we may have the same here) Maybe we can just rename (or support both env vars) `HUGGINGFACE_CO_RESOLVE_ENDPOINT` to `HF_ENDPOINT` for consistency across the different libraries? WDYT?<|||||>That works for me. Suggested in the PR linked above.<|||||>Great, thanks!
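For future readers, a hedged sketch of the environment-variable override discussed above (the private hub URL and repo name are placeholders; `HUGGINGFACE_CO_RESOLVE_ENDPOINT` is what `transformers` reads today, with `HF_ENDPOINT` proposed as the common name across libraries):

```python
import os

# must be set before importing transformers, since the endpoint is read at import time
os.environ["HUGGINGFACE_CO_RESOLVE_ENDPOINT"] = "https://hub.my-company.example"
os.environ["HF_ENDPOINT"] = "https://hub.my-company.example"  # for datasets / hub CLI

from transformers import AutoModel

model = AutoModel.from_pretrained("my-org/my-private-model")
```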
transformers
15,513
closed
How can I see the masked words during pre-training by MLM?
I would like to know which words are masked during pre-training with masked language modeling. How can I see the masked words during pre-training? Below is sample code.

```
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')

dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path=corpus,
    block_size=max_length,
)

data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

training_args = TrainingArguments(
    output_dir=outputdir,
    overwrite_output_dir=False,
    num_train_epochs=epochs,
    per_device_train_batch_size=batch_size,
    save_steps=2000,
    save_total_limit=2,
    prediction_loss_only=True,
    logging_steps=2000,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=dataset
)

trainer.train()
```

Thank you.
02-04-2022 10:36:36
02-04-2022 10:36:36
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!<|||||>Thank you for checking an issue! I'll ask my question in the huggingface forum.
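For anyone with the same question, a small illustrative sketch of how the dynamic masking done by `DataCollatorForLanguageModeling` can be inspected, reusing the `tokenizer`, `dataset` and `data_collator` from the snippet above (this only shows one sampled batch; the collator re-masks every time it is called):

```python
batch = data_collator([dataset[i] for i in range(4)])

for input_ids, labels in zip(batch["input_ids"], batch["labels"]):
    # positions selected for MLM keep their original id in `labels`, all others are -100
    selected = labels != -100
    print("masked input   :", tokenizer.decode(input_ids))
    print("original tokens:", tokenizer.convert_ids_to_tokens(labels[selected].tolist()))
```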
transformers
15,512
closed
Benchmarking with T5 or T5-small fails
## Environment info

- `transformers` version: 4.10.3
- Platform: Linux-5.13.0-27-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.10.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

Models:
- T5, T5-small: @patrickvonplaten

Library:
- Benchmarks: @patrickvonplaten

## Information

PyTorchBenchmark fails to execute with the T5 or T5-small models.

The problem arises when using:
* [] the official example scripts: (give details below)

## To reproduce

Steps to reproduce the behavior: Execute the benchmark for T5 or T5-small directly following the code from [link](https://huggingface.co/docs/transformers/benchmarks)

Traceback:

```
module transformers has no attribute T5WithLMHeadModel
module transformers has no attribute T5WithLMHeadModel

ValueError                                Traceback (most recent call last)
<ipython-input-25-870c51726ecc> in <module>
----> 1 results = benchmark.run()

~/environments/Tyflos/lib/python3.8/site-packages/transformers/benchmark/benchmark_utils.py in run(self)
    705             if self.args.inference:
    706                 if self.args.memory:
--> 707                     memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
    708                     inference_result_memory[model_name]["result"][batch_size][sequence_length] = memory
    709                 if self.args.speed:

ValueError: too many values to unpack (expected 2)
```
02-04-2022 09:05:07
02-04-2022 09:05:07
Hey @pranaydeeps, Could you please provide a reproducible code snippet?<|||||>```python from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments args = PyTorchBenchmarkArguments(models=["t5-base"], batch_sizes=[8], training=True) benchmark = PyTorchBenchmark(args) results = benchmark.run() ``` This should be enough to reproduce. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @pranaydeeps, Sorry to answer only now. We've sadly deprecated the Benchmarking utilities as there are simply not accurate enough and it's really difficult to keep up with all the possible performance improvements that could be benchmarked (fp16, bf16, onnx, ...). Could you try to use some other open-sourced benchmarking libraries instead? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
15,511
closed
Fix TF T5/LED missing cross attn in return values
# What does this PR do?

Add cross attentions to the return values of TF T5/LED (so they are aligned with the PT versions).

(This also required adding a fix for `undoing padding`. The same undoing-padding fix is required for Longformer, but that is done in another PR.)

@patrickvonplaten
02-04-2022 07:17:00
02-04-2022 07:17:00
_The documentation is not available anymore as the PR was closed or merged._<|||||>Great PR - thanks a lot!<|||||>CI failure is unrelated
transformers
15,510
closed
Fix TFRemBertEncoder all_hidden_states
# What does this PR do?

The current `TFRemBertEncoder` adds the initial `hidden_states` (the ones right after the up-projection) twice to `all_hidden_states`. This PR fixes it.

The test has the comment `# RemBERT also returns the upprojected word embeddings as an hidden layers`, but the code actually added the projected embedding twice. Furthermore, PyTorch's RemBERT only adds the projected one.

@Rocketknight1 @gante
02-04-2022 06:45:32
02-04-2022 06:45:32
_The documentation is not available anymore as the PR was closed or merged._<|||||>Applied suggestion https://github.com/huggingface/transformers/pull/15510#discussion_r799506824
transformers
15,509
closed
Fix TFElectraForMultipleChoice
# What does this PR do?

The current `TFElectraForMultipleChoice` calls `TFElectraMainLayer` using positional arguments, but it doesn't have

```
encoder_hidden_states,
encoder_attention_mask,
past_key_values,
use_cache
```

so `inputs["output_attentions"]` is received as `encoder_hidden_states` by the main layer, etc. This PR fixes it (using kwargs).

@Rocketknight1 @gante
02-04-2022 06:13:45
02-04-2022 06:13:45
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi, @gante . It does affect the results - I made a typo in the original message. ``` so inputs["output_attentions"] is received as encoder_hidden_states by the main layer, etc. ``` (and this is why a correction is needed.)<|||||>You're absolutely right, it was being passed to the wrong variables. Thanks for lending your keen eye 👁️ <|||||>This also means that `TFElectraForMultipleChoice` was not being properly tested -- how did tests like `test_pt_tf_model_equivalence`, which require `output_hidden_states = True` across the layers, were passing? (I've checked the [PT implementation](https://github.com/huggingface/transformers/blob/master/src/transformers/models/electra/modeling_electra.py#L1477), it is passing the keyword arguments correctly, so this mismatch should have been caught)<|||||>> This also means that `TFElectraForMultipleChoice` was not being properly tested -- how did tests like `test_pt_tf_model_equivalence`, which require `output_hidden_states = True` across the layers, were passing? (I've checked the [PT implementation](https://github.com/huggingface/transformers/blob/master/src/transformers/models/electra/modeling_electra.py#L1477), it is passing the keyword arguments correctly, so this mismatch should have been caught) Current TF Electra Test's `prepare_config_and_inputs_for_common` only provides `inputs_dict = {"input_ids": input_ids, "token_type_ids": token_type_ids, "attention_mask": input_mask}`. So no `output_hidden_states` is used for the equivalence test. In the enhanced `test_pt_tf_model_equivalence`, I manually set `output_hidden_states=True` and `output_attentions=True` in order to have a more complete check --> and this issue is then identified. BTW, the pt/tf equivalence test uses TF model's `prepare_config_and_inputs_for_common` to prepare the inputs.<|||||>The equivalence test sets the `output_hidden_states` to `True` ([here](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_common.py#L355)), and on the line that you've changed this PR, I can confirm that `inputs["output_hidden_states"]` is `True` during the test :) However, because it was set to `True` in the config, it may overwrite the `None` in the `ElectraMainLayer` (in [this function](https://github.com/huggingface/transformers/blob/525dbbf84a0d2933686281c513689da9794b7dd1/src/transformers/modeling_tf_utils.py#L295)). In other words, before this PR, `output_hidden_states` was not being set (because of the error you uncovered 🙌 ), so the config value took charge, and the test passed anyway. In other words, we probably should be more strict about the use of keyword arguments, to avoid these sorts of errors -- especially because the config values may override a `None` :) <|||||>Great analysis about `output_hidden_states` , @gante , you are right. Thank you! However, the equivalence test only tests the first output: https://github.com/huggingface/transformers/blob/525dbbf84a0d2933686281c513689da9794b7dd1/tests/test_modeling_tf_common.py#L384 I think the test will pass anyway even if there is an **actual** mismatch (i.e. even if the config doesn't take the charge). 
---- I was thinking more why `encoder_hidden_states` gets value `False` and `encoder_attention_mask` gets `True` don't cause any problem - it turns out that these values are used only when the model is used as `decoder`, and `TFElectraForMultipleChoice` is not (so these wrong values are just ignored by the model without causing any exception) ![pycharm64_vWHXucHlIi](https://user-images.githubusercontent.com/2521628/152532303-370094d3-6e6f-4bfb-ac31-3a123fffce65.jpg) ----<|||||>@ydshieh Keep these fantastic PRs coming 🙏
transformers
15,508
closed
TrainingArguments --learning_rate should not be used to set both "lr" and "warmup_max_lr" in DeepSpeed
According to this document: https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/deepspeed#shared-configuration, --learning_rate is used to set two parameters in the DeepSpeed configuration: "lr" and "warmup_max_lr". This causes the learning rate to never change, because the initial value is already the maximal value.

Here is my DeepSpeed configuration:

```
"optimizer": {
    "type": "AdamW",
    "params": {
        "lr": "auto",
        "betas": "auto"
    }
},
"scheduler": {
    "type": "WarmupLR",
    "params": {
        "warmup_min_lr": "auto",
        "warmup_max_lr": "auto",
        "warmup_num_steps": "auto"
    }
},
```

When running with this DeepSpeed configuration, I saw that both "lr" and "warmup_max_lr" are set to 5e-5:

![image](https://user-images.githubusercontent.com/51274745/152467841-4c06b0bd-c882-4989-8370-60d09d3f70e3.png)

Unfortunately, the learning rate stayed at 5e-5 all the way, because "lr" is bounded by "warmup_max_lr". When not using DeepSpeed, HF is in charge of the learning rate, and I do see the learning rate change. I think --learning_rate should be used to set "warmup_min_lr", not the max.
02-04-2022 03:41:29
02-04-2022 03:41:29
@stas00 <|||||>Let's look at the WarmupLR docs: https://deepspeed.readthedocs.io/en/latest/schedulers.html#warmuplr > Increase the learning rate of each parameter group from min lr to max lr over warmup_num_steps steps, **and then fix at max lr**. So max lr is the same as `args.learning_rate` in this context. And you're suggesting that it's not. Perhaps you shouldn't use the `WarmupLR` scheduler if you don't want its behavior? I'd say perhaps you want this one instead? https://deepspeed.readthedocs.io/en/latest/schedulers.html#warmupdecaylr Please help me understand where you think it's not doing the right thing. <|||||>Ah, sorry. You're right. I misread the document. I thought it's saying learning rate increases from "lr" (not "min lr") to max lr. I also noticed that my warmup_step is 0. That may be the real cause why my rate is not changing at all. This should probably go to another thread. But have you ever seen learning curve as odd as this one? The X is epoch and the Y is loss. At the 1st step of each epoch, training loss drops significantly while validation loss jumps. As early as 2nd epoch, the validation loss is overfitted. I've been investigating whole day today, but got no clue (and tried to blame LR scheduler :( ). This is from GPT-J model. If I switch the model to GPT-neo-1.3B, I have a perfectly fine learning curve and the validation loss doesn't overfit so quickly. ![image](https://user-images.githubusercontent.com/51274745/152485064-32c0ca45-c7d2-40e6-8b76-710d499f95b8.png) <|||||>Yes, please definitely open a new issue as it's not related to Deepspeed anymore and surely others might be able to help as I have no experience finetuning GPT-J. The only thing I can suggest is to add more data if you can so that you don't need to repeat it more than once. I'm closing this Issue otherwise.
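For completeness, a hedged sketch of the alternative scheduler block mentioned above (`WarmupDecayLR`, which decays after warmup instead of staying at the max). The `auto` values follow the same convention as the config in the issue; please double-check the parameter names against the DeepSpeed scheduler docs for your version:

```python
# passed to TrainingArguments(deepspeed=ds_config, ...) or saved as a json config file
ds_config = {
    "scheduler": {
        "type": "WarmupDecayLR",
        "params": {
            "total_num_steps": "auto",
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto",
        },
    },
    # ... optimizer / zero sections as in the original config
}
```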
transformers
15,507
closed
Add Data2Vec
# What does this PR do?

Add Data2Vec to transformers.

I started by cloning the RoBERTa model and fixing the conversion script for Data2Vec. We then added the audio model using some components from Wav2Vec2.

Conversion logs show identical forward passes for both text and audio:

```
max_absolute_diff = 0.0
Do both models output the same tensors? 🔥
```

Example usage for text:

```python
from transformers import RobertaTokenizer, Data2VecTextForSequenceClassification, Data2VecTextConfig
import torch

tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
config = Data2VecTextConfig.from_pretrained("facebook/data2vec-text-base")
model = Data2VecTextForSequenceClassification.from_pretrained("facebook/data2vec-text-base", config=config)
# Fine-tune this model

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

prediction_logits = outputs.logits
```

`data2vec-text-base` converted weights are [here](https://huggingface.co/facebook/data2vec-text-base), from [fairseq original weights](https://dl.fbaipublicfiles.com/fairseq/data2vec/nlp_base.pt)
`data2vec-audio-base` converted weights are [here](https://huggingface.co/facebook/data2vec-audio-base), from [fairseq original weights](https://dl.fbaipublicfiles.com/fairseq/data2vec/audio_base_ls.pt), with no finetuning
`data2vec-audio-base-960h` converted weights are [here](https://huggingface.co/facebook/data2vec-audio-base-960h), from [fairseq original weights](https://dl.fbaipublicfiles.com/fairseq/data2vec/audio_base_ls_960h.pt), fine-tuned on 960 hours of Librispeech

NOTE: Data2Vec image model weights have not been released yet.
NOTE: The current implementation does not support pre-training yet, only fine-tuning on text and audio tasks.

Fixes # (issue)

## Who can review?

@patil-suraj @patrickvonplaten @mrm8488 @anton-l
02-03-2022 21:38:08
02-03-2022 21:38:08
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15507). All of your documentation changes will be reflected on that endpoint.<|||||>Ah the new test design just got merged [here](https://github.com/huggingface/transformers/pull/15725) :sweat_smile: @edugp - I'm afraid you'll have to merge current master into your branch here (or do `git rebase`) and resolve the conflicts before we can merge this one. Would be amazing if you could give it a stab - should be pretty easy I think, just create a test dir called for data2vec and move the files there :-) Here some step-by-step explanation from @sgugger :-) To fix your open PR if it touched some test files, three simple steps: Get the latest master ``` git checkout origin/master git pull ``` 2. Rebase ``` git checkout your_branch git rebase origin/master ``` This will automatically port your changes to the test modeling files where they now are. If you have new tests files (e.g. a model PR) make sure to adapt the structure to match the new one (with a subfolder for your new model for instance) 3. Force-push your changes (force to avoid GitHub showing a diff of 666 files) ``` git push -u your_branch remote_branch -f ```<|||||>Thanks @patrickvonplaten, I synced the latest master with this branch and moved the tests to the new directory structure (and verified that they still pass). I also added all `Copied from` statements and addressed the rest of comments. Thanks for the thorough review! <|||||>Fixed the red CI @edugp - should be good for merge now. @sgugger - I'm not 100% whether it's ok what I've done here: https://github.com/huggingface/transformers/pull/15507/files#r813404423 . Given that we have one `data2vec` folder and one `data2vec` paper I think it's also better to have one doc page no? If you're ok with this change in the `utils/...` I think we can merge.<|||||>> # What does this PR do? > Add Data2Vec to transformers. > > I started by cloning the RoBERTa model and fixing the conversion script for Data2Vec. We then added the audio model using some components from Wav2Vec2. > > Conversion logs show identical forward passes for both text and audio: > > ``` > max_absolute_diff = 0.0 > Do both models output the same tensors? 
🔥 > ``` > > Example usage for text: > > ```python > from transformers import RobertaTokenizer, Data2VecTextForSequenceClassification, Data2VecTextConfig > import torch > > tokenizer = RobertaTokenizer.from_pretrained("roberta-large") > config = Data2VecTextConfig.from_pretrained("facebook/data2vec-text-base") > model = Data2VecTextForSequenceClassification.from_pretrained("facebook/data2vec-text-base", config=config) > # Fine-tune this model > > inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") > outputs = model(**inputs) > > prediction_logits = outputs.logits > ``` > > `data2vec-text-base` converted weights are [here](https://huggingface.co/facebook/data2vec-text-base), from [fairseq original weights](https://dl.fbaipublicfiles.com/fairseq/data2vec/nlp_base.pt) `data2vec-audio-base` converted weights are [here](facebook/data2vec-audio-base) from [fairseq original weights](https://dl.fbaipublicfiles.com/fairseq/data2vec/audio_base_ls.pt) with no finetuning `data2vec-audio-base-960h` converted weights are [here](https://huggingface.co/facebook/data2vec-audio-base-960h) from [fairseq original weights](https://dl.fbaipublicfiles.com/fairseq/data2vec/audio_base_ls_960h.pt) fine-tuned on 960 hours of Librispeech NOTE: Data2Vec image model weights have not been released yet. > > Fixes # (issue) > > ## Who can review? > @patil-suraj @patrickvonplaten @mrm8488 @anton-l ~~Is this supposed to work? I'm getting `ImportError: cannot import name 'Data2VecTextConfig' from 'transformers' (/data/home/justincho/Data2Vec_lightning/transformers/src/transformers/__init__.py)` after cloning this branch and using it to install the transformers library : `https://github.com/edugp/transformers/tree/add-data2vec-from-roberta`~~ Never mind, I didn't checkout to the branch `add-data2vec-from-roberta` before doing `pip install -e .`<|||||>Does this update include changes necessary for training using the data2vec objective? I'm curious because I don't see an EMA implementation anywhere. <|||||>> Does this update include changes necessary for training using the data2vec objective? I'm curious because I don't see an EMA implementation anywhere. @wise-east it does not support pre-training yet, unfortunately! Only fine-tuning on text and audio tasks. I will update the PR description to reflect that.<|||||>@sgugger @osanseviero @julien-c - merging this now. Sorry, I'm not 100% sure yet how to handle the HF Hub <=> Transformers doc page linking for this model, but I think we can solve this in a follow-up PR.<|||||>Great job @edugp :partying_face: <|||||>Enorme, @edugp !!!
transformers
15,506
closed
[deepspeed docs] memory requirements
This PR documents memory requirements and estimators in Deepspeed. @sgugger
02-03-2022 17:36:14
02-03-2022 17:36:14
_The documentation is not available anymore as the PR was closed or merged._
transformers
15,505
closed
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
Maybe @SaulLu can help? ## Information I am following the [text summarization](https://huggingface.co/course/chapter7/5) tutorial on hugging face website which uses the mt5-small model. It explains step by step on how to perform a text summarization task. ## To reproduce Steps to reproduce the behavior: 1. Run the [following notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/chapter7/section5_pt.ipynb) 2. cell # 32 should reproduce the following error. (it did for me) ``` ValueError Traceback (most recent call last) File ~/PycharmProjects/nlp-env/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:707, in BatchEncoding.convert_to_tensors(self, tensor_type, prepend_batch_axis) 706 if not is_tensor(value): --> 707 tensor = as_tensor(value) 709 # Removing this for now in favor of controlling the shape with `prepend_batch_axis` 710 # # at-least2d 711 # if tensor.ndim > 2: 712 # tensor = tensor.squeeze(0) 713 # elif tensor.ndim < 2: 714 # tensor = tensor[None, :] ValueError: too many dimensions 'str' During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) Input In [72], in <module> ----> 1 data_collator(features) File ~/PycharmProjects/nlp-env/lib/python3.9/site-packages/transformers/data/data_collator.py:586, in DataCollatorForSeq2Seq.__call__(self, features, return_tensors) 583 else: 584 feature["labels"] = np.concatenate([remainder, feature["labels"]]).astype(np.int64) --> 586 features = self.tokenizer.pad( 587 features, 588 padding=self.padding, 589 max_length=self.max_length, 590 pad_to_multiple_of=self.pad_to_multiple_of, 591 return_tensors=return_tensors, 592 ) 594 # prepare decoder_input_ids 595 if ( 596 labels is not None 597 and self.model is not None 598 and hasattr(self.model, "prepare_decoder_input_ids_from_labels") 599 ): File ~/PycharmProjects/nlp-env/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:2842, in PreTrainedTokenizerBase.pad(self, encoded_inputs, padding, max_length, pad_to_multiple_of, return_attention_mask, return_tensors, verbose) 2839 batch_outputs[key] = [] 2840 batch_outputs[key].append(value) -> 2842 return BatchEncoding(batch_outputs, tensor_type=return_tensors) File ~/PycharmProjects/nlp-env/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:212, in BatchEncoding.__init__(self, data, encoding, tensor_type, prepend_batch_axis, n_sequences) 208 n_sequences = encoding[0].n_sequences 210 self._n_sequences = n_sequences --> 212 self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis) File ~/PycharmProjects/nlp-env/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:723, in BatchEncoding.convert_to_tensors(self, tensor_type, prepend_batch_axis) 718 if key == "overflowing_tokens": 719 raise ValueError( 720 "Unable to create tensor returning overflowing tokens of different lengths. " 721 "Please see if a fast version of this tokenizer is available to have this feature available." 722 ) --> 723 raise ValueError( 724 "Unable to create tensor, you should probably activate truncation and/or padding " 725 "with 'padding=True' 'truncation=True' to have batched tensors with the same length." 726 ) 728 return self ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. ```
02-03-2022 17:32:00
02-03-2022 17:32:00
Ah actually this is linked to the summarization example in the course @lewtun <|||||>Hi @anum94, thanks for reporting this bug! The cause of the error is that the `tokenized_datasets` object has columns with strings, and the data collator doesn't know how to pad these. The fix is to add the following line before the data collator: ```python tokenized_datasets = tokenized_datasets.remove_columns(books_dataset["train"].column_names) ``` I'll post a fix in the website and Colab too - thanks!<|||||>Thank you. I think I have an older version of transformers so it worked for me when I used `tokenized_datasets = tokenized_datasets.remove_columns_(books_dataset["train"].column_names)` instead of `tokenized_datasets = tokenized_datasets.remove_columns(books_dataset["train"].column_names)` Thanks for your help and the prompt response. cheers<|||||>When I used `tokenized_datasets = tokenized_datasets.remove_columns(books_dataset["train"].column_names)` it gives `ZeroDivisionError: integer division or modulo by zero` because it can't access rows.<|||||>I'm following [this tutorial ](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2)and facing this same error of _Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length._ during model training. I'm not sure how to implement the solution which worked in the case above. Any help will be appreciated. <|||||>I am having the same issue^
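For future readers, a short sketch of where that fix goes in the notebook flow (variable names such as `books_dataset` and `tokenized_datasets` follow the course example and are placeholders for your own dataset):

```python
from transformers import DataCollatorForSeq2Seq

# drop the raw string columns so only token ids / labels reach the collator
tokenized_datasets = tokenized_datasets.remove_columns(books_dataset["train"].column_names)

data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
features = [tokenized_datasets["train"][i] for i in range(2)]
batch = data_collator(features)  # now pads without raising the ValueError
```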
transformers
15,504
closed
Add implementation of typical sampling
# What does this PR do? Adds an implementation of typical sampling (https://arxiv.org/abs/2202.00666) to the `generate` function
02-03-2022 14:49:23
02-03-2022 14:49:23
_The documentation is not available anymore as the PR was closed or merged._<|||||>Looks cool! Could you add one test? :-)<|||||>> Looks cool! Could you add one test? :-) Added a test and fixed the style issues :)<|||||>Of course! Are we waiting on @thomwolf at this point? Or is there something else that has to happen before merging?<|||||>I confirm, very clean! Good job @cimeister <|||||>@cimeister You are using `nansum` here, which is available in torch>=1.7.0. @patrickvonplaten Does approving this for main branch means `transformers` drop support for torch<1.7.0 ?<|||||>Ah nice catch @LSinev @patrickvonplaten should we change to something like: ``` plogp = p * normalized ent = - plogp[~torch.isnan(plogp)].sum(-1, keepdim=True) ```<|||||>Same answer here is given by @sgugger here: https://github.com/huggingface/transformers/pull/13292#discussion_r837529073 . Would be great if someone could open a PR to fix it :-)
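For context, a short hedged example of how the sampling strategy added in this PR is invoked through `generate` (model choice and values are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Today I believe we can finally", return_tensors="pt")
# typical_p enables typical sampling; do_sample must be True for it to take effect
outputs = model.generate(**inputs, do_sample=True, typical_p=0.95, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```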
transformers
15,503
closed
[Flax tests] Disable scheduled GPU tests
# What does this PR do?

The GPU tests are running forever at the moment and take up the whole gpu-test machine all the time. @patil-suraj we should probably sit together next week to investigate this a bit, make them much faster and then re-enable them.
02-03-2022 14:38:20
02-03-2022 14:38:20
_The documentation is not available anymore as the PR was closed or merged._<|||||>It's the compilation that takes for ever I think. I'll investigate a bit with @patil-suraj next week
transformers
15,502
closed
Timestamps for Wav2Vec 2.0 models and/or ASR pipelines
# 🚀 Feature request

So the ASR pipeline (https://github.com/huggingface/transformers/blob/v4.16.2/src/transformers/pipelines/automatic_speech_recognition.py#L122) is great for leveraging Wav2Vec 2.0 for longer files. However, it does not allow us to get the timestamps of the words, i.e. when each word was spoken.

## Motivation

This is relevant for many applications of ASR, such as automatic subtitles or anything else requiring this timing information. Since this information should be available somewhere "under the hood", it might be beneficial to many to include it in the output. This might not be specific to the pipelines, but also apply to the general output of Wav2Vec 2.0 models.

## Your contribution

I'm not yet that familiar with HF + Wav2Vec 2.0, but https://github.com/lumaku/ctc-segmentation is a useful GitHub page. Would be willing to help out though!
02-03-2022 13:42:59
02-03-2022 13:42:59
cc @Narsil @patrickvonplaten @anton-l <|||||>I very much agree - this would be a very welcoming feature to add and it actually shouldn't be too difficult for CTC. For CTC we know exactly what characters are predicted at what time because we know the `sampling_rate` of the model and we know the context window of each outputted token id. E.g. the first token id corresponds to the first 320 input samples which corresponds to 320 samles / 16_000 samples / sec -> 0.02 seconds. The second token id vector then corresponds more or less to the window 0.02 - 0.04 seconds and so on. We could then easily map each token id to a time window. Now knowing the id of the word delimiter token we can also easily retrieve the time window in which a word was spoken. In terms of the implementation details, I think `Wav2Vec2CTCTokenzier` should be responsible for returning time stamps for words. We could do the following - add a method `def retrieve_time_stamps(...)` that takes the `token_ids`, `stride` (the tuple of the config) and the feature extractor's `sampling_rate` as an input and retrieves a list of time stamps (one for each word) from it. We could then also integrate this into the tokenizer's `decode(...)` and `batch_decode(...)` eventually. @iskaj would you be interested in opening a PR for this? I think we could start by adding the following method to `Wav2Vec2CTCTokenizer`: ```py def retrieve_time_stamps(token_ids, stride, sampling_rate): # 1. compute total stride: `total_stride = reduce(stride, multiply)` # 2. time_frame_per_logit_in_s = total_stride / sampling_rate # 3. now we need to find the first non- `pad_token_id` in token_ids which represents the start_id. Then the first `word_delimeter_token` represents the first end_id. The next non-pad_token_id then represents the next start_id, the next word_delimiter_token` after the next end_id and so on. This can be done in a simple for loop # 4. that's pretty much it -> then we can return a list of tuples which correspond to the time stamps of the returned words. ``` Also interested in feedback from @anton-l @Narsil <|||||>> I very much agree - this would be a very welcoming feature to add and it actually shouldn't be too difficult for CTC. I agree too, it's a very welcome feature. The main concern I have is the actual implementation of this. Ideally, it would be finely manageable by users, because I can see (at least) a usage for video, where you want to add subtitles, and you need to put timestamps at definite boundaries (most likely sentence boundary and/or length of text). The ideal way would be to be highly transparent and maybe quite noisy: ```python pipe = pipeline(..., add_timestamps=True) # crashes on non CTC out = pipe(...) # out = {"text": "ABCD", "timestamps": [0.01, 0.03, 0.03, 0.04]} ``` Here I propose 1 float per character of the output. it's very noisy, but seems still quite simple to use and give everything needed for someone wanting fine control over timestamps. I imagine this float would correspond the the first TOKEN using that letter in CTC context. As an implementation, it might be tedious to properly add with chunking and striding. (Or not I am not sure)<|||||>I think what @Narsil proposed will work fine for what most people want indeed, so I agree with that sentiment. For me the interest lies in automatic subtitling and the noise in this solution would be fine. It is also nicely interpretable. I think it should work with the pipeline approach (chunking and striding), otherwise the purpose would be kind of lost right? 
I'm also not sure how that would work though... In the future I might be interesting in doing a pull request for this, but currently my priorities lay elsewhere. Hope I can help with this in the near feature. <|||||>@anton-l - thoughts on this?<|||||>I think we can add an alternative to `Wav2Vec2CTCTokenizer.decode()` to add timestamps to each character pretty easily. Basically implement `Wav2Vec2CTCTokenizer.decode_with_timestamps()` that returns a structure like this: ```json { "text": "I AM HERE", "tokens": [ { "token": "I", "time_start": <first_ctc_logit_index> * 0.02, "time_end": <last_ctc_logit_index> * 0.02 + 0.025, "probability": 0.7 }, { "token": " ", "time_start": <first_ctc_logit_index> * 0.02, "time_end": <last_ctc_logit_index> * 0.02 + 0.025, "probability": 0.4 }, { "token": "A", "time_start": <first_ctc_logit_index> * 0.02, "time_end": <last_ctc_logit_index> * 0.02 + 0.025, "probability": 0.6 }, .... ] } ``` where 0.02 is the frame stride, and 0.025 is the frame width in seconds (could be calculated from `stride` and `sample_rate` like @patrickvonplaten suggested above). Returning the word boundaries' offsets (whitespaces in this example) is also important for consistency IMO. `Probabilities` are optional, but they would be pretty handy for downstream applications like forced alignment to filter out low-confidence segments, so we can add them as a bonus while we're at it: ![image](https://user-images.githubusercontent.com/26864830/154249743-cc54c6d2-2a12-4efb-a9d2-8a7e0349d1ac.png) _(image taken from https://github.com/lumaku/ctc-segmentation)_ Since the whole step can be contained inside the tokenizer, it shouldn't be a problem to add it inside `AutomaticSpeechRecognitionPipeline.postprocess()` and support all modes of streaming as well :slightly_smiling_face: <|||||>@anton-l , Fully agree having both `start` and `stop` timings are better (and fine with throwing probabilities in there) I just realised though, will that approach be possible with `ctc_with_lm` ? Since we added a bunch recently, it would be nice if we could.<|||||>@Narsil it's possible for `ctc_with_lm`, but we might need to create a wrapper for the `pyctcdecode`'s `decode_beams()` function to create a common API. It can use pretty much the same logic as the ordinary CTC tokenizer, just with full words instead of granular characters, because it doesn't return per-character logit indices: ![image](https://user-images.githubusercontent.com/26864830/154253351-0d2cc3ae-bcae-482d-8144-0fbee83805ee.png) _(image from their tutorial)_ <|||||>Interesting idea @anton-l! I thought about it a bit - couple of remarks: 1.) I don't think we should entangle the probabilities with the time stamps too much to be honest and rather treat them as separate feature additions because: - Requiring time stamps doesn't mean that the user also needs probabilities and vice versa - In order to extract time stamps, for the normal Wav2Vec2Processor we need to work with the predicted ids IMO and not the logits. We group the ids and then know the boundaries from this. We cannot work directly with the logits for the normal Wav2Vec2Processor (for the `...WithLM` it's a different story) - => so I'd prefer if we keep this PR just for time stamps for now. It's nevertheless a good idea to think about how the design would work for the probs though. 2.) Instead of creating a new function `decode_with_timestamps` I think it's more in line with the library to add a `output_time_stamps_flag` to the decoding function 3.) 
Not sure whether the tokenizer or rather the processor should be responsible for the function. I'm tending more and more towards adding the function to the processor. Will do a draft PR and share it here. <|||||>Also this could definitely be directly in the pipeline if it's going to be the main/sole user. Might make implementation easier.
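Below is a rough, illustrative sketch of the mapping discussed in this thread (the function name, signature and placement are hypothetical; the eventual implementation may differ): each CTC logit covers `total_conv_stride / sampling_rate` seconds, and word boundaries are read off the pad / word-delimiter token ids:

```python
import numpy as np

def retrieve_time_stamps(token_ids, conv_stride, sampling_rate, pad_token_id, word_delimiter_token_id):
    # e.g. Wav2Vec2 base: prod((5, 2, 2, 2, 2, 2, 2)) = 320 samples per logit -> 0.02 s at 16 kHz
    total_stride = int(np.prod(conv_stride))
    time_per_logit = total_stride / sampling_rate

    time_stamps = []
    start_idx = None
    for idx, token_id in enumerate(token_ids):
        if token_id == word_delimiter_token_id:
            if start_idx is not None:
                time_stamps.append((start_idx * time_per_logit, idx * time_per_logit))
                start_idx = None
        elif token_id != pad_token_id and start_idx is None:
            start_idx = idx
    if start_idx is not None:  # last word has no trailing delimiter
        time_stamps.append((start_idx * time_per_logit, len(token_ids) * time_per_logit))
    return time_stamps  # one (start_seconds, end_seconds) tuple per word
```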
transformers
15,501
closed
Add general vision docstrings
# What does this PR do? This is an attempt to add general docstrings for vision models, specifically for the base model and the image classification ones. To do: - [x] ViT - [x] DeiT - [x] BEiT - [x] SegFormer - [x] Swin Transformer
02-03-2022 12:51:44
02-03-2022 12:51:44
_The documentation is not available anymore as the PR was closed or merged._
transformers
15,500
closed
Allow training from multiple languages for multilingual seq2seq models (varying forced_bos_token_id)
# 🚀 Feature request

Allow mBART and M2M100 to be easily fine-tuned with multiple target languages in the fine-tuning data set, probably by allowing forced_bos_token_id to be provided in the training dataset.

## Motivation

A number of multilingual models already exist in huggingface transformers (m2m100, mBART, mt5). These can do translation to and from multiple languages. However, at the moment, m2m100 and mBART can only easily be fine-tuned using training data with a single target language. This is because mBART and M2M100 require a "forced beginning of sequence token" [`forced_bos_token_id`](https://huggingface.co/docs/transformers/main_classes/model#transformers.generation_utils.GenerationMixin.generate.forced_bos_token_id) to be set indicating the target language. This is set on the model. Because of this, there is no obvious way to have different target language outputs while training. This has been asked about by multiple people independently in the discussion forums with no response yet. I've found two at this point (and I would have been a third):

- https://discuss.huggingface.co/t/m2m-model-finetuning-on-multiple-language-pairs/13203
- https://discuss.huggingface.co/t/how-to-force-bos-token-id-for-each-example-individually-in-mbart/8712

## Your contribution

I don't feel confident enough in my python or transformers expertise to contribute a pull request. However, to me it feels like this code should live in the trainer, rather than in the model. So for my own individual project I believe I have made a workaround by:

([Sample Colab notebook for the code below](https://colab.research.google.com/drive/11Wml-dOasQTuUYtk7dwKuU6B7bnQhwAq?usp=sharing))

- adding `forced_bos_token_id` as a column in my dataset, with one entry for each training example
- subclassing Seq2SeqTrainer and making the following changes:
  - Overriding `prediction_step(...)` with a copy and paste of the original code, and adding code to read forced_bos_token_id from inputs, and add it as an argument in `generated_tokens = self.model.generate(**generation_inputs, **gen_kwargs, forced_bos_token_id = forced_bos_token_id)`
  - Overriding `_remove_unused_columns()` to be a no-op, so the forced_bos_token_id doesn't get removed (as it isn't in the model signature, since it isn't a parameter for the model)
  - Overriding `compute_loss()` to have an `inputs.pop("forced_bos_token_id")` to prevent this unexpected input breaking the forward step

This seems to run, and I hope it is working, but I could have easily made a simple error. And all the copy and pasting of code makes this very fragile, which is why it would be nicer to have it in the transformers library.
02-03-2022 11:56:01
02-03-2022 11:56:01
Hi @nfortescue ! Thanks for the detailed issue! The example scripts are intended to be simple to follow and adapt instead of trying to cover every use case. Training a multi-lingual model is also a little more complicated than just setting `forced_bos_token_id`. For example, when you have a dataset with multiple languages it's also important to correctly sample the examples per language especially if the number of examples per language is not balanced. And there are different ways of doing this. So for this use case, it would be better to just fork the training script and modify it according to your needs. And if you have a working example, then would be awesome if you could share it with [community-examples](https://huggingface.co/docs/transformers/master/en/community#community-notebooks). Thanks! Also cc @sgugger <|||||>Understood all this. My understanding is you can't do this just by forking the training script. Even doing this, you still need to fork the Seq2SeqTrainer code (unless I've missed something). This fork to the Seq2SeqTrainer code is the part that makes this a maintenance difficulty.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
15,499
closed
Perceiver IO : How to preprocess raw audio into vector
@NielsRogge I am working on an end-to-end application where I have to take raw video input and identify user sentiment from it. For that, I want to know how to preprocess the raw audio to be used by PerceiverAudioPreprocessor.
02-03-2022 10:56:12
02-03-2022 10:56:12
I would really like to collaborate with @anton-l and @patrickvonplaten to make an audio example for Perceiver IO. Basically, the `PerceiverAudioPreprocessor` turns a batch of raw audio streams into a tensor of shape `(batch_size, seq_len, hidden_size)`. This is often called `hidden_states` in Transformers (it's like the initial tensor you provide to a Transformer encoder). The way you encode your audio into these hidden states doesn't matter; there are multiple ways.
* `PerceiverAudioPreprocessor` is one example. It just takes the raw audio and adds position embeddings (trainable/Fourier) to it.
* Another example is using Wav2Vec2's feature encoder, as follows:

```python
from transformers import Wav2Vec2Config, Wav2Vec2FeatureExtractor
from transformers.models.wav2vec2.modeling_wav2vec2 import Wav2Vec2FeatureEncoder
from datasets import load_dataset

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
config = Wav2Vec2Config.from_pretrained("facebook/wav2vec2-base-960h")
feature_encoder = Wav2Vec2FeatureEncoder(config)

# construct raw audio
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")

# turn raw audio into hidden_states of shape (B, L, C)
hidden_states = feature_encoder(inputs.input_values).transpose(1, 2)
```

However, in this case, there are no position embeddings added, it seems (not sure if Wav2Vec2 uses position embeddings). Next, you can do cross-attention between the `latents` of Perceiver and the `hidden_states` we just created.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
15,498
closed
[torch_int_div] Correct true division in generation
# What does this PR do?

Fixes https://github.com/huggingface/transformers/issues/14208#issuecomment-1028464415

Due to circular imports I've moved the function `torch_int_div` to a new file. I think this also makes sense, as it's arguably not restricted to PyTorch models but is general PyTorch functionality.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
02-03-2022 10:44:09
02-03-2022 10:44:09
_The documentation is not available anymore as the PR was closed or merged._<|||||>> * apply_chunking_to_forward I'll leave that for a future PR - I don't find the time to add this here currently, sadly.
transformers
15,497
closed
ImportError: cannot import name 'AdamW' from 'transformers' (unknown location)
### Steps to reproduce - ```python3 !pip install --quiet transformers==4.5.0 !pip install --quiet pytorch-lightning==1.2.7 from transformers import ( AdamW, T5ForConditionalGeneration, T5TokenizerFast as T5Tokenizer ) ``` Throws error - ```bash --------------------------------------------------------------------------- ImportError Traceback (most recent call last) /tmp/ipykernel_3065/106495151.py in <module> ---> 22 from transformers import ( 23 AdamW, 24 T5ForConditionalGeneration, ImportError: cannot import name 'AdamW' from 'transformers' (unknown location) ```
02-03-2022 07:26:27
02-03-2022 07:26:27
Hi, I just tried this and was able to import `AdamW`; make sure you have `torch` installed.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I am also facing this problem when running the [code](https://colab.research.google.com/drive/1IDwie_Te_2GntHay_zZ-oPvriNsacQ8d#) for pretraining wav2vec2. edit: found the solution: use transformers==4.20.0<|||||>The version of torch must be >= 1.2.0 for the `AdamW` optimizer to be usable.
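Building on the last comment, a small hedged sketch of a workaround: since torch >= 1.2 ships its own `AdamW`, importing the optimizer from PyTorch directly is a near drop-in replacement and sidesteps the `transformers` import entirely (model choice here is just for illustration):

```python
from torch.optim import AdamW
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = AdamW(model.parameters(), lr=3e-4, weight_decay=0.01)
```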
transformers
15,496
closed
run_summarization fails with RuntimeError: CUDA error: device-side assert triggered when using multi GPU
Hi 👋 ## Environment info - `transformers` version: 4.17.0.dev0 - Platform: Linux-4.15.0-140-generic-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyTorch version (GPU?): 1.10.1+cu102 (True) - Using GPU in script?: Yes, 4x Tesla K80 ### Who can help @sgugger, @patil-suraj ## Information Model I am using: BART(base) The problem arises when using: ✅ the official example scripts I'm trying to run [this script](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization.py) with cnn dailymail dataset. It works fine when using single GPU, but crashes with RuntimeError: CUDA error: device-side assert triggered when I try to use multi-GPU. Also, I don't think the model (BART) is the problem itself because I also tried T5-small and it gave the same result. Finally, I looked up this error here in `transformers` github issues, so I tried to pass different values of the `max_source_length` parameter (128, 512) as input length seems to cause this error in some cases, but it didn't help either. ## To reproduce Run the `run_summarization.py` script with this command: ``` python3 run_summarization.py \ --model_name_or_path facebook/bart-base \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --output_dir /tmp/tst-summarization \ --overwrite_output_dir \ --per_device_train_batch_size=2 \ --per_device_eval_batch_size=2 \ --predict_with_generate \ --max_source_length 512 ``` ⚠Important to notice, I also pass env var `NCCL_P2P_DISABLE=1`, otherwise training on my GPU setting gets stuck in a deadlock. ## Complete stack trace ``` /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
[... the same `srcIndex < srcSelectDimSize` assertion repeats for threads [44,0,0] through [87,0,0] ...]
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [33,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed. Traceback (most recent call last): File "run_summarization.py", line 698, in <module> main() File "run_summarization.py", line 617, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/dev/anaconda3/envs/kenv/lib/python3.8/site-packages/transformers/trainer.py", line 1373, in train tr_loss_step = self.training_step(model, inputs) File "/home/dev/anaconda3/envs/kenv/lib/python3.8/site-packages/transformers/trainer.py", line 1948, in training_step loss = self.compute_loss(model, inputs) File "/home/dev/anaconda3/envs/kenv/lib/python3.8/site-packages/transformers/trainer.py", line 1980, in compute_loss outputs = model(**inputs) File "/home/dev/anaconda3/envs/kenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/home/dev/anaconda3/envs/kenv/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/home/dev/anaconda3/envs/kenv/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/home/dev/anaconda3/envs/kenv/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/home/dev/anaconda3/envs/kenv/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise raise exception RuntimeError: Caught RuntimeError in replica 0 on device 0. 
Original Traceback (most recent call last): File "/home/dev/anaconda3/envs/kenv/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/home/dev/anaconda3/envs/kenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/home/dev/anaconda3/envs/kenv/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py", line 1326, in forward outputs = self.model( File "/home/dev/anaconda3/envs/kenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl result = forward_call(*input, **kwargs) File "/home/dev/anaconda3/envs/kenv/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py", line 1198, in forward encoder_outputs = self.encoder( File "/home/dev/anaconda3/envs/kenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/home/dev/anaconda3/envs/kenv/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py", line 824, in forward layer_outputs = encoder_layer( File "/home/dev/anaconda3/envs/kenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/home/dev/anaconda3/envs/kenv/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py", line 319, in forward hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training) File "/home/dev/anaconda3/envs/kenv/lib/python3.8/site-packages/torch/nn/functional.py", line 1169, in dropout return _VF.dropout_(input, p, training) if inplace else _VF.dropout(input, p, training) RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. ``` ## Expected behavior Expecting this scripts to work fine on multiple GPUs as well as on a single GPU. Thanks!
02-03-2022 06:57:26
02-03-2022 06:57:26
It's very possible that this model does not work with `DataParallel`. Have you tried launching distributed training, as is the [recommended way](https://pytorch.org/docs/stable/notes/cuda.html#use-nn-parallel-distributeddataparallel-instead-of-multiprocessing-or-nn-dataparallel) from the PyTorch team?<|||||>@sgugger Yes, this indeed turned out to be the problem. I actually used the code snippet from [here](https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/trainer#:~:text=Issue%20of%20FairScale.-,Usage,-%3A) and it worked. Thanks!
transformers
15,495
closed
Do Hugging Face defaults allow logging MLflow artifacts and naming every MLflow run?
I am training a simple binary classification model using Hugging face models using pytorch. Bert PyTorch HuggingFace. Here is the code: ``` import transformers from transformers import TFAutoModel, AutoTokenizer from tokenizers import Tokenizer, models, pre_tokenizers, decoders, processors from transformers import AutoTokenizer from transformers import AdamW from transformers import get_linear_schedule_with_warmup from transformers import BertTokenizerFast as BertTokenizer, BertModel, AdamW, get_linear_schedule_with_warmup,BertConfig ``` ``` def compute_metrics(eval_pred): logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) acc = np.sum(predictions == labels) / predictions.shape[0] return {"accuracy": acc, 'precision': metrics.precision_score(labels, predictions), 'recall': metrics.recall_score(labels, predictions), 'f1': metrics.f1_score(labels, predictions)} training_args = tr.TrainingArguments( #report_to = 'wandb', output_dir='/home/pc/proj/Exp2_conv_stampy_data/results_exp0', # output directory overwrite_output_dir = True, num_train_epochs=2, # total number of training epochs per_device_train_batch_size=32, # batch size per device during training per_device_eval_batch_size=32, # batch size for evaluation learning_rate=2e-5, warmup_steps=200, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs_exp0', # directory for storing logs logging_steps=137, evaluation_strategy="epoch" ,save_strategy="epoch" ,load_best_model_at_end=True ,fp16=True ,run_name="final_model0" ) # counter = 0 # results_lst = [] from transformers import TrainerCallback from copy import deepcopy model = tr.XLMRobertaForSequenceClassification.from_pretrained("/home/pc/multilingual_toxic_xlm_roberta",problem_type="single_label_classification", num_labels=2,ignore_mismatched_sizes=True, id2label={0: 'negative', 1: 'positive'}) train_encodings = tokenizer(train_texts, truncation=True, padding=True, max_length=512, return_tensors="pt") val_encodings = tokenizer(val_texts, truncation=True, padding=True, max_length=512, return_tensors="pt") train_data = SEDataset(train_encodings, train_labels) val_data = SEDataset(val_encodings, val_labels) model.to(device) class CustomCallback(TrainerCallback): def __init__(self, trainer) -> None: super().__init__() self._trainer = trainer def on_epoch_end(self, args, state, control, **kwargs): if control.should_evaluate: control_copy = deepcopy(control) self._trainer.evaluate(eval_dataset=self._trainer.train_dataset, metric_key_prefix="train") return control_copy trainer = tr.Trainer( model=model, # the instantiated Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_data, # training dataset eval_dataset=val_data, # evaluation dataset compute_metrics=compute_metrics # the callback that computes metrics of interest ) trainer.add_callback(CustomCallback(trainer)) train = trainer.train() trainer.save_model("/home/pc/proj/Exp2_conv_stampy_data/result_toxic_model_exp0") ``` I see by default `mlruns` directory is created. 
<img width="691" alt="Screenshot 2022-02-03 at 12 20 49 PM" src="https://user-images.githubusercontent.com/11159549/152294935-8ee464fa-122a-42ae-b546-c4d907baa473.png"> **What is `0' and what are these 2 folders inside `0`?** **How can rename to something useful and understandable.?** **If I run multiple runs, how can I log every run of model with something like `run1`, `run2` under same experiment?** **Also I see artifact folder is empty, how to log final model?**
02-03-2022 06:55:39
02-03-2022 06:55:39
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
15,494
closed
fix TFMarianMTModel output
# What does this PR do? One-line change: currently, `TFMarianMTModel` returns the `decoder`'s `last_hidden_state` as `encoder_last_hidden_state`. This PR fixes it. ``` encoder_last_hidden_state=outputs.last_hidden_state, # index 0 of encoder outputs ```
02-03-2022 06:20:41
02-03-2022 06:20:41
_The documentation is not available anymore as the PR was closed or merged._
transformers
15,493
closed
[deepspeed] fix a bug in a test
This PR fixes a small bug in a test - was using zero3 for both stages by mistake, so fixing to test zero2 as well. @sgugger
02-03-2022 05:50:34
02-03-2022 05:50:34
_The documentation is not available anymore as the PR was closed or merged._
transformers
15,492
closed
Remove loss from some flax models docs & examples
# What does this PR do? Tiny change: remove `loss` & `return_loss` in flax docs & examples. @patil-suraj
02-03-2022 05:48:53
02-03-2022 05:48:53
_The documentation is not available anymore as the PR was closed or merged._
transformers
15,491
open
Support for Monotonic Multihead Attention-based Simultaneous Speech-to-text Translation
# 🌟 New model addition

Simultaneous speech-to-text translation using Monotonic Multihead Attention (MMA). I am wondering whether anybody is working on implementing this model at the moment. I am also unsure whether this model can be supported by Hugging Face, since inference works in a particular way, using frameworks like [SimulEval](https://github.com/facebookresearch/SimulEval) to simulate streaming input, which may not be compatible with the current Hugging Face inference system.

## Model description

[MMA (Ma et al., 2019)](https://arxiv.org/abs/1909.12406) has been used to handle streaming text/speech inputs, mostly for translation, where MMA extends the monotonic attention mechanism to multiple heads.

## Open source status

* [x] the model implementation is available: [Fairseq implementation is available here](https://github.com/pytorch/fairseq/blob/fcca32258c8e8bcc9f9890bf4714fa2f96b6b3e1/examples/simultaneous_translation/models/convtransformer_simul_trans.py#L29~#L63)
* [ ] the model weights are available: (give details)
* [ ] who are the authors: Xutai Ma (@xutaima), Juan Pino, James Cross, Liezl Puzon, Jiatao Gu

Inference framework: [Facebook Research SimulEval](https://github.com/facebookresearch/SimulEval)
02-03-2022 04:59:01
02-03-2022 04:59:01
@beomseok-lee can I work on this issue?
transformers
15,490
closed
How to convert tf_model.h5 to tf_model.ckpt
The BERT model folder contains these files: config.json, tf_model.h5, tokenizer_config.json, tokenizer.json, vocab.txt. If we instead require these files: bert_config.json, bert_model.ckpt.data-00000-of-00001, bert_model.ckpt.index, bert_model.ckpt.meta, vocab.txt, then how can this be done?
02-03-2022 02:11:35
02-03-2022 02:11:35
Hey @kusumlata123 👋 The format you'd like to have BERT in is a TensorFlow 1.x format, which we don't support. There are two possible paths you could follow: 1. Migrate to Tensorflow 2.x, and get full compatibility with our models; 2. Attempt to convert our Keras `.h5` model into a TF 1.x checkpoint (e.g. see [here](https://stackoverflow.com/questions/45466020/how-to-export-keras-h5-to-tensorflow-pb)). However, we have no guarantees this will work :) I'm closing the issue as there are no actions we can do from our end.
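To illustrate path 1 with a quick sketch (my own addition, assuming a TF 2.x environment and the stock `bert-base-uncased` weights): the Keras `.h5` weights can be re-saved as a TF 2.x checkpoint, which yields `.index`/`.data-*` files, though not the TF 1.x `.meta` graph format.

```python
import tensorflow as tf
from transformers import TFBertModel

model = TFBertModel.from_pretrained("bert-base-uncased")  # loads config.json + tf_model.h5
checkpoint = tf.train.Checkpoint(model=model)
checkpoint.save("converted/bert_model.ckpt")  # writes TF 2.x checkpoint files
```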
transformers
15,489
closed
Create a custom model guide
First draft of a guide for how to create a model without using any of the `Auto...` classes to give users a better idea of what's happening behind the automagic. The goal of this guide is to show users alternative methods for creating a model. It also demonstrates how users can instantiate these classes themselves if they want to customize or experiment with the default attributes/parameters loaded from a pretrained model. So in a sense, it is also a guide for creating a custom model. Some feedback I would appreciate: - Would adding a graphic showing `configuration -> model <- tokenizer/feature extractor/processor` help show how a model is created? - Would adding an end-to-end code sample (setting up a custom configuration, model and tokenizer) at the end help tie everything together? - Are there more details I can add around creating a custom model? - How is my tone for training a model from scratch? I feel like it might be a bit too harsh (see line 118). 😅 Thanks and let me know if I'm missing anything!
02-03-2022 00:28:48
02-03-2022 00:28:48
_The documentation is not available anymore as the PR was closed or merged._
transformers
15,488
closed
[parallelism docs] Megatron-Deepspeed info
This PR adds: - BigScience fork of Megatron-Deepspeed - Super important paper on Megatron-Deepspeed @sgugger
02-03-2022 00:27:09
02-03-2022 00:27:09
_The documentation is not available anymore as the PR was closed or merged._
transformers
15,487
closed
502 Server Error: Bad Gateway for url: https://huggingface.co/api/models/t5-base
I saw a similar issue in this forum before, and it seems that a reboot is needed for the data centre? It happens when I try this: `t5_tokenizer = T5Tokenizer.from_pretrained("t5-base")`
02-03-2022 00:20:32
02-03-2022 00:20:32
I have the same issue when I try to train a distilbert model: 502 Server Error: Bad Gateway for url: https://huggingface.co/distilbert-base-uncased/resolve/main/config.json<|||||>Currently, it seems that the huggingface server is down. <|||||>it's back up now
transformers
15,486
closed
[deepspeed docs] DeepSpeed ZeRO Inference
A good demo example has been worked out for using DeepSpeed ZeRO Inference w/o using HF Trainer [here](https://github.com/huggingface/transformers/issues/15399#issuecomment-1025240005), so let's add it to the doc as we didn't have any. @sgugger
02-03-2022 00:12:38
02-03-2022 00:12:38
_The documentation is not available anymore as the PR was closed or merged._
transformers
15,485
open
Number-specific tokenization changes
# 🌟 New model addition ## Model description I wanted to contribute a bunch of number-specific LMs proposed in recent work. Most of these are not architecture changes but simple tokenization tricks such as converting a number `329` to `3.29e2` (scientific; [Zhang et al. 2020](https://aclanthology.org/2020.findings-emnlp.439/)) or `3 2 9` (digit splitting; [Nogueira et al. 2021](https://arxiv.org/abs/2102.13019)) or `e2` (exponent only; [Spokoyny et al. 2020](https://aclanthology.org/2020.emnlp-main.385/) and [Thawani et al. 2021](https://aclanthology.org/2021.emnlp-main.557/)). The motivation is that several industrial applications require number-heavy NLP but struggle with existing models. I discussed a specific way to do this on team slack with @SaulLu for, say, NumBERT (scientific notation) which involves adding a new model (tokenizer-only) and uploading the pretrained weights to the hub. I wanted to open a broader discussion here about more such number-tokenizer-only methods, some of which may not even have pretrained weights. The hope would be to make some abstract intervention (perhaps at the tokenizer level) to let the user configure GPT or BERT tokenizer as `number_tokenizer=exponent`. But perhaps clubbing methods from different papers into one model/tokenizer is against HF's philosophy? If so, I could proceed with trying to simply incorporate them as individual models - is it fine if some of them do not have pretrained weights available? ## Open source status * [X] the model implementation is available: [T5 fine-tuned on arithmetic, based on HF transformers](https://github.com/castorini/transformers-arithmetic) and [NumBERT/scientific, based on google-bert original code](https://github.com/google-research/google-research/tree/master/numbert) * [X] the model weights are available: [BERT pretrained with scientific notation](https://console.cloud.google.com/storage/browser/gresearch/numbert) * [X] who are the authors: @spokoyny @XikunZhang @DeepakRamachandran @iftenney @yanaiela @rodrigonogueira4 @lintool
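To make the three tokenization tricks concrete, here is a tiny sketch (my own illustration; the schemes in the cited papers handle signs, decimals, and special tokens more carefully):

```python
def to_scientific(num: str) -> str:   # "329" -> "3.29e2"  (NumBERT-style)
    mantissa, exponent = f"{float(num):e}".split("e")
    return f"{float(mantissa):g}e{int(exponent)}"


def to_digits(num: str) -> str:       # "329" -> "3 2 9"   (digit splitting)
    return " ".join(num)


def to_exponent(num: str) -> str:     # "329" -> "e2"      (exponent only)
    return f"e{int(f'{float(num):e}'.split('e')[1])}"


print(to_scientific("329"), "|", to_digits("329"), "|", to_exponent("329"))
# 3.29e2 | 3 2 9 | e2
```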
02-02-2022 20:56:03
02-02-2022 20:56:03
transformers
15,484
closed
ASR pipeline never takes into account the beam width of ngram
I am using the eval.py script mentioned [here](https://github.com/huggingface/transformers/blob/master/examples/research_projects/robust-speech-event/eval.py). I try to pass the beam_width parameter at lines 84 and 88, but the results are the same no matter which beam width I give. Am I doing something wrong?
02-02-2022 19:20:54
02-02-2022 19:20:54
Hey @harveenchadha, yeah this is indeed not possible at the moment sadly. We could make this easily possible and I would be fine with adding some code to allow passing those decoding hyper-parameters. I'm just wondering at what point the ASR pipeline becomes too much of a black magic vehicle. @Narsil - would you be fine with allowing one to define inputs to the `processor's` `batch_decode()` method in eval? E.g. do you think we could allow users to pass the following parameters: https://github.com/huggingface/transformers/blob/45cac3fade34cb7134b080c5060c250f810db5e2/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L247 right through the pipeline? Happy to open a PR if you're ok with it<|||||>What I don't think is nice, is blindly passing kwargs, Because if multiple places want to receive arbitrary kwargs and have the same name, well, bad things happen. IMO the options are: - Make those arguments available within a config and get their default from there. It's transparent for pipeline (and every object in transformers ecosystem which seems nice) - Whitelist a subset of commonly used parameters. It's simple, no risk of clashing. But the list can grow very fast and then it becomes hard to use the pipeline because of so many parameters. - Pass directly a `decode_kwargs`, `generate_kwargs` as a single argument which is a dict. No clashing, all arguments supported by default, but less convenient to use `pipeline(..., decoder_kwargs = {....})`. - Support 1 and only 1 `**kwargs` and ban every other kwargs, force them to use a whitelist. This is what currently happens. I tend to dislike this because 1 location of kwargs is more important than others (and decision is pretty arbitrary). Here `decoder` definitely does not seem like a good fit for this prioritized spot. For this particular problem I would tend to lean on option 3, wdyt ?<|||||>Yes, I see the point and I think it makes sense to add a `decoder_kwargs={}` object. Regarding the `generate_kwargs` however, if we decide to go for: ```py def __call__(self, inputs, decoder_kwargs, generate_kwargs, **kwargs) ``` it wouldn't be fully in line with our other generation pipelines for which the generate kwargs can directly be passed to `**kwargs`. E.g.: ```py generator = pipeline("text-generation") generater("This is a prompt", max_length=50, do_sample=True) ``` is possible! Wouldn't it be good to have the same behavior for both the Generation and ASR pipeline for consistency and to not surprise the user? Maybe we could go for a: ```py def __call__(self, inputs, decoder_kwargs, **kwargs): ``` syntax? Another option is the 1 and only 1 `**kwargs` design, which I think can work nicely as well. We could heavily mitigate the risk of clashing names though by adding a prefix to the decoder kwargs (they are less important IMO). *E.g.* `decoder_beam_width` would set the decoder's `beam_width` here: https://github.com/huggingface/transformers/blob/5ec368d79ecfa1467358f5b42da2cecbe23d6b0d/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L251 We do more or less the same thing for encoder-decoder models here: https://github.com/huggingface/transformers/blob/5ec368d79ecfa1467358f5b42da2cecbe23d6b0d/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L348 For this approach we'd obviously need good documentation, but it would be relatively save and user-friendly IMO. What do you think @Narsil ?<|||||> `decoder_kwargs` is better IMO. 
It's cleaner from a code perspective and even for the user and the doc (Please refer to this other object which describes its parameters, less likely to go out of sync). `decoder_beam_width` is very magical, and users tend to read the doc... fast, when they do... (We do need a solid documentation for sure, but regular consistent code and proper errors is almost always superior to doc IMHO). That's another source of confusion in information flow. Right now, there is logic dedicated to feeding the correct kwargs to the correct function (`preprocess`, `_forward`, `postprocess`) and with magic args it gets confusing pretty quick. You're also not really preventing any clashing since `generate_kwargs` contain `decoder_start_ids` and your `lm_decoder` might one day contain `start_ids` :D I really don't like that option. `generate_kwargs` **are** special, and I don't want to break anything there for other pipelines. But they don't seem that necessary right now for ASR (most models are CTC right ?). I think we can postpone the debate until the need arise right ? (Right now I think you are right, if we wanted to add it, consistency for `generate_kwargs` would trump cleanliness, but I would rather delay that choice)<|||||>Cool sounds good! Let's go for `decoder_kwargs(...)` then now<|||||>@harveenchadha - would you be interested in opening a PR ? Otherwise maybe @Narsil or @anton-l ? I'm sadly off in a bit and won't have time tomorrow :-/<|||||>Off for a week starting tomorrow.<|||||>Hi @patrickvonplaten @anton-l @Narsil ! I'm working on the last changes for the robust speech event and we (dbdmg Italian team) really wanted the decoder_kwargs feature. Following your discussion, I implemented a ```python def __call__(self, inputs, decoder_kwargs, **kwargs): ``` version. You can find it on my latest [commit](https://github.com/g8a9/transformers/commit/1f25864edf0319a9984b3051b420b86e287d949b). It's my first contribution and I don't know if I added it properly. Does it feel right to you? <|||||>Hi @g8a9 Thanks for taking a stab at it. Ultimately I don't think your commit is the correct route for this (https://github.com/huggingface/transformers/issues/15484#issuecomment-1028826853) I created a new PR which hopefully is ok for you to work with (look at the test for usage right now) https://github.com/huggingface/transformers/pull/15646<|||||>Hi @Narsil I've looked at your new PR, seems ok to me. I was missing the way params should have been handled in the parent class. But I hope it helped nonetheless :) <|||||>Definitely very help ful @g8a9 ! <|||||>Sorry guys, couldn't contribute as was not well from couple of days!<|||||>No worries! Hope you're feeling better :-)
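To make option 3 above concrete, the usage being discussed would look roughly like this from the caller's side (`decoder_kwargs` is the proposed argument, not an existing one at the time of this thread; the file path is a placeholder and the keys mirror `Wav2Vec2ProcessorWithLM.batch_decode`):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="patrickvonplaten/wav2vec2-base-100h-with-lm")
# hypothetical: forward n-gram decoding options to the LM decoder in a single dict
transcription = asr("sample.flac", decoder_kwargs={"beam_width": 100, "alpha": 0.5, "beta": 1.5})
```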
transformers
15,483
closed
Isn't `transformers.utils.fx` compatible with torch 1.10+ ?
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/conda/lib/python3.7/site-packages/transformers/utils/fx.py", line 579, in symbolic_trace
    tracer = HFTracer(batch_size=batch_size, sequence_length=sequence_length, num_choices=num_choices)
  File "/opt/conda/lib/python3.7/site-packages/transformers/utils/fx.py", line 244, in __init__
    f"Found an incompatible version of torch. Found version {torch_version}, but only version "
ImportError: Found an incompatible version of torch. Found version 1.10.0, but only version 1.9 is supported.
```
torch 1.10+ also provides fx, so why does Transformers limit it to torch 1.9? cc @stas00 @thomasw21 @michaelbenayoun
02-02-2022 18:39:35
02-02-2022 18:39:35
I found [this](https://github.com/huggingface/transformers/pull/14321).<|||||>cc @michaelbenayoun <|||||>Yeah, it had to do with torch doing not back-compat changes in the FX API - since this is all experimental. But yes, Michael is your man. Since FX API is in flux still I think ideally we should target pt-1.11 which should be out in probably a month or so.<|||||>I think fx became stable as of torch 1.10, so we should maybe drop that constraint?<|||||>Hi, you are right #14321 solves your issue, and as @thomasw21 said, if FX indeed became stable at torch 1.10, we will relax that constraint to something more like 1.10+. Should merge the PR soon!
transformers
15,482
closed
Fix labels stored in model config for token classification examples
# What does this PR do? This PR fixes the labels stored inside the config of a model in our token classification example. Currently, when the dataset used comes from the datasets library, the correspondence stored is the identity (0->0, 1->1, etc.), which is not super useful. Fixes #15474
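For context, a small sketch of the mapping the script is meant to store after this fix (label names are illustrative):

```python
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained("bert-base-cased", num_labels=3)
# With the fix, the real class names from the dataset end up here instead of the identity mapping.
model.config.id2label = {0: "O", 1: "B-PER", 2: "I-PER"}
model.config.label2id = {label: i for i, label in model.config.id2label.items()}
```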
02-02-2022 18:33:47
02-02-2022 18:33:47
_The documentation is not available anymore as the PR was closed or merged._
transformers
15,481
closed
Fix docstring of ASR pipeline
# What does this PR do? This should fix the current problem in the documentation. cc @Narsil
02-02-2022 16:19:25
02-02-2022 16:19:25
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for taking care of it ! Sorry about missing it before merging.
transformers
15,480
closed
fix error posted in issue #15448
Signed-off-by: bugface <[email protected]>

# What does this PR do?

Fixes #15448

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
@LysandreJik
02-02-2022 15:06:03
02-02-2022 15:06:03
_The documentation is not available anymore as the PR was closed or merged._<|||||>@LysandreJik it looks good to me as well. thanks.<|||||>Perfect, thank you!
transformers
15,479
closed
Wrong/inconsistent behaviour in EncoderDecoderModel and generate method
Hi guys, When creating my own EncoderDecoder Abstraction to have more flexibility initializing custom huggingface based models as encoder and decoder, I noticed a couple of issues that imho should be fixed/changed. I think this is most relevant to @patrickvonplaten since he seems to be in charge of `EncoderDecoderModel` in `modeling_encoder_decoder.py` and `generation_utils.py`. I tag @LysandreJik as well since I detected some other potential improvement related to `BERT` (maybe also other models if the behaviour is the same). 1. **Problem**: When using `EncoderDecoderModel` the underlying encoder and decoder could have different tokenizers and thus different `pad_token_id`. That means `self.config.encoder.pad_token_id` and `self.config.decoder.pad_token_id` might be different. When generating with an `encoder_decoder_instance` and no `attention_mask` is provided to the `generate` function, the attention mask is internally created. This happens in `_prepare_attention_mask_for_generation()` in `generation_utils.py`. However, this function does not distinguish the encoder-decoder vs decoder only case. Hence, it uses the `pad_token_id` that is registered in `self.config.pad_token_id`. This can cause a problem if `self.config.pad_token_id` is not equal to `self.config.encoder.pad_token_id`. **Proposed Solution**: Imho `_prepare_attention_mask_for_generation()` should check for `self.config.is_encoder_decoder` and if true tha padding token should be taken from `self.config.encoder.pad_token_id` instead of `self.config.pad_token_id`. 2. **Problem**: The decoder attention mask is created/updated on each generation step in `generate()` by calling `prepare_inputs_for_generation()` which is implemented in the corresponding model instance, e.g. encoder_decoder, bert, etc. However, `BERT` implements this function to simply create an all-ones mask that mimics the shape of the current `input_ids`, irrespective of previously predicted ids. Assuming that at some point in the generation process a `pad_token_id` is predicted, the attention mask update should take this into account and place a 0 at that position in the mask. **Proposed Solution**: All models that implement `prepare_inputs_for_generation()` should imho take their corresponding `pad_token_id` in `input_ids` into account when updating the `attention_mask` between generation steps. 3. **Problem** : The `attention_mask` creation in e.g. `BERT` and in `generate()` is not aligned if the user does not provide a mask himself. `BERT` (and maybe other models) simply create a mask of all-ones (same shape as `input_ids`). As described in 1. `generate()` takes the `pad_token_id` into account and creates a proper mask based on `input_ids`. At the moment I don't need to provide the mask to `generate()` but I have to provide the mask to a simple `forward()` during training because if it is not provided an all-ones mask is created. I feel this should be aligned -> Model creates the correct mask internally if not provided. **Proposed Solution**: Imho each model should create the correct `attention_mask` if the user does not provide any. The `pad_token_id` is known to the model, so implementing this should be no problem. Some feedback about these thoughts would be great. Maybe I missed something in the code base and the issues are not relevant afterall. Thanks for your work and I'm looking forward hearing from you. Lars
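As a side note, here is a sketch of the calling pattern that sidesteps problem 1 entirely by building the mask with the encoder's tokenizer and passing it explicitly (the bert2bert pair is arbitrary and untrained, so the generated ids are meaningless; this only illustrates the call):

```python
from transformers import AutoTokenizer, EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
enc_tok = AutoTokenizer.from_pretrained("bert-base-uncased")

# generate() needs these set for an EncoderDecoderModel
model.config.decoder_start_token_id = enc_tok.cls_token_id
model.config.pad_token_id = enc_tok.pad_token_id

batch = enc_tok(["a short input", "a somewhat longer input sentence"], padding=True, return_tensors="pt")
generated = model.generate(
    input_ids=batch.input_ids,
    attention_mask=batch.attention_mask,  # explicit mask built from the *encoder* tokenizer
    max_length=10,
)
print(generated.shape)
```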
02-02-2022 13:10:11
02-02-2022 13:10:11
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @LarsHill, Thanks a lot for the feedback and I'm very sorry to be so late to answer here. 1. Very good point! It's a tricky thing to solve because: - We don't want to much model-specific code in `generate()` and this use-case is quite edge-casy, model-specific - On the other hand, we also don't want silent errors (they are the worst to debug). Your solution makes a lot of sense! The problem is however that most encoder-decoder models, like T5, BART don't have a `config.encoder` variable so we quickly arrive at 2, 3 if statements. I think for now the best we can do here is to write some nice docs that explain how to use the Encoder-Decoder architecture. E.g. if we would put a warning on the `EncoderDecoderModel` card and write a nice `How-to-guide`, I'm quite sure that people won't run into this issue too much. Also it's not really a bug in the code, but something the user should be made aware of IMO. Ideally, the user passes an `attention_mask` in which case there is no need to have the padding token correctly defined. 2. Also a good point, but this behavior really should not occur. The only time there could be padding token ids in the decoder_input_ids tensor is when the user passes those her/himself and that's quite an edge-case for encoder-decoder models. I.E. passing both `input_ids` (like an article to summarize) and a prompt that should be used to start the summarization with is still quite an edge case for me. In this case, the user should (again) also pass the decoder_attention_mask so that this problem would be solved correctly IMO. Note that no model should *generate* a pad_token_id -> this would be a modeling bug and then we really can't expect the model to generate anything useful at all. 3.Good point. I agree that we should probably align the methods, but it's quite difficult now because of possible backward breaking changes. We've more or less decided to not automatically create the attention_mask if a padding token is provided because: - some models don't have a padding token id like GPT2, what do we do then? - it might be possible that a user would want to attend to a padding token and by force creating the padding token this use case is not possible anymore (think for QA this is the case). - However, I do agree that in 95% of the cases the padding token should be masked, so it does make sense to create a warning in every model if no attention_mask is provided but the input_ids contain the padding token. <|||||>Also cc https://github.com/huggingface/transformers/issues/4483#issuecomment-1066600797<|||||>@LarsHill - would you maybe be interested in tackling this issue by improving the docs a bit: https://github.com/huggingface/transformers/issues/16135 ? :-)<|||||>Added some action items following the discussion here: - 1. https://github.com/huggingface/transformers/issues/16135 - 2. https://github.com/huggingface/transformers/issues/16136<|||||>> @LarsHill - would you maybe be interested in tackling this issue by improving the docs a bit: #16135 ? :-) Hi, First of all, thanks for the extensive reply! I agree, that most of my concerns could be tackled by improving the documentation. 
After all, it is not code-breaking since there are workarounds. It is just that I ran into some of the mentioned problems myself and had to dig deep into the code base to understand what was going on. Regarding contributing to the documentation, I cannot promise to get to it any time soon, since I'm quite occupied with project and research work at the moment. But I'll keep this issue in mind and get back to it. If no one else has contributed in the meantime, I'll take a shot.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
15,478
closed
Pretrained model for sequence to sequence question answering
Hi, is there a model available which has been trained on sequence-to-sequence question answering with the SQuAD v1/v2 dataset? I could not find any on the Hugging Face model hub.
02-02-2022 12:50:30
02-02-2022 12:50:30
Hi, yes there are a few available. I filtered on "question-answering" and found the following seq2seq models: * "bart": https://huggingface.co/models?pipeline_tag=question-answering&sort=downloads&search=bart * "t5": https://huggingface.co/models?pipeline_tag=question-answering&sort=downloads&search=t5<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, thanks for your response @NielsRogge. I meant a generative question answering model trained on SQuAD, i.e. a model trained with the objective of generating the answer sequence, unlike the objective of predicting the start and end index of the answer. <|||||>Hi, there are a few available, like this one: https://huggingface.co/valhalla/t5-small-qa-qg-hl<|||||>Thanks @NielsRogge, this is exactly what I am looking for.
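For completeness, a rough sketch of generative QA with the checkpoint mentioned above; the `question: ... context: ...` prompt format is an assumption based on that model's card, not something guaranteed here.

```python
from transformers import pipeline

qa = pipeline("text2text-generation", model="valhalla/t5-small-qa-qg-hl")

prompt = "question: Where does Wolfgang live? context: My name is Wolfgang and I live in Berlin."
# the answer is generated token by token instead of being predicted as a span
print(qa(prompt))
```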
transformers
15,477
closed
tf.concatenate does not exist
## Environment info Any ### Who can help @Rocketknight1 ## Information BERT TFBertSelfAttention layer calls a function `tf.concatenate` when given a `past_key_value` parameter. This function does not exist in the TF 2.0 API. The correct function name is `tf.concat`. ## To reproduce Call a TFBertLayer layer with a non nil `past_key_value` argument. ## Expected behavior The code should be modified to use `tf.concat` instead.
02-02-2022 12:28:36
02-02-2022 12:28:36
Good spot! Would you be willing to submit a PR? If not, don't worry, let us know and we'll do it instead.<|||||>I will try to submit a PR. It will probably take me a few days to craft a test case that exercises this functionality.<|||||>Hi, this is my fault in my PR: #13222. There are a few files involved, in particular, the TF template file ``` transformers\templates\adding_a_new_model\cookiecutter-template-{{cookiecutter.modelname}}\test_modeling_tf_{{cookiecutter.lowercase_modelname}}.py ``` I am sorry about this mistake. @pedro-r-marques If you still want to open the PR, could you also fix the other models that have `tf.concatenate`, please? Thank you!<|||||>Hi @pedro-r-marques Would it be OK for me to submit a PR for this issue? Or you want to contribute once you have the time?<|||||>@ydshieh Please do go ahead. I didn't have the opportunity to reproduce the use case for `past_key_value`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
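For reference, a minimal sketch of the intended fix (the shapes are made up): cached key/value states are joined along the sequence axis with `tf.concat`, since `tf.concatenate` does not exist in the TF2 API.

```python
import tensorflow as tf

past_key = tf.zeros((1, 12, 4, 64))  # hypothetical cached keys: (batch, heads, past_seq_len, head_dim)
new_key = tf.zeros((1, 12, 1, 64))   # keys computed for the current decoding step

key_layer = tf.concat([past_key, new_key], axis=2)  # tf.concatenate would raise an AttributeError
print(key_layer.shape)  # (1, 12, 5, 64)
```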
transformers
15,476
closed
Add Adapter Weighs to Flax
# 🚀 Feature request Currently it's possible to add an adapter on the top of PyTorch Wav2Vec2 (https://github.com/huggingface/transformers/blob/1d94d575461a76cb1dcb3ebe6e85f1c85d1dafcd/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1033) - however an equivalent module is missing in Flax: https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py. The adapter is essentially used to reduce the time dimension further so that the encoder's output hidden states have a time context window which is more similar to that of a subword token instead of just a character (as done for CTC). This was introduced for the XLS-R paper: https://arxiv.org/abs/2111.09296 and can be found in the original fairseq code here: https://github.com/pytorch/fairseq/blob/5d2be954bb7531bff92c195e61aa50a8ddd0baab/fairseq/models/speech_to_text/xm_transformer.py#L245 We should add this to Flax as well for the Seq2Seq experiments. ## Goal the following script should give identical results: ```python import torch import numpy as np from transformers import FlaxWav2Vec2Model, Wav2Vec2Model model_fx = FlaxWav2Vec2Model.from_pretrained("patrickvonplaten/dummy_wav2vec2_with_adapter", from_pt=True) model_pt = Wav2Vec2Model.from_pretrained("patrickvonplaten/dummy_wav2vec2_with_adapter") input_torch = torch.ones((2, 5000), dtype=torch.float32) input_fx = input_torch.cpu().numpy() with torch.no_grad(): output_logits_pt = model_pt(input_torch).last_hidden_state output_logits_flax = model_fx(input_fx).last_hidden_state print("Check if shapes are equal") print(f"Shape PyTorch {output_logits_pt.shape} | Shape Flax {output_logits_flax.shape}") print("Check if output values are equal") print(f"Diff {np.max(np.abs(output_logits_pt.numpy()) - np.asarray(np.abs(output_logits_flax)))})") ``` This script fails at the moment because both the shape and the output logits are different. You can also see when loading the model in Flax that some weights are not used since the implementation of FlaxWav2Vec2Adaptor is missing. Traceback: ```bash Some weights of the model checkpoint at patrickvonplaten/dummy_wav2vec2_with_adapter were not used when initializing FlaxWav2Vec2Model: {('adapter', 'layers', '2', 'conv', 'bias'), ('adapter', 'layers', '1', 'conv', 'bias'), ('adapter', 'layers', '0', 'conv', 'kernel'), ('adapter', 'layers', '1', 'conv', 'kernel'), ('adapter', 'layers', '2', 'conv', 'kernel'), ('adapter', 'layers', '0', 'conv', 'bias')} - This IS expected if you are initializing FlaxWav2Vec2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing FlaxWav2Vec2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Check if shapes are equal Shape PyTorch torch.Size([2, 2, 768]) | Shape Flax (2, 15, 768) Check if output values are equal Traceback (most recent call last): File "/home/patrick/./wav2vec2_flax_add_adapter.py", line 20, in <module> print(f"Diff {np.max(np.abs(output_logits_pt.numpy()) - np.asarray(np.abs(output_logits_flax)))})") ValueError: operands could not be broadcast together with shapes (2,2,768) (2,15,768) ```
02-02-2022 11:19:19
02-02-2022 11:19:19
@sanchit-gandhi
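For reference, a rough sketch of what the adapter already does on the PyTorch side (the Flax port would need to mirror this); the config values are the library defaults and the input length is arbitrary.

```python
import torch
from transformers import Wav2Vec2Config, Wav2Vec2Model

config = Wav2Vec2Config(add_adapter=True, num_adapter_layers=3, adapter_stride=2)
model = Wav2Vec2Model(config)

with torch.no_grad():
    hidden_states = model(torch.zeros(1, 16000)).last_hidden_state

# each adapter layer strides over time, so the output has far fewer frames than without the adapter
print(hidden_states.shape)
```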
transformers
15,475
closed
Log custom mlflow artifact using trainer
Hello. Is it currently possible to log some custom artifact while using `Trainer` with `report_to='mlflow'`? I tried to do the following: ``` class YAML_SaverCallback(TrainerCallback): def __init__(self, trainer) -> None: super().__init__() self._trainer = trainer def on_train_begin(self, args, state, control, **kwargs): mlflow.log_artifact("train.yaml", artifact_path="train.yaml") trainer.add_callback(YAML_SaverCallback(trainer)) ``` But it looks like it does not use the MLFlow server that was selected in `mlflow.set_tracking_uri` (I'm using a remote logging server). Apparently, it tries to log it locally: ``` Traceback (most recent call last): File "/home/ilya/XXX/train.py", line 333, in <module> trainer.train() File "/home/ilya/anaconda3/lib/python3.9/site-packages/transformers/trainer.py", line 1275, in train self.control = self.callback_handler.on_train_begin(args, self.state, self.control) File "/home/ilya/anaconda3/lib/python3.9/site-packages/transformers/trainer_callback.py", line 349, in on_train_begin return self.call_event("on_train_begin", args, state, control) File "/home/ilya/anaconda3/lib/python3.9/site-packages/transformers/trainer_callback.py", line 390, in call_event result = getattr(callback, event)( File "/home/ilya/Desktop/XXX/train.py", line 318, in on_train_begin mlflow.log_artifact("train.yaml", artifact_path="train.yaml") File "/home/ilya/anaconda3/lib/python3.9/site-packages/mlflow/tracking/fluent.py", line 605, in log_artifact MlflowClient().log_artifact(run_id, local_path, artifact_path) File "/home/ilya/anaconda3/lib/python3.9/site-packages/mlflow/tracking/client.py", line 955, in log_artifact self._tracking_client.log_artifact(run_id, local_path, artifact_path) File "/home/ilya/anaconda3/lib/python3.9/site-packages/mlflow/tracking/_tracking_service/client.py", line 355, in log_artifact artifact_repo.log_artifact(local_path, artifact_path) File "/home/ilya/anaconda3/lib/python3.9/site-packages/mlflow/store/artifact/local_artifact_repo.py", line 37, in log_artifact mkdir(artifact_dir) File "/home/ilya/anaconda3/lib/python3.9/site-packages/mlflow/utils/file_utils.py", line 113, in mkdir raise e File "/home/ilya/anaconda3/lib/python3.9/site-packages/mlflow/utils/file_utils.py", line 110, in mkdir os.makedirs(target) File "/home/ilya/anaconda3/lib/python3.9/os.py", line 215, in makedirs makedirs(head, exist_ok=exist_ok) File "/home/ilya/anaconda3/lib/python3.9/os.py", line 215, in makedirs makedirs(head, exist_ok=exist_ok) File "/home/ilya/anaconda3/lib/python3.9/os.py", line 215, in makedirs makedirs(head, exist_ok=exist_ok) [Previous line repeated 2 more times] File "/home/ilya/anaconda3/lib/python3.9/os.py", line 225, in makedirs mkdir(name, mode) PermissionError: [Errno 13] Permission denied: '/opt/mlflow' ```
02-02-2022 10:45:10
02-02-2022 10:45:10
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
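A hedged sketch of one thing worth trying (untested here, and the URI is a placeholder): set the tracking URI in the same process before training starts, so that the run created by the MLflow integration, and every later `log_artifact` call, targets the remote server instead of a local store such as `/opt/mlflow`.

```python
import mlflow
from transformers import TrainerCallback

# Placeholder URI: point the fluent API at the remote server before the Trainer starts the run.
mlflow.set_tracking_uri("http://my-mlflow-server:5000")

class YamlSaverCallback(TrainerCallback):
    def on_train_begin(self, args, state, control, **kwargs):
        # Logs to the artifact store of the currently active run on the configured server.
        mlflow.log_artifact("train.yaml", artifact_path="config")
```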
transformers
15,474
closed
token-classification example looses ner_tags labels in trained model
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.16.2 - Platform: Linux-5.10.81.1-microsoft-standard-WSL2-x86_64-with-debian-bullseye-sid - Python version: 3.6.13 - PyTorch version (GPU?): 1.10.1+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @sgugger, @patil-suraj ## Information Model I am using (Bert, XLNet ...): distilbert-base-uncased The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run script run_ner.py with for example the https://huggingface.co/datasets/darentang/sroie dataset 2. The generated model does not contain expected id2label mapping but the labels are also the id numbers. The correct names are available in the `'ner_tags'` column of the dataset. The incorrect mapping happens when [setting up the config](https://github.com/huggingface/transformers/blob/master/examples/pytorch/token-classification/run_ner.py#L312). It only feeds the number of labels and not a proper label_to_id/id_to_label mapping. Here is a rough patch to work around it and to illustrate the issue: ```patch Index: examples/pytorch/token-classification/run_ner.py IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== diff --git a/examples/pytorch/token-classification/run_ner.py b/examples/pytorch/token-classification/run_ner.py --- a/examples/pytorch/token-classification/run_ner.py (revision 1d94d575461a76cb1dcb3ebe6e85f1c85d1dafcd) +++ b/examples/pytorch/token-classification/run_ner.py (date 1643797832215) @@ -48,7 +48,7 @@ # Will error if the minimal version of Transformers is not installed. Remove at your own risks. -check_min_version("4.17.0.dev0") +check_min_version("4.16.2.dev0") require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/token-classification/requirements.txt") @@ -309,9 +309,13 @@ # Distributed training: # The .from_pretrained methods guarantee that only one local process can concurrently # download model & vocab. + id_to_label = {str(i): label for i, label in enumerate(label_list)} + label_to_id = {v: k for k, v in id_to_label.items()} config = AutoConfig.from_pretrained( model_args.config_name if model_args.config_name else model_args.model_name_or_path, num_labels=num_labels, + id2label=id_to_label, + label2id=label_to_id, finetuning_task=data_args.task_name, cache_dir=model_args.cache_dir, revision=model_args.model_revision, @@ -354,22 +358,6 @@ "requirement" ) - if model.config.label2id != PretrainedConfig(num_labels=num_labels).label2id: - label_name_to_id = {k: v for k, v in model.config.label2id.items()} - if list(sorted(label_name_to_id.keys())) == list(sorted(label_list)): - label_to_id = {k: int(label_name_to_id[k]) for k in label_keys} - else: - logger.warning( - "Your model seems to have been trained with labels, but they don't match the dataset: ", - f"model labels: {list(sorted(label_name_to_id.keys()))}, dataset labels: {list(sorted(label_list))}." 
- "\nIgnoring the model labels as a result.", - ) - else: - label_to_id = {k: i for i, k in enumerate(label_keys)} - - model.config.label2id = label_to_id - model.config.id2label = {i: l for l, i in label_to_id.items()} - # Map that sends B-Xxx label to its I-Xxx counterpart b_to_i_label = [] for idx, label in enumerate(label_list): @@ -404,12 +392,12 @@ label_ids.append(-100) # We set the label for the first token of each word. elif word_idx != previous_word_idx: - label_ids.append(label_to_id[label[word_idx]]) + label_ids.append(label[word_idx]) # For the other tokens in a word, we set the label to either the current label or -100, depending on # the label_all_tokens flag. else: if data_args.label_all_tokens: - label_ids.append(b_to_i_label[label_to_id[label[word_idx]]]) + label_ids.append(b_to_i_label[label[word_idx]]) else: label_ids.append(-100) previous_word_idx = word_idx ``` ## Expected behavior The label2id map should be properly populated by the `'ner_tags'`.
02-02-2022 10:34:23
02-02-2022 10:34:23
I can reproduce and see the problem. The fix is a tiny bit more complex than what you're suggesting (to work in all settings), I'll work on that this morning.<|||||>Thanks for looking into it. I expected my solution to be a bit too focused. Great to see such a quick turnaround. :)<|||||>@sgugger While running the updated example it became apparent that it solves the issue correctly but leads to unexpected logs. Since the initial [loading of the AutoConfig](https://github.com/huggingface/transformers/blob/45cac3fade34cb7134b080c5060c250f810db5e2/examples/pytorch/token-classification/run_ner.py#L315) is done without `id2label`/`label2id`, the logs print a relation like `1: Label_1,...`. Only after the loading process is `config.label2id`/`id2label` corrected. It would be great if the config could be correctly configured upfront, leading to correct logs. At the moment it is a bit confusing since what you see is not what you get.<|||||>That's not something we can do, as the log is generated by the config load and we need to analyze the labels that may be present in that config in order to finalize the `label2id`/`id2label` fields. We can add another log when we update the config, however. Would you like to make a PR for that?
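For reference, a minimal sketch of the behaviour the fix is meant to produce, passing real label names into the config up front so the saved model no longer reports `LABEL_0`-style ids (the label names below are made up):

```python
from transformers import AutoConfig, AutoModelForTokenClassification

label_list = ["O", "B-COMPANY", "I-COMPANY", "B-DATE", "I-DATE"]  # hypothetical ner_tags names
id2label = {i: label for i, label in enumerate(label_list)}
label2id = {label: i for i, label in id2label.items()}

config = AutoConfig.from_pretrained(
    "distilbert-base-uncased",
    num_labels=len(label_list),
    id2label=id2label,
    label2id=label2id,
)
model = AutoModelForTokenClassification.from_pretrained("distilbert-base-uncased", config=config)
print(model.config.id2label)  # saved along with the model, so downstream users see real tag names
```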
transformers
15,473
closed
Add preprocess_logits_for_metrics Trainer param
# What does this PR do? In addition to what I described in the issue, I thought adding the labels as a parameter to the preprocess function could be useful at no cost. This forced me to change the order of the accumulation of labels and logits, which creates an asymmetry with respect to the other code blocks. I could change the order in those too, but I tried to change as little as possible. With regards to adding the computation of perplexity in the examples run_clm or run_mlm, do you mean accuracy? Maybe I'm missing something, but having the loss is enough for the way I calculate perplexity. Moreover, those examples don't pass `compute_metrics` to the Trainer either. I can add the computation of accuracy to the examples, using the new parameter to precompute the argmax, although it could still slow down the examples a bit. cc @sgugger Fixes #15466
02-02-2022 07:06:19
02-02-2022 07:06:19
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you! I just added the accuracy to the language modeling examples and the doc change you requested. I haven't worked with masked language models, so please check the `run_mlm` example is ok. I think the unmasked labels get the same -100 id as the padding, but maybe there's something I'm not thinking about.<|||||>I don't think you pushed your changes, you mentioned adding content.<|||||>No, sorry, maybe I expressed it poorly, I meant that I had added the same comment originally.<|||||>Thanks for your contribution!<|||||>Suspecting this PR introduced a breakage - please see: https://github.com/huggingface/transformers/issues/15898
transformers
15,472
closed
Wav2Vec2 - TypeError: Concatenation operation is not implemented for NumPy arrays
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.16.2 - Platform: Ubuntu 20.04.3 LTS (GNU/Linux 5.11.0-44-generic x86_64) - Python version: 3.8 - PyTorch version (GPU?): 1.8.0+cu111 (Yes) - Tensorflow version (GPU?): - - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): Wav2Vec2ForPreTraining The problem arises when using: * [ x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behaviour: 1. I tried running the [pretraining script](https://github.com/huggingface/transformers/blob/d1fd64e7aa40d6a3c69cb21f7fd411a2a3141e04/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py) with the given [command ](https://github.com/huggingface/transformers/tree/d1fd64e7aa40d6a3c69cb21f7fd411a2a3141e04/examples/pytorch/speech-pretraining)(except that I used the dummy dataset to first test it with less data) for the "base-sized" Wav2Vec2 model: ```` accelerate launch run-wav2vec2-pretrain-notrainer.py \ --dataset_name="hf-internal-testing/librispeech_asr_dummy" \ --dataset_config_name="clean" \ --dataset_split_names validation \ --model_name_or_path="patrickvonplaten/wav2vec2-base-v2" \ --output_dir="./wav2vec2-pretrained-demo" \ --max_train_steps="20000" \ --num_warmup_steps="32000" \ --gradient_accumulation_steps="8" \ --learning_rate="0.005" \ --weight_decay="0.01" \ --max_duration_in_seconds="20.0" \ --min_duration_in_seconds="2.0" \ --logging_steps="1" \ --saving_steps="10000" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="8" \ --adam_beta1="0.9" \ --adam_beta2="0.98" \ --adam_epsilon="1e-06" \ --gradient_checkpointing \ --validation_split_percentage="10"\ ```` ## Error I then get the error: ```` ^M 0% 0/20000 [00:00<?, ?it/s]Traceback (most recent call last): File "/ceph/csedu-scratch/project/agansen/run-wav2vec2-pretrain-notrainer.py", line 724, in <module> main() File "/ceph/csedu-scratch/project/agansen/run-wav2vec2-pretrain-notrainer.py", line 566, in main for step, batch in enumerate(train_dataloader): File "/ceph/csedu-scratch/project/agansen/venv2/lib/python3.8/site-packages/accelerate/data_loader.py", line 303, in __iter__ for batch in super().__iter__(): File "/ceph/csedu-scratch/project/agansen/venv2/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__ data = self._next_data() File "/ceph/csedu-scratch/project/agansen/venv2/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 557, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/ceph/csedu-scratch/project/agansen/venv2/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch return self.collate_fn(data) File "/ceph/csedu-scratch/project/agansen/run-wav2vec2-pretrain-notrainer.py", line 323, in __call__ sampled_negative_indices = _sample_negative_indices( File "/ceph/csedu-scratch/project/agansen/venv2/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 329, in _sample_negative_indices sampled_negative_indices[batch_idx] += batch_idx * sequence_length TypeError: Concatenation 
operation is not implemented for NumPy arrays, use np.concatenate() instead. Please do not rely on this error; it may not be given on all Python implementations. ```` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behaviour I ran similar scripts using '_sample_negative_indices()' before and did not get the error. I am not sure what changed. Any ideas would be very appreciated!!
02-02-2022 04:32:07
02-02-2022 04:32:07
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>cc @ArthurZucker @sanchit-gandhi - this issue popped up again. It seems like it works with Transformers v4.15. In case someone has time to look into it to check if it works on current master that would be great. Otherwise happy to take a look!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @annagansen22, Sorry could you try updating your transformers version and then starting the script again? I think this error should not be present anymore for newer versions<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @patrickvonplaten the issue still persists, see https://github.com/speechbrain/speechbrain/issues/1787 I provide there a short summary only: ``` File "speechbrain/recipes/CommonVoice/self-supervised-learning/wav2vec2/train_hf_wav2vec2.py", line 111, in fit_batch predictions = self.compute_forward(batch, sb.Stage.TRAIN) ... File "transformers/models/wav2vec2/modeling_wav2vec2.py", line 285, in _sample_negative_indices sampled_negative_indices[batch_idx] += batch_idx * sequence_length TypeError: Concatenation operation is not implemented for NumPy arrays, use np.concatenate() instead. Please do not rely on this error; it may not be given on all Python implementations. ``` Environment: ``` datasets 2.7.1 huggingface-hub 0.11.1 numpy 1.23.4 scipy 1.8.1 torch 1.12.1 torchaudio 0.12.1 transformers 4.25.1 Python 3.9.13 PyTorch 1.12.1+cu102 ``` Titouan restricted the dependency `transformers==4.15` – which I dropped to see if we really need this limitation (or if we can use simply the latest dependencies for all SpeechBrain recipes). As the error log suggests, there are other Python/PyTorch versions which with that works?<|||||>Hey @anautsch, Could you maybe open a new issue that includes a reproducible code snippet that only uses `transformers`? Thanks!
transformers
15,471
closed
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasSgemmStridedBatched (handle, opa, opb, m, n, k, &alpha, a, lda, stridea, b, ldb, strideb, &beta, c, ldc, stridec, num_batches)
## Environment info NVIDIA-SMI 495.46 Driver Version: 460.32.03 CUDA Version: 11.2 - `transformers` version: 4.16.2 - Platform: Google Colab Pro + - Python version: 3.7 - PyTorch version (GPU?): 1.10.0+cu111 - Using GPU in script?: Yes Error: ``` Saving model checkpoint to ../models/checkpoints/checkpoint-287 Configuration saved in ../models/checkpoints/checkpoint-287/config.json Model weights saved in ../models/checkpoints/checkpoint-287/pytorch_model.bin --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-3-f7f41780dc73> in <module>() 157 ) 158 --> 159 trainer.train() 160 161 trainer.save_model() 14 frames /usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions) 273 attention_probs = attention_probs * head_mask 274 --> 275 context_layer = torch.matmul(attention_probs, value_layer) 276 277 context_layer = context_layer.permute(0, 2, 1, 3).contiguous() RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemmStridedBatched( handle, opa, opb, m, n, k, &alpha, a, lda, stridea, b, ldb, strideb, &beta, c, ldc, stridec, num_batches)` ``` Models: - RoBERTa Library: - Trainer: @sgugger ## To reproduce Steps to reproduce the behavior: I'm calling the trainer with `trainer.train()` and training a model for 10 epochs. I'm %99 percent sure I don't have a memory issue since I'm using a high ram node. The model successfully gets trained for one epoch and checkpoints are saved (as seen in the log,) but as soon as the second epoch starts, I see the error. ## Expected behavior Completing the train without error! P.S. Just to add, the very script in the very environment with the same GPU works perfectly fine with `bert-large-*` models. So I wonder if this error could have anything to do with the RoBERTa model?
02-02-2022 04:23:28
02-02-2022 04:23:28
With no information on how to reproduce the error, there is little we can do to help.<|||||>> With no information on how to reproduce the error, there is little we can do to help. Thanks for your response. Unfortunately, the data needed for reproduction is proprietary and not public, so I can't share it. I thought this might be a common or known issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I met a similar problem on an A100 GPU with CUDA 11.7.
transformers
15,470
closed
Issue with long contexts or AutoTokenizer, unsure which one it is more of
Having an issue where I used a QA model from transformers, tweaked the model in label studio making some annotations and then tried to load the model back again. The model is pulling out the correct answers but seemingly `handle_impossible_answers` isn't working because it gives an answer for every questions even when the question is irrelevant.. What's even weirder is that it doesn't do this in label studio's interface so seemingly this handle_impossible_answers is working on that side. My contexts I pass through the pipeline are also quite long Made sure I have the most up to date transformers model, and the models were saved out with save_pretrained. ``` # Inside some function code model_to_save = model.module if hasattr(model, "module") else model # Take care of distributed/parallel training # noqa # If we save using the predefined names, we can load using `from_pretrained` model_to_save.save_pretrained(workdir) self.tokenizer.save_pretrained(workdir) ``` And I end up seeing all the relevant files in a local dir as follows: ``` # In the model directory job_result.json special_tokens_map.json tokenizer.json train_data.json config.json pytorch_model.bin tokenizer_config.json train_data_info.json vocab.txt ``` I then try and load that model using the exact same transformers model/tokenizer in the way I did before, except swapping it out for the local files. (Also read some bug about local path vs absolute paths when loading models and that made no difference either way for me) ``` model_path = "/some/path/to/model_dir/" model = AutoModelForQuestionAnswering.from_pretrained(model_path) tokenizer = AutoTokenizer.from_pretrained(model_path) QA = pipeline('question-answering', model=model, tokenizer=tokenizer) # Other code... #.... answers = QA(question=questions, context=context, top_k=1, max_answer_len=32, handle_impossible_answer=True) ``` The answers that I get out of here have an answer for every question, even when it doesn't need to be answered. I've tried loading the models multiple ways - the only way I got handle_impossible_answers to work as intended is if the tokenizer wasn't the AutoTokenizer that I had used here (swapped it out for Bert). But then the answers it gave me were complete garbage which was kind of expected. Anybody else run into this issue with AutoTokenizer?? If the context is short (most of the contexts I hit it with are fairly long) then it actually does do a good job of only pulling out the valid answers. So I'm not sure if this is a long context issue, a problem with AutoTokenizer, or even a problem with transformers.pipeline
02-01-2022 21:43:23
02-01-2022 21:43:23
transformers
15,469
closed
Standardize semantic segmentation models outputs
# What does this PR do? This PR standardizes the model outputs for semantic segmentation models and creates an `AutoModelForSemanticSegmentation` class. As discussed internally, models for semantic segmentation should return `logits` of shape `batch_size, num_labels, height, width`, one logit per pixel. **Breaking change:** The `BeitForSemanticSegmentation` and `SegformerForSemanticSegmentation` models have logits with the same height and width as the input after this PR (instead of height/4 and width/4). To maintain some level of backward compatibility, the `SemanticSegmentationModelOutput` has a field `legacy_logits` that users can pick to get the old logits value. Another possible road that is less breaking is to create new classes `BeitForPixelClassification` and `SegformerForPixelClassification` while deprecating the current ones, then use the name "pixel classification" instead of "semantic segmentation" everywhere. This has the benefit of being more understandable to beginners and of looking like our "ForTokenClassification" classes, but it might surprise an expert more used to the "semantic segmentation" name.
02-01-2022 21:23:48
02-01-2022 21:23:48
_The documentation is not available anymore as the PR was closed or merged._
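Independently of which logits resolution a given release ends up returning, upsampling the logits to the input size can be sketched as follows (the checkpoint name is a public Segformer one and the image is a placeholder):

```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")

image = Image.new("RGB", (640, 480))  # placeholder image
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, h, w), possibly h/4, w/4 of the input

upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)  # one class id per pixel
print(segmentation.shape)
```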
transformers
15,468
closed
[Wav2Vec2FeatureExtractor] Align documentation with code
In the code, `do_normalize` defaults to True: https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py#L74 The documentation describes `do_normalize` as a desirable option to set to True, so this PR fixes the documentation to match the code. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
02-01-2022 19:56:20
02-01-2022 19:56:20
_The documentation is not available anymore as the PR was closed or merged._<|||||>https://moon-ci-docs.huggingface.co/docs/transformers/pr_15468/en/model_doc/wav2vec2#transformers.Wav2Vec2FeatureExtractor.do_normalize 🎉 <|||||>Sorry for being so incredibly late here<|||||>Happy to help make things better 🤗 🚀
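A quick sketch of the behaviour the corrected docs describe: normalization is already on by default, so the flag only needs to be passed explicitly to opt out.

```python
from transformers import Wav2Vec2FeatureExtractor

feature_extractor = Wav2Vec2FeatureExtractor()                 # do_normalize defaults to True
print(feature_extractor.do_normalize)                          # True
raw_extractor = Wav2Vec2FeatureExtractor(do_normalize=False)   # explicit opt-out
print(raw_extractor.do_normalize)                              # False
```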
transformers
15,467
open
New and better T5 checkpoints from scaling transformers paper
# 🌟 New model addition ## Model description This paper explores different ways of scaling T5: Scaling Efficiently: Insights from Pre-training and Finetuning Transformers https://arxiv.org/abs/2109.10686 They found new checkpoints (DeepNarrow) that perform significantly better than the previous models on downstream applications. Here is a table from the paper that compares the old Base, Large, XL and XXL models to the new ones: <img width="551" alt="image" src="https://user-images.githubusercontent.com/37597043/152038969-de1fa56e-991d-493b-b94c-96cbc42d69be.png"> The checkpoints were released today here: https://github.com/google-research/google-research/tree/master/scaling_transformers <!-- Important information --> ## Open source status * [x] the model implementation is available: the current T5 implementation in transformers * [x] the model weights are available: https://github.com/google-research/google-research/tree/master/scaling_transformers * [x] who are the authors: Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler
02-01-2022 19:45:49
02-01-2022 19:45:49
cc @patil-suraj @patrickvonplaten <|||||>Holy moly 170 checkpoints at those sizes. I'll give it a try tomorrow. Official link: https://console.cloud.google.com/storage/browser/scenic-bucket/scaling_explorer/<|||||>Okey can't get the models to work out of the box. Will dive a bit deeper into the code next week<|||||>The conversion seems to work now. I've added a first model here: https://huggingface.co/NewT5/t5-efficient-base-el4 I have some internal conversions scripts for https://github.com/google-research/text-to-text-transfer-transformer => Transformers that allows me to quickly convert mesh-tensorflow checkpoints, verify them and upload them. All I need for a conversion now is the following: - name of the original TF checkpoint, e.g. `bi_v1_el4_law_03-17-20-56` as shown on https://console.cloud.google.com/storage/browser/scenic-bucket/scaling_explorer/ - name for the hf checkpoint, e.g. `t5-efficient-base-el4` as decided in https://huggingface.co/NewT5/t5-efficient-base-el4 . - config for the t5 model, e.g.: https://huggingface.co/NewT5/t5-efficient-base-el4/blob/main/config.json . Note often the config is very similar to an existing one. E.g. the one here is the same as https://huggingface.co/t5-base **but** with the following values changed/added `num_layers=4` , `num_decoder_layers=12`. So it's enough to give a reference config and some proposed changes. Now, there are over a hundred new checkpoints which makes a manual conversion too slow and time-consuming. I think we should do one of the following: 1. Write a script that does the conversion automatically. This can definitely be done and it also shouldn't be too hard. In order to do this we have to do two things: a. Find a good name pattern, e.g. `t5-efficient-{config}` b. (This is the time consuming part). Prepare the model configs for each checkpoint to be uploaded. E.g. we would have to look at each checkpoint and define the model config depending on their changes. I won't have time to do this alone - I could need some help here - @Xirider, would you be interested in helping here? Would be happy to decide on a good strategy together to port all of the models :-) c. Decide how to write good model cards in an automated way 2. Only do the conversion for the most important models. In this case we should decide what those important models are. What approach do you think is best here? @LysandreJik @patil-suraj @craffel @Xirider ? In general, I think it's important to find a good name here for the new checkpoints? What do you think would be a good name? `google/t5-base-efficient-{config}`? or `google/t5-base-scale-efficient-{config}`? Better ideas? <|||||>Love that sentence from the paper `"since there is a lack of representation of transformers at lower compute regions."` -> very true! Think those small checkpoints here can be very impactful<|||||>I would advocate for porting all the models, though take it with a grain of salt because I'm not volunteering to do the manual work. Regarding an automatic naming convention, if the original TF checkpoint names are somewhat sane/follow some kind of reasonable pattern, we could just do `t5-efficient-{t5 checkpoint name}`.<|||||>Having talked to @patil-suraj, it should actually be possible to fully automate porting all those models. 
In a first step, I've preprocessed the naming of each of the folder available online to come up with the following names: ```bash t5-efficient-xxl-nl4 t5-efficient-xxl t5-efficient-xl-nl12 t5-efficient-xl-nl16 t5-efficient-xl-nl28 t5-efficient-xl-nl2 t5-efficient-xl-nl4 t5-efficient-xl-nl6 t5-efficient-xl-nl8 t5-efficient-xl t5-efficient-xl-sh t5-efficient-xl-skv t5-efficient-base t5-efficient-base-dm1000 t5-efficient-base-dm256 t5-efficient-base-dm2000 t5-efficient-base-dm512 t5-efficient-base-dml2 t5-efficient-base-dml4 t5-efficient-base-dml6 t5-efficient-base-dml8 t5-efficient-base-el16 t5-efficient-base-el2 t5-efficient-base-el4 t5-efficient-base-el6 t5-efficient-base-el8 t5-efficient-base-ff12000 t5-efficient-base-ff1000 t5-efficient-base-ff2000 t5-efficient-base-ff6000 t5-efficient-base-ff9000 t5-efficient-base-nh16 t5-efficient-base-nh24 t5-efficient-base-nh32 t5-efficient-base-nh8 t5-efficient-base-kv128 t5-efficient-base-kv16 t5-efficient-base-kv256 t5-efficient-base-kv32 t5-efficient-base-l16 t5-efficient-base-l24 t5-efficient-base-l2 t5-efficient-base-l32 t5-efficient-base-l36 t5-efficient-base-l40 t5-efficient-base-l48 t5-efficient-base-l4 t5-efficient-base-l8 t5-efficient-large t5-efficient-base t5-efficient-large-dm128 t5-efficient-large-dm256 t5-efficient-large-dm2000 t5-efficient-large-dm512 t5-efficient-large-dm768 t5-efficient-large-dl12 t5-efficient-large-dl16 t5-efficient-large-dl2 t5-efficient-large-dl32 t5-efficient-large-dl4 t5-efficient-large-dl6 t5-efficient-large-dl8 t5-efficient-large-el12 t5-efficient-large-el2 t5-efficient-large-el4 t5-efficient-large-el6 t5-efficient-large-el8 t5-efficient-large-nh12 t5-efficient-large-nh24 t5-efficient-large-nh2 t5-efficient-large-nh32 t5-efficient-large-nh4 t5-efficient-large-nh8-nl16 t5-efficient-large-nh8-nl32 t5-efficient-large-nh8 t5-efficient-large-kv128 t5-efficient-large-kv16 t5-efficient-large-kv256 t5-efficient-large-kv32 t5-efficient-large-nl10 t5-efficient-large-nl12 t5-efficient-large-nl16 t5-efficient-large-nl20 t5-efficient-large-nl2 t5-efficient-large-nl32 t5-efficient-large-nl36 t5-efficient-large-nl4 t5-efficient-large-nl8 t5-efficient-large-sh t5-efficient-large-skv t5-efficient-mini-nl12 t5-efficient-mini-nl24 t5-efficient-mini-nl6 t5-efficient-mini-nl8 t5-efficient-mini t5-efficient-base-sh t5-efficient-base-skv t5-efficient-small-dm128 t5-efficient-small-dm1000 t5-efficient-small-dm256 t5-efficient-small-dm2000 t5-efficient-small-dm768 t5-efficient-small-dl12 t5-efficient-small-dl16 t5-efficient-small-dl2 t5-efficient-small-dl4 t5-efficient-small-dl8 t5-efficient-small-el12 t5-efficient-small-el16 t5-efficient-small-el16-dl1 t5-efficient-small-el16-dl2 t5-efficient-small-el16-dl4 t5-efficient-small-el16-dl8 t5-efficient-small-el2 t5-efficient-small-el32 t5-efficient-small-el48 t5-efficient-small-el4 t5-efficient-small-el64 t5-efficient-small-el8 t5-efficient-small-el8-dl1 t5-efficient-small-el8-dl2 t5-efficient-small-el8-dl4 t5-efficient-small-ff12000 t5-efficient-small-ff1000 t5-efficient-small-ff3000 t5-efficient-small-ff6000 t5-efficient-small-ff9000 t5-efficient-small-kv128 t5-efficient-small-kv16 t5-efficient-small-kv256 t5-efficient-small-kv32 t5-efficient-small-nl16 t5-efficient-small-nl20 t5-efficient-small-nl22 t5-efficient-small-nl24 t5-efficient-small-nl2 t5-efficient-small-nl32 t5-efficient-small-nl36 t5-efficient-small-nl40 t5-efficient-small-nl48 t5-efficient-small-nl4 t5-efficient-small-nl8 t5-efficient-small-sh t5-efficient-small-shkv t5-efficient-small t5-efficient-tiny-dl2 
t5-efficient-tiny-dl6 t5-efficient-tiny-dl8 t5-efficient-tiny-el12 t5-efficient-tiny-el2 t5-efficient-tiny-el6 t5-efficient-tiny-el8 t5-efficient-tiny-ff12000 t5-efficient-tiny-ff2000 t5-efficient-tiny-ff3000 t5-efficient-tiny-ff6000 t5-efficient-tiny-ff9000 t5-efficient-tiny-nh16 t5-efficient-tiny-nh1 t5-efficient-tiny-nh32 t5-efficient-tiny-nh8 t5-efficient-tiny-nl12 t5-efficient-tiny-nl16 t5-efficient-tiny-nl24 t5-efficient-tiny-nl2 t5-efficient-tiny-nl32 t5-efficient-tiny-nl6 t5-efficient-tiny-nl8 t5-efficient-tiny t5-efficient-tiny-sh t5-efficient-tiny-skv ``` -> think this is pretty clear with `t5-efficient-{default_size}-{change_to_default_size_1}(-{change_to_default_size_2}), where as the default sizes codenames follow those of Table 2 of the paper. <|||||>Automatically parsed and uploaded all configs now here: https://huggingface.co/NewT5 . Will now look into automatically uploading the weights.<|||||>Have 157/169 now correctly converted and uploaded: https://huggingface.co/models?other=t5-new-success . Something seems to be wrong with the "shared heads" **SH** checkpoints in the conversion. @craffel - do you know what exactly shared heads means? Does it mean that each transformer block uses the same head weights or that each head within a transformer block is shared with each other?<|||||>Hm, I'm not sure, but that would be my guess. If you point me to the operative config for one of the shared heads checkpoints, I can try to hunt down what it means in the original codebase.<|||||>Sorry for the off-topic, but to have available such a conversion script would be awesome. I'm struggling with the conversion myself.<|||||>Uploaded some of my scripts here: https://github.com/patrickvonplaten/t5-mtf-to-hf-converter . The repo is not very clean and heavily lacks comments / explanations. Could you try to see whether those scripts help you though in any way?<|||||>I'll test them and will report back :) Thanks! 🙏🏼 <|||||>Okey, 159/169 checkpoints are now correct. Given that the others might not be that useful/practical for now: https://github.com/google-research/google-research/issues/986#issuecomment-1035051145 , I'll go ahead with 159 of 169 checkpoints now. So will convert them to TF and Flax, write a nice README and then we can publish I think :-)<|||||>Hi @patrickvonplaten , I've trained a 32EL model with the T5 Mesh codebase (model is actually training, so I'm using a checkpoint). 
Now I wanted to convert the TF checkpoint into PyTorch, but the following error is thrown: ```bash Initialize PyTorch weight ['decoder', 'block_000', 'layer_001', 'rms_norm', 'scale'] Skipping decoder/block_000/layer_001/rms_norm/scale_slot_v Skipping decoder/block_000/layer_002/DenseReluDense/wi/kernel Traceback (most recent call last): File "/home/stefan/model-hub/transformers/src/transformers/models/t5/convert_t5_original_tf_checkpoint_to_pytorch.py", line 59, in <module> convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, args.config_file, args.pytorch_dump_path) File "/home/stefan/model-hub/transformers/src/transformers/models/t5/convert_t5_original_tf_checkpoint_to_pytorch.py", line 34, in convert_tf_checkpoint_to_pytorch load_tf_weights_in_t5(model, config, tf_checkpoint_path) File "/home/stefan/model-hub/transformers/src/transformers/models/t5/modeling_t5.py", line 122, in load_tf_weights_in_t5 pointer = getattr(pointer, "weight") File "/home/stefan/.venvs/dev/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1177, in __getattr__ raise AttributeError("'{}' object has no attribute '{}'".format( AttributeError: 'T5DenseGatedGeluDense' object has no attribute 'weight' ``` I used the created `config.json` from your [create_config](https://github.com/patrickvonplaten/t5-mtf-to-hf-converter/blob/master/create_config.py) script (and used `google/t5-v1_1-base`, because I couldn't find the template json). Conversion is done with latest Transformers `master`, do you have any hint what's missing here :thinking: Many thanks!<|||||>Hey @stefan-it, You need to base your config on `t5-base` (the original T5-model) instead of `goolge/t5-v1_1-base` I believe. Could you try this instead?<|||||>You should be able to use this script: https://github.com/patrickvonplaten/t5-mtf-to-hf-converter/blob/master/create_config.py where you load from the `t5-base` config :-)<|||||>Hi @patrickvonplaten , thanks, it is working with `t5-base`. Another question: why is the vocab size in the config set to 32128, whereas the spm model has a size of 32000? Is it because of the integrated tasks (such as translation), because my T5 model demands 32000 in the config (otherwise is throws an error).<|||||>Maybe related: according to this comment, it was rounded to a multiple of 128 for TPU efficiency. https://github.com/google-research/t5x/blob/main/t5x/examples/scalable_t5/t5_1_1/base.gin#L45<|||||>100 IDs were added for sentinel tokens (for the pre-training objective), and then as @versae said it was rounded to nearest the 128 for TPU efficiency.<|||||>Hi @versae and @craffel , thanks for that hint! Do you accidentally know how to add these sentinel ids in `t5_mesh_transformer` command or in the gin file (I'm not using T5X) :thinking: Can this be configured in the `seqio` Task :thinking: <|||||>It's `vocabularies.Vocabulary.extra_ids = 100` in gin.<|||||>Hi @craffel, thanks for that! I could solve the problem by using `seqio.SentencePieceVocabulary(SPM_VOCAB, extra_ids=100)` in the task description. I've checked the converted checkpoint and it now has the desired 32128 shape :+1: But I have another question regarding to the Scaling Efficiently paper: it seems that `c4_v220_unsupervised` is used as mixture/task description is used in the GIN files, but I can't find this recipe in T5 (or T5X) repository. 
Do you accidentally know how it could be structured or do you know a comparable task from T5 library, such as: ``` # ================================ Wikipedia =================================== TaskRegistry.add( "wikipedia_20190301.en_v003_unsupervised", source=seqio.TfdsDataSource(tfds_name="wikipedia/20190301.en:1.0.0"), preprocessors=[ functools.partial( preprocessors.rekey, key_map={ "inputs": None, "targets": "text" }), seqio.preprocessors.tokenize, seqio.CacheDatasetPlaceholder(), preprocessors.unsupervised, seqio.preprocessors.append_eos_after_trim, ], output_features=DEFAULT_OUTPUT_FEATURES, metric_fns=[]) ``` I'm highly interested in the `preprocessors` part of it. Many thanks! (/cc @vanzytay)<|||||>It's here: https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/data/tasks.py#L106 But it needs further gin configuration if you actually want to use it as a pre-training task. If you want to use the standard T5 pre-training task, use https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/data/tasks.py#L46<|||||>How does t5-efficient-xxl-nl4 perform to say medium sized models? While the xxl model file is 45 gb, this one is smaller than 4gb. Googling anything for this model didn't help me. 3 results in total. Not spoken of in the paper as far as I can see. No performance comparison I could find. 4 transformer blocks instead of 24 sounds like a quite radical change, so properly a performance penalty, but then again, they shared the model, is it any good? <|||||>Gently pinging the original author @vanzytay here
transformers
15,466
closed
Preprocess/transform logits before caching them for computing metrics.
# 🚀 Feature request I think it'd be nice to have a simple way to preprocess the logits before caching them for computing metrics. ## Motivation When the `Trainer` `compute_metrics` are set, during evaluation the logits are accumulated (some in GPU memory, for `args.eval_accumulation_steps` steps; all in RAM). For some models, it will almost certainly lead to out of memory problems. For instance, for a language model, this means storing in RAM a tensor of size [eval ds size, sequence length, vocab size]. In many cases, what is needed to compute metrics is just some reduction of the logits. For example: `logits.argmax(dim=-1)`. I know I can subclass `Trainer` for this and redefine `evaluation_loop`, just wanted to know if you'd consider a more generic solution that prevents everyone that needs the feature from duplicating the rest of the code of `evaluation_loop`. I've seen more people running into the same issue. For instance: https://github.com/huggingface/transformers/issues/8476 https://discuss.huggingface.co/t/cuda-out-of-memory-when-using-trainer-with-compute-metrics/2941 https://discuss.huggingface.co/t/cuda-out-of-memory-during-evaluation-but-training-is-fine/1783/4 ## Your contribution I was thinking about something like adding a `preprocess_logits_for_metrics` parameter to `TrainingArguments` of type Callable If you don't set the parameter, the default is None and everything would work as always. If you set it, the logits are passed to `args.preprocess_logits_for_metrics` and its output is what's cached. The main modification would be this in `Trainer.evaluation_loop`: ``` # Update containers on host ... if logits is not None: logits = self._pad_across_processes(logits) logits = self._nested_gather(logits) if self.args.preprocess_logits_for_metrics is not None: logits = self.args.preprocess_logits_for_metrics(logits) preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100) ``` Do you think it's worth it? If you do, I can submit a PR. I tag @sgugger because I think he's worked quite a lot with the training loop, but I'm open to receive feedback from anyone.
02-01-2022 19:05:11
02-01-2022 19:05:11
I think it would be a valuable addition, as you describe the problematic situation very well, when someone wants to compute perplexity with a language model having a very large vocab size, for instance. The `TrainingArguments` can't have a new argument of type callable, but I think we could have a new argument in the init `preprocess_logits_for_metrics`. I'm happy to review a PR for this, and if you could show inside how to use it in the examples `run_clm` or `run_mlm` to get the perplexity at each evaluation without getting OOM, that would be a very compelling argument for this new API! cc @LysandreJik for info.
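For anyone landing here later, a rough sketch of how the API that was eventually merged is meant to be used; the model, training arguments and datasets are assumed to be defined elsewhere.

```python
from transformers import Trainer

def preprocess_logits_for_metrics(logits, labels):
    # Reduce [batch, seq_len, vocab] floats to [batch, seq_len] ints before they are
    # accumulated on the host, which is what avoids the OOM described above.
    return logits.argmax(dim=-1)

def compute_metrics(eval_pred):
    preds, labels = eval_pred
    mask = labels != -100
    return {"accuracy": float((preds[mask] == labels[mask]).mean())}

trainer = Trainer(
    model=model,                      # assumed: a language model defined elsewhere
    args=training_args,               # assumed: TrainingArguments with evaluation enabled
    train_dataset=train_dataset,      # assumed
    eval_dataset=eval_dataset,        # assumed
    compute_metrics=compute_metrics,
    preprocess_logits_for_metrics=preprocess_logits_for_metrics,
)
```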
transformers
15,465
closed
[Wav2Vec2ProcessorWithLM] add alpha & beta to batch decode & decode
# What does this PR do? Improves the way `alpha` and `beta` weights can be set for LM-boosted decoding. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
02-01-2022 16:54:17
02-01-2022 16:54:17
_The documentation is not available anymore as the PR was closed or merged._
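A short sketch of the resulting API (requires `pyctcdecode` and `kenlm`; the checkpoint is one of the public LM-boosted ones and the logits are random placeholders of the right vocabulary size):

```python
import numpy as np
from transformers import Wav2Vec2ProcessorWithLM

processor = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")

vocab_size = len(processor.tokenizer)
logits = np.random.randn(2, 200, vocab_size).astype(np.float32)  # placeholder CTC logits

# alpha weights the language model, beta is the word-insertion bonus
output = processor.batch_decode(logits, alpha=0.5, beta=1.5)
print(output.text)
```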
transformers
15,464
closed
Convert T5x models to PyTorch
# 🚀 Feature request Googles new Flax implementation of T5, called [T5x](https://github.com/google-research/t5x) is creating models/checkpoints in a custom format. The config is stored in .gin files, and the current T5 conversion scripts like this [byT5 conversion script](https://github.com/huggingface/transformers/blob/master/src/transformers/models/byt5/convert_byt5_original_tf_checkpoint_to_pytorch.py) is not working. Would it be possible to create a script for converting the T5x checkpoints/models? @patrickvonplaten @anton-l
02-01-2022 16:19:04
02-01-2022 16:19:04
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Think @stefan-it has a working script :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@stefan-it, can you share that script? <|||||>Hi @dirkgr , the script was merged into current master of Transformers with this #16853 and is available here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py :)<|||||>@stefan-it , hey could you please tell me how exactly does the conversion script works. Actually i tired run the conversion script and I seems like the config file in t5x is in . gin format and the script expects the config file to be in .json format. Hence I was stuck from converting my t5x model to HF. Could you please show me how it's done and provide some details<|||||>Hi @StephennFernandes , could you please try to use these steps, mentioned in the corresponding PR: https://github.com/huggingface/transformers/pull/16853#issuecomment-1105004694 The config file needs to be in JSON format, yes :)<|||||>If you get any errors, please post them here, so we can try to find a solution :hugs: <|||||>@stefan-it , thanks for replying. I followed the steps as instructed in [#16853](https://github.com/huggingface/transformers/pull/16853#issuecomment-1105004694) and tried converting my pretrained t5_1_1_base model to hugginface. But i get the following error: ``` /home/stephen/anaconda3/lib/python3.9/site-packages/jax/_src/tree_util.py:188: FutureWarning: jax.tree_util.tree_multimap() is deprecated. Please use jax.tree_util.tree_map() instead as a drop-in replacement. warnings.warn('jax.tree_util.tree_multimap() is deprecated. 
Please use jax.tree_util.tree_map() ' Traceback (most recent call last): File "/home/stephen/Desktop/t5_test_run/t5x/t5x_convert_to_hf.py", line 234, in <module> convert_t5x_checkpoint_to_flax(args.t5x_checkpoint_path, args.config_name, args.flax_dump_folder_path) File "/home/stephen/Desktop/t5_test_run/t5x/t5x_convert_to_hf.py", line 27, in convert_t5x_checkpoint_to_flax t5x_model = checkpoints.load_t5x_checkpoint(t5x_checkpoint_path) File "/home/stephen/Desktop/t5_test_run/t5x/t5x/checkpoints.py", line 1674, in load_t5x_checkpoint state_dict = _run_future_tree(future_state_dict) File "/home/stephen/Desktop/t5_test_run/t5x/t5x/checkpoints.py", line 162, in _run_future_tree leaves = loop.run_until_complete(asyncio.gather(*future_leaves)) File "/home/stephen/anaconda3/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete return future.result() File "/home/stephen/Desktop/t5_test_run/t5x/t5x/checkpoint_importer.py", line 82, in _get_and_cast arr = await self._get_fn() # pytype: disable=bad-return-type File "/home/stephen/Desktop/t5_test_run/t5x/t5x/checkpoints.py", line 1502, in _read_ts t = await ts.open(tmp_ts_spec_dict, open=True) ValueError: Error opening "zarr" driver: Error reading local file "./T5_1_1_base_hindi/checkpoint_100000/state.param_states.decoder.decoder_norm.scale.v/.zarray": Invalid key: "./T5_1_1_base_hindi/checkpoint_100000/state.param_states.decoder.decoder_norm.scale.v/.zarray" ``` <|||||>Hi @StephennFernandes could you try to install: ``` pip3 install --upgrade tensorstore==0.1.13 ``` The `tensorstore` package was the reason for that `zarr driver` error message in my conversion experiments.<|||||>@stefan-it , hey i tried that but i didnt work for me, i still get the same error. I came across this issue in the t5x repo [#452](https://github.com/google-research/t5x/issues/452) i am currently using ubuntu 20.04 with linux kernel 5.13.0 <|||||>Hi @StephennFernandes , I think I have a working solution now. I installed everything in a fresh new virtual environment, but I got bazel errors (hopefully Google will stop using bazel someday...) when trying to build `tensorstore==0.1.13`. What I did then: ```bash pip3 install --upgrade tensorstore ``` to install latest version of `tensorstore`. The non-working conversion script call looks like: ```bash python3 convert_t5x_checkpoint_to_flax.py --t5x_checkpoint_path ./t5_1_1_small --config_name ./config_1_1.json --flax_dump_folder_path ./t5x_1_1_exported ``` But `tensorstore` is not able to handle it. The magic trick here is to use the absolute path to the t5x checkpoint path. So instead of using `./t5_1_1_small` fetch the absolute path via: ```bash realpath ./t5_1_1_small ``` this returns something like: ```bash /home/stefan/transformers/src/transformers/models/t5/t5_1_1_small ``` then use this path for the `t5x_checkpoint_path` argument. I hope this works! It worked under my local setup. (Oh, and in case you get some strange `torch.fx` import errors, just run `pip3 install --upgrade torch --extra-index-url https://download.pytorch.org/whl/cpu` to fix them)<|||||>@stefan-it , it worked 🎉 Thanks a ton for all the help 🙏 **Actually i still have a couple of other questions:** - The current conversion only works on flax models, supposed I'd have to finetune the model in Huggingface using Pytorch. Is there a way to convert HF flax models to Pytorch internally ? Or would I have to first convert t5x model to Pytorch and then convert it to HF ? 
- Also I am a bit confused about the tokenizer, did this conversion script also convert the tokenizer ? ( I don't think the sentencepiece .model file existed in the model dir ) If not, how should I get going in converting the tokenizer to Huggingface ? <|||||>@StephennFernandes Here is a link to a convenience script that I am using for creating the PyTorch and TF models. https://github.com/peregilk/north-t5/blob/main/create_pytorch_tf_and_vocab.py Do not expect it to run directly though. It was really not meant for the public. However, it should give you the basic idea about how to load the models and then save them in the correct format.<|||||>@peregilk , thanks for sharing. actually the link isnt available, apparently i believe its private. could you please check and confirm. <|||||>@StephennFernandes Sorry about that. Now it is public. As a side note, especially to @patrickvonplaten: Wouldnt it be nice to put a wrapper around the great script that @stefan-it have made. A script that also loads the models in HuggingFace and saves them in PyTorch and TF format, as well as creates the necessary tokenizers. Maybe it can even copy over the training-logs that are saved in the t5x-checkpoint directory. I have done this manually on these models: https://huggingface.co/north/t5_large_NCC. As you see, the tensorboard logs from t5x integrates nicely with the Training Metrics in HF.<|||||>I think this would indeed be a great idea! Maybe we can open a `T5X` folder under https://github.com/huggingface/transformers/tree/main/examples/research_projects with lots of functionality for conversion ?<|||||>@stefan-it @patrickvonplaten hey were you able to convert the scalable_t5 models ? actualy i have pretrained a mt5-base `t5x/examples/scalable_t5/mt5/base.gin` using t5x But i am unable to convert it to huggingface. i tried several huggingface config.json files from the t5-efficient-base but none-of them worked. **the following is my error when converting:** ``` convert_t5x_checkpoint_to_flax(args.t5x_checkpoint_path, args.config_name, args.flax_dump_folder_path) File "/home/stephen/Desktop/mt5_finetuning_preliminary_tests/t5x_to_hf.py", line 12, in convert_t5x_checkpoint_to_flax split_mlp_wi = "wi_0" in t5x_model["target"]["encoder"]["layers_0"]["mlp"] KeyError: 'layers_0' ```<|||||>Hi @StephennFernandes , really interesting, I haven't tried it with the Scaled T5X models yet (Those efficient T5 models that can be found on the Model Hub are converted from the TensorFlow checkpoints, because they are trained with the official T5 implementation and not with T5X). Please give me some time to investigate that :)<|||||>Does this script support the transformation of XL or XXL models?<|||||>@joytianya I have been using this script a lot for converting both XL and XXL models. Works fine.<|||||>@peregilk thank your answer. I tried it and generated the following files in /content/flan_t5x_xl_exported, and then I used this below code (T5ForConditionalGeneration) to load the dir and happen error(Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found). How do I solve it? ```python model = T5ForConditionalGeneration.from_pretrained("/content/flan_t5x_xl_exported", from_flax=True) # Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in # directory /content/flan_t5x_xl_exported. 
```` /content/flan_t5x_xl_exported: " *model-00001-of-00002.msgpack *model-00002-of-00002.msgpack *model.msgpack.index.json config.json "<|||||>@stefan-it @peregilk Does the script support T5X converted into pytorch? if not, Is there any other solution?<|||||>@joytianya Try open the files here: https://huggingface.co/north/t5_xl_NCC. All these are converted using the script written by @stefan-it. Note that the large PyTorch files are split into multiple smaller files. <|||||>@peregilk Thank you for your reply I want to convert my finetuned model into pt, In addition, when I use scripts to convert t5x to flax, xl and xxl are divided into multiple files. Can they not be divided into multiple files or merge them to a single file?<|||||>@joytianya. I do not think this splitting really is related to the conversion script that @stefan-it wrote. Transformers does this automatically with large files. <|||||>ok, thank you
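For readers landing here later, a rough, hedged sketch of the Flax → PyTorch/TensorFlow step discussed above. The local paths are placeholders; the weight-conversion script does not produce the SentencePiece tokenizer, which has to be saved separately, and sharded Flax checkpoints such as `model-00001-of-00002.msgpack` may need extra handling, as noted in the thread.

```python
# Sketch under the assumptions above, not a definitive recipe.
from transformers import T5ForConditionalGeneration, TFT5ForConditionalGeneration

# Flax msgpack -> PyTorch weights (requires both flax and torch to be installed)
pt_model = T5ForConditionalGeneration.from_pretrained("./t5x_1_1_exported", from_flax=True)
pt_model.save_pretrained("./t5x_1_1_exported")

# PyTorch weights -> TensorFlow weights
tf_model = TFT5ForConditionalGeneration.from_pretrained("./t5x_1_1_exported", from_pt=True)
tf_model.save_pretrained("./t5x_1_1_exported")
```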
transformers
15,463
closed
`Trainer.push_to_hub` always tries to push to the Hub
# What does this PR do? As pointed out in #15431, when a user tries to call `Trainer.push_to_hub` with the corresponding training argument set to `False`, the call fails. However, the intent of the user calling that method is clear, so we should try to honor it. This PR makes the call succeed, as long as it's possible to create the repo from the output dir (which will fail if the repo in which the user wants to push to the Hub already exists and output dir is not a local clone).
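For illustration, a minimal sketch of the behavior this PR enables, assuming a `model` and `train_dataset` have already been defined; the output directory name is a placeholder.

```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(output_dir="my-finetuned-model", push_to_hub=False)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()

# Even though `push_to_hub=False` was set above, the explicit call makes the
# user's intent clear, so it now creates the repo from `output_dir` and pushes.
trainer.push_to_hub(commit_message="End of training")
```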
02-01-2022 16:00:56
02-01-2022 16:00:56
_The documentation is not available anymore as the PR was closed or merged._
transformers
15,462
closed
Update README.md
fix typo # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger @patil-suraj
02-01-2022 14:23:34
02-01-2022 14:23:34
_The documentation is not available anymore as the PR was closed or merged._
transformers
15,461
closed
[BartTokenizer] remove inheritance on RobertaTokenizer
# What does this PR do? Continue making tokenizers independent. This PR refactors the `BartTokenizer` to remove dependency on `RobertaTokenizer`
02-01-2022 13:22:05
02-01-2022 13:22:05
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for tackling this! Should we maybe also add some `# Copied from ...` statements here to make the code more robust to changes?
transformers
15,460
closed
Inspect inner layers of Transformer models as in TensorFlow/Keras
In `keras` (and `tensorflow`), we can inspect a model as follows: ``` resnet = tensorflow.keras.applications.ResNet152V2( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, classifier_activation="softmax", ) resnet.summary() ``` This will print a handy summary of the model's inner layers: ``` Model: "resnet152v2" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 224, 224, 3) 0 __________________________________________________________________________________________________ conv1_pad (ZeroPadding2D) (None, 230, 230, 3) 0 input_1[0][0] __________________________________________________________________________________________________ conv1_conv (Conv2D) (None, 112, 112, 64) 9472 conv1_pad[0][0] __________________________________________________________________________________________________ pool1_pad (ZeroPadding2D) (None, 114, 114, 64) 0 conv1_conv[0][0] ``` When instantiating a Transformer model, such as in the following snippet ``` from transformers import TFAutoModelForSequenceClassification checkpoint = "bert-base-uncased" model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2) ``` the same operation (`model.summary()`) prints reduced information: ``` Layer (type) Output Shape Param # ================================================================= bert (TFBertMainLayer) multiple 109482240 _________________________________________________________________ dropout_75 (Dropout) multiple 0 _________________________________________________________________ classifier (Dense) multiple 1538 ================================================================= ``` I'd like to see "inside" the `bert` block and work with the model as with any `keras` model. This means iterating over the layers, getting the weights, etc. I am aware that we can approach this by saying, e.g., ``` for w in model.weights: print(w.name, w.shape) ``` However, is there any way to see the inner layers (the layers inside the `bert` block above) or, in general, use the Transformer model like a regular `TensorFlow` model? An example of what I want to "see" is in [this](https://keras.io/examples/nlp/text_classification_with_transformer/) pure-TensorFlow example.
02-01-2022 12:52:48
02-01-2022 12:52:48
cc @Rocketknight1 @gante <|||||>Assigning to me, I can see if there is good solution for this<|||||>This is tricky, I think. We spoke briefly about this with fchollet - I don't think there's a good way to get this without refactoring our models from subclassing-style to functional-style, but I might be wrong!<|||||>Yup, was reaching the same conclusion. Enabling a clean `model.summary()` would imply a massive refactor in all models, such that everything is under the `Functional` model framework 👀 @phrasenmaeher, we're sorry that we are not much help, but it would imply a massive change for a minor benefit. I'm closing the issue, but please, if you think there is another way to solve the question, feel free to reopen :)
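For anyone with the same question, a small workaround sketch: the nested layers are still reachable as attributes or submodules even though `model.summary()` collapses them. The attribute path below is BERT-specific; other architectures expose their blocks under different names.

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# For BERT, the transformer blocks are a plain Python list on the main layer.
for i, block in enumerate(model.bert.encoder.layer):
    n_params = sum(int(tf.size(w)) for w in block.weights)
    print(f"block {i}: {block.name} ({n_params} parameters)")

# Keras/tf.Module can also walk every nested layer generically.
inner_layers = [m for m in model.submodules if isinstance(m, tf.keras.layers.Layer)]
print(f"{len(inner_layers)} nested layers found")
```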
transformers
15,459
closed
Masking in T5Attention
@patil-suraj @patrickvonplaten Hello! I was wondering why you have added the mask to `position_bias` in T5Attention? See https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/models/t5/modeling_t5.py#L510
02-01-2022 11:52:33
02-01-2022 11:52:33
Hey, I think this simply corresponds to how it is done in the original implementation. It essentially means that we don't add any **positional** information for tokens that are masked.<|||||>OK, understood, thanks!
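A toy illustration of the answer above (not the library code itself): the "mask" added to `position_bias` is the extended attention mask, i.e. 0 for visible positions and a large negative value for padded ones, so after the softmax the masked positions get ~0 attention weight and receive no positional information.

```python
import torch

scores = torch.randn(1, 1, 4, 4)          # (batch, heads, query_len, key_len)
position_bias = torch.randn(1, 1, 4, 4)   # relative position bias
mask = torch.tensor([[[[0.0, 0.0, 0.0, -1e9]]]])  # last key position is padding

position_bias = position_bias + mask      # what the questioned line does
attn_weights = torch.softmax(scores + position_bias, dim=-1)
print(attn_weights[..., -1])              # ~0 for every query: the padded key is ignored
```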
transformers
15,458
closed
Support dynamic input size and shape for the TrOCR input image
# 🚀 Feature request Support dynamic input size and shape for the TrOCR input image. ## Motivation TrOCR has an input image shape of 384x384, but can recognise only a single line. So it does not utilize resources effectively.
02-01-2022 11:30:27
02-01-2022 11:30:27
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
15,457
closed
apply torch int div to layoutlmv2
# What does this PR do? Linked to #14853 (now closed - bad rebase) @LysandreJik
02-01-2022 11:10:25
02-01-2022 11:10:25
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15457). All of your documentation changes will be reflected on that endpoint.<|||||>REALM tests seem to be failing - doesn't seem like they should be running ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@ManuelFay can you rebase with master such that we can merge your PR?<|||||>Done, and I changed the path to the custom divide function (compatible with older torch versions). @NielsRogge <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @ManuelFay, apologies, for some reason this is still not merged. Can you rebase your PR with the main branch such that we can merge it? Apologies again for asking you this a second time. Thanks!<|||||>No problem @NielsRogge ! Here you go, rebased ;)
transformers
15,456
closed
fix set truncation attribute in `__init__` of `PreTrainedTokenizerBase`
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR fixes #15440. @LSinev I've putted you as co-author of this change. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Would love to have your review @Narsil and @LysandreJik or @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
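A minimal usage sketch of what this fix enables, assuming a tokenizer backend that supports left-side truncation:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased", truncation_side="left")
assert tok.truncation_side == "left"

# With truncation_side="left", the beginning of an over-long input is dropped.
enc = tok("this is a fairly long example sentence", truncation=True, max_length=4)
print(tok.convert_ids_to_tokens(enc["input_ids"]))
```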
02-01-2022 10:28:56
02-01-2022 10:28:56
_The documentation is not available anymore as the PR was closed or merged._
transformers
15,455
closed
Fixing overriding from_pretrained with `truncation_side`.
# What does this PR do? Make sure users can override `truncation_side` on load. Fixes #15440 @sgugger <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
02-01-2022 10:26:44
02-01-2022 10:26:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>Closed in favor of https://github.com/huggingface/transformers/pull/15456
transformers
15,454
closed
replace assert with exception for padding_side arg in `PreTrainedTokenizerBase` `__init__`
# What does this PR do? This PR propose to replace an assert inside `PreTrainedTokenizerBase`'s `__init__` method with a ValueError exception. I took also this opportunity to test this behavior as it wasn't tested in the existing test. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Would love to have your review @LysandreJik and @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
02-01-2022 10:15:34
02-01-2022 10:15:34
_The documentation is not available anymore as the PR was closed or merged._<|||||>> I think this is included in the other PR I just reviewed no? @sgugger, indeed part of this PR was in the PR you just reviewed, but I misunderstood Narsil's comment, so I'll remove it from the other PR and merge this PR first. TL;DR, now the other PR is only about truncation and this one only about padding. :slightly_smiling_face:
transformers
15,453
closed
fix from_vision_text_pretrained doc example
# What does this PR do? Fix checkpoint in `FlaxVisionTextDualEncoderModel`'s doc example. @patil-suraj
02-01-2022 08:06:21
02-01-2022 08:06:21
_The documentation is not available anymore as the PR was closed or merged._
transformers
15,452
closed
Unseen decoded words during inference with Wav2Vec2ProcessorWithLM
## Environment info - `transformers` version: 4.15.0 - Platform: Linux-5.8.0-36-generic-x86_64-with-glibc2.29 - Python version: 3.8.5 - PyTorch version (GPU?): 1.8.1+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help - Wav2Vec2: @patrickvonplaten, @anton-l ## Information I have been playing with Wav2Vec2ProcessorWithLM mainly with the help of your [helpful blog](https://huggingface.co/blog/wav2vec2-with-ngram) @patrickvonplaten In the output of Wav2Vec2ProcessorWithLM, I observed words in the decoded string that were not part of the training text corpus. I am not sure if this is a bug or a feature (OOV detection)? I didn't find any documentation regarding this. So can you verify? **Reference**: am i speaking to anushka bohra **Predicted**: am i speaking to anushka **bhra** The word **bhra** doesn't exist in the arpa LM used.
02-01-2022 06:30:15
02-01-2022 06:30:15
Hey @sharathadavanne, Good question and good job investigating the issue so far. I assume that you used the LM in combination with a trained acoustic model via `Wav2Vec2ProcessorWithLM`, meaning that you used the `pyctcdecode` library: https://github.com/kensho-technologies/pyctcdecode/tree/main/pyctcdecode . In a nutshell, what is happening here is that the predicted word (or letter) depends on both the language model (LM) **and** the acoustic model (AM). It's essentially a weighted sum of both the LM and AM's predictions. Note that the AM is a CTC model which predicts the next letter. The LM is an n-gram which predicts the next word. `pyctcdecode` takes this into account under the hood and applies a beam search algorithm over the possible transcribed output letters while taking the LM into account. Now, if the AM is not very performant, or the audio really does sound very much like `bhra`, it might give a very high probability to this sequence of letters. The LM will give a probability of close to 0 for this word as it doesn't exist. But since the overall probability is a sum of both AM and LM, it is quite possible that the probability of the AM is so high that `bhra` is decoded.<|||||>Got it. So if I want to make sure all my decoded words are part of my LM, I basically have to use a higher `alpha` in `build_ctcdecoder()`.<|||||>Yes I think this could be a way! BTW - it might be a good idea to actually ask the `pyctcdecode` authors directly on their repo :-)<|||||>BTW, here is also a PR that allows you to change `alpha` at every forward step: https://github.com/huggingface/transformers/pull/15465 . This should make things a bit easier.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
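A sketch of the `alpha` tweak mentioned above, at decoder-construction time. The vocabulary variable, LM path and logits are placeholders; a larger `alpha` weights the n-gram LM more heavily relative to the acoustic model, making out-of-vocabulary outputs such as "bhra" less likely.

```python
from pyctcdecode import build_ctcdecoder

decoder = build_ctcdecoder(
    labels=list(sorted_vocab_dict.keys()),  # CTC vocabulary, in id order (placeholder)
    kenlm_model_path="5gram.arpa",           # placeholder path to the arpa LM
    alpha=1.5,                               # LM weight (pyctcdecode default is ~0.5)
    beta=1.5,                                # word-insertion bonus
)

# logits: acoustic model output of shape (batch, time, vocab), e.g. from Wav2Vec2ForCTC
transcription = decoder.decode(logits.numpy()[0])
```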
transformers
15,451
open
Adding RelationExtraction head to layoutLMv2 and layoutXLM models
# 🌟 New model head addition Relation Extraction Head for LayoutLMv2/XLM ## Addition description Hey all, I've seen a bunch of different requests across huggingface issues [[0]](https://github.com/huggingface/transformers/issues/14330), unilm issues [[0]](https://github.com/microsoft/unilm/issues/286)[[1]](https://github.com/microsoft/unilm/issues/465) and on @NielsRogge Transformer Tutorials issues [[0]](https://github.com/NielsRogge/Transformers-Tutorials/issues/6)[[1]](https://github.com/NielsRogge/Transformers-Tutorials/issues/39) about adding the relation extraction head from layoutlmv2 to the huggingface library. As the model is quite difficult to use in its current state I was going to write my own layer on top, but I saw in this [issue](https://github.com/NielsRogge/Transformers-Tutorials/issues/39) that it may be a good idea to add it to transformers as a separate layoutlmv2/xlm head, and thought it would be a good way to contribute back to a library I use so much. I've gone ahead and added it under my own [branch](https://github.com/R0bk/transformers/tree/layoutlm-relation-extraction) and got it successfully working with the library. [Here](https://colab.research.google.com/drive/16wqA3oTUf7yzUKsSSZxiMf1443_ZO3wC?usp=sharing) is a colab using my branch of transformers if you want to test it yourself. Before I add tests / write more docs I just wanted to post here first to see if there's interest in potentially merging this in. If there is interest, I have a few questions that it would be helpful to get some info on to ensure that I've correctly done the integration.
02-01-2022 04:52:44
02-01-2022 04:52:44
Hi, That's great to read :) it was a bit unclear to me how to use the model at inference time (the authors only provided a [script](https://github.com/microsoft/unilm/blob/761acb436cf7c3d92091776c1f499c13b8a3eb27/layoutlmft/examples/run_xfun_re.py) for training and evaluation, i.e. when labels are available). Can you show how to use the model when you don't have any labels available? More specifically, what are the `entities` and `relations` one needs to provide at inference time? I assume that the model needs all possible entities, as well as all possible relations in order to classify them pairwise. In that case, we can add it. There was already an effort to do this (see #15173). <|||||>Hey Niels, I've added to the bottom of this notebook [here](https://colab.research.google.com/drive/16wqA3oTUf7yzUKsSSZxiMf1443_ZO3wC?usp=sharing) an inference example (please ignore the accuracy, I didn't spend much time finetuning). For running the inference we just require an empty relations dict as we calculate what all possible relations could be based on the entity labels (the current model only links between entities with labels 1 (the key) and 2 (the value)). We do however require all the entities to be labelled with the start token index, end token index and a label so we would probably suggest to users in the docs to run LayoutLMv2ForTokenClassification first and then run this based on the results of that. I'm not really experienced enough with the library to review the previous effort but I think there may be a few things missing there. In terms of going forward would you prefer if I made a new PR from my branch or tried to modify that PR to conform?<|||||>Also just forgot to add but on the detailed form that entities and relations should be in I put it all in model input and output docstring: https://github.com/R0bk/transformers/blob/9c0e0ba9ccc0d32b795c2c0e0130931b92230292/src/transformers/models/layoutlmv2/outputs_layoutlmv2.py#L26-L74<|||||>Awesome work!! I'll have to deep dive a bit into this, but looks very nice. Let's add the head model to the library. I guess we can continue our discussion on #15173? Btw, Detectron2 now has support for torch 1.10, you can now easily install it in Colab using: `!python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu113/torch1.10/index.html ` <|||||>Ahh that's so much easier for detectron thanks for that :) . Also stoked to hear that we can integrate this. There's a few things that I thought I should mention where I'm not sure where to put them so I thought I'd just comment them here and get some advice from you. Questions: - I had to add an argument `_configuration_file` to the model init but this is only required in transformers post v16.0 (inclusive), works without in v15.0 (think this is related to edits to `kwargs` in `configuration_utils.py`). Are we meant to add this? - Using the default tokenizer and padding seems to use the default huggingface pad token `[PAD]` but this token isn't in the `microsoft/layoutxlm-base` tokenizer's vocab so doing padding results in a OOV error. The pad token `<pad>` has the id `1` so I've been setting the tokenizer to use that token. I feel like I've been missing something obvious here, have I? - I believe a lot of users would want to use the model direct on the outputs of their ForTokenization head. Unfortunately this RelationExtraction head in it's current state requires the user to select some entities on their page as a key and some other entities as a value. 
I feel as though we can make the model a lot more applicable to general users if we allow the model to not only map between keys and values but between all entities. I believe the original authors only limited the linking because in their dataset they are given all keys on a page and all values on a page, hence they only had to map between keys and values and in order to maximize accuracy they only did this mapping. For users who want to do a common task, for example identify which entities detected belong in the same table row they won't necessarily have a key which the other entities can map to. In fact in this case the users are probably better off than the original authors as they have only selected entities they are interested in where the original authors have all key value pairs highlighted. So even if we increase the big O complexity of all relation mapping from `O(keys*values) where at largest max(|keys|, |values|)=|entities|/2` to `O(entities^2)` in general this should be fine as `|entities|` should smaller. The following line could be put under a config flag and then the user could have the option of how they want the model to operate. https://github.com/R0bk/transformers/blob/d9fe818083017d49487a3a45ca99f52123d68628/src/transformers/models/layoutlmv2/modeling_layoutlmv2.py#L1431 - We could also allow the user to specify a dictionary of labels that are allowed to match with each other. Do you think something like this would be a good idea to add? I can write an example colab if that would be helpful in demonstrating what I mean. - With the work I've done I've noticed the current model with the RE head is a bit sensitive to train, collapsing in about 1/4 of the runs that I've done (on XFUN dataset) It really requires running with lr warmup (based on my testing I'd recommend linear warmup [0, 5e-5] with about 15% of total steps) otherwise it is even more sensitive, collapsing in just over 1/2 the runs. Is there somewhere we can put this as advice for users who may not have dealt with things like this before? - The data collator I have in the colab may not be immediately ovbious to write for users who want to train the model, do you ever add data collators to the library? Or do you think it would potentially be a good idea to add some code to the `LayoutLMv2FeatureExtractor` to make it a bit easier for users? - Post merge, all working and a bit of cleanup do you think it would be a good idea to do a PR to add that colab as an example to your transformer tutorials repo? - Also I noticed in #15173 a lot of the comments are already fixed in the branch I have, since I can't edit that PR is there any way I can pull those changes in? Notes: - I noticed in the official implementation (unilm one) we do the `RGB` to `BGR` swap twice, once in the dataset, once in the feature extractor. Not sure if this makes much of a difference as most documents are greyscale. But I fixed this along with using the newer tokenizer and outputting a hd image in my dataset [here](https://huggingface.co/datasets/R0bk/XFUN/tree/main) <|||||>Also one more general LayoutLMv2/XLM question based on what I saw when writing the dataset. From my understanding the current processor/ feature extractor splits on words, tokenizes and then returns a flattened list of the tokens along with the original bounding boxes duplicated for where there was multiple tokens. With character based languages I think this may cause some issues hence why the original authors did the processing differently in the XFUN dataset code. 
I believe that if we split by words most software will split the characters on their own, if we pass this result to the processor/ feature extractor then the tokenizer can't run correctly as it can't group multiple characters together into a single token id. And if we pass in a whole line at once the processor/ tokenizer will create the token ids correctly but will just duplicate the bounding box of the entire line over and over. Is my understanding correct? And if so do you think we could create a different way of using the processor/ feature extractor where you can pass in a whole line along with the bounding boxes for each character in that line and then use the offset mappings from the tokenizer to remap the bounding boxes correctly?<|||||>I'm experimenting with LayoutLMv2 and LayoutLMForRelationExtraction. I referred to https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/True_inference_with_LayoutLMv2ForTokenClassification_%2B_Gradio_demo.ipynb for entity detection/predictions using LayoutLMv2 Can someone help me how can I convert these predictions from LayoutLMv2 to entity dict ( which is input to LayoutLMv2ForRelationExtraction) { 'start': `torch.IntTensor` of shape `(num_entites)`, Each value in the list represents the id of the token (element of range(0, len(tokens)) where the entity starts 'end': `torch.IntTensor` of shape `(num_entites)`, Each value in the list represents the id of the token (element of range(0, len(tokens)) where the entity ends 'label': `torch.IntTensor` of shape `(num_entites)` Each value in the list represents the label (as an int) of the entity }<|||||>> I had to add an argument _configuration_file to the model init but this is only required in transformers post v16.0 (inclusive), works without in v15.0 (think this is related to edits to kwargs in configuration_utils.py). Are we meant to add this? This is weird, might be a bug. cc @sgugger > Using the default tokenizer and padding seems to use the default huggingface pad token [PAD] but this token isn't in the microsoft/layoutxlm-base tokenizer's vocab so doing padding results in a OOV error. The pad token <pad> has the id 1 so I've been setting the tokenizer to use that token. I feel like I've been missing something obvious here, have I? You mean using `tokenizer = LayoutXLMTokenizer.from_pretrained("microsoft/layoutxlm-base")`? Normally this tokenizer should appropriately pad sequences. Can you provide a code snippet that reproduces your issue? > I believe a lot of users would want to use the model direct on the outputs of their ForTokenization head. This is fine for me! > We could also allow the user to specify a dictionary of labels that are allowed to match with each other. Do you think something like this would be a good idea to add? I can write an example colab if that would be helpful in demonstrating what I mean. Yes this seems very useful. > With the work I've done I've noticed the current model with the RE head is a bit sensitive to train, collapsing in about 1/4 of the runs that I've done (on XFUN dataset) It really requires running with lr warmup (based on my testing I'd recommend linear warmup [0, 5e-5] with about 15% of total steps) otherwise it is even more sensitive, collapsing in just over 1/2 the runs. Is there somewhere we can put this as advice for users who may not have dealt with things like this before? Yes we usually have a tips section on each model's documentation page, e.g. 
LayoutLMv2's one can be found [here](https://huggingface.co/docs/transformers/model_doc/layoutlmv2) (right below the abstract of the paper). We can link to additional notebooks for more info. > The data collator I have in the colab may not be immediately ovbious to write for users who want to train the model, do you ever add data collators to the library? Or do you think it would potentially be a good idea to add some code to the LayoutLMv2FeatureExtractor to make it a bit easier for users? We do have data collators in the library, you can find them [here](https://github.com/huggingface/transformers/blob/master/src/transformers/data/data_collator.py). Alternatively, we can include code in the feature extractor as long as we don't break existing code of our users. Maybe the data collator could be the best option here. > Post merge, all working and a bit of cleanup do you think it would be a good idea to do a PR to add that colab as an example to your transformer tutorials repo? Yes sure, lot's of people have been asking for this, so let's add a clean notebook with additiona documentation such that people really know how the model works. Feel free to open a new PR! <|||||>> I had to add an argument _configuration_file to the model init but this is only required in transformers post v16.0 (inclusive), works without in v15.0 (think this is related to edits to kwargs in configuration_utils.py). Are we meant to add this? No this is not a public-facing argument, and it's for configurations only anyway. It's not used anywhere in the code for pretrained models, so I don't see why it should be needed. You can check every other model in the library and see for yourself that it's not been added :-) <|||||>@sgugger I can reproduce the error with `_configuration file`: ``` !pip install -q transformers from transformers import LayoutLMv2ForTokenClassification model = LayoutLMv2ForTokenClassification.from_pretrained('microsoft/layoutxlm-base', num_labels=10) ``` gives: ``` TypeError Traceback (most recent call last) [<ipython-input-5-b9f90523681c>](https://localhost:8080/#) in <module>() 1 from transformers import LayoutLMv2ForTokenClassification ----> 2 model = LayoutLMv2ForTokenClassification.from_pretrained('microsoft/layoutxlm-base', num_labels=10) [/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 1487 else: 1488 with no_init_weights(_enable=_fast_init): -> 1489 model = cls(config, *model_args, **model_kwargs) 1490 1491 if from_pt: TypeError: __init__() got an unexpected keyword argument '_configuration_file' ```<|||||>Looking into it, thanks for the repro!<|||||>The problem should be fixed on master. We'll make a patch release on Monday with the fix.<|||||>@R0bk Thank you for the great work. There were a lot of missing points I had for RE inference, now mostly clarified. But I still having difficulty to understand the 'entities' and 'relations' (such as 'start_index' and 'end_index'). Could you give an example of what they represent in a given sentence? I couldn't find a clear answer in the original paper and in other reference papers authors mention. 
You added this [docstring](https://github.com/R0bk/transformers/blob/9c0e0ba9ccc0d32b795c2c0e0130931b92230292/src/transformers/models/layoutlmv2/modeling_layoutlmv2.py#L1549), but it would be great if you exemplify those Here is the only info from the [paper](https://arxiv.org/pdf/2104.08836.pdf) that mention about RE process: `Relation Extraction: Equipped with the document D and the semantic entity label set C, relation extraction aims to predict the relation between any two predicted semantic entities. Defining R = {r0, r1, .., rm} as the semantic relation labels, we intend to find a function FRE : (D, C, R, E) → L, where L is the predicted semantic relation set: L = {(head0, tail0, r0), ...,(headk, tailk, rk)} where headi and taili are two semantic entities. In this work, we mainly focus on the key-value relation extraction.` and `Relation Extraction: Following Bekoulis et al. (2018) , we first incrementally construct the set of relation candidates by producing all possible pairs of given semantic entities. For every pair, the representation of the head/tail entity is the concatenation of the first token vector in each entity and the entity type embedding obtained with a specific type embedding layer. After respectively projected by two FFN layers, the representations of head and tail are concatenated and then fed into a bi-affine classifier. ` Thank you<|||||>> Hey Niels, > > I've added to the bottom of this notebook [here](https://colab.research.google.com/drive/16wqA3oTUf7yzUKsSSZxiMf1443_ZO3wC?usp=sharing) an inference example (please ignore the accuracy, I didn't spend much time finetuning). > > For running the inference we just require an empty relations dict as we calculate what all possible relations could be based on the entity labels (the current model only links between entities with labels 1 (the key) and 2 (the value)). > > We do however require all the entities to be labelled with the start token index, end token index and a label so we would probably suggest to users in the docs to run LayoutLMv2ForTokenClassification first and then run this based on the results of that. > > I'm not really experienced enough with the library to review the previous effort but I think there may be a few things missing there. In terms of going forward would you prefer if I made a new PR from my branch or tried to modify that PR to conform? So you mean that we need to train 2 models , one is for token classification,one uses the results of the previous model to do the relation Extraction?<|||||>> I'm experimenting with LayoutLMv2 and LayoutLMForRelationExtraction. 
> > I referred to https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/True_inference_with_LayoutLMv2ForTokenClassification_%2B_Gradio_demo.ipynb for entity detection/predictions using LayoutLMv2 > > Can someone help me how can I convert these predictions from LayoutLMv2 to entity dict ( which is input to LayoutLMv2ForRelationExtraction) { 'start': `torch.IntTensor` of shape `(num_entites)`, Each value in the list represents the id of the token (element of range(0, len(tokens)) where the entity starts 'end': `torch.IntTensor` of shape `(num_entites)`, Each value in the list represents the id of the token (element of range(0, len(tokens)) where the entity ends 'label': `torch.IntTensor` of shape `(num_entites)` Each value in the list represents the label (as an int) of the entity } Did you get the answer for your question?<|||||>> So you mean that we need to train 2 models , one is for token classification,one uses the results of the previous model to do the relation Extraction? I'm pretty sure the answer to this question is yes ;) <|||||>> > I'm experimenting with LayoutLMv2 and LayoutLMForRelationExtraction. > > I referred to https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/True_inference_with_LayoutLMv2ForTokenClassification_%2B_Gradio_demo.ipynb for entity detection/predictions using LayoutLMv2 > > Can someone help me how can I convert these predictions from LayoutLMv2 to entity dict ( which is input to LayoutLMv2ForRelationExtraction) { 'start': `torch.IntTensor` of shape `(num_entites)`, Each value in the list represents the id of the token (element of range(0, len(tokens)) where the entity starts 'end': `torch.IntTensor` of shape `(num_entites)`, Each value in the list represents the id of the token (element of range(0, len(tokens)) where the entity ends 'label': `torch.IntTensor` of shape `(num_entites)` Each value in the list represents the label (as an int) of the entity } > > Did you get the answer for your question? Not yet @NielsRogge, Can you please help here<|||||>> hi, I saw your amazing work https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/True_inference_with_LayoutLMv2ForTokenClassification_%2B_Gradio_demo.ipynb#scrollTo=AttFR_dMNVEL I was just thinking can we take these ques ans as key-value pair in a dictionary? Eagerly waiting for your response. Thanks.<|||||>I've implemented @NielsRogge's comments in #15173 in my own [fork](https://github.com/woflowinc/transformers/tree/add-layoutlmv2-re). I'm happy to open a PR, or to let someone else take it from here.<|||||>@quasimik Great work! Could you provide a step by step of how we use your new class `LayoutLmv2RelationExtractionDecoder`? [I see you have added to this part here](https://github.com/woflowinc/transformers/blob/2a8b0da433f8cbe62809ab13da7197b5c419429d/src/transformers/models/layoutlmv2/modeling_layoutlmv2.py#L558)<|||||>my aim is also to predict key-value pairs according to colab notebbok ,we have to train both token classification and entity detection model first,and then use that ouput as input in model = LayoutLMv2ForRelationExtraction.from_pretrained('microsoft/layoutxlm-base') am i right<|||||>Hello guys, any update on this new component?<|||||>Hi @R0bk thanks for the amazing work! I was able to train a RE component with custom data using your fork and the collab notebook that you provided and the results looks very promising! 
Though at the moment I'm just able to train the model with entities of types 1 & 2, if I set other types of entities inside the "label" field of the "entities" key lists, I got an error. I tried to comment the line that you suggested: https://github.com/R0bk/transformers/blob/d9fe818083017d49487a3a45ca99f52123d68628/src/transformers/models/layoutlmv2/modeling_layoutlmv2.py#L1431 but it didn't work. Can you please point me on some direction on this? Kind regards.<|||||>hi, I'm having trouble understanding how to use this... can someone guide me? I have an image of a random invoice, how do I get the key-value pairs? In other words how do I use this notebook step by step? <|||||>Hello, Thanks a lot for sharing your work with us :) . On my side, I do not see how we get the ids of tokens where entities start/end for the inference part? When running LayoutLMForTokenClassification before RE, we only get the label of every token in the input text image. Could you please share more details on this part? Thanks! <|||||>Hi @R0bk thanks for this work, this helped me train on my data for different usecases and get better results until recently where I happened to update the transformers module in my environment by mistake and then getting again back to your version is giving me **RuntimeError: CUDA out of memory.** even if my batch_size is 1. For the same data, I was able to train it for RE before. Not sure how to fix the problem tried creating a fresh environment still the problem persists. Environment details: python - 3.7.5 pytorch - 1.8.1+cu111 transformers - 4.17.0.dev0 detectron2 - 0.6 Kindly suggest what could be the problem or possibly if I've missed something in the new environment Thanks!<|||||>Hi @R0bk ,@NielsRogge Thanks for the amazing work Do you guys have any plans to add the RelationExtraction to the layoutLMV3? Since there is a huge difference between the results of layoutLM2 & LayoutLMv3.<|||||>Any updates on model head addition for inference? The output for LayoutLMV2 is not in line with the input for RE. Can these 2 heads be combined for RE task?<|||||>> > > I'm experimenting with LayoutLMv2 and LayoutLMForRelationExtraction. > > > I referred to https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/True_inference_with_LayoutLMv2ForTokenClassification_%2B_Gradio_demo.ipynb for entity detection/predictions using LayoutLMv2 > > > Can someone help me how can I convert these predictions from LayoutLMv2 to entity dict ( which is input to LayoutLMv2ForRelationExtraction) { 'start': `torch.IntTensor` of shape `(num_entites)`, Each value in the list represents the id of the token (element of range(0, len(tokens)) where the entity starts 'end': `torch.IntTensor` of shape `(num_entites)`, Each value in the list represents the id of the token (element of range(0, len(tokens)) where the entity ends 'label': `torch.IntTensor` of shape `(num_entites)` Each value in the list represents the label (as an int) of the entity } > > > > > > Did you get the answer for your question? > > Not yet @NielsRogge, Can you please help here Hi @Isha09Garg, were you able to use LAyoutLMv2 for RE task? (on FUNSD or other datasets)<|||||>Hi, has anyone tried to implement RE head on LayoutLM V1?
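Since the question of how to build `entities` from token-classification output keeps coming up in this thread, here is a very rough, hedged sketch based only on the docstring quoted above. This head is not part of the official library; whether `end` is inclusive or exclusive, and the exact keys of an empty `relations` dict, are assumptions that should be checked against the fork being used.

```python
import torch

def build_entities(predictions):
    """predictions: per-token label ids, e.g. 0 = other, 1 = key, 2 = value (assumption)."""
    starts, ends, labels = [], [], []
    i = 0
    while i < len(predictions):
        label = predictions[i]
        if label in (1, 2):                       # only keys and values take part in RE
            start = i
            while i + 1 < len(predictions) and predictions[i + 1] == label:
                i += 1
            starts.append(start)
            ends.append(i + 1)                    # exclusive end -- assumption
            labels.append(label)
        i += 1
    return {
        "start": torch.tensor(starts, dtype=torch.int64),
        "end": torch.tensor(ends, dtype=torch.int64),
        "label": torch.tensor(labels, dtype=torch.int64),
    }

entities = build_entities([0, 1, 1, 0, 2, 2, 2, 0])
# At inference time the relations dict can stay empty; these key names are an assumption.
relations = {"head": [], "tail": [], "start_index": [], "end_index": []}
```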
transformers
15,450
closed
Add class weight support for classification
# 🚀 Feature request Add support for the class re-weighting parameters provided in the current loss functions of PyTorch and TF. ## Motivation In supervised ML classification problems, specially multiclass/multilabel, the classes in the datasets used to train a model may have very different distribution of discrete values, which is usually called class imbalance. The class imbalance may cause biases in the training of the models. DL frameworks like PyTorch or TF, allow in their loss functions (e.g. `CrossEntropyLoss`, `BCEWithLogitsLoss`...) certain parameters to adjust the possible imbalance of data in the classes. Currently, the HF framework does not allow to take advantage of these parameters, so it would be nice to have support of a class re-weighting mechanism at the HF level to improve the possible biases in the models when doing fine-tunings. As I did in the past with the multi-label support, I’ve had to develop my own workarounds and specific headers for adding the class re-weight support to fine-tune my models. The class weight support basically requires a configuration parameter (e.g. `class_weights`) and some logic in the classification headers to basically: 1) Add the class weights only when training ```python … class_weights = None # Class weights if self.training and self.class_weights is not None: class_weights = self.class_weights.to(self.device) … ``` …and… 2) Pass the weights in the loss function ```python … elif self.config.problem_type == "single_label_classification": loss_fct = CrossEntropyLoss(weight=class_weights) loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) elif self.config.problem_type == "multi_label_classification": loss_fct = BCEWithLogitsLoss(pos_weight=class_weights) loss = loss_fct(logits, labels) … ``` In order to use it, it would only be required to pass in the model configuration the `class_weights` parameter, which is basically a list of floats (one for each class). In this PR (#15449) I provide a tentative implementation (based on my workarounds) with some basic tests, adding the required scaffolding for the common configuration class and for the BertForSequenceClassification header. Please let me know if you would be interested in this feature! I can continue working and refining the PR if you think this feature is important. In case you think it’s worth it and the approach is correct, I’ll probably need some advice on how to proceed next, as this feature -as it was the multilabel support- is header (classification mainly) and model dependent, so many model files would need to be modified potentially. Thanks in advance!
02-01-2022 01:20:29
02-01-2022 01:20:29
Hey @francisco-perez-sorrosal! We encourage defining your own loss outside of the model for this use case. See https://github.com/huggingface/transformers/issues/9625#issuecomment-762167788 and https://github.com/huggingface/transformers/issues/7024#issuecomment-689625449 for more context.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @LysandreJik, thanks for your pointers! (and sorry for the late response) Yes, I already saw that, but the solution of overriding `compute_loss()` in the Trainer means that, if you are using the `AutoModel*` classes (e.g. AutoModelForSequenceClassification), the model's `forward()` method and the logic for deciding the problem_type (using `self.config.problem_type`, which includes the call to the loss function in most model heads) will be executed twice:

1. in the `forward()` method of each model head (e.g. `BertForSequenceClassification`), because it's predefined in the logic of each model's head;
2. in the overridden `compute_loss()` method, since you want to add the weights there: you have to invoke `outputs = model(**inputs)` and `logits = outputs.get("logits")` to get the logits, and then call the loss function again, this time with the weights you want to use for the labels.

Am I missing something?<|||||>Would be happy to see this PR accepted at some point in the future. From my point of view, imbalanced datasets and class weights are a common problem, and I don't think the current way to solve this problem is very intuitive.<|||||>Bumping this. I'm using `ViTForImageClassification` on a multi-label problem with many classes and few positive classes per example. I understand the sentiment in the comments linked above - that the goal is to provide simple and minimal interfaces; but in this particular regime, it's absolutely necessary to be able to set `pos_weight` for `BCEWithLogitsLoss`.

UPDATE: @francisco-perez-sorrosal, for what it's worth, when I override `compute_loss` to incorporate weights, the compute time per iteration seems comparable, so maybe forward is called twice... but more likely it is not called twice. This is likely why you have the option to return `output` when you override `compute_loss` - precisely because it is not available anywhere else.

Final thought - I think having `pos_weight` exposed for the multilabel case is still worthwhile because it's such a common use case, and like I mentioned, without it many multilabel regimes are not feasible.
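For reference, a minimal sketch of the `compute_loss` override discussed in this thread (not the exact code used by anyone above), assuming a single-label task with three classes and placeholder weight values. Popping the labels before the forward pass keeps the model from computing its own, unweighted loss a second time.

```python
import torch
from torch import nn
from transformers import Trainer

class WeightedLossTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")  # prevents the model head from computing its own loss
        outputs = model(**inputs)
        logits = outputs.get("logits")
        # Placeholder per-class weights; derive them from your own label distribution.
        weight = torch.tensor([1.0, 2.0, 4.0], device=logits.device)
        loss_fct = nn.CrossEntropyLoss(weight=weight)
        loss = loss_fct(logits.view(-1, model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```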
transformers
15,449
closed
Add class re-weighting mechanism for classification tasks
In supervised ML classification problems, especially multiclass/multilabel ones, the classes in the datasets used to train a model may have very different distributions of discrete values, which is usually called class imbalance. Class imbalance may bias the training of the models. DL frameworks like PyTorch or TF expose, in their loss functions (e.g. CrossEntropyLoss, BCEWithLogitsLoss...), parameters to adjust for possible class imbalance in the data. Currently, the HF framework does not allow users to take advantage of these parameters, so it would be nice to have support for a class re-weighting mechanism at the HF level to mitigate possible biases in the models when doing fine-tuning.

# What does this PR do?

Tentative implementation of a class re-weighting mechanism for classification tasks. WIP
02-01-2022 01:19:01
02-01-2022 01:19:01
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15449). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
15,448
closed
line 111 in convert_megatron_bert_checkpoint.py causes error "AttributeError: 'Namespace' object has no attribute 'get'"
The new script convert_megatron_bert_checkpoint.py for converting a Megatron checkpoint to an HF model has a minor error at line 111:

```
config.intermediate_size = ds_args.get("ffn_hidden_size", 4 * ds_args.hidden_size)
```

`ds_args` is a `<class 'argparse.Namespace'>`, which does not have a `get` method. Changing the line to

```
config.intermediate_size = ds_args.ffn_hidden_size if "ffn_hidden_size" in ds_args else 4 * ds_args.hidden_size
```

fixes the issue.
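As a small standalone illustration of the root cause (a separate snippet, not code from the conversion script): `argparse.Namespace` stores values as attributes, so `getattr` with a default is another way to express the same fallback, assuming the attribute names used above.

```python
from argparse import Namespace

ds_args = Namespace(hidden_size=1024)  # no ffn_hidden_size attribute here

# Namespace has no .get(), but getattr() provides the same default-value behaviour:
intermediate_size = getattr(ds_args, "ffn_hidden_size", 4 * ds_args.hidden_size)
print(intermediate_size)  # 4096
```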
01-31-2022 20:26:53
01-31-2022 20:26:53
hey @bugface, thanks for opening an issue! Do you want to open a PR to fix this? :)
transformers
15,447
closed
Tokenizers can not pad tensorized inputs
### Who can help

@SaulLu or possibly @patrickvonplaten, who committed 538b3b46075dce22a61aeeafd2131979150359a9.

## To reproduce

Observe the following MWE:
```python3
>>> import transformers
>>> tokenizer = transformers.BartTokenizer.from_pretrained("facebook/bart-large")
>>> tokenizer.pad(tokenizer("hello world"))  # works as expected
{'input_ids': [0, 42891, 232, 2], 'attention_mask': [1, 1, 1, 1]}
>>> tokenizer.pad(tokenizer("hello world", return_tensors="pt"))  # does not work as expected
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".../python3.8/site-packages/transformers/tokenization_utils_base.py", line 2742, in pad
    if not required_input:
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
>>> tokenizer.pad(tokenizer("hello world", return_tensors="np"))  # same problem here
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".../python3.8/site-packages/transformers/tokenization_utils_base.py", line 2742, in pad
    if not required_input:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
The issue is not specific to the `BartTokenizer`, as it is present in `PreTrainedTokenizerBase`. I'm quite sure this affects TensorFlow's tensors as well.

## Expected behavior

The `pad` method of the tokenizer can handle any type of 'tensor'-like input, not just regular lists.

## Explanation

As the exceptions suggest, the problem arises here: https://github.com/huggingface/transformers/blob/0f69b924fbda6a442d721b10ece38ccfc6b67275/src/transformers/tokenization_utils_base.py#L2766-L2771
At that point, `required_input` points to the entry of `input_ids` (for BART in my case), which may be a tensor/array that cannot be evaluated as a boolean the way a regular list can. I presume that the point of the problematic code snippet is to provide correct results for empty inputs. In that case `if len(required_input) <= 0:` or similar would probably be a solution that works in all cases, but I may misunderstand what the code is designed to achieve.

## Further Details

<details>
<summary>These are probably not that relevant to the problem, so I moved them down here:</summary>

## Environment info

- `transformers` version: 4.15.0
- Platform: Linux-5.13.0-27-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.10.1+cu102 (False)

## Information

Model I am using: BART, PEGASUS

The problem arises when using:
* [x] my own modified scripts

I am implementing the pre-training procedures for BART and PEGASUS similar to the original papers, which require custom masking algorithms that need to run before converting tokens to ids and adding special tokens. Because of this I'm only adding padding later on and stumbled across this problem.
</details>
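A minimal standalone illustration of the root cause described above (a separate snippet, not library code): truthiness checks work on plain Python lists but not on multi-element tensors or arrays, whereas a length check works for both.

```python
import torch

ids = [0, 42891, 232, 2]
print(bool(ids))  # True: truthiness of a plain list is well defined

tensor_ids = torch.tensor(ids)
try:
    bool(tensor_ids)  # raises: boolean value of a multi-element tensor is ambiguous
except RuntimeError as e:
    print(e)

# A length-based emptiness check behaves the same for lists, tensors, and arrays:
print(len(ids) > 0, len(tensor_ids) > 0)
```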
01-31-2022 19:51:14
01-31-2022 19:51:14
Hey @Impelon, could you clarify why you would do:
```
tokenizer.pad(tokenizer("hello world"))  # works as expected
```
instead of
```python
tokenizer("hello world", padding=True, truncation=True)
```
? The `__call__` method of `tokenizer` internally calls the `.pad(...)` method, so it'd be a bit weird to wrap `tokenizer(...)` into `tokenizer.pad(...)`.<|||||>Hey @patrickvonplaten, yes, you are right: `tokenizer.pad(tokenizer("hello world"))` is more or less for demonstration purposes, as that's not actually why I stumbled upon this problem.

As mentioned in _further details_, I'm in the process of implementing the pretraining procedures for BART and PEGASUS more or less as described in the papers. For this I need to be able to work with the (tokenized) texts before they are converted to sequences of ids and before special tokens are added (at the very least it makes things much easier). My workflow actually looks more like this:
```python3
tokens = tokenizer.tokenize(sentence)
# do custom masking for pre-training
ids = tokenizer.convert_tokens_to_ids(tokens)
batch = tokenizer.prepare_for_model(ids)
```
I then combine multiple sentences together. Because texts are of different lengths, I may need to do some padding afterwards:
```python3
batch = tokenizer.pad(batch)
```
which will fail if I used tensors in `prepare_for_model` before (which I don't necessarily need to do).

Truth be told, I'm not sure I necessarily need a fix for this. But this comment and the code following it suggest to me that the `pad(...)` method is expected to work with tensors, so I thought I'd report this problem: https://github.com/huggingface/transformers/blob/0f69b924fbda6a442d721b10ece38ccfc6b67275/src/transformers/tokenization_utils_base.py#L2773-L2775<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
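One possible workaround for the workflow above, as an untested sketch rather than a confirmed fix: keep everything as plain Python lists until the final step and let `pad` do the tensor conversion via `return_tensors`. Here `id_sequences` is an assumed list of token-id lists standing in for the combined sentences.

```python
# Sketch only: `id_sequences` is a hypothetical list of token-id lists and
# `tokenizer` is the tokenizer from the workflow above.
encoded = [tokenizer.prepare_for_model(ids) for ids in id_sequences]
batch = tokenizer.pad(encoded, padding=True, return_tensors="pt")
```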